More than 2,600 tech leaders and researchers have signed an open letter urging a temporary "pause" on artificial intelligence (AI) development, fearing "profound risks to society and humanity."
Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs, and researchers were among the signatories to the letter, which was published by the US think tank Future of Life Institute (FLI) on March 22.
The institute called on all AI companies to "immediately pause" the training of AI systems more powerful than GPT-4 for at least six months, sharing concerns that "human-competitive intelligence may present profound risks to society and humanity," among other things:
We call on AI Labs to temporarily stop training powerful models!
Join the call of FLI together with Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus and over 1,000 others who have signed: https://t.co/3rJBjDXapc
A brief summary of why we called for this - (1/8)
– Future of Life Institute (@FLIxrisk) March 29, 2023
"Advanced AI could represent a profound change in the history of life on Earth, and it must be planned and managed with care and resources. Unfortunately, this level of planning and management is not happening," the institute wrote in its letter.
GPT-4 is the latest version of OpenAI's AI-powered chatbot, which launched on March 14. To date, it has passed some of the most rigorous US high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.
There is an "out-of-control race" among AI companies to develop more powerful AI, which "no one, not even its creators, can reliably understand, predict or control," FLI claimed.
BREAKING: A petition is circulating to PAUSE all major AI development.
For example, no more updates to ChatGPT and many others.
Signed by Elon Musk, Steve Wozniak, CEO of Stability AI, and thousands of other technology leaders.
Here's the breakdown: pic.twitter.com/jR4Z3sNdDw
– Lorenzo Green (@mrgreen) March 29, 2023
Among the main concerns were whether machines could flood information channels, potentially with "propaganda and falsehood," and whether they would "automatically eliminate" all employment opportunities.
FLI took these concerns a step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:
"Should we develop non-human minds that could eventually outnumber us, outsmart us, outgrow us, and replace us? Should we risk losing control of our civilization?
"Such decisions should not be delegated to unelected technology leaders," the letter adds.
Having some existential AI angst today
– Elon Musk (@elonmusk) February 26, 2023
The institute also agreed with a recent statement by OpenAI founder Sam Altman suggesting that an independent review may be required before training future AI systems.
In his February 24 blog post, Altman highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI).
However, not all artificial intelligence experts have been quick to sign the petition. Ben Goertzel, CEO of SingularityNET, explained in a March 29 Twitter reply to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) will not become AGIs, of which there have been few developments to date.
In general, human society will be better off with GPT-5 than with GPT-4; it's better to have slightly smarter models. AIs taking human jobs will finally be a good thing. Hallucinations and banality will diminish and people will learn to get around them.
– Ben Goertzel (@bengoertzel) March 29, 2023
Instead, he said that research and development should be slowed for things like biological and nuclear weapons.
In addition to large language models like ChatGPT, AI deepfake technology has been used to create convincing image, audio, and video hoaxes. The technology has also been used to create AI-generated artwork, raising concerns that it could violate copyright laws in certain cases.
Related: ChatGPT can now access the Internet with new OpenAI plugins
Galaxy Digital CEO Mike Novogratz recently told investors he was surprised by the amount of regulatory attention given to cryptocurrency while little has been paid to artificial intelligence.
"When I think about AI, it amazes me that we are talking so much about regulating cryptocurrency and nothing about regulating AI. I mean, I think the government has it completely upside down," he opined during a shareholder call on March 28.
FLI has argued that if the AI development pause is not quickly enacted, governments should step in and institute a moratorium.
"This pause must be public and verifiable, and include all key players. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the letter reads.
Magazine: How to prevent AI from "annihilating humanity" using blockchain