Elon Musk and tech execs call for 'pause' on AI development


More than 2,600 tech leaders and researchers have signed an open letter urging a temporary "pause" on artificial intelligence (AI) development, fearing "profound risks to society and humanity."

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs, and researchers were among the signatories to the letter, which was written by the US think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to "immediately pause" the training of AI systems more powerful than GPT-4 for at least six months, sharing concerns that "human-competitive intelligence may present profound risks to society and humanity," among other things.

"Advanced AI could represent a profound change in the history of life on Earth, and it must be planned and managed with care and resources. Unfortunately, this level of planning and management is not happening," the institute wrote in its letter.

GPT-4 is the latest version of OpenAI's AI-powered chatbot, which launched on March 14. To date, it has passed some of the most rigorous US high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

FOLI claimed there is an "out of control" race among AI companies to develop ever more powerful AI, which "no one, not even its creators, can reliably understand, predict or control."

Among the main concerns were whether machines could flood information channels, potentially with "propaganda and falsehood," and whether they would "automatically eliminate" all employment opportunities.

FOLI took these concerns a step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

"Should we develop non-human minds that can eventually outnumber us, outsmart us, outgrow us, and replace us? Should we risk losing control of our civilization?

"Such decisions should not be delegated to unelected technology leaders," the letter adds.

The institute also agreed with a recent statement by OpenAI founder Sam Altman suggesting that an independent review may be required before training future AI systems.

In his February 24 blog post, Altman highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI).

However, not all artificial intelligence experts have been quick to sign the petition. Ben Goertzel, CEO of SingularityNET, explained in a March 29 Twitter reply to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) will not become AGIs, of which, to date, there have been few developments.

Instead, he said that research and development should be slowed down for things like biological and nuclear weapons.

In addition to large language models like ChatGPT, AI deepfake technology has been used to create convincing image, audio, and video hoaxes. The technology has also been used to create AI-generated artwork, raising concerns that it could violate copyright laws in certain cases.

Related: ChatGPT can now access the Internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was surprised by the amount of regulatory attention given to cryptocurrency, while little has been paid to artificial intelligence.

"When I think about AI, it amazes me that we are talking so much about regulating cryptocurrency and nothing about regulating AI. I mean, I think the government has it completely turned upside down," he opined during a shareholder call on March 28.

FOLI has argued that if a pause on AI development is not quickly enacted, governments should get involved with a moratorium.

"This pause must be public and verifiable, and include all key players. If such a pause cannot be quickly enacted, governments should step in and institute a moratorium," it wrote.

Magazine: How to prevent AI from "annihilating humanity" using blockchain