Elon Musk and Tech Experts Call for a Pause on ‘Dangerous’ AI Experiments
Elon Musk, Apple co-founder Steve Wozniak, and a host of other tech experts have signed an open letter calling for a pause on artificial intelligence (AI) development.
The letter asks AI labs around the world to temporarily cease the development of large-scale AI systems to manage the “profound risks” that deep learning models pose to society.
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the letter. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter’s authors argue that now is the time for an independent review of AI, calling for a six-month period during which AI labs should “immediately pause” their activities.
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
The tech experts want to see a set of shared safety protocols developed for AI that can be overseen by independent outside experts.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the authors add.
The letter alludes to AI machines “flooding our information channels” with untruths, a nod to the recent AI-generated fake images of Donald Trump being arrested and to ChatGPT’s ability to write essays and take exams.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” warns the letter.
“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
The signatories accuse AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Pretty much, I don't agree with everything in the letter but the race condition ramping as H100s come along is not safe for something the creators consider as potentially an existential risk.
Time to take a breath, coordinate and carry on.
This is only for largest models. https://t.co/1vWDrEAmIo
— Emad (@EMostaque) March 29, 2023
The chilling letter has been signed by more than 1,000 people, including notable experts and thinkers such as Elon Musk, author Yuval Noah Harari, Skype co-founder Jaan Tallinn, Apple co-founder Steve Wozniak, and Emad Mostaque, who co-founded Stability AI, one of the companies behind the AI image generator Stable Diffusion.
The letter was published by the Future of Life Institute, which, according to Reuters, is primarily funded by the Musk Foundation.
Notable absentees from the list of signatories include Sam Altman, the chief executive of OpenAI, and Midjourney founder David Holz.
“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” says Gary Marcus, a professor at New York University who signed the letter.
“They can cause serious harm… the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”