For once Elon Musk has been accused of not going far enough in his doomsday warnings that AI will destroy mankind. Together with 1,000 or so other big tech names, Musk has written an open letter asking the world’s leading artificial intelligence labs to pause their latest research into super-powerful advanced AI systems for six months.
The letter comes just two weeks after the release of OpenAI’s GPT-4, the most powerful AI system yet created, and Musk and his fellow signatories claim that these recent advances present “profound risks to society and humanity.”
The reason? These tech gurus fear that GPT-4 has been so successful in its superior “thinking” that researchers are closer than ever to designing the next stage, Artificial General Intelligence, which they believe will surpass human cognitive ability.
Signing the letter alongside Musk are Apple co-founder Steve Wozniak; Sapiens author Yuval Noah Harari; and, interestingly, four Google experts from its AI lab DeepMind, along with Emad Mostaque, chief executive of Stability AI, and Tristan Harris, executive director of the Center for Humane Technology. The letter calls on all AI labs and other research centres to stop all training of AI systems more powerful than GPT-4, to give researchers time to think.
DeepAI founder Kevin Baragona, who also signed the letter, commented: “It’s almost akin to a war between chimps and humans. If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it.”
Taking an even more apocalyptic view is one of the world’s AI masterminds who warns that a six-month break is not enough to stop the catastrophe facing humanity. Instead, Eliezer Yudkowsky says we should stop all work on AI now. And if we don’t, then humanity will be wiped out – by the very machines that mankind has created.
In an article for Time magazine published just hours after Musk’s letter went public yesterday, Yudkowsky agrees that a six-month pause would be better than no moratorium, and that he respects everyone who stepped up to the mark and signed.
But he didn’t sign the letter, he says, because he fears it understates the seriousness of the situation and asks for too little to solve it.
He goes on to write that the key issue is not “human-competitive” intelligence (as the open letter puts it), but what happens after AI gets to smarter-than-human intelligence: “Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.”
It’s worth quoting Yudkowsky in full: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”
He points out that – without precision and preparation – it is entirely possible that AI does not do what we want, and “does not care for us nor for sentient life in general.
“That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how. Absent that caring, we get ‘the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.’”
And here comes the killer line. “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
He gives some metaphors of what the battle between humans and machines might look like: “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century”, and “Australopithecus trying to fight Homo sapiens”.
As he says, this is not a fair fight. “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”
Which is why Yudkowsky’s call for a halt is so powerful. As he also says, many AI experts fear the same catastrophe but are so caught up in the research that they dare not speak out publicly, even as they express those fears in private.
And it’s why he wants an immediate, worldwide halt to training runs and a shutdown of the large GPU clusters (the vast computer farms where the most powerful AIs are refined), with no exceptions for militaries or governments. New multinational agreements should be brought in, say between the US and China, so that no country can pull the wool over the others’ eyes.
Not everyone agrees with the doomsday warnings of Musk, his fellow signatories, or Yudkowsky. Those who disagree, and who hail GPT-4 as a great innovation, include Microsoft founder Bill Gates, Google chief Sundar Pichai and futurist Ray Kurzweil. They claim instead that the latest AI systems are our era’s most revolutionary innovations, ones that can help us solve some of the planet’s greatest challenges, from cancer to climate change.
But Yudkowsky has a response to this. He refers to the public statement in February by Satya Nadella, CEO of Microsoft, that the new Bing would make Google “come out and show that they can dance.”
“I want people to know that we made them dance,” he said.
Yudkowsky added: “This is not how the CEO of Microsoft talks in a sane world. It shows an overwhelming gap between how seriously we are taking the problem, and how seriously we needed to take the problem starting 30 years ago.”
Once upon a time, mankind feared meteorites, aliens landing on earth, nuclear war and even a new ice age destroying civilisation. The idea we could destroy ourselves through our own cleverness was not an obvious threat.
So when the maestro who understands what’s going on speaks up, it’s time for the world to sit up and listen. And shut AI down now. But that raises the question: are we clever enough to do so?