Do people seriously worry about existential risk from AI?
Many people with a deep understanding of AI are highly concerned about the risks of unaligned superintelligent AI, meaning an AI with cognitive abilities far greater than those of humans in a wide range of important domains. They worry that such an AI could pose an existential risk: a risk of human extinction or the destruction of humanity’s long-term potential.
In 2023, leaders from the world’s top AI labs, along with some of the most prominent academic AI researchers, signed a statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included the founders of major AGI labs: Sam Altman[1] (co-founder and CEO of OpenAI), Dario Amodei[2] (co-founder and CEO of Anthropic, who left OpenAI over what he perceived as a lack of emphasis on AI safety there), Shane Legg[3] (co-founder of DeepMind, where he is now Chief AGI Scientist), and Demis Hassabis[4] (co-founder of DeepMind, where he is now CEO). Other signatories included Stuart Russell, a computer science professor at UC Berkeley, founder of CHAI, and co-author of the textbook Artificial Intelligence: A Modern Approach.
The Turing Award is considered the equivalent of the Nobel Prize for AI. The recipients of the 2018 Award[5] were:

- Geoffrey Hinton (who also won the Nobel Prize in Physics in 2024)[6]
- Yoshua Bengio (scientific director at MILA)
- Yann LeCun (Chief AI Scientist at Meta)
In 2023, Hinton resigned from Google to focus on speaking about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is not far off, and he thinks that AI wiping out humanity is “not inconceivable”.[7]
In late 2023, Turing Award recipient Andrew Yao and others[8] co-authored a paper calling on governments and AI companies to take concrete action to manage the risks of rapidly advancing AI.
In 2024, Hinton won the Nobel Prize in Physics together with John Hopfield, another pioneering machine learning researcher, who has signed a letter calling for a pause on the development of frontier AI systems.
Many other science and technology leaders have worried about superintelligence for years. The late astrophysicist Stephen Hawking said in 2014 that superintelligence “could spell the end of the human race.” In 2019, Bill Gates described himself as “in the camp that is concerned about superintelligence” and stated that he “[doesn't] understand why some people are not concerned”. Russell, Hinton, Bengio, and Gates have all signed the Statement on AI Risk.
1. Altman had previously claimed that if things go poorly, it could be “lights out for all of us”.
2. Amodei has spoken publicly about the existential risks from AI.
3. Legg has stated that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.
4. Hassabis has also talked about the risks.
5. The three winners have been called the “Godfathers of Deep Learning” for their crucial contributions to the field.
6. Hinton mentioned existential risk from AI in his short Nobel Prize acceptance speech.
7. In fact, Hinton’s own view is that existential risk from AI is over 50%, though he gives a lower number after taking into account that others are more optimistic.
8. Notable co-authors include Dawn Song, Yuval Noah Harari and Daniel Kahneman.