Do people seriously worry about existential risk from AI?

Many people with a deep understanding of AI are highly concerned about the risks of unaligned superintelligent AI.[1]

In 2023, leaders from the world’s top AI labs, along with some of the most prominent academic AI researchers, signed a statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included Sam Altman (CEO of OpenAI, the company behind ChatGPT), who has stated that if things go poorly it could be “lights out for all of us”, and Shane Legg (Google DeepMind cofounder), who has said he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Stuart Russell, AI expert and co-author of the “authoritative textbook of the field of AI”,[2] warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern. His book Human Compatible focuses on the dangers of artificial intelligence and the need for more work to address them.

The recipients of the 2018 Turing Award, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, have been called the “Godfathers of Deep Learning” for their crucial contributions to the field. In 2023, Hinton resigned from Google to be able to focus on speaking about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is not far off, and he thinks that AI wiping out humanity is “not inconceivable”. Bengio, who was not previously concerned about existential risks from AI, changed his stance in 2023 and argued that we need to put more effort into mitigating them. LeCun, however, is a vocal skeptic of AI posing an existential risk.

In late 2023, Turing Award recipient Andrew Yao and others[3] joined Hinton and Bengio in authoring a paper outlining risks from advanced AI systems.

In 2024, Hinton won the Nobel Prize in Physics together with John Hopfield, another pioneering machine learning researcher, who has signed a letter calling for a pause on frontier AI systems.

Many other science and technology leaders have worried about superintelligence for years. The late astrophysicist Stephen Hawking said in 2014 that superintelligence “could spell the end of the human race.” In 2019, Bill Gates described himself as “in the camp that is concerned about superintelligence” and stated that he “[doesn't] understand why some people are not concerned”. Russell, Hinton, Bengio, and Gates have all signed the Statement on AI Risk letter.

The rapid progress in AI, and the prominence of serious discussions about AI causing human extinction, have caused many people to join Snoop Dogg in asking, “[are] we in a f*cking movie right now?” This sentiment is understandable, but no, this is not a movie.


  1. Of course, experts can be wrong, and deference to experts isn't a substitute for reasoning about the underlying issues.

  2. According to Wikipedia.

  3. Notable co-authors include Dawn Song, Yuval Noah Harari, and Daniel Kahneman.


