Do people seriously worry about existential risk from AI?

Many of the people with the deepest understanding of AI are highly concerned about the risks of unaligned superintelligence.

The leaders of the world’s top AI labs, along with some of the most prominent academic AI researchers, have jointly signed the Statement on AI Risk, which declares: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Sam Altman (CEO of OpenAI, the company behind ChatGPT), who has said that if things go poorly it could be “lights out for all of us”, and DeepMind cofounder Shane Legg, who believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Stuart Russell, a distinguished AI researcher and co-author of the “authoritative textbook of the field of AI”, warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern. His book Human Compatible focuses on the dangers of artificial intelligence and the need for more work to address them.

Geoffrey Hinton, one of the “Godfathers of Deep Learning”, resigned from Google so that he could speak freely about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is no longer far off, and considers it “not inconceivable” that AI could wipe out humanity. Yoshua Bengio, who shared the 2018 Turing Award with Hinton, was not previously concerned about existential risks from AI, but changed his stance in 2023 and declared that we need to put more effort into mitigating them.

Many other science and technology leaders have worried about superintelligence for years. The late astrophysicist Stephen Hawking said in 2014 that superintelligence “could spell the end of the human race.” In 2019, Bill Gates described himself as “in the camp that is concerned about superintelligence” and stated that he “[doesn’t] understand why some people are not concerned”; he has also signed the Statement on AI Risk.

Recently, headline-making large language models such as ChatGPT, along with the predictions of the respected scientists above, have led many people to join Snoop Dogg in asking, “[are] we in a f*cking movie right now?” This sentiment is understandable, but no, this is not a movie.