What are existential risks (x-risks)?

Philosopher Toby Ord defines existential risks as "risks that threaten the destruction of humanity's long-term potential"¹. In the context of AI safety, "existential risk" usually refers to the risk of human extinction, though the term can also refer to the risk of irreversibly locking in a drastically worse state of affairs (such as permanent global autocratic rule).

There have been at least five mass extinction events in Earth's history, during which a large percentage of species went extinct. For most of human history, existential risk had only natural sources, such as asteroid impacts or supervolcanic eruptions.

Technological advances have created man-made sources of existential risk, such as nuclear war and catastrophic biological threats, which Ord argues are orders of magnitude more likely² than natural risks to cause extinction within the next century. Some argue that the most significant existential threat to humanity is posed by powerful AI systems of the sort that might be built in the not-too-distant future.

We believe AI poses an existential risk because it is plausible that powerful agentic AI systems vastly smarter than humans will be built, and no working plan currently exists for keeping such an AI under human control. In pursuing its goals, the AI may end up wiping out humanity (even if only as a side effect) or locking humans into a perpetual dystopia.


  1. From Toby Ord's The Precipice. This is a rephrasing of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development."

  2. Toby Ord, in The Precipice: "I estimate anthropogenic risks to be more than 1,000 times more likely than natural risks. And within anthropogenic risks, I estimate the risks from future technologies to be roughly 100 times larger than those of existing ones…" (p. 163). He estimates the chance of natural existential risks to be within an order of magnitude of 1 in 10,000 and that of anthropogenic existential risks to be about 1 in 6.