What is an “AI doomer”?

“AI doomer” is a label for someone who is concerned that AI could cause human extinction. It is often applied pejoratively by people who take a dim view of such concerns.

It’s worth distinguishing a few kinds of views that, depending on the speaker, might make someone an “AI doomer”:

  1. “AI-caused extinction is plausible enough to pay attention to and to make tradeoffs to prevent.” This view is held by many AI researchers, engineers, policy analysts, safety researchers, and heads of AI labs, as illustrated by the CAIS statement on AI risk.

  2. “AI-caused extinction is highly probable.” People with this view generally think that although we will probably fail to avoid extinction, it’s worth trying. This includes Eliezer Yudkowsky and others at MIRI.

  3. “AI-caused extinction is inevitable and therefore not worth attempting to prevent.” This sense of “doomer” is often used in other contexts, like climate change, but it’s rare for people to be fatalistic about AI risk in this way.

“Doomer” isn’t used consistently to indicate that someone assigns a particular probability to AI extinction (“p(doom)”) or holds particular beliefs about AI x-risk more broadly. With this in mind, Rob Bensinger suggests some better-defined alternatives:

  • “AGI risk fractionalists”: p(doom) < 2%

  • “AGI-wary”: p(doom) around 2-20%

  • “AGI-alarmed”: p(doom) around 20-80%

  • “AGI-grim”: p(doom) > 80%

Alongside these, Bensinger also suggests terms for people’s preferred policies about whether and when to build AGI. The question of whether we should stop, pause, slow, or accelerate AGI development is only loosely related to the question of how likely AI is to cause extinction, but the word “doomer” tends to conflate these questions by suggesting both a high p(doom) and a preference against near-future AGI development.
