Why would a misaligned superintelligence kill everyone in the world?

The human species as a whole seems fairly resilient to even very severe global catastrophes. Events like nuclear war and severe pandemics would kill huge numbers of people and cause enormous suffering, but would probably not lead to human extinction. It might be tempting to assume that a catastrophe caused by a misaligned superintelligent AI would similarly fall short of causing human extinction.

However, there are at least three reasons that a misaligned superintelligent AI could end up killing literally everyone, assuming it didn’t value human survival:

  • Removing competition: Humans might interfere with the AI’s goals, e.g. by trying to turn it off or by building a rival superintelligent AI. The AI might decide to prevent that from happening by killing us.

  • Gathering resources: Humans, along with the rest of the biosphere, contain resources that an AI could harvest. In Eliezer Yudkowsky’s words, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

  • Side-effects: Whatever the AI’s goals are, it’s unlikely that they would be best served by keeping the Earth broadly in its current state, unless the AI specifically values preserving the status quo (e.g. for the sake of humans and other life). An AI might undertake large-scale projects that make the world unlivable for humans, or which use the materials lying around (including humans) as resources.

These three reasons are analogous to, respectively, the ways humans drive animal species extinct: because they are dangerous, to harvest their meat and other body parts, or as a side effect of destroying their habitats.

A superintelligence with the ability to build advanced self-replicating industry wouldn’t take long to far outgrow the human economy. Eventually, once it controlled the Earth’s matter in fine-grained detail, the cost of wiping us out would be a tiny fraction of its resources. So even if the threat of competition from us, or the total value of our bodily resources, were relatively small, the superintelligence would still be likely to eliminate us.