What concepts underlie existential risk from AI?

Theorizing about existential risk from AI draws on existing concepts from various fields and has also produced concepts of its own.

For example, one possible case for misalignment combines the orthogonality thesis, Goodhart’s law, and instrumental convergence. Richard Ngo and Eliezer Yudkowsky have each offered other attempts to characterize the core of the problem.

Related concepts can be grouped into some broad categories: