What are some arguments why AI safety might be less important?

This is an index of arguments against AI existential safety concerns. Note that views represented by authors here are often substantially different from the views of our editors.

Notes

Some recommended pieces are in bold.

Some of these arguments are substantially better than others. Additionally, some pieces argue for the importance of AI safety while discussing counterarguments. Overall, the title of this document may be misleading: many of these pieces simply provide important ideas to consider rather than a comprehensive, conclusive argument.

It may be a useful exercise to contemplate how these arguments interact with arguments in various introductions to AI safety.

Author classification

  • ~ means the person was working roughly full-time on AI existential risk reduction

  • ^ means the person was at least somewhat part of the AI existential risk community and/or related communities

  • The ~ or ^ applies based on the date of publication. A person is marked with a tilde or caret only if the designation was true around the time of publication. If someone critiques AI safety and then starts working on it five years later, they will not have a tilde.

  • Some classifications might be incorrect.

The list

Other collections