
What are some arguments why AI safety might be less important?

This is an index of arguments against AI existential safety concerns. Note that the views represented by the authors here often differ substantially from the views of our editors.

Notes

Some recommended pieces are in bold.

Some of these arguments are substantially better than others. In addition, some pieces argue for the importance of AI safety while discussing counterarguments. The title of this document may therefore be misleading: many of these pieces simply offer important ideas to consider rather than a comprehensive, conclusive case against AI safety.

It may be a useful exercise to contemplate how these arguments interact with arguments in various introductions to AI safety.

Author classification

The list

Other collections

