Isn't the real concern misuse?

Misuse is a problem with every powerful technology. Fire can be used to cook food or to burn down a village, and nuclear reactions can be used to produce electricity or to power terrible weapons.

Misuse of AI has the potential to cause harm at the individual, societal, and civilizational levels, for example through scams and harassment targeting individuals, large-scale disinformation that degrades public discourse, or assistance in developing dangerous weapons.

One of the aims of AI governance is to find ways to prevent such harms.

However, misuse is not the only risk from AI. Even when used with the best of intentions, technology can cause accidents: a fire can spread to burn down a city, and a reactor can melt down.

Accidents with advanced AI could be particularly bad because AI can act to pursue its own goals, and we do not currently know how to specify safe goals and impart them to highly advanced systems. Future AI could be powerful enough to outmaneuver humanity, and accidents resulting from misalignment could threaten human civilization as a whole.

In other words, for advanced AI to go well, we would need to both:

  1. Coordinate to prevent bad actors from misusing AI.

  2. Solve the alignment problem, so that powerful AIs can be used safely at all, without causing harms that no human intended.