Why don't we just not build AGI if it's so dangerous?

In March 2023, the Future of Life Institute put out an open letter calling for a pause on "giant AI experiments" of "at least six months". Less than a week later, AI researcher Eliezer Yudkowsky, writing in Time magazine, argued that AI labs should "shut it all down."1 Not building AGI is certainly a live idea on the table.

But this isn't an easy solution, because avoiding dangers from unaligned AGI requires that no one ever builds unaligned AGI. There are strong competitive pressures to produce more capable AI, and each individual actor might worry that if they stop researching AGI, they'll be overtaken by others who are more reckless. Some work to solve these kinds of coordination problems is being done in the field of AI governance.


  1. Some people have been even less diplomatic.