What are the key problems in AI governance?
Some key topics in AI governance are:

- What regulations should national governments adopt to prevent AI from becoming dangerous?
- What kinds of international treaties would be helpful?
- Should cutting-edge AI research be slowed down or paused? How can the details of such a policy be chosen to maximize the benefits and minimize the harms?
- What measures can reduce the alignment tax — i.e., the extent to which it’s costlier to create aligned systems than unaligned ones?
- How can AI systems be monitored for signs of imminent danger?
- How can the major actors avoid an arms race dynamic, in which each is incentivized to develop AI faster and with fewer safeguards in order to reach advanced AI before the competition?
- How can stakeholders be convinced that alignment is an important problem worth paying attention to?
- What internal processes can companies adopt to make sure the systems they deploy are safe?
- What measures (e.g., information security, or restricting computing power or physical access) are needed to keep dangerous systems out of the hands of malicious actors?
We’re still working on expanding our content in this area. In the meantime, consider visiting this collection of resources.