What technical research would be helpful for governance?

Technical AI research could help governance efforts by:

  1. Making AI risk more concrete for policymakers by illustrating our inability to control existing AI systems. For example, the White House only started taking concerns about AI seriously once there were real-life examples of capable AI systems behaving undesirably. This research would not necessarily expand the capabilities of AI, but rather show alignment failures which occur in real, but not especially dangerous, systems.

  2. Enabling policies that can only be enforced using further technological advances, such as ways to monitor AI systems.

  3. Identifying which types of AI systems pose the greatest risk, so that regulating those systems can be prioritized.

  4. Identifying and/or developing types of AI systems which are easier to align and encouraging developers to build those systems instead of other systems.

Research which could be useful for the above includes developing :

  1. Simple and scalable techniques for evaluating AI systems' alignment and capabilities, to help AI labs decide whether and how to deploy them.

  2. Developing concrete proposals which can be adopted by government agencies when designing regulations.

  3. Developing methods for identifying illegal data centers and detecting secret development of advanced systems.

  4. Developing cryptographic techniques which would allow monitoring without requiring companies to reveal trade secrets.