How promising is compute usage regulation for AI governance?

The recent development of AI has been characterized as relying on the ‘AI triad’: compute, data, and algorithms. AI development cannot proceed without all three, which makes regulating these components a candidate strategy for AI governance. Of the three, regulating compute may be especially viable.

Compute is easier to govern than data or algorithms because of some of its inherent features:

  • Compute cannot be used by more than one actor at the same time. In contrast, once data and algorithms are collected or developed and made public, they can be copied and used by anyone. This means access to compute can be restricted more effectively, and revoked if necessary.

  • Compute requires substantial physical space to house, and physical materials to build and maintain.

  • Compute is easily quantifiable – it is most often quantified in terms of FLOPS (“floating point operations per second”), but can also be measured in other ways, for example by counting chips (see the sketch after this list).
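
To make “easily quantifiable” concrete, here is a minimal sketch (in Python) of two common accounting views: hardware throughput in FLOPS, and total training compute in FLOP. All numbers – the per-chip throughput, the utilization rate, and the ~6 × parameters × tokens rule of thumb for dense transformer training – are illustrative assumptions rather than measurements:

```python
# Illustrative compute accounting. All numbers are hypothetical
# placeholders, not measurements of any real cluster or model.

chips = 10_000              # accelerators in a cluster
flops_per_chip = 3e14       # assumed peak throughput per chip, FLOP/s
utilization = 0.4           # assumed fraction of peak actually achieved

# Hardware view: aggregate cluster throughput (FLOP/s).
cluster_flops = chips * flops_per_chip * utilization

# Training view: total compute for one run (FLOP), using the common
# ~6 * parameters * tokens approximation for dense transformers.
parameters = 7e10           # hypothetical model size
tokens = 1e12               # hypothetical training-set size
training_flop = 6 * parameters * tokens

seconds = training_flop / cluster_flops
print(f"Cluster throughput: {cluster_flops:.2e} FLOP/s")
print(f"Training compute:   {training_flop:.2e} FLOP")
print(f"Wall-clock time:    {seconds / 86_400:.1f} days")
```

Either view reduces compute to a single number, which is what makes threshold-based rules (for example, reporting requirements above some FLOP count) administrable.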

It’s also easier to govern compute because of some related features of the computing hardware industry today:

  • Training the best AI models today requires especially large amounts of compute:

- ML training runs are more expensive than they used to be, so much so that Sevilla et al. call the current era of ML the “Large-Scale Era” (see the sketch after this list).[1]

- This means they require a lot of physical infrastructure. This can involve “[football-field-sized supercomputer centers](https://forum.effectivealtruism.org/posts/g6cwjcKMZba4RimJk/compute-governance-and-conclusions-transformative-ai-and#6__Compute_Governance)”, which have extremely high energy and water demands, making them easy to track and detect.
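
To get a feel for what the doubling times in footnote 1 imply, here is a short sketch that converts a constant doubling time into a cumulative growth factor via 2^(elapsed months / doubling time); the dates and doubling times are taken from that footnote, and the rest is just arithmetic:

```python
# Cumulative growth implied by a constant doubling time.

def growth_factor(years: float, doubling_time_months: float) -> float:
    """Multiplicative growth over `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_time_months)

# Doubling times as reported in footnote 1 (Sevilla et al.).
print(f"1952-2010, 18-month doubling: {growth_factor(58, 18):.1e}x")
print(f"2010-2022,  6-month doubling: {growth_factor(12, 6):.1e}x")
```

A 6-month doubling time compounds to a more than ten-million-fold increase in training compute over 2010–2022 alone, which is why frontier training runs need the conspicuous physical infrastructure described above.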

However, there are a number of limitations to compute governance. For example:

  • Improvements in algorithms might mean that less compute will be needed to train dangerous models.

  • Improvements in hardware efficiency might make it possible to train dangerous models more cheaply and in ways that are harder to track (see the sketch after this list).

  • Compared to other interventions, like supporting alignment research, regulating compute usage is potentially more adversarial and risks increasing political or geopolitical tensions. For example, some have identified this as an effect of the US export controls on advanced microchips to China introduced in October 2022.
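
The first two limitations can be combined into a simple “effective compute” picture: if algorithmic progress and hardware price-performance both improve over time, a fixed threshold on raw training compute protects against less each year. The sketch below uses purely hypothetical improvement rates and thresholds to show the mechanism:

```python
# Hypothetical erosion of a fixed compute threshold. The rates and
# thresholds below are illustrative assumptions, not empirical estimates.

CAPABILITY_FLOP = 1e25     # raw FLOP needed today for some capability
THRESHOLD_FLOP = 1e24      # hypothetical regulatory reporting threshold
ALGO_GAIN_PER_YEAR = 2.0   # assumed: algorithms halve compute needs yearly
COST_PER_FLOP = 1e-17      # assumed dollars per FLOP today
HW_GAIN_PER_YEAR = 1.3     # assumed yearly price-performance improvement

for year in range(0, 9, 2):
    needed = CAPABILITY_FLOP / ALGO_GAIN_PER_YEAR ** year
    cost = needed * COST_PER_FLOP / HW_GAIN_PER_YEAR ** year
    flag = "above" if needed > THRESHOLD_FLOP else "below"
    print(f"year {year}: {needed:.1e} FLOP (~${cost:,.0f}) - {flag} threshold")
```

Under these made-up rates, a capability that sits above the threshold today slips below it within a few years, and its training cost falls even faster; this is the core worry behind the first two bullet points.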


  1. Sevilla et al. identify three trends: an 18-month doubling time between 1952 and 2010, in line with Moore’s law (according to which transistor density doubles roughly every two years); a 6-month doubling time between 2010 and 2022; and a new trend starting in 2015 with a 10-month doubling time, which began 2 to 3 orders of magnitude above the previous trend. Increases in compute requirements that exceed Moore’s law demand ever-growing investments of money and space. ↩︎

  2. 92% global market share of commercial production of chips below 10nm (Netherlands Innovation Network report, June 2022) ↩︎