What is the EU AI Act?

The EU AI Act is a proposed law to regulate AI, forming part of the European Union's broader AI strategy. The proposal distinguishes between systems posing different levels of risk, which would be subject to correspondingly different levels of regulation:

  • Non-high risk: Systems like those used in spam filters and video games will continue to be minimally regulated. However, companies will still be required to disclose when users are interacting with an AI system, such as a chatbot.

  • High risk: Critical systems, including those used in transportation, education, and law enforcement, will be required to undergo extensive risk assessment and review to test for robustness, security, and accuracy. Companies will also be required to maintain detailed documentation to show compliance.

  • Unacceptable risk: Systems that directly threaten people's lives and fundamental rights will be banned outright. These include governmental social scoring and children's toys that encourage dangerous behavior.

There have been a number of analyses of the proposal, including recommendations for improving it. Critics have raised concerns that the regulation will prevent Europe from competing in developing and deploying AI, that enforcement will be difficult, and, conversely, that the act will be insufficient to prevent major harms, including existential risks. There are also concerns that the framework is not flexible enough to respond quickly to unexpected risks arising from new AI technologies.

As of August 2023, the EU AI Act is the subject of ongoing negotiations between the European Parliament and the EU member states.