
What is DeepMind's safety team working on?

DeepMind has both a machine learning safety team focused on near-term risks and an alignment team working on risks from artificial general intelligence. The alignment team is pursuing many different research agendas.

Their work includes:

See Shah's comment for more of the team's research, including descriptions of some work that is currently unpublished.



© AISafety.info, 2022—2025

Aisafety.info is an Ashgro Inc Project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) Public Charity incorporated in Delaware.