What is the Center on Long-Term Risk (CLR)'s research agenda?

The Center on Long-Term Risk (CLR) focuses primarily on reducing suffering risks (s-risks): risks of a future with large negative value. It conducts theoretical research in game theory and decision theory, aimed primarily at multipolar AI scenarios.

CLR also works on improving coordination in prosaic AI scenarios, on risks from malevolent actors, and on forecasting the future of AI. The Cooperative AI Foundation shares personnel with CLR, but it is not formally affiliated with CLR and does not focus solely on s-risks.

