What is the Center on Long-Term Risk (CLR)'s research agenda?
The Center on Long-Term Risk (CLR) focuses primarily on reducing suffering risk (s-risk): the risk of a future with large negative value. It conducts theoretical research in game theory and decision theory, aimed primarily at multipolar scenarios, i.e., scenarios in which there end up being multiple powerful decision makers.
CLR also works on improving coordination in prosaic AI scenarios, risks from malevolent actors, and forecasting the future of AI. The Cooperative AI Foundation shares personnel with CLR, but it is not formally affiliated with CLR and does not focus solely on s-risks.