
What projects is CAIS working on?

The Center for AI Safety (CAIS)1 is a San Francisco-based research non-profit that "focuses on mitigating high-consequence, societal-scale risks posed by AI". It pursues technical research aimed at improving the safety of existing AI systems, as well as multi-disciplinary conceptual research aimed at framing and clarifying problems and approaches within AI safety.

CAIS also works on field-building to help support and expand the AI safety research community. Its field-building projects include:

  1. Not to be confused with Comprehensive AI Services, a conceptual model of artificial general intelligence proposed by Eric Drexler, which is also usually abbreviated CAIS.



© AISafety.info, 2022—2025

Aisafety.info is an Ashgro Inc Project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) Public Charity incorporated in Delaware.