What is the Center for AI Safety (CAIS)'s research agenda?

The Center for AI Safety (CAIS)¹ is a San Francisco-based non-profit directed by Dan Hendrycks that "focuses on mitigating high-consequence, societal-scale risks posed by AI". They pursue both technical and conceptual research alongside work on expanding and supporting the field of AI safety.

Their technical research focuses on improving the safety of existing AI systems, and often involves building benchmarks and evaluating models against them. It includes work on:

- robustness, i.e., making models perform reliably on adversarial or unusual inputs
- detecting trojans (hidden backdoors) planted in neural networks
- machine ethics, such as benchmarks testing whether models understand basic moral concepts
- anomaly detection and evaluations of hazardous model capabilities

Their conceptual research has included:

- "Unsolved Problems in ML Safety", a survey of open research problems in the field
- "X-Risk Analysis for AI Research", a framework for assessing how research affects existential risk from AI
- "Natural Selection Favors AIs over Humans", an argument that competitive pressures could select for selfish AI behavior
- "An Overview of Catastrophic AI Risks", a taxonomy of societal-scale risks from AI

Their field-building projects include:

- a free online course, Intro to ML Safety
- a compute cluster that provides resources to AI safety researchers
- the ML Safety workshop at NeurIPS
- a philosophy fellowship on conceptual questions around AI risk
- the May 2023 Statement on AI Risk, signed by many leading AI researchers and public figures


  1. Not to be confused with Comprehensive AI Services, a conceptual model of artificial general intelligence proposed by Eric Drexler, also abbreviated CAIS.