I’d like to get deeper into the AI alignment literature. Where should I look?
The AI Safety Fundamentals course is a great way to get up to speed on alignment; you can apply to take it with other students and a mentor, but note that they reject many applicants. You can also work through their materials independently, ideally with a group of others.
Other great ways to explore:
- AXRP is a podcast with high-quality interviews with top alignment researchers.
- The AI Safety Papers database is a search and browsing interface for most of the AI safety literature.
- Reading posts on the Alignment Forum can be valuable (see their curated posts and tags).
- Taking a deep dive into Eliezer Yudkowsky's models of the challenges to aligned AI, via the Arbital Alignment pages.
- Reading through the Alignment Newsletter archives for an overview of past developments (or listening to the podcast).
- Reading some introductory books.
- Taking a course from the AISafety.com list of courses.
- Finding more resources on AI Safety Support's list of links, Nonlinear's list of technical courses, reading lists, and curriculums, other answers on Stampy’s AI Safety Info, and Victoria Krakovna's resources list.
You might also consider reading Rationality: A-Z, which covers skills that are valuable for anyone trying to think through complex issues.