Build your knowledge

If you’re somewhat new to AI safety, we recommend starting with an introductory overview.

Browse our introductory content

Our “Intro to AI safety” micro-course is a collection of short readings that together serve as a comprehensive introduction to AI safety.

Our “Intro to AI safety” video playlist illustrates many of the most important points about AI safety in a way that is entertaining and easy to understand.

Listen to an introductory podcast episode (or a few)

We recommend Dwarkesh Patel’s interview with Paul Christiano, a leading AI safety researcher. The interview provides an introduction to AI risk and discusses many important AI safety concepts.

If you want to dive deeper

Take an online course

We recommend taking an online course once your interests have narrowed to a specific area of AI safety, such as AI alignment research or AI governance.

The AI Safety Fundamentals (AISF) Governance Course, for example, is especially suited for policymakers and similar stakeholders interested in AI governance mechanisms. It explores policy levers for steering the future of AI development.

The AISF Alignment Course is especially suited for people with a technical background interested in AI alignment research. It explores research agendas for aligning AI systems with human interests.

Note: If you take the AISF courses, consider exploring additional views on AI safety to help avoid homogeneity in the field, such as The Most Important Century blog post series.

Note: AISF courses do not accept all applicants. We recommend taking the courses through self-study if your application is unsuccessful.

Get into LessWrong and its subset, the Alignment Forum

Many people who become deeply involved in AI existential safety eventually find their way to these online forums, which foster high-quality discussion of AI safety research and governance.

Sign up for events

Events, typically conferences and talks, are often held in person and last one to three days.

We’ve highlighted EAGx, an Effective Altruism conference dedicated to networking and learning about important global issues, with a strong focus on AI safety. Several EAGx conferences are held annually in major cities around the world.

Sign up for fellowships

AI safety fellowships typically last one to three weeks and are offered both online and in person. They offer hands-on experience through research, mentorship, and collaboration on approaches to developing safe AI.


AISafety.info

We’re a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.