14: You can learn more and perhaps help
Existential risk from AI is a thorny problem. It’s not clear how the future will unfold or which strategies will be effective; different people have very different perspectives on what’s going on; and many still doubt the issue’s importance. The lack of empirical data makes these disagreements hard to resolve.
One thing that seems like it would help, regardless of how these uncertainties play out, is for people to become better informed and more familiar with the concepts and arguments around AI safety. aisafety.info is intended to fulfill that purpose.
From here, you can explore many more questions on whatever topics interest you. The “articles” header above leads to other site sections with articles on different aspects of the problem. Or you can ask our chatbot anything, even if we don’t have an article about it.
If you want not just to learn more, but to contribute to solving the problem, we have a section of the site that guides people through ways they can help.
Finally, here are some websites with more advanced information. Each paints a somewhat different picture of AI safety, but all are roughly consistent with the one given here:
- The Alignment Forum hosts many different contributions from individual researchers.
- BlueDot Impact offers interactive courses on AI alignment, AGI strategy, AI governance, and the future of AI. These courses are aimed at people interested in working in the AI safety field.
- The Alignment Problem from a Deep Learning Perspective is a literature review that explains AI alignment in terms of the current deep learning paradigm.
- Arbital, best viewed through this index, explains some of MIRI’s views in depth. It’s an older source, but a good place to get caught up on the historical conceptual background behind much of today’s alignment research.
- For a more up-to-date expression of MIRI’s views aimed at the general public, with answers to various objections, see the supplementary resources for the book If Anyone Builds It, Everyone Dies.
Shaping the future of AI is a crucial task for humanity. By learning more and getting involved, you can help support the wider effort toward a positive outcome.