I want to work on AI alignment. How can I get funding?

See the Future Funding List and the funding forest[1] on aisafety.world for up-to-date information!

The organizations that most regularly give grants to individuals working towards AI alignment are the Long-Term Future Fund, the Survival and Flourishing Fund, the OpenPhil AI Fellowship and early-career funding, the Future of Life Institute, the Future of Humanity Institute, and the Center on Long-Term Risk Fund. The Nonlinear Network lets you apply for grants for AI safety projects and reach many different funders with a single application. There are also opportunities from smaller grantmakers that you might be able to pick up if you get involved.

If you want to work on support or infrastructure rather than directly on research, the EA Infrastructure Fund may be able to help. You can talk to EA Funds before applying.

Each grant source has its own criteria for funding, but in general they look for candidates who demonstrate that they're keen and able to do good work towards reducing existential risk (for example, by completing an AI Safety Camp project).

Another option is to get hired by an organization that works on AI alignment; see other questions for advice on that.

It's also worth checking the AI Alignment tag on the EA funding sources website for up-to-date suggestions.


  1. “Where money grows on trees!” ↩︎