How can I provide significant financial support to AI alignment?

The short version: Follow this guide.

The long version:

Although many projects in the space have funding, many others remain unfunded, and additional money can definitely be useful for AI safety.¹ The simplest way to support AI alignment is to donate to grantmakers such as the Long Term Future Fund, or to specific organizations you’ve seen good work from.

If you want to be more involved in directing your money, it helps to know and trust a person or grantmaker to make good funding decisions, such as one of the regranters on Manifund. If you have the time, you can sometimes find even better opportunities yourself: form inside views on what’s beneficial, interact regularly with different parts of the AI safety community, think and talk about which projects are likely to be effective, and watch for opportunities to fund them.

Small grants are generally less well covered by major grantmakers: their time is scarce, and if you have local knowledge, funding projects based on it takes work off their hands. So don’t be afraid to give to people in your network who seem to be working on good projects. It can often make sense to offer grants without making grantees go through an application process, since this lets them focus fully on useful work rather than on searching for funds.

A way to give financial support that requires less of your time and attention is to contribute to initiatives like Lightspeed Grants. Lightspeed uses a mechanism that lets donors delegate their decisions to people whose judgment they trust, while avoiding the failure modes that arise when multiple donors are interested in the same projects: double-funding on one side, and “donor chicken” on the other, where each donor holds back hoping another will fund a project and it ends up unfunded (a toy model of this dynamic is sketched below). The Nonlinear Network is another initiative that gives funders a menu of projects to choose from.
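
For intuition, “donor chicken” is essentially the volunteer’s dilemma from game theory. Below is a minimal sketch of that dilemma — a toy model, not the actual mechanism behind Lightspeed Grants, and the parameters n, c, and v are assumptions for illustration — computing the equilibrium chance that a project every donor wants funded still goes unfunded:

```python
# Toy model of "donor chicken" as a symmetric volunteer's dilemma.
# Assumed setup (not from the source): n identical donors each gain v if the
# project is funded by anyone; funding it alone costs the funder c, with c < v.
# In the mixed-strategy equilibrium, each donor is indifferent between funding
# (payoff v - c) and waiting (payoff v * P(someone else funds)), which gives
# a waiting probability of (c / v) ** (1 / (n - 1)) for each donor.

def prob_unfunded(n: int, c: float, v: float) -> float:
    """Equilibrium probability that no donor funds the project."""
    assert n >= 2 and 0 < c < v
    p_wait = (c / v) ** (1 / (n - 1))  # each donor waits with this probability
    return p_wait ** n                 # chance that every donor waits

if __name__ == "__main__":
    for n in (2, 5, 10):
        print(f"{n} donors: unfunded with probability {prob_unfunded(n, 1.0, 3.0):.2f}")
```

In this toy model, the chance of coordination failure rises as more donors become interested (about 11% with two donors but 30% with ten, at c = 1 and v = 3), which is one reason mechanisms that pool or delegate funding decisions can help.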


  1. Total funding for long-term future and catastrophic risk prevention causes from the major grantmakers was projected, at the peak of the Future Fund boom, to be only around $220 million (0.5% of Google’s 2022-23 R&D budget of $41 billion). AI safety received only a fraction of that, and total funding has declined since. The field’s funding is tiny compared to its importance, and people considering joining the effort are regularly turned off by the career instability that funding constraints cause.
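
     As a quick check on the percentage above, using the footnote’s own figures:

     $$\frac{\$220\ \text{million}}{\$41\ \text{billion}} = \frac{220}{41{,}000} \approx 0.54\% \approx 0.5\%$$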