Effective Altruism Funds is dedicated to moving money to projects that aim to solve the world’s most pressing and important problems, as effectively as possible. The Funds address key issues, such as health and development problems that affect millions of humans, the suffering of billions of animals, and risks from emerging technologies such as advanced artificial intelligence and bioengineering.
Some of these causes have shovel-ready interventions that already have a solid evidence base behind them, and just need additional money directed to where it is needed most. Others may require more speculative interventions that are less well-tested, or further research that gives us more information about which kinds of interventions will be most promising.
These more speculative ventures are at higher risk of having little or no impact. When Fund management teams recommend these types of grants, they’re making a bet that, while some (maybe even many) of them will have little or no impact, the best could have a very large impact. Because it’s very difficult to know in advance which approaches will have this kind of outsize impact, grantmakers may take many such bets. This approach is sometimes known as hits-based giving.
Grantmakers make judgements about risk vs. reward in line with our overall grantmaking approach.
Why have risk profiles?
We want donors to feel comfortable with the level of risk that they’re taking on. That’s why each of the Funds has a risk profile, which gives a broad sense of how much risk the Fund management team is likely to take on. These risk profiles are intentionally broad, intended to give a general sense of how the Fund managers are likely to make decisions. They are assessments based both on the nature of the particular field that the Fund works in, and on the grantmaking styles of the individual Fund managers. They are also non-binding, and Fund managers may sometimes choose to make grants that fall outside their Fund’s stated risk profile.
Note that some Funds will have wider risk profiles, making some ‘safe bets’ as well as some high-risk/high-reward ones. You should familiarise yourself with the aims of each Fund, and decide whether donating to it is right for you.
What do we mean by ‘risk’?
At Effective Altruism Funds we distinguish between two broad types of risk in our grantmaking:
- Grants that may not have an impact: The risk that a grant will have little or no impact
- Grants that are net-negative: The risk that a grant will be significantly harmful or damaging, all things considered
The risk profiles below are intended to capture the former sense – i.e. grants that may have little or no positive impact. Grantmakers aim to avoid recommending grants of the latter type – i.e. those that risk causing significant harm.
Low-risk grants are those that have a solid evidence base, and are well-vetted by experienced charity evaluators.
For example, the Global Health and Development Fund has made substantial grants to the Against Malaria Foundation (AMF), which distributes insecticide-treated bednets in sub-Saharan Africa. AMF has been consistently recommended by GiveWell as one of their top charities on the basis of extensive investigation into both the general intervention of insecticide-treated bednets, and the operations of AMF specifically.
Medium-risk grants are those which have a less-solid evidence base, but still have a fairly plausible path to impact.
For example, the Effective Altruism Infrastructure Fund has made several grants to help scale up Founders Pledge, which encourages startup founders to sign a legally binding pledge to donate a percentage of their exit proceeds to charity. Founders Pledge is backed by a number of high-profile supporters and has had impressive results to date, but it is a relatively new organization, and it’s not yet certain whether the grantmakers’ expectations of shifting the culture of major philanthropy will pan out.
High-risk grants are those which have a very low certainty of success, but are likely to have a high upside if the expectations of the grantmakers are realized.
For example, the Long-Term Future Fund has made a number of grants to independent researchers working on important problems, such as improving the safety of advanced artificial intelligence. Because of the speculative nature of the work, there’s a high chance that any given piece of research won’t end up being useful. However, if the research turns out to solve important problems, it would be particularly beneficial.