
Long-Term Future Fund

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

Focus areas

The Fund has a broad remit to make grants that promote, implement, and advocate for longtermist ideas. Many of our grants aim to address potential risks from advanced artificial intelligence and to build infrastructure and advocate for longtermist projects. However, we welcome applications related to long-term institutional reform or other global catastrophic risks (e.g., pandemics or nuclear conflict). We intend to support:
  • Projects that directly contribute to reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects
  • Training and recruitment of researchers or practitioners who work to mitigate existential risks, and infrastructure for people working on longtermist projects
  • Projects that promote long-term thinking

Impact

The Long-Term Future Fund has recommended several million dollars' worth of grants to a range of organizations and individuals. Highlights include:

Created an instruction-generalization benchmark for LLMs

With a small grant from the LTFF, Joshua Clymer ran experiments on instruction-following generalization to test whether LLMs faithfully follow instructions in contexts vastly different from their training data. In the published paper and benchmark, the authors found that reward models trained to follow instructions on easy tasks do not naturally follow instructions on hard tasks. This research offers important insights into the need for better interpretability methods.

Built and maintained digital infrastructure for the AI safety ecosystem

Alignment Ecosystem Development has created approximately 15 projects, including the AI Safety Map (aisafety.world), AI Safety Info (aisafety.info), and AI Safety Quest (aisafety.quest), to grow and improve the AI safety ecosystem. These projects maintain comprehensive listings of AI safety resources and organizations, provide guidance for careers in AI safety, and coordinate volunteers working on AI safety and alignment projects.

Ran a biorisk summit

Tessa Alexanian launched Catalyst, a biosecurity summit that convened 100 participants, including members of the biotech industry, DIY biologists, and biosecurity researchers. According to the retrospective written by Catalyst’s team, participants found the conversations, networking, workshops, and problem-solving challenges incredibly useful and reported that the summit generally exceeded expectations.

Ran an AI safety independent research program

The LTFF provided funding for SERI MATS, a seminar and independent research program that identifies and accelerates the development of junior AI safety researchers. MATS alumni have published original research at top conferences, joined safety teams at AI labs, and launched their own AI safety research organizations.

Payout reports

[Table of payout reports: payout date, total grants, number of grantees, and link to each payout report.]

[Chart: payouts over time.]

About the fund

The Fund is temporarily managed by Caleb Parikh, Project Lead of EA Funds. We are currently hiring for a new fund chair. The team comprises technical and policy researchers, and is advised by philanthropists and grantmakers from Open Philanthropy and the Centre for Effective Altruism, among others.
The Fund has historically made grants to researchers working on cause prioritization, existential risk identification and mitigation, and technical research toward the development of robust and beneficial artificial intelligence.
The Fund managers can be contacted at longtermfuture[at]effectivealtruismfunds.org

Why donate to this fund?

The future could include a large number of flourishing humans (or other beings). However, it is possible that certain risks could make the future much worse, or wipe out human civilization altogether. Actions taken to reduce these risks today might have large positive returns over long periods of time, greatly benefiting future people by making their lives much better, or by ensuring that there are many more of them. Donations to this fund might help to fund some of these actions and increase the chance of a positive long-term future.
Many people believe that we should care about the welfare of others, even if they are separated from us by distance, country, or culture. The argument for the long-term future extends this concern to those who are separated from us in time. Most of the people who will ever exist will live in the future.
However, the emergence of new and powerful technologies puts the potential of these future people at risk. Of particular concern are global catastrophic risks. These are risks that could affect humanity on a global scale and could significantly curtail its potential, either by reducing human civilization to a point where it could not recover, or by completely wiping out humanity.
For example, tech companies are pouring money into the development of advanced artificial intelligence systems; while the upside could be enormous, there are significant potential risks if humanity ends up creating AI systems that are many times smarter than we are, but that do not share our goals.
As another example, past disease epidemics, such as the bubonic plague in Europe or the introduction of smallpox into the Americas, were responsible for many millions of deaths. A genetically engineered pathogen to which few humans have immune resistance could be devastating on a global scale, especially in today’s hyper-connected world.
In addition to supporting direct work, it’s also important to advocate for the long-term future among key stakeholders. Promoting concern for the long-term future of humanity — within academia, government, industry, and elsewhere — means that more people will be aware of these issues, and can act to safeguard and improve the lives of future generations.

Why you might choose not to donate to this fund

You don’t think that improving the long-term future is tractable

Donors might conclude that improving the long-term future is not sufficiently tractable to be worth supporting. It is very difficult to know whether actions taken now are actually likely to improve the long-term future. To gain feedback on their work, organizations must rely on proxy measures of success: Has the public become more supportive of their ideas? Are their researchers making progress on relevant questions? Unfortunately, there is no robust way of knowing whether success on these proxy measures will actually improve the long-term future. Donors who prefer tractable causes with strong feedback loops should consider giving to the Global Health and Development Fund.

You don’t think that future or possible beings matter, or that they matter significantly less

Some donors may think that future or possible beings do not matter morally, or matter less than beings who currently exist. For example, one might have a moral position similar to what philosophers term the Person-Affecting View. According to this view, “an act can only be bad if it is bad for someone, so that there is no moral obligation to create people, nor moral good in creating people” (Parfit (1991), p. 114). Donors who hold these views should consider supporting organizations which focus on helping existing people, perhaps through the Global Health and Development Fund.

You have a preference for supporting more established organizations

Donors may prefer to support established organizations. The Fund’s most recent grants have mostly gone to newer organizations and individual researchers, and this trend is likely to continue as long as promising opportunities of this kind exist.

You are pessimistic about room for more funding

Donors may be pessimistic about the room for more funding available in this area. Open Philanthropy has made global catastrophic risk reduction a major focus area and may fund many of the opportunities that the fund managers would find promising.

You have identified projects or interventions that seem more promising to you than our recommendations

Well-informed donors with good knowledge of the space may be able to identify opportunities that are more promising than the Fund’s recommendations. These donors may have a bigger impact by continuing to conduct their own research, rather than deferring to the Fund managers.

You are skeptical of the risks posed by advanced artificial intelligence

Some donors may be skeptical that artificial intelligence constitutes a significant global catastrophic risk. While the Long-Term Future Fund is open to funding organizations that seek to reduce any type of global catastrophic risk — including risks from extreme climate change, nuclear war, and pandemics — grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term.

You have different views about how to improve the long-term future

Some donors in this area favor interventions which make humanity more likely to have a future, through activities like reducing existential risks. (This is the approach of most of the Fund’s grants so far.) Others favor interventions which reduce the likelihood that future beings experience suffering. Finally, some favor interventions which focus on increasing the likelihood that we achieve extremely positive futures. Donors with strong views in these areas should consider directly supporting organizations that work to achieve their desired outcomes.

Fund managers

Daniel Eth, Fund Manager (Independent)

Frequently asked questions

  • How do I make a donation to an EA Fund?
  • What is the risk profile of the Long-Term Future Fund?
  • Why donate to the Long-Term Future Fund instead of donating directly to individual organizations?
  • Can I apply for funding to the Long-Term Future Fund?
