
Long-Term Future Fund

Basic Info

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.


You can use this dashboard to view statistics on historical giving to this Fund.


Fund Payouts: $4,171,735 to date

See all payout reports

Fund Scope

The Fund has a broad remit to make grants that promote, implement, and advocate for longtermist ideas. Many of our grants aim to address potential risks from advanced artificial intelligence and to build infrastructure and advocate for longtermist projects. However, we welcome applications related to long-term institutional reform or other global catastrophic risks (e.g., pandemics or nuclear conflict). We intend to support:

  • Projects that directly contribute to reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects
  • Training for researchers or practitioners who work to mitigate existential risks, help with relevant recruitment efforts, and infrastructure for people working on longtermist projects
  • Efforts to promote long-term thinking

Read more about Fund scope and limitations

About the Long-Term Future Fund

The Fund is managed by a team of technical and policy researchers chaired by Asya Bergal, and is advised by grantmakers from the Open Philanthropy Project and the Centre for Effective Altruism, as well as by Matt Wage, a philanthropist who was among the first donors to several now-prominent organizations focused on the long-term future (including CSER, FLI, and BERI).

The Fund has historically made grants to researchers working on cause prioritization, existential risk identification and mitigation, and technical research toward the development of robust and beneficial artificial intelligence.

The Fund managers can be contacted at ealongtermfuture[at]gmail[dot]com.

Grantmaking and Impact

The Long-Term Future Fund has recommended several million dollars' worth of grants to a range of organizations, projects, and individual researchers, including:

Supporting researchers working on relevant topics

Fields such as AI alignment and biosecurity are still relatively new, and it’s crucial to develop talent and provide researchers with the opportunity to make progress on important issues. The Fund has made numerous grants that support individual researchers – working in academia or alongside it – to develop skills and work on key problems.

Improving prediction and forecasting infrastructure

An important way that we can make progress on problems affecting the long-term future is to get better at making accurate predictions. Recent research by academics like Philip Tetlock has shown that skilled forecasters, and systematic methods for aggregating their predictions, can outperform seasoned subject-matter experts. The Fund has made grants to a number of emerging prediction platforms that aggregate and refine predictions about future events, including Metaculus and Foretold, with the aim of systematically improving our ability to make good judgements about the future.

Helping researchers working on global catastrophic risks to collaborate

Researchers around the world need to connect and collaborate in order to make progress on important problems in their field. The Fund has supported events such as the Catalyst Biosummit, which brings together synthetic biologists, policymakers, academics, and biohackers to collaborate on mitigating biorisks, and the AI Safety Camp, which helps aspiring AI safety researchers to meet peers and receive mentoring as they begin their careers.

Ought

Ought is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems. Ought conducts research on deliberation and amplification, concepts with a bearing on AI alignment.

Producing video content on AI alignment for YouTube

Recruiting talented people to work on AI alignment is difficult, in part because many technically minded people aren't aware that their skills can be applied to solving relevant problems. Robert Miles is a YouTuber who produces engaging videos that aim to explain important concepts related to AI alignment in an accessible, accurate way. His recent videos have averaged around 75k views each.

For more information, please check the full list of the Long-Term Future Fund’s Payout Reports.

Why donate to this Fund?

The future could include a large number of flourishing humans (or other beings). However, it is possible that certain risks could make the future much worse, or wipe out human civilization altogether. Actions taken to reduce these risks today might have large positive returns over long periods of time, greatly benefiting future people by making their lives much better, or by ensuring that there are many more of them. Donations to this fund might help to fund some of these actions and increase the chance of a positive long-term future.

Many people believe that we should care about the welfare of others, even if they are separated from us by distance, country, or culture. The argument for the long-term future extends this concern to those who are separated from us through time. Most people who will ever exist, exist in the future.

However, the emergence of new and powerful technologies puts the potential of these future people at risk. Of particular concern are global catastrophic risks. These are risks that could affect humanity on a global scale and could significantly curtail its potential, either by reducing human civilization to a point where it could not recover, or by completely wiping out humanity.

For example, tech companies are pouring money into the development of advanced artificial intelligence systems; while the upside could be enormous, there are significant potential risks if humanity ends up creating AI systems that are many times smarter than we are, but that do not share our goals.

As another example, previous disease epidemics, such as the bubonic plague in Europe or the introduction of smallpox into the Americas, were responsible for many millions of deaths. A genetically engineered pathogen to which few humans have immunity could be devastating on a global scale, especially in today's hyper-connected world.

In addition to supporting direct work, it’s also important to advocate for the long-term future among key stakeholders. Promoting concern for the long-term future of humanity — within academia, government, industry, and elsewhere — means that more people will be aware of these issues, and can act to safeguard and improve the lives of future generations.

Why you might choose not to donate to this Fund

We think it’s important that donors are well informed when they donate to EA Funds. As such, we think it’s useful to think about the reasons that you might choose to donate elsewhere.

You don’t think that we should focus on the long-term future

Donors might conclude that improving the long-term future is not sufficiently tractable to be worth supporting. It is very difficult to know whether actions taken now are actually likely to improve the long-term future. To gain feedback on their work, organizations must rely on proxy measures of success: Has the public become more supportive of their ideas? Are their researchers making progress on relevant questions? Unfortunately, there is no robust way of knowing whether succeeding on these proxy measures will cause an improvement to the long-term future. Donors who prefer tractable causes with strong feedback loops should consider giving to the Global Health and Development Fund.

You don’t think that future or possible beings matter, or that they matter significantly less

Some donors may think that future or possible beings do not matter morally, or matter less than beings who currently exist. For example, one might have a moral position similar to what philosophers term the Person-Affecting View. According to this view, "an act can only be bad if it is bad for someone, so that there is no moral obligation to create people, nor moral good in creating people" (Parfit 1991, p. 114). Donors who hold these views should consider supporting organizations which focus on helping existing people, perhaps through the Global Health and Development Fund.

You have a preference for supporting more established organizations

Donors may prefer to support established organizations. The Fund's most recent grants have mostly gone to newer organizations and individual researchers. This trend is likely to continue, provided that promising opportunities continue to exist.

You are pessimistic about room for more funding

Donors may be pessimistic about the room for more funding available in this area. The Open Philanthropy Project has made global catastrophic risk reduction a major focus area and may fund many of the opportunities that the Fund managers would find promising.

You have identified projects or interventions that seem more promising to you than our recommendations

Well-informed donors with a good knowledge of the space may be in a position to identify opportunities that are more promising than the Fund's recommendations. These donors may be able to have a bigger impact by continuing to conduct their own research rather than deferring to the Fund managers.

You are skeptical of the risks posed by advanced artificial intelligence

Some donors may be skeptical that artificial intelligence constitutes a significant global catastrophic risk. While the Long-Term Future Fund is open to funding organizations that seek to reduce any type of global catastrophic risk — including risks from extreme climate change, nuclear war, and pandemics — grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term.

You have different views about how to improve the long-term future

Some donors in this area favor interventions which make humanity more likely to have a future, through activities like reducing existential risks. (This is the approach of most of the Fund’s grants so far.) Others favor interventions which reduce the likelihood that future beings experience suffering. Finally, some favor interventions which focus on increasing the likelihood that we achieve extremely positive futures. Donors with strong views in these areas should consider directly supporting organizations that work to achieve their desired outcomes.

Long-Term Future Fund FAQ

How do I make a donation using EA Funds?

You can donate to any of the EA Funds by following this link, or clicking the blue button at the top of each Fund’s page.

First, choose the Funds or organizations you would like to make a donation to. You can choose up to 10 Funds/organizations as part of a single allocation.

If you are donating to more than one Fund/organization, you'll need to choose how to split your donation between them. By default, your donation will be split equally between the Funds/organizations you've chosen. To change this, simply drag the sliders until you have the allocation you want.


What is the risk profile of the Long-Term Future Fund?

Because of the speculative nature of the space in which the Fund operates, and the difficulty of making judgements about which actions are likely to positively impact the long-term future, grants made by this Fund are likely to be higher risk than those made by other Funds.

For more information on how we think about grantmaking risk, please read our Risk Profiles page.


How often does the Long-Term Future Fund make grants?

The Long-Term Future Fund makes grants on the regular EA Funds grantmaking schedule, with grant rounds commencing in February, July, and November each year.


Why donate to the Long-Term Future Fund instead of donating directly to individual organizations?

To make an effective donation, individual donors must spend a lot of time answering questions about which interventions are most likely to make progress in this area, which organizations are most effectively executing these interventions, and which organizations have funding gaps that are unlikely to be filled through other sources.

For areas such as global health and development or animal welfare, donors can get guidance from charity evaluators like GiveWell or Animal Charity Evaluators. No equivalent evaluator currently exists for work focused on improving the long-term future.

Finding promising opportunities in this area is therefore especially challenging for individual donors. The Fund managers have strong networks in the field that they can use to identify and evaluate new opportunities. In particular, they have a track record of being early funders of promising organizations like CSER and FLI. These opportunities are very hard for individual donors to find without first building strong networks in the space.


Can I apply for funding to the Long-Term Future Fund?

The Long-Term Future Fund accepts applications for funding. Please submit your application by using the link below.

Apply here


For more information about EA Funds in general, see our FAQ page.

Fund Managers

Asya Bergal (Chair), AI Impacts

Biography

Asya Bergal works as a researcher at AI Impacts and occasionally writes for the AI Alignment Newsletter. Previously, she worked as a trader and software engineer for Alameda Research, as a Fall Research Analyst at Open Philanthropy, and as a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science from MIT.


Adam Gleave, Center for Human-Compatible AI

Biography

Adam Gleave is an AI PhD candidate at UC Berkeley, working on technical AI safety with the Center for Human-Compatible AI (CHAI). His research focuses on improving the evaluation of deep reinforcement learning systems. Adam previously worked as a quantitative trader. He has been researching and donating to effective charities since 2014, including making several grants as part of the 2017 donor lottery.

Adam studied Computer Science at the University of Cambridge, where he led the 80,000 Hours: Cambridge group.


Oliver Habryka, LessWrong.com

Biography

Oliver Habryka is the current project lead for LessWrong.com, where he tries to build infrastructure for making intellectual progress on global catastrophic risks, cause prioritization, and the art of rationality. He previously worked as strategic director at the Centre for Effective Altruism US, ran the EA Global conferences in 2015 and 2016, and is an instructor for the Center for Applied Rationality. He has been involved in community organizing for the effective altruism and rationality communities in a wide variety of ways. He studied Computer Science and Mathematics at UC Berkeley, and his primary interests center on understanding how to develop communities and systems that can make scalable progress on difficult philosophical and scientific problems.


Nick Beckstead (advisor), Open Philanthropy

Biography

Nick Beckstead is a Program Officer at the Open Philanthropy Project, where he oversees a substantial part of the organization's research and grantmaking related to global catastrophic risk reduction. Previously, he led the creation of grantmaking programs in scientific research and effective altruism.

Before joining Open Philanthropy, he studied mathematics and philosophy as an undergraduate, completed a PhD in philosophy at Rutgers University, and worked as a research fellow at the Future of Humanity Institute at Oxford University. Much of Nick's research has focused on the importance of helping future generations, and on how we might best do that.

Since 2012, one of Nick’s side projects has been co-running a donor-advised fund that donates to organizations working in the fields of effective altruism and mitigating global catastrophic risks.


Nicole Ross (advisor), Centre for Effective Altruism

Biography

Nicole conducts analysis on broad community health questions and provides internal consultation to other teams at the Centre for Effective Altruism (CEA). Before joining CEA, Nicole worked at the Open Philanthropy Project, GiveWell, and the Center for Healthcare Ethics at Cedars-Sinai.


Matt Wage (advisor)

Biography

Matt Wage works as an algorithmic trader at a quantitative trading firm and donates 50% of his income to EA charities. He has been researching and donating to meta charities since 2012 and was one of the first funders of many now-established EA organizations, including 80,000 Hours, CEA, CSER, FLI, Charity Science and BERI.

Matt studied at Princeton University where he started the Princeton chapter of Giving What We Can and helped Peter Singer start the organization The Life You Can Save. His senior thesis on existential risk won the prize for best undergraduate philosophy thesis.

Matt was featured in the New York Times article "The Trader Who Donates Half His Pay" and in Peter Singer's TED talk "The Why and How of Effective Altruism".