July 2021: Long-Term Future Fund Grants

Payout Date: July 1, 2021

Total grants: USD 982,469

Number of grantees: 18

Discussion: EA Forum comments

Introduction

The Long-Term Future Fund made the following grants through July 2021:

  • Total grants: $982,469
  • Number of grantees: 18
  • Payout date: July 2021
  • Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger, Luisa Rodriguez

This payout report is substantially delayed, but we’re hoping to get our next payout report covering grants through November out very soon. Updates since our last payout report:

Consider applying for funding from the Long-Term Future Fund here.

Grant reports

Note: Many of the grant reports below are very detailed. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees’ preferences. We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for a grant.

Grant reports by Asya Bergal

Any views expressed below are my personal views and not the views of my employer, Open Philanthropy. In particular, receiving funding from the Long-Term Future Fund should not be read as an indication that an organization or individual has an elevated likelihood of receiving funding from Open Philanthropy. Correspondingly, not receiving funding from the Long-Term Future Fund (or any risks and reservations noted in the public payout report) should not be read as an indication that an organization or individual has a diminished likelihood of receiving funding from Open Philanthropy.

Ezra Karger, Pavel Atanasov, Philip Tetlock ($200,000 of $572,000)

Update on payout amount: We paid for part of the cost of this tournament; another $372,000 was covered by another donor.

Existential risk forecasting tournaments.

This grant is to Ezra Karger, Pavel Atanasov, and Philip Tetlock to run an existential risk forecasting tournament. 

Philip Tetlock is a professor at the University of Pennsylvania; he is known in part for his work on The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments, and for his book Superforecasting: The Art and Science of Prediction, which details findings from that study. Pavel Atanasov is a decision psychologist currently working as a Co-PI on two NSF projects focused on predicting the outcomes of clinical trials. He previously worked as a post-doctoral scholar with Philip Tetlock and Barbara Mellers at the Good Judgment Project, and as a consultant for the SAGE research team that won the last season of IARPA’s Hybrid Forecasting Competition. Ezra Karger is an applied microeconomist working in the research group of the Federal Reserve Bank of Chicago; he is a superforecaster who has participated in several IARPA-sponsored forecasting tournaments and has worked with Philip Tetlock on some of the methods proposed for the tournament described below.

Paraphrasing from the proposal, the original plan for the tournament was as follows:

  1. Have a panel of subject-matter experts (SMEs) choose 10 long-run questions about existential risks and 20 short-run, resolvable, early warning indicator questions as inputs for the long-run questions.
  2. Ask the SMEs to submit forecasts and rationales for each question.
  3. Divide the superforecasters into two groups. Ask each person to forecast the same questions as the SMEs and explain those forecasts. For short-run questions, evaluate forecasts using a proper scoring rule, like Brier or logarithmic scores (see the sketch after this list). For long-run questions, use reciprocal scoring to incentivize accurate forecasts by rewarding forecasters based on their closeness to the median forecast from the group they are not in.
  4. Divide the SMEs into two groups and match them to the superforecaster groups. Ask each combined SME-forecaster team to work together to produce the most persuasive rationale for each of the median forecasts from the superforecaster groups. Provide a monetary incentive to each team as a linear function of how much the rationales persuade SMEs from the other team to move their forecasts towards the superforecaster median.
  5. Recruit a representative national sample of respondents. Ask participants for initial forecasts on the 20 short-run and 10 long-run questions, then provide half of the participants with the superforecasters’ forecasts, and half with both the forecasts and the explanations. Ask both groups to update their forecasts, and measure accuracy on final forecasts to quantify the value of good explanations.
  6. After 1-3 years, resolve the short-run indicators and evaluate whether this tournament caused SMEs and/or superforecasters to hold more accurate beliefs about those short-term indicator questions.
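For readers less familiar with the scoring machinery in steps 3 and 4, here is a minimal Python sketch of how a resolved short-run forecast could be evaluated with a proper scoring rule (Brier or logarithmic), and of how a reciprocal-scoring payoff might reward closeness to the other group’s median. The exact reward functions the tournament will use are not specified here; the squared-distance reciprocal payoff and the example numbers below are purely illustrative assumptions.

```python
import numpy as np

def brier_score(p, outcome):
    """Brier score for a probability forecast p of a binary event (0 is perfect; lower is better)."""
    return (p - outcome) ** 2

def log_score(p, outcome, eps=1e-12):
    """Logarithmic score: negative log-likelihood of the realized outcome (lower is better)."""
    p = np.clip(p, eps, 1 - eps)
    return -(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))

def reciprocal_score(p, other_group_forecasts):
    """Illustrative reciprocal-scoring payoff: reward closeness to the *other* group's median.

    The squared-distance form is an assumption for illustration, not the tournament's actual rule.
    """
    other_median = np.median(other_group_forecasts)
    return -(p - other_median) ** 2

# Example: a 0.7 forecast on a short-run question that resolved "yes" (1),
# plus its reciprocal-scoring payoff against a hypothetical other group.
print(brier_score(0.7, 1))                      # 0.09
print(log_score(0.7, 1))                        # ~0.357
print(reciprocal_score(0.7, [0.55, 0.6, 0.8]))  # -(0.7 - 0.6)^2 = -0.01
```

The key design point is that short-run questions can be scored against reality, while long-run questions (which won’t resolve for decades) need something like reciprocal scoring to give forecasters an incentive to report their honest best estimates.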

I think the main path to impact for this tournament is having existential risk forecasts that are more externally-credible than our current best, which I’d guess are the odds given by Toby Ord in The Precipice. This credibility comes from the backgrounds of those running the tournament — in particular Philip Tetlock, who is widely known for his work on forecasting.

The three people running this program have significant backgrounds in forecasting, but are less well-versed in existential risk, so I asked Max Daniel to check in with them regularly and help connect them to relevant people in the existential risk community.

Zach Freitas-Groff ($11,440)

Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades.

Zach Freitas-Groff is a PhD student in economics at Stanford University. He asked for funding to hire research assistants on Upwork to help gather data on close referenda in the U.S. as part of a project estimating how long the impact lasts from different types of ballot initiatives. Zach suggested this research could:

a) Boost the profile of longtermist economics research, by showing it’s possible to make empirical progress on a concrete question about how to have longer-lasting impact; and b) Provide concretely useful information on how to most effectively influence policy via ballot initiative.

I was most interested in the way this grant could further Zach’s career; references described him as one of the most promising global priorities researchers and economists in the EA movement.

New Governance of AI nonprofit ($177,000)

Setting up a nonprofit focused on longtermist AI governance field-building.

This grant funded the setup of the Centre for the Governance of AI (GovAI) as an independent nonprofit, co-led by Allan Dafoe and Ben Garfinkel. GovAI has historically been a research centre within the University of Oxford, but Allan Dafoe recently took a senior role at DeepMind, which prevented him from heading the centre as part of the university. GovAI is now spinning off into a nonprofit, while the AI governance researchers at FHI are forming an AI governance research group led by Ben Garfinkel.

The application sought $257,000 of funding to cover 3 months of Markus Anderljung’s salary as he sets up the nonprofit, legal and miscellaneous expenses, and salaries for the nonprofit’s first two hires. $80,000 of their budget was covered by another funder, and we covered the rest.

From the application:

> The nonprofit will occupy a unique role in the longtermist AI governance space, similar to that which The Forethought Foundation has played in the global priorities space. The organisation will be in a particularly good position to e.g. build and maintain a community of scholars and practitioners interested in longtermist AI governance, to engage with established academics in fields such as political science and economics, to convene conferences, and to provide scholarships and prizes. It may also be in a particularly good position to do grantmaking and house AI governance projects. 

Looking back on GovAI’s history, I’d guess that it’s had most of its impact through training longtermist researchers and practitioners who have gone on to do good work at GovAI and elsewhere. GovAI has close ties to academia, which I suspect has allowed them to recruit talented people who would be less excited to work at a non-academically-affiliated institution.

GovAI has also produced several research outputs that I have found informative, including O’Keefe et al.’s report on the Windfall Clause, Ding’s report Deciphering China’s AI Dream along with his newsletter, and Zaidi and Dafoe’s report on lessons from the Baruch Plan for nuclear weapons. Some of their work has also informed Luke Muehlhauser’s thinking.

I’m interested in supporting GovAI’s new nonprofit in recruiting more talented young people to work on AI governance and policy, as well as in facilitating additional research work. I think that, as a nonprofit, GovAI risks losing some of the academic prestige that helped it attract people when it was part of the University of Oxford, but I’m hopeful that its existing reputation and connections will have a comparable effect, especially given that there are still affiliated AI governance researchers working at FHI.

Effective Altruism Switzerland ($11,094)

Stipends, work hours, and retreat costs for four extra students at CHERI’s summer research program.

This grant was for Effective Altruism Switzerland’s new Swiss Existential Risk Initiative (CHERI) to pay for four extra students in their 2021 summer research program. The students were referred by other summer research programs which had run out of capacity; CHERI had told the students that they could accept them, but that it was unclear whether they would be able to provide a research stipend.

FHI’s summer research program was cancelled this year, so it seems plausible to me that there is not enough summer capacity for students who could turn out to be very strong researchers. CHERI is very new, so I am unsure about the quality of their program, but I think some students can learn and get further involved in longtermism just from being given a somewhat structured environment in which to do research. I also think it would be a bad outcome if CHERI accepted some students but didn’t pay them; I think this could create an unhealthy dynamic for the program cohort.

Thomas Moynihan ($27,819)

6-month salary to write book on philosophy + history of longtermist thinking, while longer-term funding is arranged.

Thomas Moynihan asked for 6 months of salary to pay for his time writing a book about why longtermism has become a popular philosophy now, as opposed to earlier in history. Thomas previously wrote a similar book about existential risk, which was featured on the 80,000 Hours podcast.

I was in favor of this grant because I think books can be an effective way to do movement-building and to bring longtermist ideas further into the mainstream. I’m not sure how successful Thomas’s book will be, but I expect it to appeal to educated audiences and contribute to the idea that longtermism is worth taking seriously as a philosophical position.

This was a bridge grant to cover Thomas’s salary while a grant from another funder came through.

Fabio Haenel ($1,571)

Participation in a 2-week summer school on science diplomacy to advance my profile in the field of science policy.

This grant was for Fabio Haenel to pay for a 2-week Science Diplomacy Summer School in Barcelona. Fabio has a Master’s in International Relations from the University of Wroclaw and has worked in a variety of jobs promoting science and technology for impact, but he has recently decided to try to pursue a career in policy and emerging technologies, with the goal of reducing risks from technological development and global conflict.

Fabio was familiar with this summer school and thought it would be helpful for building relevant expertise. Jonas Vollmer, the head of EA Funds, has previously met Fabio and had a positive impression of him; since this was a very small grant amount, I decided to defer to Fabio’s judgement on the usefulness of the program.

Grant reports by Evan Hubinger

Nick Hay ($150,000)

Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment.

We’re funding Nick Hay to do AI alignment research as a visiting scholar at CHAI, advised by Andrew Critch and Stuart Russell. He will focus on creating environments to test AI’s ability to culturally acquire desirable properties. The idea is to design ML environments where the goal is to “fit in” with a group of existing agents—e.g., learning what roles the other agents are playing and joining the group to fill one of those roles.
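As a purely illustrative toy of what a “fit in with the group” task could look like (the role names, reward function, and setup below are my own assumptions, not drawn from Nick’s actual environments), imagine a newcomer that observes which roles existing agents occupy and is rewarded for filling a gap:

```python
import random

# Hypothetical role set for the toy task (illustrative only).
ROLES = ["gatherer", "builder", "scout"]

def make_group(n_agents=4):
    """Sample a group whose members each play a random role; some roles may go unfilled."""
    return [random.choice(ROLES) for _ in range(n_agents)]

def missing_roles(group):
    """Roles that no existing agent is playing — the 'gap' a newcomer could fill."""
    return [r for r in ROLES if r not in group]

def reward(chosen_role, group):
    """+1 for adopting a role the group is missing; if nothing is missing, any role 'fits in'."""
    gaps = missing_roles(group)
    if not gaps:
        return 1.0
    return 1.0 if chosen_role in gaps else 0.0

group = make_group()
print(group, missing_roles(group), reward("scout", group))
```

A real benchmark would presumably require inferring roles from other agents’ behavior over time rather than reading them off directly, which is part of what would make the task hard in an interesting way.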

This is probably the grant I’m most excited about in this payout report. Nick has a solid ML background, having previously completed an ML PhD at Berkeley under Stuart Russell. He also has great advisors in Critch and Russell, and is working on an excitingly novel project.

In evaluating new benchmarks, one of the main things I look for is whether they are likely to be hard for an interesting reason, such that I expect solving them to yield useful alignment insights. I think that is very much true for what Nick is trying to build here—fitting in with a group is an ill-defined task that’s likely to require learning from feedback in a complex way that I expect will shine light on real, difficult problems in AI alignment.

Aryeh Englander ($100,000)

Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD.

We’re funding Aryeh to drop to half-time on his AI-related work at Johns Hopkins’s Applied Physics Laboratory in order to pursue an AI PhD at the University of Maryland, Baltimore County.

I think that this grant is quite marginal, and there are many reasons it might end up being a bad idea: it means Aryeh will be doing less work at APL, work that might otherwise have been directly useful, and it has him putting a lot of time and effort into getting a PhD from a school with little name recognition outside of APL (though Aryeh assured us that UMBC is viewed positively within APL).

Overall, I like the idea of there being AI-safety-knowledgeable people like Aryeh at APL—if APL ends up being a major player in AI, which seems very possible given that it’s one of the largest scientific research labs affiliated with the US federal government, I’d like to have people there who are knowledgeable and interested in AI safety. And according to Aryeh, APL values PhDs very highly, as one might expect for a government-affiliated lab—in particular, if Aryeh wants to lead an AI safety project, he has to have a PhD. Thus, we ended up deciding that it was worth sponsoring Aryeh to get a PhD as an investment in his ability to support AI safety at APL.

James Bernardi ($28,320)

8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI.

In our last payout report, I wrote about our grant to David Reber to work with Michael Cohen on empirical demonstrations of Michael’s work. David has now stopped working with Michael, and Michael is looking to work with James instead (on the same project).

My thoughts on this grant are mostly the same as my thoughts on our previous grant to David to work with Michael, so I’ll just refer the reader to that report rather than repeat the same points here. The only additional nuance is how excited we are about James compared to David. I generally want to let Michael make his own decisions about who he wants to work with rather than second-guess him, so I won’t comment too much on that question. But I will note that James has a solid ML engineering background, and it seems like there is a reasonable chance that the project will end up working out.

Andrei Alexandru ($27,645)

Grant to cover fees for a master's program in machine learning.

We’re funding Andrei Alexandru to pursue an ML master’s from Cambridge. Andrei has previously done mostly general software engineering and is looking to pivot into AI, and ML specifically, in order to eventually work on AI safety. I’m generally in favor of funding most applications of this form—someone aligned wants funding to skill up/get a credential in ML—and my evaluation of this application didn’t end up very far from that prior.

Andrei is clearly interested and engaged in AI safety—he previously received a grant from Open Philanthropy to self-study AI—but isn’t yet very knowledgeable or experienced. I think it’s worth giving him the opportunity to gain that knowledge and experience: AI safety absolutely needs more researchers right now, there’s a reasonable chance Andrei ends up in a position to do good work, and I didn’t see any strongly negative signs suggesting he would end up negatively affecting the field.

AISS Inc ($25,000)

6-month salary for JJ Hepburn to continue providing 1-on-1 support to early AI safety researchers and transition AI Safety Support.

We’re funding JJ Hepburn to continue providing 1-on-1 support to aspiring AI safety researchers, as well as giving him a runway to consider what he should do next. AI Safety Support as an organization currently has an unclear future and this grant is to make sure that JJ is supported regardless of what happens with AI Safety Support.

I think that JJ is currently doing good work supporting people trying to get into AI safety, and I would be disappointed to see him have to get a day job instead, which is what JJ thinks he would likely have to do without funding. I don’t have a strong belief about exactly what JJ should be doing—I think he’d also be a great fit for other ops roles—but he’s clearly motivated, competent, and aligned, and I want to give him time to figure that out for himself.

Dan Hendrycks ($10,000)

A competition for wellbeing trackers / hedonometers.

Update on this grant (September 2022): Dan Hendrycks ended up not taking the grant to work on this project.

We’re funding Dan Hendrycks to run a competition to build a better hedonometer, in the spirit of projects like these two.

Current hedonometers are clearly quite bad—they track things like Twitter sentiment, which is a very weak proxy for actual general happiness. GDP is probably our best current proxy for the happiness of a population, but is also clearly flawed in many ways.

That being said, I think my inside-view case for this project is pretty weak, as I don’t fully understand what would make a better hedonometer useful. The best positive case I could come up with after talking with Dan was just that building a better hedonometer might teach us a lot about how to measure happiness, which could be useful at some point down the road for aligning AI systems or setting public policy.

However, I think the downside risk is pretty minimal; it mostly comes from the lost value of the time people spend on this competition rather than on other projects. I hope that people won’t work on it unless they actually see the project as useful and think it’s worth their time. Overall, I’m willing to defer to Dan here—since I think he has a pretty solid track record—and fund this competition despite not having a strong positive case for it.

Grant reports by Adam Gleave

Dmitrii Krasheninnikov ($85,530)

Financial support to improve Dmitrii’s research productivity during his PhD.

We are providing financial support to Dmitrii, a PhD student at Cambridge advised by David Krueger, to improve his research productivity during his PhD. We have previously made several grants providing supplementary funding to graduate students. I've outlined my rationale for this kind of grant in the May 2021 report, so I will focus on what's unique about Dmitrii's case.

Dmitrii's past research shows a clear interest in working on important topics in AI safety. Moreover, I'm particularly excited that he is being advised by David Krueger (whom the LTFF has also previously supported). I generally expect PhDs to be more successful when there is strong overlap between the research interests of the student and advisor, which I expect to be the case here.

Kai Sandbrink ($4,950)

DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations.

We are funding Kai for his PhD in Experimental Psychology at the University of Oxford. Kai intends to focus on improving reinforcement learning algorithms with approaches inspired by cognitive science. Prior to starting his PhD, Kai completed an MS in Neural Systems and Computation at ETH Zurich and an MA in China Studies at Peking University.

I expect most of the impact from this grant to come from the PhD program being good preparation for Kai's future career. Kai is interested in continuing to perform AI research, and this seems like a good pathway into it given his existing background in neuroscience. Additionally, it is possible his PhD research will increase the robustness of RL agents in ways that directly improve AI safety, although I find the case for this tenuous (and it is not the primary motivation for the research project).

Additionally, Kai has a strong interest in China, and is hoping to spend some of his PhD at Peking University. I generally expect greater collaboration and understanding between Chinese and Western AI researchers to be positive for both sides. Many problems in AI safety are necessarily global: in the extreme case, one actor deploying a powerful but misaligned AI system could cause a global catastrophe. I expect increased communication between AI researchers in different countries to help prevent this, both by enabling the sharing of relevant safety techniques and by helping to develop shared standards or regulation for this technology.

Joe Collman ($35,000)

Research on amplification approaches to AI alignment.

We are funding Joe to continue his independent research into improved understanding of amplification approaches to AI alignment. We previously funded Joe in September 2020 and November 2019. I found his work over the past year to have produced several valuable insights. Most notably, he identified a novel problem in the popular AI safety via debate paradigm. While I think there are more pressing challenges for debate, I am glad this problem is now in the literature.

Joe has also developed a new conceptual approach for debate, based on splitting the debater into separate agents with one behind a veil of ignorance, which he argues results in better equilibria. This approach seems promising to me, but I have been unable to conduct a full assessment, as at the time of evaluation there was no detailed write-up.

After interviewing Joe, I think his proposal has significant potential, and so think his work merits continued funding. However, I do have some hesitation here due to the limited tangible research output produced so far. While Joe has made real progress, I would usually like to see a clearer track record of successful research projects after several years of funding. That said, Joe has been focused on high-risk, high-reward research problems, which by their nature may not produce immediate positive results. Given this, I am inclined to give the benefit of the doubt here, but I intend to place greater weight on concrete output in evaluations of any future renewals.

Brad Saad ($3,500)

Investigating implications of the simulation hypothesis.

We are funding Brad Saad, a philosophy post-doc at Rutgers University, to investigate connections between civilization-scale simulations and existential risk. In particular, the simulation hypothesis proposes that we might be living in a simulation. If true, the chance the simulation is shut off would (from our perspective) be a source of existential risk. While the simulation hypothesis is obviously highly speculative, it seems credible enough to be worth seriously considering the consequences of its being true. I expect this work to involve organizing the literature on this topic and outlining directions for further research, which seems valuable.

Grant reports by Oliver Habryka

Alex Flint ($80,000)

Independent research into the nature of optimization, knowledge, and agency, with relevance to AI alignment.

Alex Flint has been doing independent AI alignment research for about a year, and his work strikes me as among the best that independent researchers have produced. From his application: 

> My research focuses on the frames being used by the AI and AI safety communities to understand and construct intelligent systems. My goal is to identify the frames that we don’t know we’re using and elucidate them, so that we can choose whether to endorse or discard them. I am most interested in the frames that currently underlie technical work in AI and AI safety. [...] I am seeking funding to continue thinking and writing about the frames that underlie our thinking in technical AI alignment. 

AI alignment as a field is in a very early stage, which means that a substantial number of the problems we are trying to solve are very underspecified and often don't have clear formulations. I have found Alex's work to be particularly good at clarifying these problem statements, and teasing out the implicit assumptions made by those who formulated the problems of the field. This strikes me as an important step towards enabling more researchers to gain traction on solving the important problems in AI alignment. 

It also seems like a crucial step to ensure that the field at large still maintains a focus on its core problems, instead of substituting the hard problems of AI alignment with easier ones. (This is a serious risk, because easier problems are easier to make progress on and might let researchers accrue prestige more easily — which makes them tempting to work on, even if the results aren’t as useful.)

Some specific posts of Alex that I thought did a good job here: 

All of these feel to me like they take some high-level problem formulation and pare it down towards something more essential. I do think I sometimes see something important being lost in the process, but that in itself is often useful, since it helps us identify where the important elements lie.

Overall, I am happy to fund Alex to do more independent alignment research and am looking forward to what he can produce with the funding provided by this grant.

Grant reports by Luisa Rodriguez

ALLFED ($3,600)

Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture.

This small grant will allow Adin Richards to spend the fall up-skilling as a longtermist researcher by conducting research on civilizational collapse and resilience. Adin is an undergraduate at Brown University, studying Geology-Biology and Public Health. 

The funding will cover Adin’s wages while he works on a research project under the supervision of Mike Hinge, a researcher at ALLFED. From the application (lightly edited for brevity):

“Nuclear war is perhaps the best known and oldest anthropogenic x-risk, and while we have avoided it for decades, there is a disturbing litany of near misses. Nuclear deproliferation, no-first-use policies, and improved international relations and communications help address the first two layers of defense against extinction from a large nuclear exchange: prevention and response. However, these and other efforts have not brought the risk of a nuclear war starting and escalating to zero, leaving us vulnerable to this as one of several pressing extinction threats, and less attention has been paid to improving resilience. 

Luisa Rodriguez has explored the likelihood that different societal collapse scenarios quickly lead to extinction, and much of the uncertainty in this research centers on the extent of post-catastrophe resource conflict and the ability of initial survivors to restart food production. By exploring the biophysical possibility of continuing agriculture following severe climate change, this project lays the groundwork for contingency plans to bolster resilience, increasing the likelihood that, even if a nuclear conflict arises and escalates, humanity, and possibly some semblance of global civilization, can still survive. […] 

In my work in global food security, I’ve seen how an often-bemoaned source of vulnerability to smaller scale production shocks is a lack of pre-disaster agreements between countries on how to respond to emergencies, such as pacts that would limit sudden export bans or designations of surplus resources for humanitarian purposes. ALLFED hopes to consult with national and international agencies to help institute emergency plans, but first needs a fleshed-out map of the possibility space before these can be drawn up and agreed to. Success in this project will mean arriving at a comprehensive picture of what preparatory actions and disaster responses would maximize the ability of society to continue meeting its nutritional needs through agricultural adaptations, so that decision makers are empowered to implement these.

On a more personal level, … this project represents an opportunity for me to develop and apply skills like cost effectiveness assessment, empirical modelling, going through the process of writing and hopefully publishing policy-relevant findings, and potentially advocating for decision makers to prioritize longtermist considerations in governance.”

I agree with Adin that the ability to restart some level of food production following civilizational collapse is an important consideration in understanding whether civilization is likely to rebound from collapse, though I’m not sure whether the output from this particular project will end up being especially helpful in resolving that uncertainty, or will end up making it to the policymakers ALLFED hopes to reach.

Despite that uncertainty, I’m excited about the idea of Adin getting more research experience and becoming more involved with the longtermist community, as I think he shows promise as a researcher. I spoke with Adin and thought he displayed impressive subject-area knowledge despite being relatively new to those fields. I've also received some good references for him. Finally, his undergraduate studies in Geology-Biology and Public Health at Brown, a top American university, are relevant to his proposed project as well as a broader range of longtermist cause areas he’s exploring, including biosecurity. This makes me think he’s on track to become well-suited for longtermist research more generally.