November 2020: Long-Term Future Fund Grants

Payout Date: November 29, 2020

Total grants: USD 505,000

Number of grantees: 11

Introduction

  • As part of its November 2020 grant application round, the Long-Term Future Fund supported ten projects that we expect to positively influence the long-term trajectory of civilization, with a total of up to $355,000. We also made an additional off-cycle grant of up to $150,000.

  • Compared to our previous round, we received three times as many total applications and twice as many high-quality applications (as measured by average fund manager score). We've added an optional referral question to our application form to help us understand the source of this increase. Our current guess is that it's largely a result of better outreach, particularly through the 80,000 Hours job board and newsletter.

  • To help improve fund transparency, we've written a document describing our overall process for making grants. We'll also be running an 'Ask Me Anything' session on the Effective Altruism Forum from December 4th to 7th, where we'll answer any questions people have about the fund.

  • We received feedback this round that our payout reports might discourage individuals from applying if they don't want their grant described in detail. We encourage applicants in this position to apply anyway. We are very sympathetic to circumstances in which a grantee might be uncomfortable with a detailed public description of their grant. We run all of our grant reports by grantees and think carefully about what information to include, aiming to be as transparent as we can while still respecting grantees' preferences. If considerations around reporting make it difficult for us to fund an application, we can refer applicants to private donors who don't publish payout reports. We might also be able to make an anonymous grant, as we did in this round.

Highlights

Our grants include:

  • An up-to-$150,000 grant to Richard Ngo to do a PhD at Cambridge University on understanding the analogy between the development of human intelligence and artificial general intelligence (AGI). This grant is part of our efforts to reduce potential risks from transformative artificial intelligence. Richard has a strong background for this work: He previously worked at DeepMind and completed a Bachelor's degree in computer science and philosophy at the University of Oxford, and a Master's degree with distinction in computer science at the University of Cambridge. He has also released impressive work in this area before. Human-AGI analogies form the foundation of many researchers' current beliefs about future AI systems; further clarifying them is likely to bring major benefits to the field of AI safety research.

  • A $3,579 grant to Maximilian Negele to investigate the historical longevity of institutions, in order to better understand the feasibility of setting up charitable foundations that last hundreds of years. This grant is part of our efforts to set up institutions that protect future generations. Existing work on patient philanthropy relies on the ability to transfer wealth and resources into the future; understanding how likely an institution is to be able to do this will be hugely informative for understanding whether to spend long-termist resources now or later.

Grant recipients

See below for a list of grantees' names, grant amounts, and project descriptions. Most of the grants have been accepted, but in some cases, the final grant amount is still uncertain.

Grants made during this grant application round:

  • Anonymous (up to $40,000): Supporting a PhD student's career in technical AI safety.
  • David Bernard (up to $55,000): Testing how the accuracy of impact forecasting varies with the timeframe of prediction.
  • Lee Sharkey ($44,668): Researching methods to continuously monitor and analyse artificial agents for the purpose of control.
  • Maximilian Negele ($3,579): Investigating the historical longevity of institutions in order to better understand the feasibility of setting up charitable foundations that last hundreds of years.
  • Mrinank Sharma ($9,798): Supporting an AI PhD student's machine learning research on the effects of removing and then reimplementing COVID-19 restrictions.
  • Nick Hollman ($24,000): Understanding and advising legal practitioners on the long-term challenges of AI in the judiciary.
  • Nuño Sempere ($41,337): Conducting independent research on forecasting and optimal paths to improve the long-term future.
  • Sam Clarke ($4,455): Supporting Jess Whittlestone's work on a project regarding 'mid-term AI impacts'. 
  • Tamara Borine, Cambridge Summer Programme in Applied Reasoning (CaSPAR) ($32,660): Organizing immersive workshops for STEM students at top universities.
  • Vanessa Kosoy ($100,000): Creating a mathematical theory of AGI and AGI alignment, based on the Learning-Theoretic AI Alignment Research Agenda.

Off-cycle grants:

  • Richard Ngo (up to $150,000): Doing a PhD on understanding the analogy between the development of human intelligence and AGI.

Grant reports

Grant reports by Oliver Habryka

David Bernard: up to $55,000

Testing how the accuracy of impact forecasting varies with the timeframe of prediction.

From the application (lightly edited for clarity):

A common objection to longtermism is that the effects of our actions on the long-term future are essentially impossible to predict. Thus, despite the huge potential value in the future, extreme uncertainty around long-term impacts means the expected value of our options is mostly determined by their short-run impacts. There is some theoretical work by EAs on this topic, notably Tarsney (2020), but empirical evidence is thin and has two shortcomings for longtermists.

Firstly, Tetlock-style forecasting is about 'state' forecasts (what the world will look like in the future) rather than 'impact' forecasts (the difference between what would happen if we take an action and what would happen if we did not take that action). Impacts are more important than states for altruists trying to improve the world. See here for graphical clarification on the state-impact distinction. Secondly, Tetlock shows that forecasting accuracy degrades quickly over a 3-5 year timeframe, but we care about longer timescales.

I will improve on existing evidence in two ways:

1. Look at impact forecasts, rather than state forecasts

2. Look at timescales from 1-20 years.

I will collect forecasts of impacts from randomised controlled trials in social sciences where impacts are easy to observe and long-run follow-ups are often conducted, and then study the relationship between timescale and accuracy. This is shorter than the timescales longtermists tend to care about, but still provides empirical evidence on the relationship between time and accuracy for impact forecasts.

I think Tetlock's research into forecasting has been quite valuable, and it has influenced a number of important decision-makers in the long-term future space. But the biggest problem with that research is that it has only been evaluated on fairly short timescales, making it very unclear what its implications are for forecasts more than five years out. This research proposal tries to address that gap by studying forecasts over longer timescales, and by focusing on impact forecasts instead of state forecasts.

I don't know whether studying forecasts of impacts from randomized controlled trials is the best way to go about this, and I could imagine David just not finding much evidence, but the overall question of long-term forecasting ability strikes me as quite important to a lot of work on improving the long-term future (if we want to improve the distant future, it's crucial that we learn how to model it).
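To make the horizon-versus-accuracy analysis above concrete, here is a minimal sketch in Python -- entirely our illustration, with hypothetical data, field names, and error metric (mean absolute error), none of which are drawn from David's application:

```python
# Minimal illustrative sketch (hypothetical data and field names) of comparing
# impact-forecast accuracy across forecast horizons.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class ImpactForecast:
    study_id: str            # which RCT the forecast refers to
    horizon_years: int       # years between the forecast and the follow-up
    predicted_effect: float  # forecasted treatment effect (e.g., in standard deviations)
    observed_effect: float   # treatment effect measured at follow-up

def error_by_horizon(forecasts):
    """Mean absolute forecast error, grouped by forecast horizon."""
    buckets = defaultdict(list)
    for f in forecasts:
        buckets[f.horizon_years].append(abs(f.predicted_effect - f.observed_effect))
    return {h: mean(errors) for h, errors in sorted(buckets.items())}

# Toy usage with invented numbers:
forecasts = [
    ImpactForecast("rct_a", 1, 0.20, 0.18),
    ImpactForecast("rct_a", 10, 0.15, 0.02),
    ImpactForecast("rct_b", 1, 0.05, 0.07),
    ImpactForecast("rct_b", 20, 0.10, -0.03),
]
print(error_by_horizon(forecasts))  # roughly {1: 0.02, 10: 0.13, 20: 0.13}
```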

A similar line of research that I've referenced multiple times in the last few years is Stuart Armstrong's evaluation of predictions made by Ray Kurzweil in 1999 -- once 10 years after the predictions, and once 20 years after the predictions.

David is doing a PhD at the Paris School of Economics. We've also received some good references for David from researchers at the Global Priorities Institute (GPI), which were important for my assessment of this grant. David also gave an EA Global talk (plus a paper) on related topics that seemed high-quality to me (and provided substantial evidence of his skill in this area). His PhD research also covers similar topics, making me think he is, overall, well-suited for this kind of research.

David was also recently hired part-time by Rethink Priorities. As a result of that salary, he may accept less money from the fund than he originally proposed in his application.

Cambridge Summer Programme in Applied Reasoning (CaSPAR): $32,660

Organizing immersive workshops for STEM students at top universities.

We gave CaSPAR a grant last year. In our report for that grant, I wrote:

CaSPAR is a summer camp for Cambridge students that tries to cover a variety of material related to rationality and effective altruism. This grant was originally intended for CaSPAR 2020, but since COVID has made most in-person events like this infeasible, this grant is instead intended for CaSPAR 2021.

I consider CaSPAR to be in a similar reference class to SPARC and ESPR, two programs with somewhat similar goals that have been supported by other funders in the long-term future space. I currently think interventions in this space are quite valuable, and I have been impressed with the impact of SPARC; multiple very promising people in the long-term future space cite it as the key reason they became involved.

The two primary variables I looked at while evaluating CaSPAR were its staff composition and the references we received from a number of people who had worked with the CaSPAR team or attended its 2019 event. Both seemed quite solid to me. The team consists of people I think are pretty competent and have the right skills for a project like this, and the references we received were positive.

My biggest hesitation about this grant concerns the size and length of the program. Compared to SPARC or ESPR, CaSPAR is shorter and has substantially fewer attendees. From my experience with those programs, both size and length seemed integral to their impact (I think there's a sweet spot around 30 participants -- enough people to take advantage of network effects and form lots of connections, while still maintaining a high-trust atmosphere).

I expect that this grant will eventually lead to a greater number of talented researchers working to improve the long-term future. CaSPAR's team plans to run the camp originally planned for 2020 in 2021, and this grant will go towards the 2022 cohort (which was originally planned for 2021). There hasn't been another camp since we made the last grant, so we haven't received much additional evidence, and my assessment is mostly unchanged. We have, however, received some additional positive references, which makes me reasonably confident that this continues to be a good use of resources.

This grant also provides additional funding to help the organizers scale up the camp scheduled for 2021 by admitting graduate students in addition to undergraduates.

Vanessa Kosoy: $100,000

Creating a mathematical theory of AGI and AGI alignment, based on the Learning-Theoretic AI Alignment Research Agenda.

This grant is to support Vanessa for independent research on her learning-theoretic research agenda for AI alignment. Vanessa has been a long-time contributor to AI alignment research, with dozens of posts and hundreds of comments on the AI Alignment Forum.

I sadly haven't had the opportunity to come to a full understanding of Vanessa's research agenda myself, so I primarily relied on external references to evaluate this grant. We received strongly positive references from researchers at MIRI and FHI, as well as from a few other independent researchers in the field. Not all of them consider her research the single most promising open direction in AI alignment, but almost all agreed that it is an angle worth pursuing, and that Vanessa should have the funding to do more research in this space.

I am familiar with and have benefited from Vanessa's research on quantilization and IRL. She has also written many critiques and comments on the research of other contributors in AI alignment. Examples include this recent comment on Andrew Critch's overview of a number of research areas and their relevance to AI existential risk, these excellent reviews of a number of articles during the 2018 LessWrong Review, and many other critiques and comments.

Overall, of the grants we are making this round, this is the one I am most excited about. While I think that technical work on existential risks from AI is currently one of the most valuable interventions to support with additional funding, I've historically found it very hard to find researchers in this space whom I am excited about. My sense is that most people find it very difficult to get traction in this space and to orient on the seemingly insurmountable problems we are facing. Vanessa's research seems like a trailhead that could open up further avenues of research, and it strikes right at the heart of the core problems in technical AI alignment research.

Grant reports by Adam Gleave

Anonymous: up to $40,000

Supporting a PhD student's career in technical AI safety.

We are supporting a PhD student's career in technical AI safety with up to $40,000. The salary provided by the applicant's university was low enough to interfere with their research, both directly (e.g., being unable to afford a new laptop) and indirectly (by increasing the risk of burnout). We feel that the grantee has a strong track record in technical AI research and has demonstrated a clear interest in pursuing safety research long-term. The grantee is also applying for funding from other sources, and will return some of this grant if their other applications are successful.

The grantee expressed a strong personal preference against a personalized public report about this grant. With their permission, we've decided to fund it and report it anonymously, because we think the grant is valuable and we'd like to signal this possibility to other potential applicants. The grant was made with unanimous assent from all fund managers, none of whom had a conflict of interest, but we realize that anonymous grants make accountability to donors harder. To address this, we asked Rohin Shah, a well-known AI safety researcher who is unaffiliated with the fund, to review this grant. He felt the grant was clearly above the bar for funding. His main concern was that the grantee might not work on safety after graduating, but he felt this was a relatively minor risk.

This is our first time making an anonymous grant, and we would appreciate feedback. Should we make anonymous grants in the future? In this case, we asked someone unaffiliated with the fund (Rohin Shah) to review the grant; are there additional measures we should take to maintain accountability?

Grant reports by Asya Bergal

Lee Sharkey: $44,668

Researching methods to continuously monitor and analyse artificial agents for the purpose of control.

Lee applied to spend the year before his PhD researching methods for monitoring the internal processes of artificial agents while they're running, which would allow us to detect and respond to undesirable internal behavior. His work will consist of a literature review of existing approaches, followed by attempts to make progress on potentially promising methods. This research direction has a lot in common with work on the interpretability of neural networks, though it additionally focuses on methods that can be used to understand agent internals in real time.

Lee was previously a Master's student at ETH Zurich studying Neural Systems and Computation. During his Master's, Lee worked with Valerio Mante, who was doing work on understanding recurrent neural network models of the brain using an interpretability method called dynamical systems analysis. I'm not sure that dynamical systems analysis in particular is likely to be a scalable solution to real-time monitoring, but I'm interested in supporting Lee in doing more work in this area and think he will benefit from his connections to existing academic researchers. Lee also shared with the fund some preliminary work he did on a related research direction; I thought the work was interesting and showed that he was likely to make progress working as an independent reinforcement learning (RL) researcher.

I also shared Lee's proposal with a few AI safety researchers, including people working on interpretability; overall, they thought this was a potentially promising research direction.

Mrinank Sharma: $9,798

Supporting an AI PhD student's research on the effects of removing non-pharmaceutical interventions for COVID-19.

Mrinank is a first-year PhD student in the Autonomous Intelligent Machines and Systems (AIMS) CDT at Oxford University and a DPhil affiliate at the Future of Humanity Institute. Mrinank was invited to work with the Imperial College London COVID Response team on machine learning (ML) research estimating the effects of removing and then reimplementing COVID-19 restrictions (e.g., opening schools and businesses, and then closing them again). This grant supports his living expenses while he takes a break from his studies to work on this.

I think this research itself is unlikely to have strong effects on the long-term future, but I think it's a good opportunity to support the career of an up-and-coming ML and AI safety researcher. Mrinank has expressed a strong interest in AI safety (e.g. by running the AI safety group at Oxford), and seems committed to using his ML career to do good from a longtermist perspective. My impression from talking to others is that COVID research is an unusually effective way of building credibility in ML right now, and Mrinank has been extremely successful with related work so far -- his paper on the robustness of effectiveness estimates for non-pharmaceutical interventions was accepted as a Spotlight Talk at NeurIPS 2020, his paper on the effectiveness of non-pharmaceutical interventions is currently under review at a top-tier journal (and has an Altmetric score of 587), and he was invited to present his work to the Africa CDC.

In general, I think the fund should be hesitant to recommend grants that could likely get funding from sources focused on near-term impact. In this case, the opportunity to do this research seemed very short-lived, and I wasn't aware of any such sources that would fund Mrinank on short notice (applications for fastgrants.org are paused).

Richard Ngo: up to $150,000

Doing a PhD on understanding the analogy between the development of human intelligence and AGI.

This grant was made prior to our standard cycle.

This grant covers the tuition and living expenses for Richard Ngo's 3-year PhD in the philosophy of machine learning at Cambridge University. Richard applied to do this PhD after the standard funding deadline for PhD students had passed, so he was not able to acquire university funding. This grant enabled him to start the PhD now rather than wait another year for university funding to be available. Richard is also applying for funding from other sources, and will return some of this grant if his other applications are successful.

Richard previously worked as a research engineer at DeepMind, and his continued work there would have been primarily engineering-focused; I was excited to enable him to work on AI philosophy full-time instead. I've been extremely impressed with Richard's previous philosophical work -- in particular, his report on AGI safety from first principles is, to my knowledge, the best articulation of why AGI might pose an existential threat. He also has a strong background in philosophy and AI: a BA in Computer Science and Philosophy from Oxford, and an MPhil in machine learning from Cambridge. His proposed PhD topic is the analogy between the development of human intelligence and AGI.

From his application:

I think this is one of the most crucial foundational issues in AI safety. Most of our current beliefs about future AI systems (e.g. that they'll be generally intelligent and pursue large-scale goals) are based on loose analogies to humans. However, early arguments for why AI might be dangerous have been critiqued for relying on formalisms like utility maximisation in inapplicable ways (e.g. by Rohin Shah, Eric Drexler, and myself). Instead, I believe that taking a perspective closer to cognitive science and evolutionary analysis will allow us to generate concepts which, while less precise, may be more useful for predicting and shaping AGI behaviour.

I'd classify Hubinger et al.'s paper on mesa-optimisation (https://arxiv.org/abs/1906.01820) as a good example of how we can get more clarity about AGI by drawing on the analogy to humans. I'd also note that the foundations of the field are still in flux - a significant number of safety researchers think that inner alignment is the hardest problem, yet we didn't even have a term for it until very recently. So I expect further work on clarifying the conceptual foundations of the field to have major benefits for the research that's being done elsewhere.

I agree with Richard's justification for this work.

Grant reports by Matt Wage

Maximilian Negele: $3,579

Investigating the historical longevity of institutions in order to better understand the feasibility of setting up charitable foundations that last hundreds of years.

This small grant is for Maximilian to work on a research project with Phil Trammell, who works at the Global Priorities Institute at Oxford. The project is to research the historical "longevity and decay of universities, philanthropic foundations, and catholic orders" in order to inform the feasibility of setting up charitable foundations that last for centuries.

I am optimistic about Phil Trammell's research work on patient philanthropy and think this is a relevant topic to research as part of that. Existing work relies on assumptions about being able to transfer wealth and resources into the future; understanding how likely an institution is to be able to do this will be informative for understanding whether to spend longtermist resources now or later.

Nuño Sempere: $41,337

Conducting independent research on forecasting and optimal paths to improve the long-term future.

This grant is for Nuño to do a year of independent research. Nuño will be working with Phil Trammell (of the Global Priorities Institute at the University of Oxford) on philanthropic timing research and Ozzie Gooen (of the Quantified Uncertainty Research Institute) on forecasting research.

We are most optimistic about Nuño's work on philanthropic timing, which is a continuation of the work he already started as a summer research fellow at the Future of Humanity Institute. This work models the optimal allocation of capital for a social movement between direct spending, investment, and movement building, as well as the optimal allocation of labor between direct workers, money earners, and movement builders. Nuño received positive references, particularly from Phil Trammell (who thought this work could be a valuable extension of his own past work on patient philanthropy).

Grant reports by Helen Toner

Nick Hollman: $24,000

Understanding and advising legal practitioners on the long-term challenges of AI in the judiciary.

This funding will support an early project of the Legal Priorities Project (LPP), a new organization that describes its mission as "...to conduct legal research that tackles the world's most pressing problems, [which] currently leads us to focus on the protection of future generations." Nick applied for funding to work with an organization that advises the Indian Supreme Court, to carry out a project researching and eventually advising on the long-term challenges of using AI in judicial systems.

I'm excited about LPP's potential to build out legal priorities research as a new area, and this specific project seems like an interesting approach to try. I think it's especially good for this to be done in collaboration with practitioners (in this case, advisors and/or staff of the Indian Supreme Court), because in my view, the feedback loops enabled by this kind of arrangement make it more likely that the work produced is useful and relevant to the real world. I see this grant as an experiment and as an investment in exploring what types of work LPP can facilitate.

Sam Clarke: $4,455

Supporting Jess Whittlestone's work on a project regarding 'mid-term AI impacts'.

Sam applied for funding to cover relocation expenses from New Zealand to Cambridge, UK, in order to begin his role there as a research assistant supporting Jess Whittlestone's work at the Centre for the Future of Intelligence, which we have supported in the past. The fund generally tries to avoid paying for costs that we would expect employers to cover, to avoid creating bad incentives. In this case, however, Cambridge University apparently only covers moving expenses for contracts over 2 years, and this is a 1-year position. We therefore decided that this was a reasonable use of funds.

Feedback

If you have any feedback, we would love to hear from you. You can submit your thoughts through this form or email us at ealongtermfuture@gmail.com.