September 2020: Long-Term Future Fund Grants

Payout Date: September 4, 2020

Total grants: USD 394,200

Number of grantees: 12

Discussion: EA Forum comments

Introduction

In this report, alongside information about our latest grants, we have further news to share about the Long-Term Future Fund.

Changes to Fund management

We welcome two new Fund managers to our team: Adam Gleave and Asya Bergal. Adam is a PhD candidate at UC Berkeley, working on technical AI safety with the Center for Human-Compatible AI (CHAI). Asya Bergal works as a researcher at AI Impacts and writes for the AI Alignment Newsletter.

We decided to seek new Fund managers with strong backgrounds in AI safety and strategy research to increase our capacity to carefully evaluate grants in these areas, especially given that Alex Zhu left the Long-Term Future Fund (LTFF) earlier this year. We expect the number of high-quality grant applications in these areas to increase over time.

The new Fund managers were appointed by Matt Wage, the Fund's chairperson, after a search process with consultation from existing Fund managers and advisors. They both trialed for one grant round (Adam in March, Asya in July) before being confirmed as permanent members. We are excited about their broad-ranging expertise in longtermism, AI safety, and AI strategy.

Adam is a PhD candidate advised by Stuart Russell and has previously interned at DeepMind. He has published several safety-relevant machine learning papers. Over the past six years, he has been deeply involved with the effective altruism community (running an EA group at the University of Cambridge, earning to give as a quantitative trader) and has demonstrated careful judgment on a broad range of longtermist prioritization questions (see, e.g., his donor lottery report).

Asya brings a broad-ranging AI background to the Fund. At AI Impacts, she has worked on a variety of empirical projects and developed novel perspectives on far-ranging strategy questions (see, e.g., this presentation on AI timelines). She has demonstrated technical proficiency in AI, writing summaries and opinions of papers for the AI Alignment Newsletter. More recently, she has been working on hardware forecasting questions at the Centre for the Governance of AI. Asya has also researched a broad range of longtermist prioritization questions, including at the Open Philanthropy Project (where she looked into whole brain emulation, animal welfare, and biosecurity).

Adam and Asya both had very positive external references, and both appear to be esteemed and trustworthy community members. During the trial, they demonstrated careful judgment and deep engagement with the grant applications. We are excited to have them on board and believe their contributions will further improve the quality of our grants.

In other news, Jonas Vollmer recently joined CEA as Head of EA Funds. He previously served as an advisor to the LTFF. In his new role, he will make decisions on behalf of EA Funds and explore longer-term strategy for the entire EA Funds project, including the LTFF. EA Funds may be spun out of CEA's core team within 6–12 months.

Other updates

Long-Term Future Fund

  • We plan to continue to focus on grants to small projects and individuals rather than large organizations. We think the Fund has a comparative advantage in this area: Individual donors cannot easily grant to researchers and small projects, and large grantmakers such as the Open Philanthropy Project are less active in the area of small grants.
  • We would like to take some concrete actions to increase transparency around our grantmaking process, partly in response to feedback from donors and grantseekers. Over the next few months, we plan to publish a document outlining our process and run an “Ask Me Anything” session with our new Fund team on the Effective Altruism Forum.
  • We are tentatively considering expanding into more active grantmaking, which would entail publicly advertising the types of work we would be excited to fund (details TBD).

EA Funds

Earlier this year, EA Funds ran a donor survey eliciting feedback on the Long-Term Future Fund.

  • Overall, the LTFF received a relatively low Net Promoter Score: when asked “How likely is it that you would recommend the Long-Term Future Fund to a friend or colleague?”, donors responded with an average of 6.5 on a scale from 1 to 10. However, some donors gave a low score despite being satisfied with the Fund because their friends and colleagues are generally uninterested in longtermism. In future surveys, EA Funds intends to ask questions that more directly address how donors themselves feel about the LTFF.
  • Some donors were interested in how the Fund addresses conflicts of interest, so EA Funds has been developing a conflict of interest policy and intends to have stricter rules around grants to the personal acquaintances of Fund managers.
  • Some donors were surprised by the Fund’s large number of AI risk-focused grants. While the Fund managers are in favor of these grants, we want to make sure that donors are aware of the work they are supporting. As a result, we changed the EA Funds donation interface such that donors have to opt into supporting their chosen Funds. (Previously, the website suggested a default allocation for each Fund.) EA Funds also plans to offer a donation option focused on climate change for interested donors.
  • Some donors expressed a preference for more legible grants (e.g., to established, reputable institutions). EA Funds will consider offering a separate donation option for those donors; while we are still developing our plans, this might take the form of a separate Fund that primarily supports Open Philanthropy’s longtermist grant recipients.

Grant recipients

Each grant recipient is followed by the size of the grant and a one-sentence description of their project. All of these grants have been paid out.

Grants made during our standard cycle:

  • Robert Miles ($60,000): Creating quality videos on AI safety, and offering communication and media support to AI safety orgs.
  • Center for Human-Compatible AI ($75,000): Hiring a research engineer to support CHAI’s technical research projects.
  • Joe Collman ($25,000): Developing algorithms, environments and tests for AI safety via debate.
  • AI Impacts ($75,000): Answering decision-relevant questions about the future of artificial intelligence.
  • Alexis Carlier ($5,000): Surveying experts on AI risk scenarios and working on other projects related to AI safety.
  • Gavin Taylor ($30,000): Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral.
  • Center for Election Science ($50,000): Supporting the use of better voting methods in U.S. elections.
  • Charlie Rogers-Smith ($7,900): Supporting research and job applications related to AI alignment.

Off-cycle grants:

  • Claudia Shi ($5,000): Organizing a “Human-Aligned AI” event at NeurIPS.
  • Gopal Sarma ($5,000): Organizing a workshop aimed at highlighting recent successes in the development of verified software.
  • Alex Turner ($30,000): Understanding when and why proposed AI designs seek power over their environment.
  • Cambridge Summer Programme in Applied Reasoning (CaSPAR) ($26,300): Organizing immersive workshops on meta skills and x-risk for STEM students at top universities.

Grant reports

Oliver Habryka

Robert Miles ($60,000)

Creating quality videos on AI safety, and offering communication and media support to AI safety orgs.

We’ve funded Rob Miles in the past, and since Rob’s work has continued to find traction and maintain a high quality bar, I am viewing this mostly as a grant renewal. Back then, I gave the following rationale for the grant:

The videos on [Rob's] YouTube channel pick up an average of ~20k views. His videos on the official Computerphile channel often pick up more than 100k views, including for topics like logical uncertainty and corrigibility (incidentally, a term Rob came up with).

More things that make me optimistic about Rob’s broad approach:

  • He explains that AI alignment is a technical problem. AI safety is not primarily a moral or political position; the biggest chunk of the problem is a matter of computer science. Reaching out to a technical audience to explain that AI safety is a technical problem, and thus directly related to their profession, is a type of ‘outreach’ that I’m very happy to endorse.
  • He does not make AI safety a politicized matter. I am very happy that Rob is not needlessly tribalising his content, e.g. by talking about something like “good vs bad ML researchers”. He seems to simply portray it as a set of interesting and important technical problems in the development of AGI.
  • His goal is to create interest in these problems from future researchers, and not to simply get as large of an audience as possible. As such, Rob’s explanations don’t optimize for views at the expense of quality explanation. His videos are clearly designed to be engaging, but his explanations are simple and accurate. Rob often interacts with researchers in the community (at places like DeepMind and MIRI) to discuss which concepts are in need of better explanations. I don’t expect Rob to take unilateral action in this domain.

Rob is the first skilled person in the X-risk community working full-time on producing video content. Being the very best we have in this skill area, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas).

Since then, the average views on his videos appear to have quintupled, usually eclipsing 100k views on YouTube. While I have a lot of uncertainty about what level of engagement those views represent, it would not surprise me if more than 15% of people introduced to the topic of AI alignment in the last year discovered it through Rob’s YouTube channel. This would be a substantial figure, and I also consider Rob’s material one of the best ways to be introduced to the topic (in terms of accurately conveying what the field is about).

In most worlds where this grant turns out to be bad, it is because it is currently harmful for the field of AI alignment to grow rapidly: rapid growth might make the field harder to coordinate, cause more bad ideas to become popular, or lead too many people to join who don't have sufficient background or talent to make strong contributions. I think it is relatively unlikely that we are in that world, and I continue to think that the type of outreach Rob is doing is quite valuable, but I still put at least a 5% probability on it being bad for the AI alignment field to grow right now.

I trust Rob to think about these considerations and to be careful about how he introduces people to the field; thus, I expect that if we were to end up in a world where this kind of outreach is more harmful than useful, Rob would take appropriate action.

Center for Human-Compatible AI ($75,000)

Hiring a research engineer to support CHAI’s technical research projects.

Over the last few years, CHAI has hosted a number of people who I think have made very high-quality contributions to the AI alignment problem, most prominently Rohin Shah, who has been writing and updating the AI Alignment Newsletter and has also produced a substantial number of other high-quality articles, like this summary of AI alignment progress in 2018-2019.

Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin accounts for a large fraction of CHAI's impact in my mind.

I have read a number of papers and articles from other CHAI grad students, and I think the overall approach I see most of them taking has substantial value. However, I maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm, which, at least in the past, hasn't really provided any space for what I consider the most valuable safety work (though I think most other members of the Fund don't share my skepticism). I don't have the space in this report to fully explain where that skepticism comes from, so what follows should only be seen as a very cursory exploration of my thoughts.

A concrete example of the problems I have seen (chosen for its simplicity more than its importance): on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. Often, they also had to add material that seemed likely to confuse readers about the paper's purpose. One concrete class of examples: adding empirical simulations of scenarios whose outcome is trivially predictable, where specifying the scenario adds substantial unnecessary complexity to the paper while distracting from the generality of the overall arguments.

Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. As I have expressed in the past (and elsewhere in this report), I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way.

On the other hand, since CHAI appears to have substantially amplified Rohin's ability to produce work, I am somewhat optimistic that there are more people whose work is amplified by the existence of CHAI, even if I am less familiar with their work, and I am also reasonably optimistic that CHAI will be able to find other contributors as good as Rohin. I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell, who obviously has a very strong track record in terms of general research output. However, my sense is that marginal funding to CHAI is unlikely to increase Stuart's output in particular (and might in fact decrease it, since managing an organization takes time away from research).

While I evaluated this funding request primarily as unrestricted funding to CHAI, the specific project that CHAI is requesting money for also seems quite reasonable to me. Given the prosaic nature of a lot of CHAI's AI alignment work, it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work.

Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I will change my mind on the level of positive impact I'd expect from future grants like this. However, this uncertainty applies less to the other Fund members who were also in favor of this grant, so I don't think it is much evidence about how the LTFF will think about future grants to CHAI.

(Recusal note: Due to being a grad student at CHAI, Adam Gleave recused himself from the discussion and voting surrounding this grant.)

Adam Gleave

Joe Collman ($25,000)

Developing algorithms, environments and tests for AI safety via debate.

Joe was previously awarded $10,000 for independent research into extensions to AI safety via debate. We have received positive feedback regarding his work and are pleased to see he has formed a collaboration with Beth Barnes at OpenAI. In this round, we have awarded $25,000 to support Joe's continued work and collaboration in this area.

Joe intends to continue collaborating with Beth to facilitate her work in testing debate in human subject studies. He also intends to develop simplified environments for debate, and to develop and evaluate ML algorithms in these environments.

In general, I apply a fairly high bar to funding independent research, as I believe most people are more productive working for a research organization. In this case, however, Joe has demonstrated an ability to make progress independently and forge collaborations with established researchers. I hope this grant will enable Joe to further develop his skills in the area, and to produce research output that can demonstrate his abilities to potential employers and/or funders.

AI Impacts ($75,000)

Answering decision-relevant questions about the future of artificial intelligence.

AI Impacts is a nonprofit organization (fiscally sponsored by MIRI) investigating decision-relevant questions about the future of artificial intelligence. Their work has influenced, and continues to influence, my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. Notable recent output includes an interview series on reasons why beneficial AI may be developed "by default" and continued work on examples of discontinuous progress.

I would characterize much of AI Impacts' research as addressing questions that are fairly obvious to look into, but which, surprisingly, no one else has investigated. This is partly because their research is often secondary (summarizing relevant existing sources) and interdisciplinary, both of which are under-incentivized in academia. Choosing the right questions to investigate also requires considerable skill and familiarity with AI research.

Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow.

We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable).
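
As a minimal illustration of the "fair share" heuristic described above, the sketch below uses purely hypothetical numbers (neither the Fund's share of the funding landscape nor AI Impacts' exact budget is given in this report); the idea is simply that the Fund's contribution scales with its share of the relevant funding landscape.

```python
# Minimal sketch of the "fair share" heuristic described above.
# All figures are hypothetical; they are not actual LTFF or AI Impacts numbers.

def fair_share_grant(org_annual_budget: float, fund_share_of_landscape: float) -> float:
    """Grant size if the Fund covers a fraction of the organization's budget
    equal to the Fund's share of the overall funding landscape."""
    return org_annual_budget * fund_share_of_landscape

# Hypothetical example: a $400,000 budget and a 20% landscape share
# would suggest a grant of about $80,000.
print(fair_share_grant(400_000, 0.20))  # 80000.0
```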

(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)

Asya Bergal

Alexis Carlier ($5,000)

Surveying experts on AI risk scenarios and working on other projects related to AI safety.

We awarded Alexis $5,000, primarily to support his work on a survey aimed at identifying the arguments and related beliefs motivating top AI safety and governance researchers to work on reducing existential risk from AI.

I think the views of top researchers in the AI risk space have a strong effect on the views and research directions of other effective altruists. But as of now, only a small and potentially unrepresentative set of views exists in written form, and many are stated in imprecise ways. I am hopeful that a widely-taken survey will fill this gap and have a strong positive effect on future research directions.

I thought Alexis's previous work on the principal-agent literature and AI risk was useful and thoughtfully done, and showed that he was able to collaborate with prominent researchers in the space. This collaboration, as well as details of the application, suggested to me that the survey questions would be written with lots of input from existing researchers, and that Alexis was likely to be able to get widespread survey engagement.

Since recommending this grant, I have seen the survey circulated and taken it myself. I thought it was a good survey and am excited to see the results.

Gavin Taylor ($30,000)

Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral.

We awarded Gavin $30,000 to work on a computational study assessing the feasibility of using a light-to-vibrations (L2V) mechanism as a targeted antiviral. Light-to-vibrations is an emerging technique that could destroy viruses by vibrating them at their resonant frequency using tuned pulses of light. In an optimistic scenario, this study would identify a set of viruses that are theoretically susceptible to L2V inactivation. Results would be published in academic journals and would pave the way for further experimental work, prototypes, and eventual commercial production of L2V antiviral equipment. L2V techniques could be generalizable and rapidly adaptable to new pathogens, which would provide an advantage over other techniques used for large-scale control of future viral pandemics.

On this grant, I largely deferred to the expertise of colleagues working in physics and biorisk. My ultimate take after talking to them was that the described approach was plausible and could meaningfully affect the course of future pandemics, although others have also recently started working on L2V approaches.

My impression is that Gavin's academic background is well-suited to doing this work, and I received positive personal feedback on his competence from other EAs working in biorisk.

My main uncertainty in recommending this grant was how the LTFF should compare relatively narrow biorisk interventions with other things we might fund. I ultimately decided that this project was worth funding, but I still don't have a good way of thinking about this question.

Matt Wage

Center for Election Science ($50,000)

Supporting the use of better voting methods in U.S. elections.

This is an unrestricted grant to the Center for Election Science (CES). CES works to improve US elections by promoting approval voting, a voting method in which voters can select as many candidates as they like (as opposed to the traditional method, where voters can select only one candidate).

Academic experts on voting theory widely consider approval voting to be a significant improvement over our current voting method (plurality voting), and our understanding is that approval voting on average produces outcomes that better reflect what voters actually want by preventing issues like vote splitting. I think that promoting approval voting is a potentially promising way to improve institutional decision-making within government.

CES is a relatively young organization, but so far they have a reasonable track record. Previously, they passed a ballot initiative to adopt approval voting in the 120,000-person city of Fargo, ND, and are now repeating this effort in St. Louis. Their next goal is to get approval voting adopted in bigger cities and then eventually states.

Charlie Rogers-Smith ($7,900)

Supporting research and job applications related to AI alignment.

Charlie applied for funding to spend a year doing research with Jan Brauner, Sören Mindermann, and their supervisor Yarin Gal (all at Oxford University), while applying to PhD programs to eventually work on AI alignment. Charlie is currently finishing a master’s in statistics at Oxford and is also participating in the Future of Humanity Institute’s Summer Research Fellowship.

We think Professor Gal is in a better position to evaluate this proposal (and our understanding is that his group is capable of providing funding for this themselves), but it will take some time for this to happen. Therefore, we decided to award Charlie a small “bridge funding” grant to give him time to try to finalize the proposal with Professor Gal or find an alternative position.

Off-cycle grants

The following grants were made outside of our regular schedule, and weren’t included in previous payout reports, so we’re including them here.

Helen Toner

Claudia Shi ($5,000)

Organizing a “Human-Aligned AI” event at NeurIPS.

Grant date: November 2019

Claudia Shi and Victor Veitch applied for funding to run a social event themed around “Human-aligned AI” at the machine learning conference NeurIPS in December 2019. The aim of the event was to provide a space for NeurIPS attendees who care about doing high-impact projects and/or about long-term AI safety to gather and discuss these topics.

I believe that holding events like this is an easy way to do a very basic form of “field-building,” by making it easier for machine learning researchers who are interested in longtermism and related topics to find each other, discuss their work, and perhaps work together in the future or change their research plans. Our funding was mainly used to cover catering for the 100-person event, which we hoped would make the event more enjoyable for participants and therefore more effective in facilitating discussions and connections. After the event, the organizers had $1,863 left over, which they returned to the Fund.

Matt Wage

Gopal Sarma ($5,000)

Organizing a workshop aimed at highlighting recent successes in the development of verified software.

Grant date: January 2020

Gopal applied for a grant to run a workshop called "Formal Methods for the Informal Engineer" (FMIE) at the Broad Institute of MIT and Harvard, on the topic of formal methods in software engineering. More information on the workshop is here.

We made this grant because we know a small set of AI safety researchers who are optimistic about formal verification techniques being useful for AI safety, and we thought it was a relatively inexpensive way to support progress in that area.

Unfortunately, the workshop has now been postponed because of COVID-19.

Oliver Habryka

Alex Turner ($30,000)

Understanding when and why proposed AI designs seek power over their environment.

Grant date: January 2020

We previously made a grant to Alex Turner at the beginning of 2019. Here is what I wrote at the time:

My thoughts and reasoning

I'm excited about this because:

  • Alex's approach to finding personal traction in the domain of AI Alignment is one that I would want many other people to follow. On LessWrong, he read and reviewed a large number of math textbooks that are useful for thinking about the alignment problem, and sought public input and feedback on what things to study and read early on in the process.

  • He wasn't intimidated by the complexity of the problem, but started thinking independently about potential solutions to important sub-problems long before he had "comprehensively" studied the mathematical background that is commonly cited as being the foundation of AI Alignment.

  • He wrote up his thoughts and hypotheses in a clear way, sought feedback on them early, and ended up making a set of novel contributions to an interesting sub-field of AI Alignment quite quickly (in the form of his work on impact measures, on which he recently collaborated with the DeepMind AI Safety team).

Potential concerns

These intuitions, however, are a bit in conflict with some of the concrete research that Alex has actually produced. My inside views on AI alignment make me think that work on impact measures is very unlikely to result in much concrete progress on what I perceive to be core AI alignment problems, and I have talked to a variety of other researchers in the field who share that assessment. I think it's important that this grant not be viewed as an endorsement of the concrete research direction that Alex is pursuing, but only as an endorsement of the higher-level process that he has been using while doing that research.

As such, I think it was a necessary component of this grant that I have talked to other people in AI alignment whose judgment I trust, who do seem excited about Alex's work on impact measures. I think I would not have recommended this grant, or at least this large of a grant amount, without their endorsement. I think in that case I would have been worried about a risk of diverting attention from what I think are more promising approaches to AI Alignment, and a potential dilution of the field by introducing a set of (to me) somewhat dubious philosophical assumptions.

Overall, while I try my best to form concrete and detailed models of the AI alignment research space, I don't currently devote enough time to it to build detailed models that I trust enough to put very large weight on my own perspective in this particular case. Instead, I am mostly deferring to other researchers in this space that I do trust, a number of whom have given positive reviews of Alex's work.

In aggregate, I have a sense that the way Alex went about working on AI alignment is a great example for others to follow, I'd like to see him continue, and I am excited about the LTF Fund giving out more grants to others who try to follow a similar path.

I've been following Alex's work closely since then, and overall have been quite happy with its quality. I still have high-level concerns about his approach, but have over time become more convinced that Alex is aware of some of the philosophical problems that work on impact measures seems to run into, and so am more confident that he will navigate the difficulties of this space correctly. His work also updated me on the tractability of impact-measure approaches, and though I am still skeptical, I am substantially more open to interesting insights coming out of an analysis of that space than I was before. (I think it is generally more valuable to pursue a promising approach that many people are skeptical about, rather than one already known to be good, because the former is much less likely to be replaceable).

I've also continued to get positive feedback from others in the field of AI alignment about Alex's work, and have had multiple conversations with people who thought it made a difference to their thinking on AI alignment.

One other thing that has excited me about Alex's work is his pedagogical approach to his insights. Researchers frequently produce ideas without paying attention to how understandable those ideas are to other people, enshrining formulations that end up being clunky, unintuitive, or unwieldy, along with explanations that don't actually explain very well. Over time, this poor communication often results in substantial research debt. Alex, on the other hand, has put a large amount of effort into explaining his ideas clearly and approachably, most notably in his "Reframing Impact" sequence on the AI Alignment Forum.

This grant funds living expenses and tuition, helping Alex to continue his current line of research during his graduate program at Oregon State.

Cambridge Summer Programme in Applied Reasoning (CaSPAR) ($26,300)

Organizing immersive workshops on meta skills and x-risk for STEM students at top universities.

Grant date: January 2020

From the application:

We want to build on our momentum from CaSPAR 2019 by running another intensive week-long summer camp and alumni retreat for mathematically talented Cambridge students in 2020, and increase the cohort size by 1/3 from 12 to 16.

At CaSPAR, we attract young people who are talented, altruistically motivated and think transversally to show us what we might be missing. We find them at Cambridge University, in mathematics and adjacent subjects, and funnel them via our selection process to our week-long intensive summer camp. After the camp, we welcome them to the CaSPAR Alumni. In the alumni we further support their plan changes/ideas with them as peers, and send them opportunities at a decision-relevant time of their lives.

CaSPAR is a summer camp for Cambridge students that tries to cover a variety of material related to rationality and effective altruism. This grant was originally intended for CaSPAR 2020, but since COVID-19 has made most in-person events like this infeasible, the grant is instead intended for CaSPAR 2021.

I consider CaSPAR to be in a similar reference class as SPARC or ESPR, two programs with somewhat similar goals that have been supported by other funders in the long-term future space. I currently think interventions in this space are quite valuable, and have been impressed with the impact of SPARC; multiple very promising people in the long-term future space cite it as the key reason they became involved.

The two main factors I looked at while evaluating CaSPAR were its staff composition and the references we received from a number of people who worked with the CaSPAR team or attended their 2019 event. Both seemed quite solid to me: the team consists of people I think are quite competent and have the right skills for a project like this, and the references we received were positive.

My biggest hesitation about this grant concerns the size and length of the program. Compared to SPARC or ESPR, CaSPAR is shorter and has substantially fewer attendees. From my experience with those programs, both their size and their length seemed integral to their impact (I think there's a sweet spot around 30 participants: enough people to take advantage of network effects and form lots of connections, while still maintaining a high-trust atmosphere).