Dividing Our Money Between Causes

Note: We're experimenting with writing shorter, more accessible versions of key Open Philanthropy blog posts for the Good Ventures blog. For a more detailed version of this essay, see the original. In this post, "we" refers to Good Ventures and the Open Philanthropy Project, who work as partners.

Good Ventures is a large foundation with a significant amount of funding that we plan to grant out. We're interested in doing as much good, in terms of lives saved or improved, as we can with this funding. This leads to a tough question: What should we support to achieve as much good as possible? This post gives an update on our thinking, but the issues we’re tackling here are complex, and we’re still far from having a final answer. The tentative approach described in this post will hopefully clarify why we continue to work on very different causes, such as pandemic preparedness and U.S. housing policy, and give an early idea of how much money we think we might allocate to each in the future.

Ideally, we'd be able to estimate how much good we can do in each of our selected causes. But to do that, we first need to answer some moral "worldview" questions: Do people who haven't been born yet matter as much as people alive today (such that grants that primarily help future generations become more appealing)? How much should we morally value animals relative to humans? Different answers to these questions yield very different allocation strategies.

If we simply took our best guess on questions like these and allocated our funds based on those assumptions, we might end up putting all of our funding into a single cause. For example, if we decided animals' suffering matters even 10% as much as humans' suffering, we could help a large number of animals at far lower cost than it takes to help humans, and it would be very easy to spend all of our funding helping animals (a toy calculation after the list below makes this concrete). We don't want to focus on a single cause, for a number of reasons we've outlined previously:

  • Giving all of our money to a single cause is likely to hit diminishing returns (giving 10x as much money might only accomplish 2x as much good);
  • Developing staff capacity to work in many causes gives us the ability to adjust our approach as we learn;
  • Building expertise in how to do impact-focused giving across a variety of causes increases our odds of becoming powerful advocates for this broad idea and influencing other donors who share our values; and
  • Like a group of lottery-ticket-buying friends who make a deal to split the winnings, working in a variety of causes is the ethical thing to do: it reflects an agreement that, in the interest of fairness across different value systems, we would have made before we knew we'd have outsized resources.
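
To make the single-cause worry concrete, here is a toy expected-value calculation (a minimal Python sketch; every dollar figure below is invented for illustration, and only the hypothetical 10% moral weight comes from the example above):

```python
# Toy comparison of helping humans vs. animals under a 10% moral weight.
# Both cost figures are made up purely for illustration.

MORAL_WEIGHT_ANIMAL = 0.10      # suppose animal suffering counts 10% as much as human suffering

cost_to_help_one_human = 1_000  # hypothetical dollars to substantially help one human
cost_to_help_one_animal = 1     # hypothetical dollars to substantially help one animal

good_per_dollar_human = 1.0 / cost_to_help_one_human
good_per_dollar_animal = MORAL_WEIGHT_ANIMAL / cost_to_help_one_animal

print(good_per_dollar_animal / good_per_dollar_human)  # -> 100.0
```

Under these made-up numbers, the animal intervention looks 100x more cost-effective even at a 10% weight, so a naive expected-value maximizer would direct every dollar there.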

Instead of concentrating on one cause, we're planning to divide the money into "worldview" buckets, and to imagine that each bucket is a different person with different values. These worldview buckets will then allocate "their" funding to causes and organizations in order to best accomplish "their" goals; the sketch after the list below illustrates the idea.

The three buckets we find most plausible are:

  • A "long-termist" bucket that ascribes very high value to the long-term future and might allocate its money to, for example, reducing the risks of global catastrophes that would threaten the existence of future society.
  • A "near-termist, human-focused" bucket that assesses grants by how well they improve the lives of humans on a relatively shorter time horizon and might allocate its money to a mix of direct aid, policy work, and scientific research to cure today’s worst diseases.
  • A "near-termist, animal-inclusive" bucket that focuses on a similarly short time horizon but ascribes significant moral weight to animals that might allocate its money to policy and science interventions that have animals’ welfare in mind.

To decide how much money to put in each worldview bucket, we’re conducting a number of investigations on issues like the ethics of causing more lives to exist versus improving lives today, which animals suffer in a way that matters, and how to deal with uncertainty about these moral questions. (For more on these investigations, refer to the original post.)

There are no “right” answers to questions like these. Our goal is to understand our own intuitions better, and eventually to allocate funding in a way that accounts for our uncertainty and maximizes the good we do. That said, there are a few points worth noting:

  • We will probably recommend that a cluster of "long-termist" buckets collectively receive at least 50% of our funding. We believe there could be an extremely large number of future humans if we avoid catastrophe. We also believe that, because of recent advances in powerful technologies, this moment in time is something of an outlier in terms of high-leverage opportunities to reduce the likelihood of catastrophe. Global catastrophic risk reduction likely accounts for some of the most promising work here, though work to promote international peace and cooperation could also be attractive.
  • We will likely have substantial, and somewhat diversified, programs in policy-oriented philanthropy and scientific research funding. We expect that we will recommend at least $50 million per year to policy-oriented causes and at least $50 million per year to scientific-research-oriented causes for at least the next 5 or so years.
  • We will likely recommend allocating something like 10% of our funding to a "direct aid" bucket for grants that do not require a lot of assumptions and can be clearly understood as helping the less fortunate in a rational, reasonably optimized way, which will likely correspond to supporting GiveWell recommendations for the near future.
  • We expect to ultimately recommend a substantial allocation (and not shrink our current commitment) to farm animal welfare, but it's a very open question whether and how much this allocation will grow. While we’re unsure about which animals merit moral concern, and how much concern they merit, our report on animal consciousness gives us no compelling reason to dismiss animals’ suffering, and we aspire to extend empathy to everyone it should be extended to, even when it is unusual or seems strange to do so.
  • We used to have a single benchmark for deciding whether to make any given grant today or save the funding to grant out in the future, but no longer do. Instead, each worldview bucket will have its own standards and way of assessing whether a potential grant qualifies for drawing down the funding in that bucket.
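
A hedged sketch of that last point, reusing the bucket names from above; the "bar" values here are placeholders we invented, not actual standards:

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    name: str
    bar: float  # minimum estimated good per dollar to give now rather than save

    def qualifies(self, estimated_good_per_dollar: float) -> bool:
        # A grant draws down this bucket's funds only if it clears the bucket's own bar.
        return estimated_good_per_dollar >= self.bar

# Bar values are invented placeholders (in arbitrary "good per dollar" units).
buckets = [
    Bucket("long-termist", bar=5.0),
    Bucket("near-termist, human-focused", bar=2.0),
    Bucket("near-termist, animal-inclusive", bar=3.0),
]

for b in buckets:
    print(b.name, b.qualifies(estimated_good_per_dollar=2.5))
# A grant scoring 2.5 qualifies only under the human-focused bucket's bar.
```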

This is going to be a long and complex journey, requiring a large number of deep judgment calls about moral values and difficult-to-assess empirical claims. To inform how we set each bucket's budget relative to the others, we may follow a process similar to GiveWell's cost-effectiveness analysis: having every interested staff member fill in their inputs for key parameters, discussing our differences, and creating a summary of key disagreements as well as median values.
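
As a minimal sketch of that aggregation step (the parameter names and estimates below are invented for illustration):

```python
from statistics import median

# Each interested staff member supplies an estimate for each key parameter.
# Parameters and values are invented for illustration only.
staff_inputs = {
    "moral weight of animals (humans = 1)": [0.001, 0.05, 0.10, 0.30],
    "share of funding to long-termist buckets": [0.50, 0.55, 0.70, 0.90],
}

for parameter, estimates in staff_inputs.items():
    spread = max(estimates) - min(estimates)
    print(f"{parameter}: median={median(estimates)}, spread={spread:.3f}")
```

The medians give us starting points for budgets, while large spreads flag the key disagreements worth discussing further.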

We don't expect to set the allocation between buckets all at once. Instead, we expect an ongoing, iterative process in which we make enough guesses and tentative allocations to set working budgets for our existing causes and a desired trajectory for total giving over the next few years.

Ultimately, this exercise serves to inform our funders about how to allocate their funds. We want to put in a lot of thought, and to feel confident in our conclusions, before we scale up our giving. We expect the detail and confidence of our allocation between buckets to improve over time, and to be fairly high by the time we are giving away our funding at the fastest rate, which could be 10+ years from now.
