An Update on How the Open Philanthropy Project Is Thinking About Grant Check-Ins

In this post, “we” refers to Good Ventures and the Open Philanthropy Project, who work as partners.

The Open Philanthropy Project’s mission is to give as effectively as we can and share our findings openly so that anyone can build on our work. When we first started making grants, we tended to assume this would mean conducting and publishing in-depth reviews of the performance of each grant. But as our first grants have wound down, we’ve spent more time evaluating and reflecting on the work we’ve done so far, and we have developed a new framework to guide our approach to grant check-ins.

One lesson we’ve learned is that our hits-based approach to giving substantially limits the benefits of a frame that focuses on grantee “accountability” per se. Many philanthropists require their grantees to provide detailed programmatic and financial reporting to demonstrate that funds were spent exactly as intended, and when we started making grants, we assumed this might be valuable for our giving as well. However, we have found limited value in this approach: given the inherent riskiness of hits-based giving, closely tracking the performance of each individual grant isn’t very informative. Accordingly, we don’t plan to conduct or publish the level of individual grant follow-up that we had initially anticipated.

Instead, we are generally performing relatively light check-ins and internally reporting “updates, lessons, and impact” from each.

The goal of a check-in: updates, lessons, impact

We hope and expect to be more effective philanthropists 10 years from now than we are today, and we need to learn a lot to get there.

In addition to the investigation we conduct before making any particular grant, we also try to learn from grants while they’re in progress and after they end by checking in with grantees. Generally, check-ins consist of informal phone calls between a grantee and an Open Philanthropy staff member about every six months to discuss what’s been working, what challenges have arisen, and what next steps the grantee plans to pursue for their project. What we learn from these conversations about a grant’s execution and results plays a central part in our thinking about whether and to what extent to renew grants, and it also informs other grantmaking in a given focus area and across the organization as a whole.

We focus our check-ins on gleaning updates, lessons, and impact:

  • Updates are new developments that raise or lower our expectations for how much impact a grant will have, relative to our expectations at the time the grant was made. Because we are usually uncertain whether a grant’s objectives will be achieved, meeting them is generally considered a “positive update.” The point of marking updates is to keep track of how everything is going, not necessarily to self-examine every time something goes more poorly than expected.
  • A lesson learned is an update an investigator makes to their model of how to give as well as possible. These lessons could be at the programmatic level or broader lessons about philanthropy. Lessons will be much rarer than positive and negative updates: we have a lot of known uncertainty when we make grants, and if a grant with a 50% chance of working didn’t work, that shouldn’t necessarily change how we think about the world or how to give. We expect that most grants will not yield lessons, even when there are notable positive or negative updates.
  • Impact means a grant has made the world better (or worse). Rather than tracking incremental progress (e.g. the release of a paper, hiring to fill a position, positive news coverage), which would count as an “update,” “impact” should be limited to major milestones that we are reasonably confident created good in the world, at least in expectation. When feasible, we try to quantify impact in appropriate units: for example, “dollars of value added to society”; “years of suffering reduced, adjusted for species and intensity of suffering”; or “percentage point change in the likelihood of a global catastrophic risk in the next 50 years.”

Grant check-ins in practice, and our evolving approach

Early on, we tended to expect that we’d be doing intensive, holistic evaluations of each grant (examples of statements along these lines are here, here, here, here, and here). As the first grants from the earliest days of Open Phil wind down, a few have yielded particularly clear updates, lessons, and impact for us to learn from:

  • Update: We initially supported the Center for Popular Democracy’s “Fed Up” campaign in August 2014 and January 2015. Over the following year, the campaign developed partnerships in all 12 cities with Federal Reserve Banks, met with 10 of the 12 regional Fed presidents, and generated substantial press coverage at the 2015 Jackson Hole Economic Policy Symposium. Based on this track record, we decided to renew our support for Fed Up for 2016, and subsequently renewed it for 2017.
  • Update: Thanks in part to our initial grant in April 2015, the Blue Ribbon Study Panel on Biodefense released a report in October of that year with 33 practical recommendations to improve U.S. biodefense policy. We believe the report was generally well-received within government, influenced the 2016 National Defense Authorization Act, was referenced multiple times in congressional hearings, and was cited by the CIA director in remarks at the Council on Foreign Relations. We decided to renew support to enable the Study Panel to continue its efforts.
  • Lesson: Protect the People’s program helped 58 Haitian farmers access seasonal work in the United States in 2016, short of its goal of 150. We learned that American farmers were less interested in hiring Haitians than we had anticipated, and that the visa process involved more bureaucratic obstacles than we think would be workable for a sustainable long-term flow. Largely on this basis, we decided not to renew the grant.[1]
  • Impact: Corporate campaigns have secured pledges from over 200 US companies to eliminate battery cages from their egg supply chains, including from all of the top 25 US grocers and nearly all of the top 20 fast food chains. The U.S. Department of Agriculture estimated that these pledges will affect around 225 million hens. These campaigns were primarily funded by $3 million in grants from the Open Philanthropy Project, split among four groups: The Humane League, Mercy for Animals, the Humane Society of the United States’ Farm Animal Protection Campaign, and Compassion in World Farming USA. However, it’s worth noting that much or all of the success may have been inevitable once the early pledges (which preceded our funding) were achieved.

However, the relatively clear updates, lessons, and impact generated by these grants have been the exception rather than the rule. In most cases, particularly for grants focused on field-building or policy and advocacy, we’re funding a group that we don’t expect to have clear, demonstrable effects on the metrics we most care about over the course of an individual grant, and only rarely have such grantees clearly succeeded (or clearly failed). Examples of these no-major-updates check-ins include our renewals of grants to the Center on Budget and Policy Priorities, Greater Greater Washington, and the Niskanen Center.

We think this is partly in the nature of hits-based giving. Many grants will fail to have an impact not because of any flaw in our thinking or in the grant’s execution, but because they were long shots that were always unlikely to succeed. Grantees in hits-based fields like advocacy or science often do roughly what they said they would do but still fall short of achieving the desired outcome in the world; that’s fine, and exactly what we expect in a hits-based framework.

Given these considerations, we have developed a streamlined renewal process that encourages grant investigators to collect updates, lessons, and impact without repeating the entire grant investigation. For a renewal, investigators skip the step of requesting approval to express “strong interest” in a prospective grant. Instead, they attempt to answer the questions they highlighted in their original investigation, score the predictions they made at that time, and determine whether further work on the project is warranted or the strategy should be updated to reflect lessons learned. We prioritize public sharing when we think it will be especially informative (e.g. in the case of unusually clear or large lessons and impact), which we hope will be useful to emerging philanthropists.

Footnotes
  1. However, after the grant concluded, two of the farms that had participated in the program secured 59 visas for Haitian farmers, which indicates that the increase in Haitian access to seasonal farm work in the U.S. may have outlived our support. This could suggest either that we were premature in winding down funding for a program that just needed more time to find its stride, or that the key work of getting the initial farms to engage with the program was already complete, and further funding was not necessary to ensure their continued participation.

