Suggestions for Individual Donors from Open Philanthropy Project Staff – 2017

In this post, “we” refers to Good Ventures and the Open Philanthropy Project, who work as partners.

Last year and the year before, we published a set of suggestions for individual donors looking for organizations to support. This year, we are repeating the practice and publishing updated suggestions from Open Philanthropy Project staff who chose to provide them.

The same caveats as in previous years apply:

  • These are reasonably strong options in causes of interest, and shouldn’t be taken as outright recommendations (i.e., it isn’t necessarily the case that the person making the suggestion thinks they’re the best option available across all causes). Note that interested staff wrote separately, in this post, about where they personally donated.
  • In many cases, we find a funding gap we’d like to fill, and then we recommend filling the entire funding gap with a single grant. That doesn’t leave much scope for making a suggestion for individuals. The cases listed below, then, are the cases where, for one reason or another, we haven’t decided to recommend filling an organization’s full funding gap, and where we believe the organization could make use of fairly arbitrary amounts of donations from individuals.
  • Our explanations for why these are strong giving opportunities are very brief and informal, and we don’t expect individuals to be persuaded by them unless they put a lot of weight on the judgment of the person making the suggestion.

Suggestions are alphabetical by cause (with some assorted and "meta" suggestions last).

Biosecurity and Pandemic Preparedness – suggestions from Jaime Yassif

Center for International Security and Cooperation program on Biosecurity and Global Health

What is it? The Center for International Security and Cooperation (CISAC) at Stanford is a university-based center that does policy research and development in the international security space. CISAC was founded 34 years ago and has a long history of working on nuclear issues; it has done some limited work on biosecurity over the last decade.

CISAC has been looking to expand the scope of its biosecurity work, and it has partnered with its parent institute, the Freeman Spogli Institute for International Studies (FSI), to launch a new initiative in this area. FSI and CISAC are trying to raise funds to get this initiative off the ground. They are looking to hire program staff to run in-house biosecurity projects and to support collaborative projects with faculty in other departments at Stanford and with partners outside the university. The program staff would also like to develop new biosecurity courses.

Recent biosecurity publications by CISAC staff include:

Why I suggest it: I think independent science and technology policy research and advocacy is an important means of developing new ideas for reducing biological risks; in some cases these ideas can influence practices in industry and academia and shape decision-making within governments. More specifically, I think developing new approaches to governance of dual-use bioscience is particularly important for reducing global catastrophic biological risks (GCBRs). This view stems from my working assumption that engineered pathogens pose some of the most acute risks for large-scale pandemics that can spread quickly, have a high case fatality rate and circumvent existing medical countermeasures.

I think CISAC has a comparative advantage in working on the technical aspects of biosecurity and developing new approaches to governance of dual-use bioscience and biotechnology. My view is based on the Center’s existing biosecurity staff, its ties to bioscience departments at Stanford and its location in Silicon Valley, which is a biotech industry hub. CISAC’s in-house biosecurity experts, David Relman and Megan Palmer, are both thought leaders in the field, and the Center collaborates with other faculty at Stanford who have deep technical knowledge and biosecurity expertise. Examples include Drew Endy, a bioengineering professor and a leader in the synthetic biology field, and Tim Stearns, chair of the Biology Department and a member of JASON, an independent group that provides scientific advice to the US Government on national security issues.

Contributions to CISAC could have a significant impact on its activities because the Biosecurity Initiative currently has very limited funds. CISAC’s biosecurity project ideas are under development, and they’re likely to include some GCBR-relevant work, but I don’t have specifics about planned projects. If it can hire additional staff to get this initiative up and running, I think the Center has the potential to make a valuable contribution in governance of dual-use bioscience and biotechnology, and potentially other areas like biosecurity education and field building.

Why we haven’t fully funded it: Open Philanthropy is supporting Megan Palmer’s work at CISAC, but I haven’t yet prioritized investigating and making the case internally for a broader programmatic grant. I’ve been weighing this type of grant-making opportunity against opportunities in other areas within biosecurity and pandemic preparedness, and that prioritization work is still in progress. I estimate that CISAC/FSI can productively absorb at least $1M-$2M per year for its biosecurity work.

Write-up forthcoming? No.

How to donate: Visit this site. Under ‘How to Give’, select ‘Centers, Institutes and More’ and then ‘Freeman Spogli Institute for International Studies’. Under ‘Special Instructions’, list the Biosecurity Initiative. Interested parties can also contact Michelle Townsend at [email protected].

Johns Hopkins Center for Health Security

What is it? The Center for Health Security (CHS) is a U.S.-based think tank that does policy research and development in biosecurity and pandemic preparedness (BPP), along with some communications and advocacy. Most of the organization’s work focuses on BPP issues that are broadly relevant to reducing global catastrophic biological risks, and a substantial portion of its activities (at least one third) is specifically focused on GCBRs.

Examples of ongoing GCBR-focused projects include a red-teaming project to improve our understanding of global catastrophic risks and a project focused on identifying technologies that could be used to reduce global catastrophic biological risks. CHS has also initiated a public discussion about GCBRs by publishing a working definition of this concept, which has started to get a little bit of traction in policy circles.

CHS is also doing valuable work on BPP issues more broadly. This work includes running track II (nongovernmental) biosecurity dialogues with partners in India, which we view as important in light of India’s rapidly growing biotechnology sector. In addition, CHS is weighing in on high-priority policy issues, for example, this analysis of the Trump Administration’s 2018 proposed budget for programs in the biosecurity and pandemic preparedness space. We view this as a valuable contribution because it’s otherwise very difficult to track US government spending in this area; this analysis serves as a source of accountability by making government BPP spending more transparent. Another example is this commentary on the security risks associated with recent work on synthesis of the horsepox virus, which has implications for the synthesis of more dangerous viruses like smallpox.

Why I suggest it: Think tanks and advocacy groups can have a large impact in the BPP space by influencing and improving the use of government funds through policy research and development, acting as an independent source of accountability, and having the flexibility to work on long-term projects or politically controversial issues. They can also conduct research and develop innovative ideas that are useful to private donors, industry and academia.

I think CHS is among the best organizations to support because it has an excellent track record of producing quality research, analysis, and policy recommendations and a strong team that combines expertise in bioscience, medicine, public health and security. CHS is also a trusted source of independent advice to the US Government, and it is developing relationships with government partners in other key countries.

Why we haven’t fully funded it: Open Philanthropy supports approximately 75% of CHS’ budget, and the rest of its funding comes primarily from the US Government. We haven’t fully funded CHS because we think the organization will be stronger and more effective if it has additional funders, including government and private donors. I estimate that the organization can absorb an additional $500K per year.

Write-up: The write-up of our grant to CHS is here.

How to donate: Go to this site. In the “Please designate my gift to support” field at the top of the form, select “other (please specify).” In the “Please describe” field that appears immediately below, type in “Johns Hopkins Center for Health Security.” Complete the remaining required fields in the form and click “Submit” at the bottom-right corner of the page.

Criminal Justice Reform – suggestions from Chloe Cockburn

Good Call

What is it? Good Call is a nonprofit that runs a free 24/7 arrest hotline, allowing anyone to connect with a free lawyer right away when they or a loved one is arrested. There are over 14 million arrests in the US every year, concentrated in low-income communities. Despite the fact that most people are arrested for low-level offenses, roughly 500,000 people are held in jail awaiting trial on any given day. Good Call’s service (currently operating in the Bronx, and preparing to expand across NYC and to other cities around the country) helps prevent costly, unnecessary pretrial jail time by ensuring that people who are arrested and their families get legal advice as quickly as possible.

You can read more about their work in this New York Times feature from this summer.

Why I suggest it: Spending even one night in jail can have significant detrimental effects: making it more likely that a person will plead guilty, and putting their housing, employment, parental status, and other key components of their lives in jeopardy. I think Good Call is a good fit for marginal dollars because it reports strong outcomes, it takes an innovative approach that uses a known pathway (legal representation is clearly a good thing), and the impact of giving should be pretty clear to donors.

Why I'm not fully funding it: Good Call is not currently a grantee. It has not been an obvious fit because my portfolio focuses on more upstream, structural interventions (policy change, for example), and because I think this is an organization that many other, lower-risk funders are likely to see the value of and support.

How to donate: click here.

Court Watch NOLA

What is it? Court Watch NOLA (CWN) is a New Orleans-based court watching program that engages 100-200 volunteers annually (having trained over 1,000 over the past 10 years) to witness court proceedings and take detailed notes on what they observe. CWN compiles data from these notes into regularly issued reports, such as this one. To my knowledge, it is the largest and most rigorous court watching program in the country, yet its budget has traditionally run about $200K a year. In 2017, information gathered by CWN contributed to substantially increased scrutiny of the New Orleans District Attorney, Leon Cannizzaro, who was found to be issuing fake subpoenas and jailing sexual assault victims for refusing to testify.

Additional funding would allow CWN to expand the number of trainings it gives to new court watchers, and to expand staff capacity to respond to the requests for guidance and assistance it regularly receives from developing court watching programs around the country.

Why support it: CWN believes, and I agree, that systemic change comes when directly impacted people and the community at large are actively engaged in the process of legal education, direct observation, and data collection. Court monitoring and data collection can empower the people most affected by the criminal justice system to ultimately lead movements to improve the policies that affect their lives.

Court watching programs lower the barrier to entry for regular people to hold the criminal justice system accountable. They can also change outcomes in specific cases: the presence of court watchers in the courtroom (especially CWN’s watchers, who carry highly recognizable clipboards and bring the credible threat of exposing bad behavior) has a reputation for affecting sentences, pre-trial violations that can result in jail, and bail determinations. CWN goes beyond other court watching programs I know of by publishing regular reports that aggregate the data it has collected, bringing transparency to the court’s overall operations. Other outcomes of CWN’s reports include winning public access to bail hearings, pushing the sheriff to stop recording attorney-client calls in the jail, and, this year, pressuring the New Orleans DA to stop issuing fake subpoenas.

Why we don't fully fund it: We have made two grants to CWN, one for $25K and one for $100K. I am very enthusiastic about court watching but have not been able to provide additional funding thus far due to competing priorities in my portfolio.

How to donate: click here.

Farm Animal Welfare – suggestions from Lewis Bollard

Compassion in World Farming USA

What is it? Compassion in World Farming USA is one of four groups responsible for the major recent US corporate wins for layer hens and broiler chickens. (The others are The Humane League, the Humane Society of the US Farm Animal Protection campaign, and Mercy for Animals.) Its focus is on winning further corporate reforms for broiler chickens and ensuring that corporate cage-free pledges are implemented.

Why I suggest it: CIWF has a strong track record of success: most recently it helped secure new broiler chicken welfare pledges from Nestle, Unilever, and Moe’s, and launched EggTrack to push companies to fulfill their cage-free pledges. It also has a talented leader in Leah Garces, and solely focuses on what I believe to be one of the most cost-effective interventions: corporate outreach for layers and broilers. But it remains a ~$600K/year group, perhaps partly because it isn’t an ACE top charity (though it did become a standout charity this year) and partly because corporate outreach work is harder to fundraise for. At this size, I think that small donors can make a bigger marginal difference in the group’s future — especially since CIWF USA needs a broader donor base to grow sustainably.

Why we haven’t fully funded it: In April 2016, we made a two-year $550K grant to CIWF USA, which filled much of its room for more funding at the time, and we later made an additional $30K grant to support a specific project. I’ve separately recommended a total of $50K in additional grants to CIWF USA via the EA Fund for Animal Welfare that I manage for the Centre for Effective Altruism. We will likely recommend more funding over time, but we’re constrained in how much we give by not wanting any group to be overwhelmingly dependent on our funding.

Writeup: Here.

How to donate: You can donate here.

Wild-Animal Suffering Research — a project of the Effective Altruism Foundation

What is it? Wild-Animal Suffering Research is a new initiative of the Effective Altruism Foundation to fund the research of Ozy Brennan, Persis Eskander, and Georgia Ray. They’re seeking to found a new research field focused on understanding and improving the wellbeing of wild animals.

Why I suggest it: I think that wild animal welfare is a very important and neglected issue — there are trillions of wild animals alive at any time, yet almost no funding goes to evaluating and improving their welfare (as distinct from conserving their species or habitat). I’m not sure if there are any opportunities for improvements that are both clearly beneficial and tractable, but I think the magnitude of suffering argues for doing more research to see if there could be. I think that Ozy, Persis, and Georgia are some of the most promising researchers currently in the field (although I agree with them that “in my ideal world the [wild animal welfare] field would consist of conservation biologists, wildlife managers, ecologists, ethologists and other people who can apply their academic knowledge to the question of improving wild animal welfare”). They have clear needs for more funding, which I don’t expect other funders to fully address, though I have recommended a $50K grant to fill half of these needs via the EA Animal Welfare Fund.

Why we haven’t fully funded it: We’re not currently funding wild animal welfare work. The cause is distinct enough from farm animal welfare (and carries distinct enough risks) that we would want to carefully consider an entry into this area.

Writeup forthcoming? No.

How to donate: You can donate here.

Potential Risks from Artificial Intelligence – suggestions from Daniel Dewey

Disclaimer: I (Daniel Dewey) spend most of my time thinking about how the Open Philanthropy Project can use its resources and capabilities to mitigate potential risks from AI. I think this is a very different question from how an individual donor can best use their funding to mitigate potential risks from AI. I'm giving suggestions in case they are helpful, but I would encourage individual donors to do their own investigations and read arguments from many different people.

EA Funds Long-Term Future Fund

This is a fund managed by Nick Beckstead, who also works at the Open Philanthropy Project. You can see more details about this fund here. This is where I give the portion of my personal donations aimed at making a difference to the long-term future. My basic reasoning is that money goes further in this space when it can be given in larger amounts by an executor who has the time and experience required to think carefully, communicate with grantees, and help develop new projects or extend existing projects in significant ways. I personally trust Nick Beckstead's judgment and think he's a good representative of my values, which makes the EA Funds Long-Term Future Fund a good fit for me.

How to donate: follow instructions on the EA Funds website.

Academic AI safety work via donor lottery

A donor lottery was recently announced on the Effective Altruism Forum. While I think it is generically difficult for a small donor to make effective academic grants, a donor lottery could collect enough funds and give the executor of the lottery enough time to make an academic grant supporting technical AI safety work. The most straightforward execution I can see is that the executor of a lottery could contact me in order to figure out how to add more funds to existing academic projects (e.g. MILA, Stanford), which I think could be competitive with other uses of money to mitigate potential risks from AI. (Thanks to Carl Shulman for this idea.)
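
To make the lottery mechanics concrete, here is a minimal sketch of how a donor lottery can allocate a pooled pot, assuming a fixed pot size backstopped by a guarantor; the pot size, the guarantor arrangement, and the donor names are illustrative assumptions, not details of the lottery announced on the Forum.

```python
import random

def draw_winner(contributions, pot_size=100_000):
    """Pick the participant who gets to allocate the whole pot.

    Each donor wins with probability proportional to their contribution.
    The unfilled remainder of the pot is assigned to a guarantor, who
    keeps the pot with the leftover probability. Because win probability
    is proportional to money put in, each donor's expected grant
    allocation equals exactly what they contributed.
    (Illustrative mechanics only; real lotteries differ in details.)
    """
    contributed = sum(contributions.values())
    assert contributed <= pot_size, "pot is oversubscribed"
    entries = dict(contributions)
    entries["guarantor"] = pot_size - contributed  # backstop share
    names = list(entries)
    weights = [entries[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# A donor who puts in $5,000 wins the $100,000 pot 5% of the time, so
# their expected allocation is unchanged, but a win yields a pot large
# enough to justify the careful investigation described above.
print(draw_winner({"donor_a": 5_000, "donor_b": 20_000}))
```

The point of the sketch is expected-value neutrality: pooling doesn't change what any donor gives in expectation, but it concentrates the decision with one winner who then has enough at stake to research grants carefully.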

How to donate: follow directions on the donor lottery announcement post.

The Future of Humanity Institute

In addition to the basic case for FHI I gave last year, I currently think that FHI is the organization with the best shot at producing AI strategy and governance researchers. I don't have a detailed model of how marginal funds given to FHI will play a role in producing those researchers, but I still expect funds on the current margin to increase the expected number of AI strategy researchers in some way.

We granted FHI $1,995,425 earlier this year, and are in the process of evaluating another large grant; I think it makes sense to ask whether marginal funds from individual donors are likely to make a difference to FHI's activities. Unfortunately, I think it's very difficult to answer this question confidently. FHI's activities change a lot from year to year, there's not a simple model that I know of for how marginal dollars translate into activities, and the translation of dollars into impact is even more difficult since most of FHI's expected impact would come in the long-term future through indirect effects of their work.

In the absence of a concrete model, I still feel good about suggesting FHI to individual donors; I think it's reasonable to expect additional funds on the current margin to improve the quantity and quality of work they do, though I can't say exactly how this will play out.

How to donate: I would suggest supporting FHI either by donating to the collaboration between FHI and the Berkeley Existential Risk Initiative (by visiting this page and adding "For BERI's collaboration with FHI" in "special instructions for the seller") or by donating directly to FHI on their site. I think that marginal funds to the BERI/FHI collaboration are slightly more flexible, but I don't think that the difference in effectiveness is very significant.

Center for Human-Compatible AI (CHAI)

The Center for Human-Compatible AI is an academic research center primarily housed in UC Berkeley's EECS department. You can find our initial grant writeup here. Since our initial grant, I think there has been some evidence that CHAI is having a positive impact on the growth of the technical AI safety field. Most notably, the CHAI paper Inverse Reward Design was accepted as an oral presentation at NIPS 2017, making it the best-received (by mainstream AI/ML academics) AI safety paper that I know of, and CHAI researchers have reported that many PhD applicants mention AI safety as a possible research focus when applying to Berkeley's AI PhD program.

How to donate: I would suggest supporting CHAI by donating to the collaboration between CHAI and the Berkeley Existential Risk Initiative (by visiting this page and adding "For BERI's collaboration with CHAI" in "special instructions for the seller") or by contacting CHAI directly if you are planning to make a large donation.

Machine Intelligence Research Institute

See suggestions from me and Nick Beckstead last year, and our most recent grant writeup.

How to donate: follow directions on MIRI's website.

Assorted suggestions from Nick Beckstead

My suggestions have changed slightly relative to last year's. I'll state my new ranking and then comment briefly on some reasons for the changes. I am leaving out much of my reasoning because I don't know how to share it explicitly without substantial additional work.

My suggestions for individual donors are as follows (in descending order of preference, organized by category):

  1. Very meta suggestions:
    1. If you already know what to give to and you don’t think your decision would change if you thought about it more or let someone more informed decide on your behalf, give there.
    2. If you know someone who is likely to make a better decision than you would on your own, ask them to allocate your giving. If you think that person should be me, donate to the Long-term Future Fund. This might be a good fit for people who have some combination of the following properties: interest in global catastrophic risks, context needed to assess my track record, trust in my judgment, limited time/context available to make donation decisions themselves.
    3. If you are uncertain where to donate and uncertain who to trust to donate on your behalf, participate in a donor lottery and then only think carefully about donations if you win.
  2. My next suggestion, which I consider not far behind, would be to donate to FHI or MIRI. I don't have much of an opinion between the two of them. If you do donate to FHI, I would suggest donating to BERI and earmarking the funds for use at FHI's discretion. (Note that I used to work at FHI.)
  3. My next suggestion, which I consider not far behind, would be to donate to CEA, 80,000 Hours, or my EA Community Fund. I don't have much of an opinion between the three of them. (Note that I am a board member of CEA, and 80,000 Hours is part of CEA.)
  4. My next suggestion is to support a biosecurity organization suggested above, or possibly Dave Denkenberger's ALLFED. These suggestions are based on a lower level of understanding and offered more tentatively. I don't necessarily think these are the "next best bets" from a long-term perspective, but I offer them because I could imagine favoring these areas over AI safety or developing the EA community if I were more skeptical of people's ability to anticipate and meaningfully prepare for AI.

The main updates relative to last year's suggestions are that:

  1. I now tentatively favor work directly focused on AI safety over work promoting effective altruism. I'm not fully sure what caused this update over the last year, but the main factors are probably: (i) I am now more impressed with the track records of MIRI and FHI, including their ability to bring talented people to work on their causes, than I was previously; (ii) the EA community could potentially contribute to these areas by recruiting additional funds to support them, but that appears to me to be a less important bottleneck than it did previously; (iii) some of my friends have arrived at similar conclusions, though I'm not sure if their reasons are the same.
  2. If you want to offer funds for me to regrant at my discretion, I now prefer that you donate through the Long-term Future Fund.
  3. I no longer favor MIRI over FHI because I see fewer differences in terms of room for more funding.
  4. I no longer favor biosecurity over nuclear weapons as a cause by as much as I used to. This is partly due to a tentative decrease in my estimate of the probability of a global catastrophe caused by bioweapons, which occurred when I worked through an internal spreadsheet prepared by Claire Zabel as part of our work on biosecurity and pandemic preparedness.
  5. Within nuclear weapons, my (less informed) pick has changed from Ploughshares Fund to Dave Denkenberger's ALLFED. This change was partly based on some back-of-the-envelope calculations I did and partly based on learning more about nuclear weapons policy via conversations that Claire Zabel had.

I’ve limited my suggestions to organizations that focus on effective altruism and global catastrophic risks (and not short-to-medium-term factory farming or global poverty) because those are a couple of the areas I consider to have highest expected altruistic returns and know most about.
