We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. Open Phil AI Fellowship — 2022 Class
     Award Date: 04/2022 · Amount: $1,840,000 · Focus Area: Potential Risks from Advanced AI
  2. Berkeley Existential Risk Initiative — SERI MATS Program (2022)
     Award Date: 04/2022 · Amount: $1,008,127 · Focus Area: Potential Risks from Advanced AI
  3. Essere Animali — Farm Animal Welfare in Italy (2022)
     Award Date: 04/2022 · Amount: $554,000 · Focus Area: Farm Animal Welfare
  4. The Center for Responsible Seafood — Fish Welfare Research and Promotion
     Award Date: 04/2022 · Amount: $625,000 · Focus Area: Farm Animal Welfare
  5. Funding for AI Alignment Projects Working With Deep Learning Systems
     Award Date: 04/2022 · Amount: $16,604,737 · Focus Area: Potential Risks from Advanced AI
  6. Anima International — Chicken Welfare Campaigns (2022)
     Award Date: 04/2022 · Amount: $5,778,000 · Focus Area: Farm Animal Welfare
  7. Longview Philanthropy — Nuclear Security Grantmaking
     Award Date: 04/2022 · Amount: $500,000 · Focus Area: Global Catastrophic Risks
  8. University of Illinois — Course Development Support (Ben Levinstein)
     Award Date: 04/2022 · Amount: $58,141 · Focus Area: Potential Risks from Advanced AI
  9. Center for Global Development — General Support
     Award Date: 04/2022 · Amount: $1,000,000 · Focus Area: Global Aid Policy
  10. Berkeley Existential Risk Initiative — David Krueger Collaboration
      Award Date: 04/2022 · Amount: $40,000 · Focus Area: Potential Risks from Advanced AI