We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. Center for AI Safety — General Support (2023)
     Award Date: 04/2023
     Amount: $4,025,729
     Focus Area: Potential Risks from Advanced AI
  2. Metaculus — Platform Development (2023)
     Award Date: 04/2023
     Amount: $3,000,000
     Focus Area: Potential Risks from Advanced AI
  3. Aquaculture Stewardship Council — Shrimp Welfare
     Award Date: 04/2023
     Amount: $525,000
     Focus Area: Fish Welfare
  4. Carnegie Endowment — Alternative Proteins for Security
     Award Date: 04/2023
     Amount: $322,000
     Focus Area: Farm Animal Welfare
  5. Rethink Priorities — AI Governance Workshop
     Award Date: 04/2023
     Amount: $302,390
     Focus Area: Potential Risks from Advanced AI
  6. Atlas Fellowship — General Support (2023)
     Award Date: 04/2023
     Amount: $2,000,000
     Focus Area: Effective Giving and Careers
  7. University of Maryland — Policy Fellowship (2023)
     Award Date: 04/2023
     Amount: $312,959
     Focus Area: Potential Risks from Advanced AI
  8. Forethought Foundation — Global Priorities Research
     Award Date: 04/2023
     Amount: $348,993
     Focus Area: Global Catastrophic Risks
  9. Swedish University of Agricultural Sciences — Fish EEG
     Award Date: 04/2023
     Amount: $384,000
     Focus Area: Fish Welfare
  10. University of Utah — AI Alignment Research
      Award Date: 04/2023
      Amount: $140,000
      Focus Area: Potential Risks from Advanced AI