We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

| Organization | Purpose | Award Date | Amount | Focus Area |
|---|---|---|---|---|
| University of Maryland | Study on Encoded Reasoning in LLMs | 06/2025 | $218,000 | Potential Risks from Advanced AI |
| Meridian | Research on Emergent Misalignment | 06/2025 | $396,000 | Potential Risks from Advanced AI |
| Kairos | General Support | 06/2025 | $195,000 | Potential Risks from Advanced AI |
| ETH Zurich | Research on Prompt Injection Attacks | 06/2025 | $20,000 | Potential Risks from Advanced AI |
| Good Science Project | Analysis of U.S. R&D Funding | 06/2025 | $238,460 | Abundance & Growth |
| Stanford University | Robust AI Benchmarks | 06/2025 | $1,922,145 | Potential Risks from Advanced AI |
| Talent Mobility Fund | Help Desk for International Students | 06/2025 | $777,700 | Abundance & Growth |
| OpenMined Foundation | Secure Enclaves for LLM Evaluation | 06/2025 | $10,943,400 | Potential Risks from Advanced AI |
| ML4Good | AI Safety Bootcamps | 06/2025 | $954,911 | Global Catastrophic Risks |
| Successif | AI Safety Management Consulting | 06/2025 | $227,000 | Global Catastrophic Risks |