We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. IDinsight — Public Health Surveys in India
     Award Date: 06/2025 · Amount: $282,000 · Focus Area: Global Public Health Policy
  2. BlueDot Impact — General Support (2025)
     Award Date: 06/2025 · Amount: $25,649,888 · Focus Area: Global Catastrophic Risks
  3. UC Berkeley — Cyberoffense Benchmark
     Award Date: 06/2025 · Amount: $3,390,000 · Focus Area: Potential Risks from Advanced AI
  4. Center for a New American Security — AI Security and Stability Program
     Award Date: 06/2025 · Amount: $8,324,325 · Focus Area: Potential Risks from Advanced AI
  5. Fair Share Housing Center — Legal Work in New Jersey (2025)
     Award Date: 06/2025 · Amount: $600,000 · Focus Area: Abundance & Growth
  6. Meridian — Avoiding Encoded Reasoning in LLMs
     Award Date: 06/2025 · Amount: $244,614 · Focus Area: Potential Risks from Advanced AI
  7. UC Berkeley — Study on Frontier Model Behavior
     Award Date: 06/2025 · Amount: $499,597 · Focus Area: Potential Risks from Advanced AI
  8. International Conference on Machine Learning — AI Governance Workshop
     Award Date: 06/2025 · Amount: $12,545 · Focus Area: Potential Risks from Advanced AI
  9. Conjecture — Cybersecurity Bootcamp
     Award Date: 06/2025 · Amount: $223,134 · Focus Area: Potential Risks from Advanced AI
  10. GovAI — General Support (June 2025)
      Award Date: 06/2025 · Amount: $2,800,000 · Focus Area: Potential Risks from Advanced AI