Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. Michigan State University — Robust AI Unlearning Techniques
     Award Date: 05/2025
     Amount: $484,000
     Focus Area: Potential Risks from Advanced AI
  2. The University of Texas at Austin — Research on AI Safety and Computational Complexity Theory
     Award Date: 05/2025
     Amount: $1,650,000
     Focus Area: Potential Risks from Advanced AI
  3. FAR.AI — AI Communications and Outreach
     Award Date: 04/2025
     Amount: $2,420,253
     Focus Area: Potential Risks from Advanced AI
  4. Hebrew University of Jerusalem — Governance of AI Lab
     Award Date: 04/2025
     Amount: $2,725,000
     Focus Area: Potential Risks from Advanced AI
  5. Daniel Kang — LLM Hacking Benchmarks
     Award Date: 04/2025
     Amount: $265,000
     Focus Area: Potential Risks from Advanced AI
  6. Johns Hopkins University — Course Buyouts
     Award Date: 04/2025
     Amount: $94,600
     Focus Area: Potential Risks from Advanced AI
  7. Talos Network — AI Governance Field-Building
     Award Date: 04/2025
     Amount: $1,493,840
     Focus Area: Potential Risks from Advanced AI
  8. Institute for AI Policy and Strategy — General Support (2025)
     Award Date: 04/2025
     Amount: $11,510,081
     Focus Area: Potential Risks from Advanced AI
  9. Stanford University — AI Interpretability Research
     Award Date: 04/2025
     Amount: $743,500
     Focus Area: Potential Risks from Advanced AI
  10. Carnegie Endowment for International Peace — AI Governance Research (2025)
     Award Date: 04/2025
     Amount: $443,732
     Focus Area: Potential Risks from Advanced AI