Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. ETH Zurich — Research on Prompt Injection Attacks
     Award Date: 06/2025 · Amount: $20,000
  2. Stanford University — Robust AI benchmarks
     Award Date: 06/2025 · Amount: $1,922,145
  3. OpenMined Foundation — Secure Enclaves for LLM Evaluation
     Award Date: 06/2025 · Amount: $10,943,400
  4. RAND Corporation — Emerging Technology Initiatives (2025)
     Award Date: 05/2025 · Amount: $2,500,000
  5. Redwood Research — AI Safety Research Collaborations
     Award Date: 05/2025 · Amount: $1,100,000
  6. Berkeley Existential Risk Initiative — AI Governance Workshop
     Award Date: 05/2025 · Amount: $56,915
  7. Carnegie Mellon University — Robust AI Unlearning Techniques
     Award Date: 05/2025 · Amount: $584,108
  8. Palisade Research — General Support (2025)
     Award Date: 05/2025 · Amount: $1,843,463
  9. GovAI — General Support (May 2025)
     Award Date: 05/2025 · Amount: $1,000,000
  10. UC Berkeley — Compute Resources for AI Safety Research
     Award Date: 05/2025 · Amount: $2,587,634