Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Hofvarpnir Studios — Compute Cluster for AI Safety Research
    Award Date: 03/2022 | Amount: $1,443,540 | Potential Risks from Advanced AI
  2. Rethink Priorities — AI Governance Research (2022)
    Award Date: 03/2022 | Amount: $2,728,319 | Potential Risks from Advanced AI
  3. Stiftung Neue Verantwortung — AI Policy Analysis
    Award Date: 03/2022 | Amount: $444,000 | Potential Risks from Advanced AI
  4. Alignment Research Center — General Support
    Award Date: 03/2022 | Amount: $265,000 | Potential Risks from Advanced AI
  5. Berkeley Existential Risk Initiative — CHAI Collaboration (2022)
    Award Date: 02/2022 | Amount: $1,126,160 | Potential Risks from Advanced AI
  6. NASEM — Safety-Critical Machine Learning
    Award Date: 02/2022 | Amount: $309,441 | Potential Risks from Advanced AI
  7. Open Philanthropy Technology Policy Fellowship (2022)
    Award Date: 01/2022 | Amount: $2,869,940 | Potential Risks from Advanced AI
  8. Wilson Center — AI Policy Training Program (2022)
    Award Date: 01/2022 | Amount: $2,023,322 | Potential Risks from Advanced AI
  9. Georgetown University — Policy Fellowship (2021)
    Award Date: 12/2021 | Amount: $246,564 | Potential Risks from Advanced AI
  10. Centre for the Governance of AI — AI Field Building
    Award Date: 11/2021 | Amount: $2,537,600 | Global Catastrophic Risks