Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. University of Illinois — Course Development Support (Ben Levinstein)
     Award Date: 04/2022 · Amount: $58,141 · Potential Risks from Advanced AI
  2. Berkeley Existential Risk Initiative — David Krueger Collaboration
     Award Date: 04/2022 · Amount: $40,000 · Potential Risks from Advanced AI
  3. Carnegie Endowment for International Peace — AI Governance Research
     Award Date: 03/2022 · Amount: $597,717 · Potential Risks from Advanced AI
  4. Massachusetts Institute of Technology — AI Trends and Impacts Research (2022)
     Award Date: 03/2022 · Amount: $13,277,348 · Potential Risks from Advanced AI
  5. Hofvarpnir Studios — Compute Cluster for AI Safety Research
     Award Date: 03/2022 · Amount: $1,443,540 · Potential Risks from Advanced AI
  6. Rethink Priorities — AI Governance Research (2022)
     Award Date: 03/2022 · Amount: $2,728,319 · Potential Risks from Advanced AI
  7. Stiftung Neue Verantwortung — AI Policy Analysis
     Award Date: 03/2022 · Amount: $444,000 · Potential Risks from Advanced AI
  8. Alignment Research Center — General Support
     Award Date: 03/2022 · Amount: $265,000 · Potential Risks from Advanced AI
  9. Berkeley Existential Risk Initiative — CHAI Collaboration (2022)
     Award Date: 02/2022 · Amount: $1,126,160 · Potential Risks from Advanced AI
  10. NASEM — Safety-Critical Machine Learning
      Award Date: 02/2022 · Amount: $309,441 · Potential Risks from Advanced AI