Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund on our grantmaking process page.

  1. Harvard University — AI Interpretability, Controllability, and Safety Research
     Award Date: 01/2024 | Amount: $1,000,000 | Focus Area: Potential Risks from Advanced AI
  2. ETH Zurich Foundation (USA) — Machine Learning Research Support
     Award Date: 11/2023 | Amount: $25,000 | Focus Area: Potential Risks from Advanced AI
  3. Arcadia Impact — University Group Support
     Award Date: 11/2023 | Amount: $405,254 | Focus Area: Global Catastrophic Risks
  4. Stanford University — AI Economic Impacts Workshop
     Award Date: 11/2023 | Amount: $120,000 | Focus Area: Potential Risks from Advanced AI
  5. Eleuther AI — Interpretability Research
     Award Date: 11/2023 | Amount: $2,642,273 | Focus Area: Potential Risks from Advanced AI
  6. London Initiative for Safe AI (LISA) — General Support
     Award Date: 11/2023 | Amount: $237,000 | Focus Area: Potential Risks from Advanced AI
  7. Berkeley Existential Risk Initiative — University Collaboration Program
     Award Date: 10/2023 | Amount: $70,000 | Focus Area: Potential Risks from Advanced AI
  8. RAND Corporation — Emerging Technology Initiatives
     Award Date: 10/2023 | Amount: $10,500,000 | Focus Area: Potential Risks from Advanced AI
  9. Northeastern University — Mechanistic Interpretability Research
     Award Date: 09/2023 | Amount: $116,072 | Focus Area: Potential Risks from Advanced AI
  10. OpenMined — Software for AI Audits
      Award Date: 09/2023 | Amount: $6,000,000 | Focus Area: Potential Risks from Advanced AI