Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Stanford University — AI Alignment Research (2021)
     Award Date: 11/2021 · Amount: $1,500,000 · Potential Risks from Advanced AI
  2. BERI — SERI MATS Program
     Award Date: 11/2021 · Amount: $195,000 · Potential Risks from Advanced AI
  3. Redwood Research — General Support
     Award Date: 11/2021 · Amount: $8,790,000 · Global Catastrophic Risks
  4. Mila — Research Project on Artificial Intelligence
     Award Date: 11/2021 · Amount: $237,931 · Potential Risks from Advanced AI
  5. Language Model Safety Fund — Language Model Misalignment
     Award Date: 10/2021 · Amount: $425,800 · Potential Risks from Advanced AI
  6. University of Washington — Adversarial Robustness Research
     Award Date: 10/2021 · Amount: $730,000 · Potential Risks from Advanced AI
  7. Bit by Bit Coding — High School Courses on Neural Networks
     Award Date: 10/2021 · Amount: $250,275 · Longtermism
  8. Stanford University — AI Index
     Award Date: 09/2021 · Amount: $78,000 · Potential Risks from Advanced AI
  9. Université de Montréal — Research Project on Artificial Intelligence
     Award Date: 09/2021 · Amount: $210,552 · Potential Risks from Advanced AI
  10. CNAS — Risks from Militarized AI
      Award Date: 09/2021 · Amount: $101,187 · Potential Risks from Advanced AI