Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Redwood Research — General Support

    Award Date: 11/2021
    Amount: $8,790,000
    Focus Area: Global Catastrophic Risks
  2. Mila — Research Project on Artificial Intelligence

    Award Date: 11/2021
    Amount: $237,931
    Focus Area: Potential Risks from Advanced AI
  3. Language Model Safety Fund — Language Model Misalignment

    Award Date: 10/2021
    Amount: $425,800
    Focus Area: Potential Risks from Advanced AI
  4. University of Washington — Adversarial Robustness Research

    Award Date: 10/2021
    Amount: $730,000
    Focus Area: Potential Risks from Advanced AI
  5. Bit by Bit Coding — High School Courses on Neural Networks

    Award Date: 10/2021
    Amount: $250,275
    Focus Area: Longtermism
  6. Stanford University — AI Index

    Award Date: 09/2021
    Amount: $78,000
    Focus Area: Potential Risks from Advanced AI
  7. Université de Montréal — Research Project on Artificial Intelligence

    Award Date: 09/2021
    Amount: $210,552
    Focus Area: Potential Risks from Advanced AI
  8. CNAS — Risks from Militarized AI

    Award Date: 09/2021
    Amount: $101,187
    Focus Area: Potential Risks from Advanced AI
  9. Stanford University — Adversarial Robustness Research (Tsipras)

    Award Date: 08/2021
    Amount: $330,792
    Focus Area: Global Catastrophic Risks
  10. Stanford University — Adversarial Robustness Research (Santurkar)

    Award Date: 08/2021
    Amount: $330,792
    Focus Area: Global Catastrophic Risks