Showing grants tagged "Potential Risks From Advanced AI"

We're open to supporting safe bets, like direct cash transfers to the world's poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Language Model Safety Fund — Language Model Misalignment
     Award Date: 10/2021 · Amount: $425,800 · Focus Area: Potential Risks from Advanced AI
  2. University of Washington — Adversarial Robustness Research
     Award Date: 10/2021 · Amount: $730,000 · Focus Area: Potential Risks from Advanced AI
  3. Bit by Bit Coding — High School Courses on Neural Networks
     Award Date: 10/2021 · Amount: $250,275 · Focus Area: Longtermism
  4. Université de Montréal — Research Project on Artificial Intelligence
     Award Date: 09/2021 · Amount: $210,552 · Focus Area: Potential Risks from Advanced AI
  5. Stanford University — Adversarial Robustness Research (Santurkar)
     Award Date: 08/2021 · Amount: $330,792 · Focus Area: Potential Risks from Advanced AI
  6. Stanford University — Adversarial Robustness Research (Tsipras)
     Award Date: 08/2021 · Amount: $330,792 · Focus Area: Potential Risks from Advanced AI
  7. University of Southern California — Adversarial Robustness Research
     Award Date: 08/2021 · Amount: $320,000 · Focus Area: Potential Risks from Advanced AI
  8. UC Berkeley — Adversarial Robustness Research (Aditi Raghunathan)
     Award Date: 08/2021 · Amount: $101,064 · Focus Area: Potential Risks from Advanced AI
  9. CSET — General Support (August 2021)
     Award Date: 08/2021 · Amount: $38,920,000 · Focus Area: Potential Risks from Advanced AI
  10. Center for Long-Term Cybersecurity — AI Standards
     Award Date: 07/2021 · Amount: $25,000 · Focus Area: Potential Risks from Advanced AI