Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology.

  1. Stanford University — Adversarial Robustness Research (Tsipras)
     Award Date: 08/2021 | Amount: $330,792 | Focus Area: Global Catastrophic Risks
  2. UC Berkeley — Adversarial Robustness Research (Aditi Raghunathan)
     Award Date: 08/2021 | Amount: $87,829 | Focus Area: Potential Risks from Advanced AI
  3. University of Southern California — Adversarial Robustness Research
     Award Date: 08/2021 | Amount: $320,000 | Focus Area: Global Catastrophic Risks
  4. CSET — General Support (August 2021)
     Award Date: 08/2021 | Amount: $38,920,000 | Focus Area: Global Catastrophic Risks
  5. Center for Long-Term Cybersecurity — AI Standards
     Award Date: 07/2021 | Amount: $25,000 | Focus Area: Potential Risks from Advanced AI
  6. Center for International Security and Cooperation — AI and Strategic Stability
     Award Date: 07/2021 | Amount: $365,361 | Focus Area: Potential Risks from Advanced AI
  7. ARLIS — Report on Security Clearances
     Award Date: 07/2021 | Amount: $70,000 | Focus Area: Potential Risks from Advanced AI
  8. Berkeley Existential Risk Initiative — MineRL BASALT Competition
     Award Date: 07/2021 | Amount: $70,000 | Focus Area: Potential Risks from Advanced AI
  9. Berkeley Existential Risk Initiative — AI Standards
     Award Date: 07/2021 | Amount: $300,000 | Focus Area: Potential Risks from Advanced AI
  10. Carnegie Mellon University — Adversarial Robustness Research
     Award Date: 05/2021 | Amount: $330,000 | Focus Area: Global Catastrophic Risks