Showing grants tagged "Global Catastrophic Risks"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Stanford University — Machine Learning Security Research
     Award Date: 07/2018
     Amount: $100,000
     Focus Area: Potential Risks from Advanced AI
  2. Wilson Center — AI Policy Seminar Series (2018)
     Award Date: 07/2018
     Amount: $400,000
     Focus Area: Potential Risks from Advanced AI
  3. Nuclear Threat Initiative — Global Health Security Index
     Award Date: 07/2018
     Amount: $3,556,773
     Focus Area: Biosecurity and Pandemic Preparedness
  4. University of Oxford — Research on the Global Politics of AI
     Award Date: 07/2018
     Amount: $429,770
     Focus Area: Potential Risks from Advanced AI
  5. Johns Hopkins Center for Health Security — SynBioBeta 2018 Meeting
     Award Date: 06/2018
     Amount: $127,600
     Focus Area: Biosecurity and Pandemic Preparedness
  6. Future of Life Institute — General Support (2018)
     Award Date: 06/2018
     Amount: $250,000
     Focus Area: Global Catastrophic Risks
  7. Machine Intelligence Research Institute — AI Safety Retraining Program
     Award Date: 06/2018
     Amount: $150,000
     Focus Area: Potential Risks from Advanced AI
  8. AI Impacts — General Support (2018)
     Award Date: 06/2018
     Amount: $100,000
     Focus Area: Potential Risks from Advanced AI
  9. Open Phil AI Fellowship — 2018 Class
     Award Date: 05/2018
     Amount: $1,135,000
     Focus Area: Potential Risks from Advanced AI
  10. Ought — General Support (2018)
      Award Date: 05/2018
      Amount: $525,000
      Focus Area: Potential Risks from Advanced AI