Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

| # | Grant | Award Date | Amount | Focus Area |
|---|-------|------------|--------|------------|
| 1 | Brian Christian — Psychology Research | 02/2023 | $37,903 | Potential Risks from Advanced AI |
| 2 | Cornell University — AI Safety Research | 02/2023 | $342,645 | Potential Risks from Advanced AI |
| 3 | University of Tuebingen — Adversarial Robustness Research | 02/2023 | $575,000 | Potential Risks from Advanced AI |
| 4 | Responsible AI Collaborative — AI Incident Database | 02/2023 | $100,000 | Potential Risks from Advanced AI |
| 5 | Center for AI Safety — Philosophy Fellowship and NeurIPS Prizes | 02/2023 | $1,433,000 | Potential Risks from Advanced AI |
| 6 | Epoch — AI Worldview Investigations | 02/2023 | $188,558 | Potential Risks from Advanced AI |
| 7 | University of Toronto — Alignment Research | 01/2023 | $80,000 | Potential Risks from Advanced AI |
| 8 | University of British Columbia — AI Alignment Research | 01/2023 | $100,375 | Potential Risks from Advanced AI |
| 9 | University of California Santa Cruz — Adversarial Robustness Research (2023) | 01/2023 | $114,000 | Potential Risks from Advanced AI |
| 10 | Purdue University — Language Model Research | 12/2022 | $170,000 | Potential Risks from Advanced AI |