Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. EleutherAI — Interpretability Research
     Award Date: 11/2023 · Amount: $2,642,273
  2. Berkeley Existential Risk Initiative — University Collaboration Program
     Award Date: 10/2023 · Amount: $70,000
  3. RAND Corporation — Emerging Technology Initiatives
     Award Date: 10/2023 · Amount: $10,500,000
  4. Northeastern University — Mechanistic Interpretability Research
     Award Date: 09/2023 · Amount: $116,072
  5. OpenMined — Software for AI Audits
     Award Date: 09/2023 · Amount: $6,000,000
  6. FAR AI — Alignment Workshop
     Award Date: 09/2023 · Amount: $166,500
  7. University of Pennsylvania — AI Governance Roundtables
     Award Date: 09/2023 · Amount: $110,000
  8. Effective Ventures Foundation — AI Safety Communications Centre
     Award Date: 08/2023 · Amount: $288,000
  9. Guide Labs — Open Access Interpretability Project
     Award Date: 08/2023 · Amount: $750,000
  10. Berkeley Existential Risk Initiative — Scalable Oversight Dataset
      Award Date: 08/2023 · Amount: $70,000