Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars
     Award Date: 11/2022 | Amount: $2,047,268
  2. Berkeley Existential Risk Initiative — General Support (2022)
     Award Date: 11/2022 | Amount: $100,000
  3. Centre for the Governance of AI — Research Assistant
     Award Date: 09/2022 | Amount: $19,200
  4. AI Alignment Awards — Shutdown Problem Contest
     Award Date: 09/2022 | Amount: $75,000
  5. Centre for Effective Altruism — Harvard AI Safety Office
     Award Date: 08/2022 | Amount: $250,000
  6. Fund for Alignment Research — Language Model Misalignment (2022)
     Award Date: 08/2022 | Amount: $463,693
  7. Arizona State University — Adversarial Robustness Research
     Award Date: 08/2022 | Amount: $200,000
  8. Redwood Research — General Support (2022)
     Award Date: 08/2022 | Amount: $10,700,000
  9. UW — Philosophy of AI Course Development
     Award Date: 07/2022 | Amount: $16,500
  10. Center for a New American Security — Work on AI Governance
      Award Date: 07/2022 | Amount: $5,149,398