Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund on our grantmaking process page.

  1. Stanford University — LLM-Generated Research Ideation Benchmark
     Award Date: 05/2024 · Amount: $880,000 · Focus Area: Potential Risks from Advanced AI
  2. Northeastern University — Large Language Model Interpretability Research (2024)
     Award Date: 05/2024 · Amount: $1,095,017 · Focus Area: Potential Risks from Advanced AI
  3. Carnegie Mellon University — LLM Use Case Database
     Award Date: 05/2024 · Amount: $266,805 · Focus Area: Potential Risks from Advanced AI
  4. Metaculus — Forecasting Tournaments
     Award Date: 05/2024 · Amount: $532,400 · Focus Area: Potential Risks from Advanced AI
  5. Sage — AI Explainers
     Award Date: 05/2024 · Amount: $550,000 · Focus Area: Potential Risks from Advanced AI
  6. AI Standards Lab — AI Standards and Risk Management Frameworks
     Award Date: 05/2024 · Amount: $200,000 · Focus Area: Potential Risks from Advanced AI
  7. Apollo Research — General Support
     Award Date: 05/2024 · Amount: $2,178,700 · Focus Area: Potential Risks from Advanced AI
  8. Rethink Priorities — Research on LLM Use
     Award Date: 04/2024 · Amount: $115,887 · Focus Area: Potential Risks from Advanced AI
  9. University of Wisconsin–Madison — Scalable Oversight Research
     Award Date: 04/2024 · Amount: $100,000 · Focus Area: Potential Risks from Advanced AI
  10. Institute for AI Policy and Strategy — General Support (April 2024)
      Award Date: 04/2024 · Amount: $828,049 · Focus Area: Potential Risks from Advanced AI