Berkeley Existential Risk Initiative — AI Standards (2022)

Organization:
Berkeley Existential Risk Initiative
Award Date:
04/2022
Amount:
$210,000
Purpose:
To support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence.

Open Philanthropy recommended a grant of $210,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work.

This grant follows Open Philanthropy's July 2021 support and falls within its focus area of potential risks from advanced artificial intelligence.
