Berkeley Existential Risk Initiative — AI Standards

Organization:
Berkeley Existential Risk Initiative
Award Date:
07/2021
Amount:
$300,000
Purpose:
To support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence.

Open Philanthropy recommended a grant of $300,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work.

This grant follows Open Philanthropy’s January 2020 support for the Berkeley Existential Risk Initiative and falls within its focus area of potential risks from advanced artificial intelligence.
