Berkeley Existential Risk Initiative — Language Model Alignment Research

Organization:
Berkeley Existential Risk Initiative
Award Date:
06/2022
Amount:
$40,000
Purpose:
To support a project to develop a dataset and accompanying methods for language model alignment research.

Open Philanthropy recommended a grant of $40,000 over three years to the Berkeley Existential Risk Initiative to support a project, led by Professor Samuel Bowman of New York University, to develop a dataset and accompanying methods for language model alignment research.

This falls within Open Philanthropy’s focus area of potential risks from advanced artificial intelligence.

The grant amount was updated in April 2024.