Berkeley Existential Risk Initiative — Language Model Alignment Research

Organization: Berkeley Existential Risk Initiative
Award Date: 06/2022
Amount: $30,000
Purpose: To support a project to develop a dataset and accompanying methods for language model alignment research.