AI Safety Support — SERI MATS 4.0

Organization:
AI Safety Support
Award Date:
06/2023
Amount:
$1,207,840
Purpose:
To support the Machine Learning Alignment Theory Scholars program.

Open Philanthropy recommended two grants totaling $1,207,840 to AI Safety Support to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with in-person alignment research communities.

These grants will support the MATS program’s fourth cohort. They follow Open Philanthropy’s November 2022 support for the previous iteration of MATS, and fall within Open Philanthropy’s focus area of potential risks from advanced artificial intelligence. Open Philanthropy also made a separate grant to the Berkeley Existential Risk Initiative to support this cohort.