We conduct impactful research on AI safety.
CAIS conducts technical and conceptual research. Our team develops benchmarks and methods designed to improve the safety of existing systems. We prioritize transparency and accessibility, publishing our findings at top conferences and sharing our resources with the global community.
We are building the AI safety research field.
CAIS builds infrastructure and pathways into AI safety. We empower researchers with compute resources, funding, and educational materials while organizing workshops and competitions to promote safety research. Our goal is to create a thriving research ecosystem that will drive progress toward safe AI.
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
Alexander Pan*, Chan Jun Shern*, Andy Zou*, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, Dan Hendrycks
Scaling Out-of-Distribution Detection for Real-World Settings
Dan Hendrycks*, Steven Basart*, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, Dawn Song
PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures
Dan Hendrycks*, Andy Zou*, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt
Hundreds of AI experts and public figures have expressed their concern about AI risk by signing this open letter, which received global coverage in publications including the New York Times, the Wall Street Journal, and the Washington Post.
To support progress and innovation in AI safety, we offer researchers free access to our compute cluster for training and running large-scale AI systems.
The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI.
The ML Safety course offers a comprehensive introduction to the field, covering topics such as anomaly detection, alignment, and risk engineering.
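To make one of these topics concrete, here is a minimal sketch of the maximum softmax probability (MSP) baseline, a standard starting point for anomaly detection. The model, inputs, and threshold below are illustrative placeholders, not course materials.

import torch
import torch.nn.functional as F

def msp_anomaly_scores(model, inputs):
    """Score inputs with the maximum softmax probability (MSP) baseline.

    The intuition: a classifier tends to be less confident on anomalous
    or out-of-distribution inputs, so a low maximum class probability
    is treated as evidence of an anomaly.
    """
    model.eval()
    with torch.no_grad():
        logits = model(inputs)              # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
        confidence, _ = probs.max(dim=-1)   # top class probability per example
    return -confidence                      # higher score = more anomalous

# Illustrative usage: flag inputs whose score exceeds a threshold tuned
# on held-out in-distribution data.
# scores = msp_anomaly_scores(classifier, batch)
# flagged = scores > threshold

MSP requires no changes to training and works with any softmax classifier, which is why it is commonly taught as the baseline against which more sophisticated detectors are measured.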