The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely. Our mission is to reduce societal-scale risks associated with AI by advancing safety research, building the field of AI safety researchers, and promoting safety standards.
Enabling ML safety research at scale
To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which is capable of training and running large-scale AI systems.
Tackling conceptual issues in AI safety
The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI.
Reducing barriers to entry in ML safety
The ML Safety course offers a comprehensive introduction to ML safety, covering topics such as anomaly detection, alignment, and risk engineering.
Director, Center for AI Safety
PhD Computer Science, UC Berkeley
Current systems can already pass the bar exam, write code, fold proteins, and even explain humor. Like any other powerful technology, AI also carries inherent risks, including some that are potentially catastrophic.
As AI systems become more advanced and embedded in society, it becomes increasingly important to address and mitigate these risks. By prioritizing the development of safe and responsible AI practices, we can unlock the full potential of this technology for the benefit of humanity.
At the Center for AI Safety, we are a technical research laboratory whose work focuses exclusively on mitigating societal-scale risks posed by AI.
In addition to our technical research, we also explore the less formalized aspects of AI safety.
We have compiled a list of frequently asked questions to help you find answers quickly and easily.