Reducing Societal-Scale Risks from AI

The Center for AI Safety is a research and field-building nonprofit.

The Center for AI Safety (CAIS, pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely. Yet despite dramatic progress in AI capabilities, many basic problems in AI safety remain unsolved. Our mission is to reduce societal-scale risks from AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.

Featured CAIS Work

AI Safety Field-Building

CAIS Compute Cluster

Enabling ML safety research at scale

To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which can run and train large-scale AI systems.

Philosophy Fellowship

Tackling conceptual issues in AI safety

The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI.

ML Safety Course

Reducing barriers to entry in ML safety

The ML Safety course offers a comprehensive introduction to the field, covering topics such as anomaly detection, alignment, and risk engineering.

Dan Hendrycks

Director, Center for AI Safety
PhD Computer Science, UC Berkeley

"Preventing extreme risks from AI requires more than just technical work, so CAIS takes a multidisciplinary approach working across academic disciplines, public and private entities, and with the general public."

Risks from AI

Artificial intelligence has the potential to benefit and advance society. Like any other powerful technology, however, it also carries inherent risks, some of them potentially catastrophic.

Current AI Systems

Current systems can already pass the bar exam, write code, fold proteins, and even explain humor.

AI Safety

As AI systems become more advanced and embedded in society, it becomes increasingly important to address and mitigate these risks. By prioritizing the development of safe and responsible AI practices, we can unlock the full potential of this technology for the benefit of humanity.

Our Research

We conduct impactful research aimed at improving the safety of AI systems.

Technical Research

At the Center for AI Safety, our research focuses exclusively on mitigating societal-scale risks posed by AI. As a technical research laboratory:

  • We create foundational benchmarks and methods that lay the groundwork for the scientific community to address these technical challenges.
  • We ensure our work is public and accessible. We publish in top ML conferences and always release our datasets and code.

Conceptual Research

In addition to our technical research, we also explore the less formalized aspects of AI safety.

  • We pursue conceptual research that examines AI safety from a multidisciplinary perspective, incorporating insights from safety engineering, complex systems, international relations, philosophy, and other fields.
  • Through our conceptual research, we create frameworks that aid in understanding the current technical challenges and publish papers which provide insight into the societal risks posed by future AI systems.

CAIS Open Letter

Statement on AI Risk

CAIS authored a global statement on AI risk signed by 600+ leading AI researchers and public figures.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signatories:

AI Scientists:

  • Geoffrey Hinton (Emeritus Professor of Computer Science, University of Toronto)
  • Yoshua Bengio (Professor of Computer Science, U. Montreal / Mila)
  • Demis Hassabis (CEO, Google DeepMind)
  • Dario Amodei (CEO, Anthropic)
  • Dawn Song (Professor of Computer Science, UC Berkeley)
  • Ya-Qin Zhang (Professor and Dean, AIR, Tsinghua University)
  • Ilya Sutskever (Co-Founder and Chief Scientist, OpenAI)
  • Igor Babuschkin (Co-Founder, xAI)
  • Shane Legg (Chief AGI Scientist and Co-Founder, Google DeepMind)
  • James Manyika (SVP, Research, Technology and Society, Google-Alphabet)
  • Yi Zeng (Professor and Director of Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences)
  • Xianyuan Zhan (Assistant Professor, Tsinghua University)

Other Notable Figures:

  • Sam Altman (CEO, OpenAI)
  • Ted Lieu (Congressman, US House of Representatives)
  • Bill Gates (Gates Ventures)
  • Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford)

Learn more about CAIS

Frequently Asked Questions

We have compiled a list of frequently asked questions to help you find the answers you need quickly and easily.

  • What does CAIS do?
  • Where is CAIS located?
  • What does CAIS mean by field-building?
  • How can I support CAIS and get involved?
  • How does CAIS choose which projects it works on?
  • Where can I learn more about the research CAIS is doing?