CAIS exists to ensure the safe development and deployment of AI
AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid pace of AI development. Society is currently ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.
AI safety is highly neglected. CAIS reduces societal-scale risks from AI through research, field-building, and advocacy.
CAIS conducts research focused solely on improving the safety of AI systems. Through our research initiatives, we aim to identify and address AI safety issues before they become significant concerns.
CAIS grows the AI safety research field through funding, research infrastructure, and educational resources. We aim to create a thriving research ecosystem that will drive progress towards safe AI.
CAIS advises industry leaders, policymakers, and other labs to bring AI safety research into the real world. We aim to build awareness and establish guidelines for the safe and responsible deployment of AI.
We systematically assess our projects so we can quickly scale what works and stop what doesn’t.
1. Prioritize by estimating the expected impact of each project.
2. Pilot the top projects to a point where their impact can be assessed.
3. Evaluate the impact compared to our projections and to other projects.
4. Scale the successful projects, implementing structures to make them efficient and repeatable.
5. Stop the less successful projects and pursue other ideas.
CAIS is organized into functional teams that support our work and approach to AI safety, with over a dozen employees spread across these teams.
Research: performs conceptual and empirical AI safety research.
Field-building: runs field-building projects and manages our collaborations and advising opportunities.
Operations: supports the organization by ensuring we have the right tools, processes, and personnel.