The Center for AI Safety is a non-profit organization based in San Francisco, dedicated to research and the advancement of the field of AI safety. We believe that artificial intelligence will be a powerful technology that will dramatically change society, and our aim is to mitigate potential high-consequence risks. We are doing this through:
- Technical and conceptual research
- Promotion of safety within the broader machine learning community
- Collaboration with academics, industry researchers, philosophers, and policy drivers who are uniquely positioned to influence the future of AI safety
We’re seeking a highly skilled project manager who is aligned with our mission to develop and promote the field of AI safety. The ideal candidate will have a strong understanding of project management techniques, such as stakeholder management and project planning, and the ability to adapt to a diverse range of projects. In this role, you will have the opportunity to lead important and impactful projects that are critical to the growth and improvement of the AI safety community.
We pride ourselves on being interdisciplinary, and we think this is a unique and exciting opportunity to be at the forefront of a quickly evolving technical field, regardless of previous technical training.
Example Projects Include:
- Overseeing our 256-GPU supercomputer cluster, including implementing usage policies, managing day-to-day operations and logistics, and communicating effectively with numerous high-powered researchers
- Facilitating the development and large-scale testing of high-quality public-facing safety materials
- Engaging the philosophy community to produce work in conceptual safety (see our philosophy fellowship)
- Driving growth in the machine learning safety community by developing events, activities, and infrastructure (e.g., prizes, workshops, various online resources, training programs, competitions, and so on)
- Exploring potential new projects, including outreach to the international ML community, coordinating university outreach, improving open-source safety development, and so on.
We're looking for someone who is great at:
- Developing and managing project plans including timelines, budgets, and milestones
- Scoping projects and ensuring everyone is clear on the goals and requirements
- Managing a diverse set of stakeholders and keeping everyone on track toward the project goals
- Communicating well verbally and in writing
- Identifying and managing project risks; escalating effectively when necessary
- Working autonomously and thriving in a small and growing organization
- Implementing solutions to increase efficiency across the project management team and organization
You might thrive in this role if you:
- Are motivated by work that is focused on AI safety and risk
- Are conscientious and produce high quality work
- Are adaptive, able to move quickly into new domains while staying organized
- Are excited by the idea of joining a small and growing team, where that kind of collaboration and problem solving drives you
- Are open to new ideas and feedback, honest with yourself and others; at CAIS, even when it’s slightly uncomfortable, we prioritize getting to the best answer
- Are energized to fully own a project; managing the details without losing sight of the bigger picture
Benefits include:
- Health insurance for you and your dependents
- Competitive PTO
- Free lunch and dinner at the office
- Reimbursement for commute fees
- Annual learning & development stipend
- Access to some of the top talent working on technical and philosophical research in the space
- Numerous guest speakers including top philosophy professors and ML researchers
- Personalized ergonomic technology set-up
- Lots of office snacks!
We’re a young organization and currently building out our benefits offerings, so this list will grow!
The salary range for this role is $80,000-$120,000. We are hiring multiple Project Managers and expect to hire at varying levels of seniority, with 2-6 years of experience. We are a small organization in a quickly evolving field, and we believe in-person collaboration is key to our success, so we are looking for candidates who live in the San Francisco Bay Area or are willing to relocate.
If you have any questions about the role, feel free to reach out to email@example.com.
The Center for AI Safety is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Some studies have found that women and underrepresented minority candidates are less likely to apply if they don't meet every listed qualification. The Center for AI Safety values candidates of all backgrounds. If you find yourself excited by the position but you don't check every box in the description, we encourage you to apply anyway!