Over the past few weeks, we’ve been looking for joinable endeavors in AI safety outreach and would like to share our findings with you. Let us know if we missed any and we’ll add them to the list.
For comprehensive directories of AI safety communities spanning general interest, technical focus, and local chapters, check out https://www.aisafety.com/communities and https://www.aisafety.com/map. If you're uncertain where to start, https://aisafety.quest/ offers personalized guidance.
ControlAI
ControlAI started out as a think tank. Over the past few months, they developed a theory of change for how to prevent ASI development (the “Direct Institutional Plan”). As a pilot campaign, they cold-mailed British MPs and Lords to talk to them about AI risk. So far, they have talked to 70 representatives, of whom 31 agreed to publicly stand against ASI development.
ControlAI also supports grassroots activism: on https://controlai.com/take-action, you can find templates to send to your representatives yourself, as well as guides on how to constructively inform people about AI risk. They are also reaching out to influencers and supporting content creation.
While they are the org on this list whose theory of change and actions we found most convincing, they are still at the start of building the infrastructure that would allow them to take in considerable numbers of volunteers. We expect them to respond positively anyway if you reach out to them with requests for talks, training, or similar. You can join the ControlAI Discord here.
ControlAI is currently hiring!
EncodeAI
EncodeAI is an organization of high school and college students that addresses all kinds of AI risks. Their past endeavors and successes include a bipartisan event advocating for anti-deepfake laws, and co-sponsoring SB 1047, California’s landmark AI safety bill, which, had it passed, would have been a tremendous contribution to AI existential safety.
You can find an overview of their past activities here, and join one of their local chapters or start a new one here.
PauseAI
PauseAI is a community-focused organization dedicated to AI safety activism. Their primary aim is to normalize discussions about AI existential risk and advocate for a pause in advanced AI development. They contact policymakers, influencers, and experts, organize protests, hand out leaflets, do tabling, and take on anything else that seems useful. PauseAI also offers microgrants to fund a variety of projects fitting their mission.
We (Ben and Severin) also started running co-working sessions for mailing MPs over the PauseAI Discord, as well as Outreach Conversation Labs where you can practice informing people about AI x-risk via fun mock conversations. Our goal is to empower others rather than become bottlenecks, so we encourage you to organize similar events, whether over the PauseAI Discord, in your local group, or at conferences.
Currently, PauseAI seems to be the org on this list that’s best equipped to absorb new members.
More on https://pauseai.info/. To get involved, you can join their Discord or one of the local groups. To get really involved, you can attend PauseCon from June 27 to 30 in London.
StopAI
Focusing on civil disobedience, StopAI occupies the spicy end of this spectrum. You can follow their YouTube channel to learn more about their protests.
More on https://www.stopai.info/. To get involved, check https://www.stopai.info/join or join their Discord.
Collective Action for Existential Safety (CAES)
CAES’s central aim is to catalyze collective action to ensure humanity survives this decade. It serves all existential safety advocates globally and is more cause-area agnostic than the other organizations on this list. If you want to help with existential risk but are still uncertain which niche suits you best, they’ll help point you in a good direction.
Their website features a list of 80+ concrete actions individuals, organizations, and nations can take to increase humanity’s existential safety in light of risks from advanced AI, nuclear weapons, synthetic biology, and other novel technologies.
More info: existentialsafety.org.
Call to action
These organizations are mostly in their early stages. Accordingly, any effort now is disproportionately impactful. With short timelines and AI risks becoming more salient to the average person, this seems like a great opportunity to take action. And if you are worried that political outreach might go in the wrong direction or even be harmful, this is your chance to shift the trajectory of these endeavors!