Overview: AI Safety Outreach Grassroots Orgs

Published on May 4, 2025 5:39 PM GMT

We’ve been looking for joinable endeavors in AI safety outreach over the past weeks and would like to share our findings with you. Let us know if we missed any and we’ll add them to the list.

For comprehensive directories of AI safety communities spanning general interest, technical focus, and local chapters, check out https://www.aisafety.com/communities and https://www.aisafety.com/map. If you're uncertain where to start, https://aisafety.quest/ offers personalized guidance.

ControlAI

ControlAI started out as a think tank. Over the past few months, they developed a theory of change for preventing ASI development (the “Direct Institutional Plan”). As a pilot campaign, they cold-emailed British MPs and Lords to talk to them about AI risk. So far, they have talked to 70 representatives, 31 of whom agreed to publicly stand against ASI development.

ControlAI is also supporting grassroots activism: on https://controlai.com/take-action, you can find templates to send to your representatives yourself, as well as guides on how to constructively inform people about AI risk. They are also reaching out to influencers and supporting content creation.

While they are the org on this list whose theory of change and actions we found most convincing, they are, so far, still at the start of building the infrastructure that would allow them to take in considerable numbers of volunteers. We expect them to react positively anyway if you reach out to them with requests for talks, training, or similar. You can join the ControlAI Discord here.

ControlAI is currently hiring!

EncodeAI

EncodeAI is an organization of high school and college students that addresses all kinds of AI risks. Their past endeavors and successes include a bipartisan event advocating for anti-deepfake laws, and co-sponsoring SB 1047, California’s landmark AI safety bill, which, had it been signed into law, would have been a tremendous contribution to AI existential safety.

You can find an overview of their past activities here and join their local chapters or start a new one here.

PauseAI

PauseAI is a community-focused organization dedicated to AI safety activism. Their primary aim is to normalize discussions about AI existential risk and to advocate for a pause on advanced AI development. They contact policymakers, influencers, and experts, organize protests, hand out leaflets, do tabling, and do anything else that seems useful. PauseAI also offers microgrants to fund a variety of projects fitting their mission.

We (Ben and Severin) have also started running co-working sessions for mailing MPs over the PauseAI Discord, as well as Outreach Conversation Labs where you can practice informing people about AI x-risk via fun mock conversations. Our goal is to empower others rather than become bottlenecks, so we encourage you to organize similar events, whether over the PauseAI Discord, in your local group, or at conferences.

Currently, PauseAI seems to be the org on this list that’s best equipped to absorb new members.

More on https://pauseai.info/. To get involved, you can join their Discord or one of the local groups. To get really involved, you can attend PauseCon from June 27 to 30 in London.

StopAI

Focusing on civil disobedience, StopAI is at the spicy end of this spectrum. You can follow their YouTube channel to learn more about their protests.

More on https://www.stopai.info/. To get involved, check https://www.stopai.info/join or join their Discord.

Collective Action for Existential Safety (CAES)

CAES’s central aim is to catalyze collective action to ensure humanity survives this decade. It serves all existential safety advocates globally and is more cause-area agnostic than the other organizations on this list. If you want to help with existential risk but are still uncertain which niche suits you best, they’ll help point you in a good direction.

Their website features a list of 80+ concrete actions individuals, organizations, and nations can take to increase humanity’s existential safety in light of risks from advanced AI, nuclear weapons, synthetic biology, and other novel technologies. 

More info: existentialsafety.org.

Call to action

These organizations are mostly in their early stages. Accordingly, any effort now is disproportionately impactful. With short timelines and AI risks becoming more salient to the average person, taking action here seems like a great opportunity. And if you are worried that political outreach won’t go in the right direction or might even be harmful, this is your chance to shift the trajectory of these endeavors!


