LessWrong 04-11 15:28
Crash scenario 1: Rapidly mobilise for a 2025 AI crash

The article explores how to effectively organise and mobilise social forces against the backdrop of potentially rapid change in the AI field. The author argues that, faced with a potential AI crisis, traditional organising models may not meet the need, and the shift from organising to mobilising must be accelerated. The article stresses the importance of supporting a diverse range of organisations resisting AI, and proposes strategies for funding these organisations, fostering cross-group cooperation, and driving institutional change. It also considers the geopolitical and economic factors shaping AI development, and the pitfalls to watch for in funding and cooperation so as to avoid negative side effects.

📢 The importance of rapid mobilisation: Faced with potentially rapid changes in the AI field, traditional organising models may not respond in time; the shift from organising to mobilising must be accelerated so that people can respond and act quickly.

🤝 The key role of diverse support: To build a broad-based social movement, the article stresses supporting organisations with differing ideologies and concerns, such as those focused on data rights, workers' rights, investigative research, religious communities, and extinction risk.

💰 The need for funding: The article calls for funding organisations resisting AI so they can scale up and add full-time staff, enabling them to push for change more effectively. It also mentions funding advisors who can provide these organisations with professional support.

⚠️ Avoiding negative effects: The article warns of pitfalls in funding and cooperation, especially when working with tech giants: avoid arousing suspicion among other groups, and keep the use of funds transparent so as not to damage collaborative relationships.

Published on April 11, 2025 6:54 AM GMT

Large movement organising takes time. It takes listening deeply to many communities' concerns, finding consensus around a campaign, ramping up training of organisers, etc.

But what if the AI crash is about to happen? What if US tariffs[1] just triggered a recession that is making consumers and enterprises cut their luxury subscriptions? What if even the sucker VCs stop investing in companies that, after years of billion-dollar losses on compute, now face cheap alternatives to their not-much-improving LLMs?

In that case, we mostly skip organising and jump to mobilisation. But AI Safety has been playing the inside game, and is poorly positioned to mobilise the resistance.

So we need groups that can:

    1. Scale the outside game, meaning a movement pushing for change from the outside.
    2. Promote robust messages, e.g. affirm concerns about tech oligarchs seizing power.
    3. Bridge-build with other groups to coordinate campaigns around shared concerns.
    4. Legitimately pressure and negotiate with institutions to enforce restrictions.

Each group could mobilise a network of supporters fast. But they need money to cover their hours. We have money. Some safety researchers advise tech billionaires. You might have a high-earning tech job. If you won't push for reforms, you can fund groups that do.

You can donate to organisations already resisting AI, so more staff can go full-time.
Some examples:

Their ideologies vary widely, and some are controversial to other groups. By supporting many of them to stand up for their concerns, you can preempt politics becoming polarised, as happened with climate change. A broad-based movement needs many different groups.

At the early signs of a crash, groups need funding to ratchet up actions against weakened AI companies. If you wait, they lose their effectiveness. In this scenario, it is better to seed fund many proactive groups than to hold off.[2] 

Plus you can fund advisors for these groups. The people I have in mind led one of the largest grassroots movements in recent history. I'll introduce them in the next post. 

There is also room for large campaigns grounded in citizens' concerns. These can target illegal and dehumanising activities by leading AI companies. That's also for the next post.

Want to discuss more? Join me on Sunday the 20th. Add this session to your calendar.

  1. ^

    The high tariffs seem partly temporary, meant to pressure countries into better trade deals. Still, AI's hardware supply chains span 3+ continents. So remaining tariffs on goods can put a lasting damper on GPU data center construction. 

    Chaotic tit-for-tat tariffs also further erode people’s trust in and willingness to rely on the US economy, fueling civil unrest and weakening its international ties. The relative decline of the US makes it and its allies vulnerable to land grabs, which may prompt militaries to ramp up contracts for autonomous weapons. State leaders may also react to civil unrest by procuring tools for automated surveillance. So surveillance and autonomous weapons are "growth" opportunities that we can already see AI companies pivot to.

  2. ^

    Supporting other communities unconditionally also builds healthier relations. Leaders working to stop AI's increasing harms are suspicious of us buddying up with and soliciting outsized funds from tech leaders. Those connections and funds give us a position of power, and they do not trust us to wield that power to enable their work. If it even looks like we use our money to selectively influence their communities to do our bidding, that will confirm their suspicions. While in my experience, longtermist grants are unusually hands-off, it only takes one incident. This already happened – last year, a fund suddenly cancelled an already committed grant, for political reasons they didn't clarify. The recipient runs professional activities and has a stellar network. They could have gone public, but instead decided to no longer have anything to do with our community. 
