Crash scenario 1: Rapidly mobilise for a 2025 AI market crash


Published on April 11, 2025 6:54 AM GMT

Large movement organising takes time. It takes listening deeply to many communities' concerns, finding consensus around a campaign, ramping up training of organisers, etc.

But what if the AI crash is about to happen? What if US tariffs[1] just triggered a recession that is making consumers and enterprises cut their luxury subscriptions? What if even the sucker VCs stop investing in companies that, after years of billion-dollar losses on compute, now compete with cheap alternatives to their not-much-improving LLMs?

In that case, we mostly skip organising and jump to mobilisation. But AI Safety has been playing the inside game, and is poorly positioned to mobilise the resistance.

So we need groups that can:

    - Scale the outside game, meaning a movement pushing for change from the outside.
    - Promote robust messages, e.g. affirm concerns about tech oligarchs seizing power.
    - Bridge-build with other groups to coordinate campaigns around shared concerns.
    - Legitimately pressure and negotiate with institutions to enforce restrictions.

Each group could mobilise a network of supporters fast. But they need money to cover their hours. We have money. Some safety researchers advise tech billionaires. You might have a high-earning tech job. If you won't push for reforms, you can fund groups that do.

You can donate to organisations already resisting AI, so more staff can go full-time.
Some examples:

Their ideologies vary widely, and some are controversial to other groups. By supporting many of them to stand up for their concerns, you can preempt politics getting divided, as it did around climate change. A broad-based movement needs many different groups.

At the early signs of a crash, groups need funding to ratchet up actions against weakened AI companies. If you wait, they lose their effectiveness. In this scenario, it is better to seed fund many proactive groups than to hold off.[2] 

Plus you can fund advisors for these groups. The people I have in mind led one of the largest grassroots movements in recent history. I'll introduce them in the next post. 

There is also room for large campaigns grounded in citizens' concerns. These can target illegal and dehumanising activities by leading AI companies. That's also for the next post.

Want to discuss more? Join me on Sunday the 20th. Add this session to your calendar.

  1. ^

    The high tariffs seem partly temporary, meant to pressure countries into better trade deals. Still, AI's hardware supply chains span 3+ continents. So remaining tariffs on goods can put a lasting damper on GPU data center construction. 

    Chaotic tit-for-tat tariffs also further erode people’s trust in and willingness to rely on the US economy, fueling civil unrest and eroding its international ties. The relative decline of the US makes it and its allies vulnerable to land grabs, which may prompt militaries to ramp up contracts for autonomous weapons. State leaders may react to civil unrest by procuring tools for automated surveillance. So surveillance and autonomous weapons are "growth" opportunities that we can already see AI companies pivot to.

  2. ^

    Supporting other communities unconditionally also builds healthier relations. Leaders working on AI's increasing harms are suspicious of us buddying up with and soliciting outsized funds from tech leaders. Those connections (and funds) give us a position of power, and they do not trust us to wield that power to enable their work. If it even looks like we use our money to selectively influence their communities to do our bidding, that will confirm their suspicions. While my experience is that longtermist grants are unusually hands-off, it only takes one incident. This already happened – last year, a fund suddenly cancelled an already committed grant, for political reasons they didn't clarify. The recipient runs professional activities and has a stellar network. They could have gone public, but instead decided to no longer have anything to do with our community. 


