We’re not prepared for an AI market crash

 

The article explores a possible market crash in the AI field and stresses how unprepared the community is for it. The author points out that heavy losses at companies such as OpenAI and Anthropic, combined with competition from cheap LLMs, could trigger a crash. The article analyses the failure of the 'inside game' strategy and the shift in public perception of AI risk. To respond to a potential crash, the author suggests funding a counter-movement outside of AI Safety and preparing for a market crisis, calling on the community to act rather than be caught flat-footed.

💡 The AI field faces the risk of a market crash: companies such as OpenAI and Anthropic are losing more than $5 billion a year while also facing competition from cheap LLMs, which could lead to a crash.

⚠️ The 'inside game' strategy is failing: the article argues that pushing for AI safety from inside institutions has already failed, citing OpenAI disbanding its superalignment team and the AI Safety Summit being renamed the AI Action Summit.

📉 Public skepticism of AI will reach its peak: in an economic downturn, doubts about AI risk claims will grow, making the strategy of promoting "powerful AI" hard to pull off.

🤝 The community lacks crisis preparedness: the AI Safety community is limited in its ability to build consensus and take action, lacks experienced bridge-builders and mediators, and struggles to coordinate with other groups.

💰 Response strategy: the author suggests funding a counter-movement outside of AI Safety to prepare for a market crash, and argues that acting while the industry is in its early, weakened stage is better than acting after it gains autonomous capabilities.

Published on April 1, 2025 4:33 AM GMT

Our community is not prepared for an AI crash. We're good at tracking new capability developments, but not so much the companies' financials. Currently, both OpenAI and Anthropic are losing $5 billion+ a year, while under threat of losing users to cheap LLMs.

A crash will weaken the labs. Funding-deprived and distracted, execs will struggle to counter coordinated efforts to restrict their reckless actions. Journalists will turn on tech darlings. Optimism will give way to mass outrage over all the wasted money and reckless harms.

You may not think a crash is likely. But if it happens, we can turn the tide.

Preparing for a crash is our best bet.[1] But our community is poorly positioned to respond. Core people have positioned themselves inside institutions to advise on how to maybe make AI 'safe', under the assumption that models will rapidly become generally useful.

After a crash, this no longer works, for at least four reasons:

1. The 'inside game' approach is already failing. To give examples: OpenAI ended its superalignment team, and Anthropic is releasing agents. The US is demolishing the AI Safety Institute, and its UK counterpart was renamed the AI Security Institute. The AI Safety Summit is now called the AI Action Summit. Need we go on?

2. In the economic trough, skepticism of AI will reach its peak. People will dismiss and ridicule us for talking about risks of powerful AI. I'd say that promoting the “powerful AI” framing to an audience that contains power-hungry entrepreneurs and politicians never was a winning strategy. But it sure was believable when ChatGPT took off. Once OpenAI loses more money than it can recoup through VC rounds and its new compute provider goes bankrupt, the message just falls flat.

3. Even if we change our messaging, it won't be enough to reach broad-based public agreement. To create lasting institutional reforms (that powerful tech lobbies cannot undermine), various civic groups that often oppose each other need to reach consensus. Unfortunately, AI Safety is rather insular, and lacks experienced bridge-builders and facilitators who can listen to the concerns of different communities and support coordinated action between them.

4. To overhaul institutions that are failing us, more confrontational tactics like civil disobedience may be needed. Such actions are often seen as radical in their time (e.g. as civil rights marches were). The AI Safety community lacks the training and mindset to lead such actions, and may not even want to associate itself with people taking them. Conversely, many of the people taking such actions may not want to associate with AI Safety. The reasons are various: safety researchers and funders collaborated with the labs, while neglecting already harmed communities and ignoring the value of religious worldviews.

As things stand, we’ll get caught flat-footed.

One way to prepare is to fund a counter-movement outside of AI Safety. I'm assisting experienced organisers in making plans. I hope to share details before a crash happens.[2]
 

  1. ^

    Preparing for a warning shot is another option. This is dicey, though, given that: (1) we don't know when or how it will happen; (2) a convincing enough warning shot implies that models are already gaining the capacity for huge impacts, making it even harder to prepare for the changed world that results; (3) in a world with such resourceful AI, the industry could still garner political and financial backing to continue developing supposedly safer versions; and (4) we should not rely on rational action following a (near-)catastrophe, given that even tech with little upside has continued to be developed after being traced back to maybe having caused a catastrophe (e.g. virus gain-of-function research).

    Overall, I'd prefer not to wait until the point where lots of people might die before trying to restrict AI corporations. I think an early period of industry weakness is a better moment to campaign than the point where the industry gains AI with autonomous capabilities. Maybe I'm missing other options (please share), but this is why I think preparing for a market crash is our best bet.

  2. ^

    We're starting to see signs that investments cannot swell much further. E.g. OpenAI's latest VC round is led by a poorly regarded firm that must borrow money to invest at a staggering valuation of $300 billion. Also, OpenAI buys compute from CoreWeave, a debt-ridden company that recently had a disappointing IPO. I think we're in the late stage of the bubble, which is most likely to pop by 2027.


