LessWrong (少点错误) · July 30, 06:29
Against racing to AGI: Cooperation, deterrence, and catastrophic risks

 

This piece challenges the view that major actors in AI development should, out of self-interest, race to build advanced AI. The authors argue that this "racing" view underestimates the potential risks, overstates the private benefits, and overlooks viable alternatives. Racing to AGI would substantially increase catastrophic risks, such as nuclear instability, and could undermine the effectiveness of AI safety research. Moreover, winning the race may not deliver complete domination, so its expected benefits are overstated. The authors propose that international cooperation, coordination, and carefully crafted deterrence measures are alternatives superior to racing: they carry much smaller risks while promising to deliver most of the benefits that racing is supposed to provide.

💰 The racing view underestimates risks and overstates benefits: The core argument is that racing to AGI underestimates potential catastrophic risks, such as nuclear instability, and may weaken the effectiveness of AI safety research. The authors also question whether winning the race would deliver complete domination over losers, arguing that its private benefits are overstated.

🤝 International cooperation is a viable alternative: The article proposes that international cooperation, coordination, and carefully crafted deterrence measures are better strategies than racing to AGI. These approaches carry much smaller risks while promising to deliver most of the benefits claimed for racing.

🌍 Safeguarding global AI safety and stability: The authors emphasize that advancing AI through cooperation rather than competition helps avoid potential global risks and creates a more favorable environment for effective progress in AI safety research, thereby supporting long-term global stability.

Published on July 29, 2025 10:23 PM GMT

Leonard Dung and I have a new draft – preprint here – arguing against the view that major actors in AI development should, out of self-interest, race to build advanced AI. We argue (roughly) that this pro-racing view 1) underestimates the risks, 2) overestimates the (private) benefits, and 3) neglects alternatives to racing. Needless to say, this view[1] has unfortunately become common recently, hence this piece. We're grateful for constructive criticism and feedback.


Below is the abstract:

AGI Racing is the view that it is in the self-interest of major actors in AI development, especially powerful nations, to accelerate their frontier AI development to build highly capable AI, especially artificial general intelligence (AGI), before competitors have a chance. We argue against AGI Racing. First, the downsides of racing to AGI are much higher than portrayed by this view. Racing to AGI would substantially increase catastrophic risks from AI, including nuclear instability, and undermine the prospects that technical AI safety research will be effective. Second, the expected benefits of racing may be lower than proponents of AGI Racing hold. In particular, it is questionable whether winning the race enables complete domination over losers. Third, international cooperation and coordination, and perhaps carefully crafted deterrence measures, constitute viable alternatives to racing to AGI which have much smaller risks and promise to deliver most of the benefits that racing to AGI is supposed to provide. Hence, racing to AGI is not in anyone’s self-interest as other actions, particularly incentivizing and seeking international cooperation around AI issues, are preferable.

  1. ^

    Aschenbrenner captures the position well here:

    Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?



