Contain and verify: The endgame of US-China AI competition
This article examines the US-China competition in artificial intelligence (AI), arguing that it is better understood as a game of "containment" than as a simple race. The author's point is that the US goal is not merely to be first to develop powerful AI systems, but to prevent China from reaching certain levels of AI capability, in order to safeguard national security. The article stresses the importance of "verifiable non-development": methods for confirming that China has given up on developing AGI (artificial general intelligence). Finally, it recommends lowering the stakes of the competition, strengthening international cooperation, and establishing AI safety standards to achieve a safer, more stable future.

🤔 The heart of the US-China AI competition is "containment", not simply a "race". The US goal is not merely to develop powerful AI first, but to prevent China from reaching certain key AI capability levels in order to protect its own security - for example, keeping China from building AI that could disrupt US nuclear command and control.

💡 "Containment" means the US needs to stop China from making certain AI advances. Even if the US holds the technological lead, a threatening AI system in Chinese hands would still be a problem for the US.

✅ "Verifiable non-development" is key. The article emphasizes that the US needs reliable methods to confirm whether China has stopped developing AGI, along with international treaties to enforce such measures - otherwise China, in trying to catch up, might eventually surpass the US or even lose control of its own systems.

🤝 Lowering the stakes of the competition and strengthening international cooperation are essential. The article suggests "hardening" the world so it is less vulnerable to powerful AI, and cooperating on international safety standards, to reduce the risks the competition creates.

Published on May 22, 2025 8:13 AM GMT

Some competitions have a clear win condition: In a race, be the first to cross a finish line.

The US-China AI competition isn’t like this. It’s not enough to be the first to get a powerful AI system.

So, what is necessary for a good outcome from the US-China AI competition?

I thought about this all the time as a researcher on OpenAI’s AGI Readiness team: If the US races to develop powerful AI before China - and even succeeds at doing so safely - what happens next? The endgame is still pretty complicated, even if we’ve “won” the race by getting to AGI first.

I suggest two reframes on the US-China AI race: containment and verifiably yielding.

By “containment,” I mean that a good outcome for the US might require stopping China from ever reaching a certain level of AI capability. It isn’t enough for the US to get there first. For instance, it’s an issue if China builds AI that can disrupt US nuclear command and control even a small percentage of the time. This is true even if the US has a system that can more reliably disrupt theirs. There are some types of AI the US wants China never to develop - and likewise, that China wants the US never to develop: the interest in containment is mutual.

By “verifiably yielding,” I mean that the US must be confident that China is not continuing to try to build powerful AI. Otherwise, China might eventually surpass US systems or incur other risks, like losing control over their AI system in the rush to catch up. Unfortunately, methods for “verifiable non-development” - confirming that another party isn’t building AGI - are very understudied. We need to invest heavily in developing these methods and creating treaties that can enforce them: Otherwise, even if we “win” the race to certain powerful abilities, we won’t have good ways to confirm that China has given up on pursuing AGI. (These methods can also be useful for slowing or avoiding the race ahead of time, if countries can verify that the other is not developing AGI.)

Given how high the stakes are perceived to be, getting China to yield might require the US to take a truly dominant lead. Such a dominant lead is far from assured, even if the US believes it could ultimately outrace China.

Both nations would benefit from lowering the stakes of the competition - like “hardening” the world so it’s less vulnerable to the risks of powerful AI, and cooperating on international safety standards.

~~~~

Continues here:

Twitter post: https://x.com/sjgadler/status/1925372613721038910

Thank you to Justis Mills of LessWrong’s feedback service, among others in the Acknowledgements.
 


 


