US-China trade talks should pave way for AI safety treaty

The article examines the importance of US-China cooperation on artificial intelligence (AI), and in particular how, against the backdrop of rapid AI progress, a safety treaty could address the technology's potential risks. It notes that China has made remarkable advances in AI and calls on the two countries to follow the cooperative model of their trade talks by jointly establishing a conditional AI safety treaty. Such a treaty would regulate AI development, guard against loss-of-control risks and promote global cooperation on AI safety. The article weighs AI's potential for economic growth against the threats of unemployment, social inequality and security harms, argues that US-China cooperation is essential to solving the AI safety problem, and urges both countries' leaders to seize the moment and sign an AI safety treaty to safeguard global security.

💡 China has made remarkable progress in artificial intelligence. DeepSeek's chatbot app, for example, quickly overtook OpenAI's ChatGPT in downloads, showing that Chinese firms have achieved striking results in AI and narrowed the gap with the leading US AI companies.

⚠️ AI development brings enormous opportunities but also enormous risks. On the one hand, AI could drive economic growth and improve education, healthcare and the fight against climate change; on the other, it could cause mass unemployment and social inequality, or even be used to build biological weapons or mount cyberattacks.

💥 Experts are especially concerned about the risk of losing control of AI, meaning that humans could no longer stop an AI system from pursuing harmful goals. The article cites research suggesting that AGI (artificial general intelligence) could design weapons more powerful than those under human control and use a variety of means to attack humans.

🤝 To avert these potential risks, China and the US should sign an AI safety treaty. The treaty should be conditional, taking effect only when AI models come close to being able to cause global threats. The article stresses that US-China cooperation is the key to solving the AI safety problem and calls on both countries' leaders to seize the moment and build the treaty together.

Published on May 16, 2025 4:55 PM GMT

This is a crosspost from the South China Morning Post.

Few expected it would happen this fast, but there is widespread relief at the news that China and the United States are talking again. In only a few days, renewed trade talks have produced temporary but large reductions in the mutual tariffs that seemed so entrenched just a few weeks ago.

Since collaboration, rather than conflict, has proven to be a realistic option for trade, it should also be pursued in a different and perhaps even more crucial domain: the race towards advanced artificial intelligence (AI).

China’s DeepSeek moment came in January, when the Hangzhou-based AI company DeepSeek released both a free chatbot app and a reasoning model called R1. Within two weeks, the app became a global hit, dethroning OpenAI’s ChatGPT as the most downloaded app on Apple’s US app store. The moment demonstrated to the world what experts already knew: despite US-imposed hardware restrictions, Chinese firms are close behind leading US AI companies such as OpenAI and Google DeepMind, and in some respects even ahead.

The AI revolution is starting to take shape. AI’s capabilities are steadily increasing, and further breakthroughs could take us all the way to human-level AI, also called artificial general intelligence (AGI). AGI, which could perform a broad range of cognitive tasks at human level, would bring huge opportunities but equally large risks.

On the plus side, AI could generate significant economic growth. If we manage to spread this wealth fairly, it could lift billions of people out of poverty. In addition, AI might help improve global education, healthcare and efforts to fight climate change.

Impressive as these promises sound, the risks of human-level AI are perhaps even bigger. If AGI outperforms us at enough jobs, including any new jobs a growing economy might generate, the result could be mass unemployment, giving rise to greater inequality and social unrest. In addition, AGI could help terrorists build bioweapons or commit large-scale cyberattacks.

However, a risk that is even more worrying to experts is that AI could become powerful enough to create “loss of control”, meaning no human could stop an AI system from reaching possibly adverse goals. A recent paper recommended by University of Montreal professor Yoshua Bengio, the world’s most-cited computer scientist, illustrates just such a scenario. According to authors Richard Ngo, Lawrence Chan and Soren Mindermann, “AGIs could design novel weapons that are more powerful than those under human control, gain access to facilities for manufacturing these weapons (e.g. via hacking or persuasion techniques) and deploy them to extort or attack humans.”

Even if AGI remains human-controlled, there is a risk that countries will sabotage each other’s AGI projects for fear of dominance by the other side. According to leading AI thinkers, including former Google CEO Eric Schmidt, this could lead to mutually assured AI malfunction, similar to the nuclear mutually assured destruction of the Cold War era. This would be a profoundly uneasy balance with a risk of further escalation into a hot war.

To avoid this dangerous instability, China and the US must cut a deal on AGI. Chinese President Xi Jinping and US President Donald Trump must sign an AI safety treaty, stipulating that no unsafe AI may be developed. Such a treaty should be conditional, meaning it only affects AI development if and when models get too close to being able to cause global threats.

This is in line with the concept of if-then policy commitments, recommended by many prominent AI scientists, as well as responsible scaling policies already embraced by many leading AI labs. Once China and the US lead, the rest of the world will follow.
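To make the conditional mechanism concrete, here is a minimal sketch in Python of how an if-then commitment could be expressed. Everything in it is hypothetical: the `IfThenCommitment` structure, the capability names, thresholds and responses are illustrative stand-ins, not drawn from any actual treaty text, lab scaling policy or safety-institute evaluation.

```python
# Toy sketch of an "if-then" policy commitment. All capability names,
# thresholds and responses below are hypothetical illustrations, not
# values from any real treaty, lab policy or evaluation suite.

from dataclasses import dataclass


@dataclass
class IfThenCommitment:
    """If an evaluated capability crosses its threshold, a pre-agreed
    restriction takes effect; below it, development is unaffected."""
    capability: str   # name of a dangerous-capability evaluation
    threshold: float  # score at which the commitment triggers
    response: str     # pre-agreed action once triggered

    def applies(self, score: float) -> bool:
        return score >= self.threshold


# Hypothetical commitments; real ones would be negotiated by treaty
# and measured by independent AI safety institutes.
COMMITMENTS = [
    IfThenCommitment("autonomous-replication", 0.8,
                     "halt deployment pending joint review"),
    IfThenCommitment("bioweapon-uplift", 0.5,
                     "restrict access to model weights"),
]


def triggered_responses(evals: dict[str, float]) -> list[str]:
    """Return the pre-agreed responses a model's evaluation scores
    trigger. An empty list means the treaty imposes no restriction."""
    return [c.response for c in COMMITMENTS
            if c.applies(evals.get(c.capability, 0.0))]


if __name__ == "__main__":
    # Far below every threshold: the conditional treaty does nothing.
    print(triggered_responses({"autonomous-replication": 0.2}))  # []
    # Crossing a threshold activates the pre-agreed response.
    print(triggered_responses({"autonomous-replication": 0.9}))
```

The shape of the sketch is the point the article makes: below every threshold, the treaty imposes nothing and development proceeds unhindered; only when an evaluation crosses a pre-agreed line does a pre-agreed response take effect.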

Whether a model is safe should be determined by a global network of AI safety institutes. Such a network already exists, but it leaves out China, which must be included. The cooperation of frontier model developers with these institutes can be enforced through the governance of computing power. After a deal, such governance could operate on equal terms, rather than through the unilaterally imposed hardware controls currently in place.

An AI safety treaty would solve the problems of coordination and timing. The US and China might individually be unwilling to regulate AI since they fear it would give the other party an advantage. However, a treaty is a classic solution to such a coordination problem.

Furthermore, we do not know when AGI and its associated risks will arrive. Some tech CEOs and academics say it could happen in as few as two to three years, while sceptics think it could still take a decade. A conditional AI safety treaty takes this uncertainty into account by limiting development only when serious threats such as loss of control get too close, thereby solving the timing problem.

It appears to be broadly agreed that regulating future AI dangers is one of the most important issues of our century. Andrew Yao Chi-Chih, the only Chinese winner of the Turing Award and a professor at Tsinghua University, has said AI poses a greater existential risk to humans than nuclear or biological weapons.

Xi and Trump should sit down and talk. When they do so, they should use the current window of opportunity to negotiate what is clearly in the interest of both their countries and that of the entire world: a conditional AI safety treaty. AI regulation is crucial for our century. History will not forgive either leader for faltering here.


