Published on May 16, 2025 4:55 PM GMT
This is a crosspost from the South China Morning Post
Few expected it would happen this fast, but there is widespread relief at the news that China and the United States are talking again. In only a few days, renewed trade talks have resulted in temporary but large reductions in the mutual tariffs that seemed so entrenched just a few weeks ago.
Since collaboration, rather than conflict, has proven to be a realistic option for trade, it should also be pursued in a different and perhaps even more crucial domain: the race towards advanced artificial intelligence (AI).
China’s DeepSeek moment came in January, when the Hangzhou-based AI company DeepSeek released both a free chatbot app and a reasoning model called R1. The app quickly became a global hit, dethroning OpenAI’s ChatGPT as the most downloaded app on Apple’s US App Store within two weeks. The moment demonstrated to the world what experts already knew: despite US-imposed hardware restrictions, Chinese firms are close behind leading US AI companies such as OpenAI and Google DeepMind, and in some respects even ahead.
The AI revolution is starting to take shape. AI’s capabilities are steadily increasing, and further breakthroughs could take us all the way to human-level AI, also called artificial general intelligence (AGI). AGI, which could perform a broad range of cognitive tasks at human level, would bring huge opportunities but equally large risks.
On the plus side, AI could generate significant economic growth. If we manage to spread this wealth fairly, it could lift billions of people out of poverty. In addition, AI might help improve global education, healthcare and efforts to fight climate change.
Impressive as these promises sound, the risks of human-level AI are perhaps even bigger. If AGI outperforms humans at enough jobs, including any new jobs a growing economy might generate, the result could be mass unemployment, and with it greater inequality and social unrest. In addition, AGI could help terrorists build bioweapons or commit large-scale cyberattacks.
However, a risk that worries experts even more is loss of control: AI could become powerful enough that no human could stop a system from pursuing possibly adverse goals. A recent paper recommended by University of Montreal professor Yoshua Bengio, the world’s most-cited computer scientist, illustrates just such a scenario. According to authors Richard Ngo, Lawrence Chan and Soren Mindermann, “AGIs could design novel weapons that are more powerful than those under human control, gain access to facilities for manufacturing these weapons (e.g. via hacking or persuasion techniques) and deploy them to extort or attack humans.”
Even if AGI remains under human control, there is a risk that countries will sabotage each other’s AGI projects for fear of dominance by the other side. According to leading AI thinkers, including former Google CEO Eric Schmidt, this could lead to mutually assured AI malfunction, akin to the mutually assured nuclear destruction of the Cold War era. It would be a profoundly uneasy balance, with a risk of further escalation into a hot war.
To avoid this dangerous instability, China and the US must strike a deal on AGI. Chinese President Xi Jinping and US President Donald Trump should sign an AI safety treaty stipulating that no unsafe AI may be developed. Such a treaty should be conditional: it would restrict AI development only if and when models come too close to being able to cause global threats.
This is in line with the concept of if-then policy commitments recommended by many prominent AI scientists, as well as with the responsible scaling policies already embraced by leading AI labs. Once China and the US lead, the rest of the world will follow.
Whether a model is safe should be determined by a global network of AI safety institutes. Such a network already exists, but it leaves out China, which must be included. Frontier model developers’ cooperation with these institutes can be enforced through the governance of computing power. After a deal, such governance could operate on equal terms rather than through the unilaterally imposed hardware controls currently in place.
An AI safety treaty would solve the problems of coordination and timing. The US and China might individually be unwilling to regulate AI for fear of giving the other side an advantage. A treaty, however, is the classic solution to such a coordination problem.
Furthermore, we do not know when AGI and its associated risks will arrive. Some tech CEOs and academics say it could happen in as little as two to three years, while sceptics think it could still take a decade. A conditional AI safety treaty takes this uncertainty into account by limiting development only when serious threats such as loss of control get too close, thereby solving the timing problem.
There is broad agreement that regulating future AI dangers is one of the most important issues of our century. Andrew Yao Chi-Chih, the only Chinese winner of the Turing Award and a professor at Tsinghua University, has said AI poses a greater existential risk to humans than nuclear or biological weapons.
Xi and Trump should sit down and talk. When they do, they should use the current window of opportunity to negotiate what is clearly in the interest of both their countries and of the entire world: a conditional AI safety treaty. AI regulation is crucial for our century. History will not forgive either leader for faltering here.