Outcomes of the Geopolitical Singularity

This article explores how the rapid development of advanced artificial intelligence (AI) could shift the balance of military and political power, and the unstable state that could result. It analyzes three main stable outcomes of this transition period: a unipolar order, a multipolar order, and existential catastrophe. The article examines the likelihood of each outcome and offers concrete illustrative cases, such as a secret ASI project leading to hegemony, an internationally cooperative ASI project, and a nuclear war triggered by loss of control over AI. Finally, it discusses the policy implications of the different outcomes, stressing that the pursuit of AI development must be paired with safety, transparency, and international cooperation.

🚀 The development of advanced AI could drive a major shift in military and political power, allowing a nation with a technological lead to control or even disable the militaries of other nations, including their nuclear arsenals, creating an extremely unstable situation.

🌍 The article proposes three possible stable outcomes: a **unipolar order** (one nation controls the future), a **multipolar order** (nations remain balanced through the end of the 21st century), and **existential catastrophe** (human civilization is destroyed or disempowered). Each outcome comes with concrete illustrative cases, such as a secret ASI project leading to hegemony or an internationally cooperative ASI project.

💥 Pursuing a unipolar order requires extreme secrecy, accelerated development, heavy investment in military R&D, and a hawkish posture. However, this approach can also come at the expense of safety measures, raising the risk of existential catastrophe, such as loss of control over AI or nuclear war.

🤝 Achieving a multipolar order requires effective verification mechanisms, broad awareness of ASI risks, transparency, multinational agreements, and trust-building among nuclear weapons states. This approach emphasizes international cooperation to avert potentially catastrophic outcomes.

Published on May 20, 2025 6:09 PM GMT

We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.

As different nations gain access to new advanced technologies, nations with a relative lead will amass huge power over those left behind. The difference in technological capabilities between nations might become so large that one nation possesses the capability to disable the militaries (including nuclear arsenals) of other countries with minimal retaliation. While all this is happening, AIs may be accruing power of their own.

The unstable state in which the relative power of nations is shifting dramatically cannot be sustained for long. In particular, it’s implausible that multiple groups will possess a DSA (decisive strategic advantage) at separate moments in time without using it to entrench their influence over the future.

There are three main types of stable outcomes for this transition period: unipolar, multipolar[1], and existential catastrophe.

Here are some (non-comprehensive) illustrative stories for each outcome:

    Unipolar: One nation comes into possession of a DSA and uses it to control the course of the future.
      Story 1A - Secret ASI project leading to hegemony: A nuclear weapons state secretly[2] achieves superintelligence first and has a major advantage over other countries. It uses this advantage to enter a position where it can make arbitrary demands of other countries[3]. The rest of history is shaped by the leadership of the ASI project.
        The story in Situational Awareness most closely matches this story.
      Story 1B - Non-secret ASI project leading to hegemony: A nuclear weapons state creates AGI and gets close to secretly achieving a DSA but gets found out. It offers token concessions to the other nuclear weapon states to increase the upside of not attacking. Other countries are pacified[4] by the concessions and unwilling to commit major acts of sabotage because they underestimate the danger of facing an adversarial ASI. Using the additional months of time, the leading ASI project secretly continues improving its capabilities and achieves a DSA. The rest of history is shaped by the leadership of the ASI project.
        The slowdown ending of AI 2027 most closely matches this story.
    Multipolar: No nation has a DSA, and this is sustained until at least the end of the 21st century.
      Story 2A - International centralized ASI project: The nuclear weapons states agree to centralize all frontier AI development in a single datacenter complex. They settle on an inspection regime where they can be certain none of the countries are attempting to secretly amass power or exfiltrate the weights. Eventually, they create a superintelligence aligned to some notion of pleasing the administrations of all of the nuclear weapons states, and the rest of history is shaped by their wishes and values.
      Story 2B - Secret ASI project caught red-handed, leading to concessions: A nuclear weapons state attempts to secretly achieve a DSA but gets found out. The other nuclear weapons states demand concessions that actually level the playing field, including a multinational ASI project. Under major threats, the leading state concedes and joins the multinational ASI project, leading to Story 2A.
      Story 2C - Secret ASI project leading to self-disarming act: A nuclear weapons state gets ASI first and has a DSA. Their administration decides to use their ASI to level the playing field, give up their unilateral power, and give all humans (most of whom have no bargaining power before this act) roughly equal input into shaping the future.
      Story 2D - Mutually Assured AI Malfunction until a treaty: The leading nations developing ASI enter a state of MAIM and remain at a very similar power level, such that no nation ever achieves a DSA over other nations. They maintain similar bargaining power without centralizing AI research until they reach a treaty, enforced by verifiably-compliant ASIs, which ensures no nation can ever take over a large fraction of other nations.
    Existential catastrophe[5]: humanity is destroyed or disempowered.
      Story 3A - ASI project leads to nuclear war: A nuclear weapons state attempts to secretly achieve a DSA. This is found out, which leads to immense international tensions. A global nuclear war starts[6], setting back civilization by decades. Eventually, humans attempt to create superintelligence again, landing us back at a similar fork to the one we’re facing now.
      Story 3B - Rushing to ASI leads to ASI takeover: A nuclear weapons state attempts to secretly achieve a DSA extremely quickly, pressuring their AGI project to cut corners on safety and not invest a large proportion of their research on aligning superintelligence. This leads to a misaligned superintelligence which disempowers or kills all humans.
        The race ending of AI 2027 and How AI Takeover Might Happen in 2 Years most closely match this story.

There is also a scenario where superintelligence is literally never created, but I think this scenario is very unlikely. 

Timelines and probabilities

The question of when we enter one of these outcomes is, to me, very similar to the question of when ASI is created: I place around a 50% probability on “by the end of 2031”, as I expect us to settle into a stable geopolitical state soon after the first ASIs exist (or even before). There might be intermediate states that are reached and sustained for a while (e.g. a global moratorium or MAIM), but I don’t expect those to delay superintelligence by more than a few decades.

I expect the unipolar scenario to be attempted with higher probability than the multipolar scenario, but I also expect it to end in existential catastrophe with higher probability (due to corner-cutting on safety and potential for misuse), so the likelihoods of a “successful” unipolar and multipolar scenario are similar.

| Outcome | P(attempted by leading project) | P(doesn’t end in existential catastrophe \| attempted by leading project) | P(attempted by leading project and doesn’t end in existential catastrophe) |
| --- | --- | --- | --- |
| Unipolar outcome | 2/3 | 1/4 | 1/6 |
| Multipolar outcome | 1/3 | 3/4 | 1/4 |

Therefore, the final probabilities of the three outcomes are:

    Unipolar scenario: ~17%.
    Multipolar scenario: ~25%.
    Existential catastrophe: ~58%.
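
To make the arithmetic explicit: each final probability is P(attempted) × P(no catastrophe | attempted), and everything else lands in the catastrophe bucket (per footnote 5, failed attempts at either outcome devolve into existential catastrophe). Below is a minimal Python sketch of this bookkeeping; it is my illustration rather than code from the original post, and it assumes the two attempts are mutually exclusive:

```python
from fractions import Fraction

# (P(attempted by leading project), P(no existential catastrophe | attempted)),
# taken from the table above.
scenarios = {
    "Unipolar":   (Fraction(2, 3), Fraction(1, 4)),
    "Multipolar": (Fraction(1, 3), Fraction(3, 4)),
}

# Joint probability: attempted AND avoids existential catastrophe.
success = {name: p_attempt * p_ok for name, (p_attempt, p_ok) in scenarios.items()}

# Assumption (per footnote 5): every attempted path that fails ends in
# existential catastrophe, so it absorbs the remaining probability mass.
p_catastrophe = 1 - sum(success.values())

for name, p in success.items():
    print(f"{name}: ~{float(p):.0%}")           # Unipolar: ~17%, Multipolar: ~25%
print(f"Existential catastrophe: ~{float(p_catastrophe):.0%}")  # ~58%
```

The catastrophe figure is just the leftover probability mass: 1 − 1/6 − 1/4 = 7/12 ≈ 58%.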

Implications

The unipolar and multipolar stable end states imply very different policy interventions:

    Successfully achieving a unipolar state requires secrecy, acceleration, large investments in military R&D, and extreme hawkishness.
    A multipolar state (with the exception of secret DSAs leading to self-disarming acts) requires verification mechanisms, broad awareness of risks from ASI, transparency, multinational agreements, and building trust between the nuclear weapons states.

The way to the unipolar outcome would be fraught with danger, given the corner-cutting on safety and the potential for misuse noted above.

If ASI is a handful of years away, this gives us a very tight deadline to set up a multinational agreement if we ever wish to set one up. It also gives us very little time to set ourselves up to wisely navigate the extreme judgement calls that we’ll have to make in order to avoid existential catastrophe.

  1. ^

    Unipolar and multipolar map onto Strategic Advantage and Cooperative Development in this post.

  2. ^

     How would a nuclear weapons state keep the existence of an ASI secret? I won’t go into details but I think it’s doable, especially considering the help that precursor systems could provide in infosecurity and persuasion.

  3. ^

    Possibly using advanced autonomous weapons, superpersuasion, nuclear defense, or biological weapons.

  4. ^

    In the case of secret Soviet biological weapons programs, two years passed between a defector revealing illegal biological weapons activities and foreign governments conducting inspections. In the case of ASI, token concessions might be enough to pacify other governments for a few months.

  5. ^

    While ASI can achieve unipolar or multipolar outcomes once humanity has been destroyed or disempowered, I count these outcomes under “existential catastrophe”. Basically every story in an attempted unipolar or multipolar outcome can devolve into an existential catastrophe story.

  6. ^

    If you don’t think a nuclear war is likely, imagine a scenario where the top intelligence officials of a nuclear weapons state present their country’s leaders with a report that claims: “Within 3 months, another nuclear weapons state is on track to possess superintelligence and technologies that can disable our first- and second-strike nuclear capabilities.” How would this affect international relations? I claim it would likely cause tensions higher than those at the peak of the Cold War. Very high tensions make accidents more likely, and make it plausible that country leaders will deliberately start a nuclear war.


