We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.
As different nations gain access to new advanced technologies, those with a relative lead will amass huge power over those left behind. The difference in technological capabilities between nations might become so large that one nation possesses the capability to disable the militaries (including nuclear arsenals) of other countries with minimal retaliation. While all this is happening, AIs may be accruing power of their own.
The unstable state in which the relative power of nations is shifting dramatically cannot be sustained for long. In particular, it’s implausible that multiple groups will possess a decisive strategic advantage (DSA) at separate moments in time without using it to entrench their influence over the future.
There are three main types of stable outcomes for this transition period: unipolar, multipolar[1], and existential catastrophe.
Here are some (non-comprehensive) illustrative stories for each outcome:
- Unipolar: One nation comes into possession of a DSA and uses it to control the course of the future.
- Story 1A - Secret ASI project leading to hegemony: A nuclear weapons state secretly[2] achieves superintelligence first and gains a major advantage over other countries. It uses this advantage to reach a position where it can make arbitrary demands of other countries[3]. The rest of history is shaped by the leadership of the ASI project.
- The story in Situational Awareness most closely matches this story.
- The slowdown ending of AI 2027 most closely matches this story.
- Multipolar: No single nation gains a DSA; multiple nations retain comparable bargaining power and jointly shape the course of the future.
- Story 2A - International centralized ASI project: The nuclear weapons states agree to centralize all frontier AI development in a single datacenter complex. They settle on an inspection regime where they can be certain none of the countries are attempting to secretly amass power or exfiltrate the weights. Eventually, they create a superintelligence aligned to some notion of pleasing the administrations of all of the nuclear weapons states, and the rest of history is shaped by their wishes and values.
- Story 2B - Secret ASI project caught red-handed, leading to concessions: A nuclear weapons state attempts to secretly achieve a DSA but gets found out. The other nuclear weapons states demand concessions that actually level the playing field, including a multinational ASI project. Under major threats, the leading state concedes and joins the multinational ASI project, leading to Story 2A.
- Story 2C - Secret ASI project leading to self-disarming act: A nuclear weapons state gets ASI first and has a DSA. Its administration decides to use its ASI to level the playing field, give up its unilateral power, and give all humans (most of whom have no bargaining power before this act) roughly equal input into shaping the future.
- Story 2D - Mutually Assured AI Malfunction until a treaty: The leading nations developing ASI enter a state of MAIM and remain at very similar power levels, such that no nation ever achieves a DSA over the others. They maintain similar bargaining power without centralizing AI research until they reach a treaty, enforced by verifiably compliant ASIs, which ensures no nation can ever take over a large fraction of other nations.
- Existential catastrophe: Humanity is destroyed or disempowered, or civilization is severely set back.
- Story 3A - ASI project leads to nuclear war: A nuclear weapons state attempts to secretly achieve a DSA. This is found out, which leads to immense international tensions. A global nuclear war starts[6], setting civilization back by decades. Eventually, humans attempt to create superintelligence again, landing us back at a fork similar to the one we’re facing now.
- Story 3B - Rushing to ASI leads to ASI takeover: A nuclear weapons state attempts to secretly achieve a DSA extremely quickly, pressuring its AGI project to cut corners on safety and to invest only a small proportion of its research in aligning superintelligence. This leads to a misaligned superintelligence which disempowers or kills all humans.
- The race ending of AI 2027 and How AI Takeover Might Happen in 2 Years most closely match this story.
There is also a scenario where superintelligence is literally never created, but I think this scenario is very unlikely.
Timelines and probabilities
The question of when we enter one of these outcomes is, to me, very similar to the question of when ASI is created, so I place around a 50% probability on “by the end of 2031”, as I expect us to settle into a stable geopolitical state soon after the first ASIs exist (or even before). There might be intermediate states that are reached and sustained for a while (e.g. a global moratorium or MAIM), but I don’t expect those to delay superintelligence by more than a few decades.
I expect the unipolar scenario to be attempted with higher probability than the multipolar scenario, but I also expect it to lead to an existential catastrophe with higher probability (due to corner-cutting on safety and potential for misuse), so the likelihoods of a “successful” unipolar scenario and a “successful” multipolar scenario end up similar.
|  | P(attempted by leading project) | P(doesn’t end in existential catastrophe \| attempted by leading project) | P(attempted by leading project and doesn’t end in existential catastrophe) |
| --- | --- | --- | --- |
| Unipolar outcome | 2/3 | 1/4 | 1/6 |
| Multipolar outcome | 1/3 | 3/4 | 1/4 |
Therefore, the final probabilities of the three outcomes are:
- Unipolar scenario: ~17%
- Multipolar scenario: ~25%
- Existential catastrophe: ~58%
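As a sanity check on the arithmetic, here is a minimal Python sketch (the variable names are mine, purely illustrative) that multiplies each row of the table and assigns the remaining probability mass to existential catastrophe:

```python
# Columns of the table above: P(attempted by leading project) and
# P(doesn't end in existential catastrophe | attempted).
p_attempted = {"Unipolar scenario": 2 / 3, "Multipolar scenario": 1 / 3}
p_ok_given_attempted = {"Unipolar scenario": 1 / 4, "Multipolar scenario": 3 / 4}

# P("successful" scenario) = P(attempted) * P(no existential catastrophe | attempted)
p_final = {k: p_attempted[k] * p_ok_given_attempted[k] for k in p_attempted}

# The two attempts are treated as exhaustive (2/3 + 1/3 = 1), so all
# remaining probability mass falls under existential catastrophe.
p_final["Existential catastrophe"] = 1 - sum(p_final.values())

for scenario, p in p_final.items():
    print(f"{scenario}: ~{p:.0%}")
# Unipolar scenario: ~17%
# Multipolar scenario: ~25%
# Existential catastrophe: ~58%
```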
Implications
The unipolar and multipolar stable end states imply very different policy interventions:
- Successfully achieving a unipolar state requires secrecy, acceleration, large investments in military R&D, and extreme hawkishness.
- A multipolar state (with the exception of secret DSAs leading to self-disarming acts) requires verification mechanisms, broad awareness of risks from ASI, transparency, multinational agreements, and building trust between the nuclear weapons states.
The path to the unipolar outcome would be fraught with danger:
- Racing allows very little room for safety work amid the mad rush to superintelligence, which might make the difference between an ASI that helps humanity and one that destroys it.
- It’s unclear whether a secret superintelligence explosion and buildup of advanced military technology could be concealed from the other nuclear weapons states. If a nuclear weapons state gets caught doing this, the other nuclear weapons states will likely consider it an extremely escalatory action and resort to rapidly escalating threats, sabotage, or pre-emptive attacks.
- Additionally, secret buildups of advanced technology would be easier to pull off for ASI projects and militaries that are more willing to infringe on the rights of their human employees by keeping their movements and communications restricted 24/7. This means that nations which care less about personal freedoms have an advantage over others in achieving unipolar outcomes.
- However, once substantial levels of automation are achieved, a DSA may be achievable with the involvement of only a very small number of humans (e.g. through superpersuasion). Projects aiming for a unipolar outcome can therefore make a trade-off: granting human employees more rights at the cost of less transparency, ensuring that only a very small number of humans are aware of the planned hegemony.
If ASI is a handful of years away, this leaves a very tight deadline to set up a multinational agreement, if we ever wish to set one up. It also leaves very little time to prepare ourselves to wisely navigate the extreme judgement calls we’ll have to make in order to avoid existential catastrophe.
[1] Unipolar and multipolar map onto Strategic Advantage and Cooperative Development in this post.
[2] How would a nuclear weapons state keep the existence of an ASI secret? I won’t go into details, but I think it’s doable, especially considering the help that precursor systems could provide in infosecurity and persuasion.
[3] Possibly using advanced autonomous weapons, superpersuasion, nuclear defense, or biological weapons.
[4] In the case of the secret Soviet biological weapons programs, two years passed between a defector revealing illegal biological weapons activities and foreign governments conducting inspections. In the case of ASI, pacifying other governments for even a few months might be enough.
[5] While ASI can achieve unipolar or multipolar outcomes once humanity has been destroyed or disempowered, I count those outcomes under “existential catastrophe”. Basically every story in an attempted unipolar or multipolar outcome can devolve into an existential catastrophe story.
[6] If you don’t think a nuclear war is likely, imagine a scenario where the top intelligence officials of a nuclear weapons state present their country’s leaders with a report claiming: “Within 3 months, another nuclear weapons state is on track to possess superintelligence and technologies that can disable our first- and second-strike nuclear capabilities.” How would this affect international relations? I claim it would likely cause tensions higher than those at the peak of the Cold War. Very high tensions make accidents more likely, and make it plausible that country leaders would deliberately start a nuclear war.