I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy.
(Epistemic status: I’ve read a bit about this, talked to AIs about it, and talked to one natsec professional about it who agreed with my analysis (and suggested some ideas that I included here), but I’m not an expert.)
For context, the story is:
- Iraq was sort of a rogue state after invading Kuwait and then being repelled in 1990-91. After that, they violated the terms of the ceasefire, e.g. by ceasing to allow inspectors to verify that they weren’t developing weapons of mass destruction (WMDs). (They had previously developed biological and chemical weapons, and had used chemical weapons in the war against Iran and against various civilians and rebels.) So the US was sanctioning and intermittently bombing them.
- After the 2003 invasion, it became clear that Iraq actually wasn’t producing WMDs. This raises the obvious question of why they rejected the inspections. I think it was a combination of wanting strategic ambiguity to deter their regional enemies, and it being politically beneficial to reject inspections that they viewed as violations of their sovereignty.
- They allowed inspections again when it looked like they were about to be invaded, but the invasion happened anyway.
- At the time, the track record of nation-building looked better than it does today, because the most salient American experiences with it were postwar Germany and Japan.
- The PNAC/“neoconservative” faction, including Vice President Cheney and Secretary of Defense Rumsfeld, wanted to exploit American hegemony to promote democracy and topple regimes they disliked. Secretary of State Powell and the State Department were cautious and opposed to unilateral action that might upset allies. National Security Advisor Rice, a scholar of the Soviet Union, was mostly focused on great power competition with Russia and China. Many other senior staff were focused on domestic issues.
- Cheney in particular was terrified of the prospect of WMDs being used for much larger-scale acts of terrorism. (He seems like the kind of guy who worries a lot about novel extreme threats. For example, he’d previously been involved in a bunch of continuity-of-government exercises.) He was concerned by wargames indicating that millions would die if a terrorist attacked an American city with smallpox.
- But Iraq seemed vaguely like a threat to American national security. And the people who had already wanted to invade Iraq now found it easier to argue for it, by noting that it’s dangerous to have a state sponsor of terrorism that’s known to make WMDs.
- Many people in the admin, and then in the public, believed Iraq had WMDs. This was false. The admin made this mistake because of some mix of confirmation bias and political pressure to advocate for the invasion, and then people outside the admin deferred too much to the admin (whose evidence was in substantial part classified). (And maybe it was partially overcorrection: the intelligence community had underestimated the state of the Iraqi WMD programs before the Gulf War, e.g. they thought Iraq was further from developing nuclear weapons than it actually was, and obviously they were traumatized by their failure to pay more attention to the pre-9/11 evidence of an imminent al-Qaeda terrorist plot.)
- (Not relevant to the main point here, but they quickly made egregious errors, like disbanding the Iraqi army and firing much of the government; without those errors, the nation-building might have succeeded.)
Takeaways
A few parts of this story stood out to me as surprisingly relevant to how AI might go:
- A shocking event led to the dominance of a political faction that had previously been just one of several competing factions, because that faction’s basic vibe (that we should make use of American hegemony, and that rogue states are a threat to national security) was roughly supported by the event.
- The response was substantially driven by elite judgement rather than popular judgement. Invading Iraq wasn’t called for by the American population until the admin started advocating for it; it was just vaguely related, semi-justifiable, and popular among a particular set of elites.
- The response involved some generalization and some scope sensitivity. The admin was terrified of bigger attacks, especially ones using chemical and biological weapons.
- This kind of generalization and scope sensitivity was notably absent from the reactions to Covid and the Spanish flu. One person I spoke to discussed evidence that this is because humans react very differently to disease than to other types of threats.
- And this failure caused huge problems for people who supported the war (like Hillary Clinton and the Republican establishment) and was a huge boon for its opponents (most famously Obama, but also Bernie Sanders).
- I’m kind of confused about why these consequences didn’t hit home earlier. By the time of the 2004 presidential election, it was pretty clear that Iraq didn’t have WMDs. I would have thought that the Democratic nominee should have centered the message: “Bush told us that secret intelligence indicated that Iraq had WMDs, and that because of this we needed to invade. But that seems to have been totally and predictably wrong, and it led us into a dumb war. Such an egregious error disqualifies you as President.” Kerry (the Democratic nominee) went much softer than that, probably partially because he had voted in support of the Iraq war, so he couldn’t be too harsh on the decision. (Matthew Yglesias has a good article that discusses the history of support for the Iraq war.)
So to spell out some possibilities this implies about AI:
- If there’s some non-existential AI catastrophe (even one on the scale of 9/11), it might open a policy window to responses that seem extreme and that aren’t just direct, obvious responses to the literal bad thing that occurred. E.g. maybe an extreme misuse event could empower people who are mostly worried about an intelligence explosion and AI takeover.
- Those factions might make bad policy decisions or execute terribly on them.