Published on June 11, 2025 2:12 PM GMT
Jack Clark has a very important post on why it's so difficult to communicate with policymakers about AI risk. The core problem is that AI risk (and most discussion of AGI/ASI) is basically eschatological: it involves the end of the world, and technology that looks like magic being developed by AIs, which makes for a very difficult landscape for policymakers.
In particular, each group of experts considers the other group to be wildly incorrect, there's little feedback on anything you do, and what feedback there is may be corrupted. This explains a lot about why policymakers are doing things that feel wildly underscaled relative to the problem of AI x-risk:
Eschatological AI Policy Is Very Difficult
A lot of people that care about the increasing power of AI systems and go into policy do so for fundamentally eschatological reasons – they are convinced that at some point, if badly managed or designed, powerful AI systems could end the world. They think this in a literal sense – AI may lead to the gradual and eventually total disempowerment of humans, and potentially even the death of the whole species.
People with these views often don’t recognize how completely crazy they sound – and I think they also don’t manage to have empathy for the policymakers that they’re trying to talk to.
Imagine you are a senior policymaker in a major world economy – your day looks something like this:
There is a land war in Europe, you think while making yourself coffee.
The international trading system is going through a period of immense change and there could be serious price inflation which often bodes poorly for elected officials, you ponder while eating some granola.
The US and China seem to be on an inexorable collision course, you write down in your notepad, while getting the car to your place of work.
There are seventeen different groups trying to put together attacks that will harm the public, you say to yourself, reading some classified briefing.
“Something akin to god is coming in two years and if you don’t prioritize dealing with it right now, everyone dies,” says some relatively young person with a PhD and an earnest yet worried demeanor. “God is going to come out of a technology called artificial intelligence. Artificial intelligence is a technology that lots of us are developing, but we think we’re playing Russian Roulette at the scale of civilization, and we don’t know how many chambers there are in the gun or how many bullets are in it, and the gun is firing every few months due to something called scaling laws combined with market incentives. This technology has on the order of $100 billion a year dumped into its development and all the really important companies and infrastructure exist outside the easy control of government. You have to do something about this.”
The above is, I think, what it’s like being a policymaker in 2025 and dealing with AI on top of everything else. Where do you even start?
[Intermission]
Let us imagine that you make all of these policy moves. What happens then? Well, you’ve mostly succeeded by averting or delaying a catastrophe which most people had no knowledge of, and of the people that did have knowledge of it, only a minority believed it was going to happen. Your ‘reward’ insofar as you get one is being known as a policymaker that ‘did something’, but whether the thing you did is good or not is very hard to know.
The best part? If you go back to the AI person that talked to you earlier and ask them to assess what you did, they’ll probably say some variation of: “Thank you, these are the minimum things that needed to be done to buy us time to work on the really hard problems. Since we last spoke the number of times the gun has fired has increased, and the number of bullets in the chamber has grown.”
What did I do, then? you ask.
“You put more chambers in the gun, so you bought us more time,” they say. “Now let’s get to work.”
I write all of the above not as an excuse for the actions of policymakers, nor as a criticism of people in the AI policy community who believe in the possibility of superintelligence, but rather to illustrate the immense difficulty of working on AI policy when you truly believe that the technology may have the ability to end the world. Most of the policy moves that people make – if they make them – are going to seem wildly unsatisfying relative to the scale of the problem. Meanwhile, the people who make these moves are likely going to be juggling them against a million other priorities, and will be looking to the AI experts for some level of confidence and validation – neither of which is easily given.
Good luck to us all.