Published on June 27, 2025 2:15 AM GMT
I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy.
I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while feeling ashamed about them versus if you share your concerns as if they’re obvious and sensible, because humans are very good at picking up on your social cues. If you act as if it’s shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it’s an obvious serious threat, they’re more likely to take it seriously too.
I have another whole spiel about how it’s possible to speak on these issues with a voice of authority. Nobel laureates and lab heads and the most cited researchers in the field are saying there’s a real issue here. If someone is dismissive, you can be like “What do you think you know that the Nobel laureates and the lab heads and the most cited researchers don’t? Where do you get your confidence?” You don’t need to talk like it’s a fringe concern, because it isn’t.
And in the last year or so I’ve started collecting anecdotes, such as the time I had dinner with an elected official and encouraged the other people at the dinner to really speak their minds. The most courage they could muster was statements like, “Well, perhaps future AIs could help Iranians figure out how to build nuclear material.” To which the elected official replied, “My worries are much bigger than that; I worry about recursive self-improvement and superintelligence wiping us completely off the map, and I think it could start in as little as three years.”
These spiels haven’t always moved people. I’m regularly told that I’m just an idealistic rationalist who’s enamored of the virtue of truth, and who’s trying to worm out of a taboo tradeoff between honesty and effectiveness. I’m sometimes told that the elected officials are probably just saying what they know their supporters want to hear. And I am not at liberty to share all the details from those sorts of dinners, which makes it a little harder to share my evidence.
But I am at liberty to share the praise we've received for our forthcoming book. Eliezer and I do not mince words in it, and the responses we got from readers were much more positive than I was expecting, even given all my spiels.
You might wonder how filtered this evidence is. It’s filtered in some ways and not in others.
We cold-emailed a bunch of famous people (like Obama and Oprah), and got a low response rate. Separately, there’s a whole tier of media personalities who said they don’t really do book endorsements; many of those instead invited us on their shows sometime around book launch. Most people who work in AI declined to comment. Most elected officials declined to comment (even while some gave private praise, for whatever that’s worth).
Among national security professionals, though, I think we approached only seven. Five gave strong praise, one (Shanahan) gave a qualified statement, and the seventh said they didn’t have time (which might have been a polite expression of disinterest). That’s a much stronger showing than I was expecting from the national security community.
We also had a high response rate among people like Ben Bernanke and George Church and Stephen Fry. These are people we had only some very tangential connection to — like “one of my friends is childhood friends with the son of a well-connected economist.” Among those sorts of connections, almost everyone had a reaction somewhere between “yep, this sure is an important issue that people should be discussing” and “holy shit”, and most offered us a solid endorsement. (In fact, I don’t recall any of those wacky connection-chasing plans failing to pan out, though there were probably a few I’m just not thinking of. We tried to keep a table of all our attempts, but it quickly devolved into chaos.)
I think this is pretty solid evidence for “people are ready to hear about AI danger, if you speak bluntly and with courage.” Indeed, I’ve been surprised and heartened by the response to the book.
I think loads of people in this community are being far too cowardly in their communication, especially in the policy world.
I’m friends with some of the folk who helped draft SB 1047, so I’ll pick on them a bit.
SB 1047 was an ill-fated California AI bill that more or less required AI companies to file annual safety reports. When it was being drafted, I advised that it should be much more explicit about how AI poses an extinction threat, and about the need for regulations with teeth to avert that danger. The very first drafts of SB 1047 had only the faintest of teeth, and it was quickly defanged (by the state legislature) before being passed and ultimately vetoed.
One of my friends who helped author the bill took this sequence of events as evidence that my advice was dead wrong. According to them, the bill was just slightly too controversial; perhaps if it had been watered down a little more, it would have passed.
I took the evidence differently.
I noted statements like Senator Ted Cruz saying that regulations would “set the U.S. behind China in the race to lead AI innovation. [...] We should be doing everything possible to unleash competition with China, not putting up artificial roadblocks.” I noted J.D. Vance saying that regulations would “entrench the tech incumbents that we actually have, and make it actually harder for new entrants to create the innovation that’s going to power the next generation of American growth.”
On my view, these were evidence that the message wasn’t working. Politicians were not understanding the bill as being about extinction threats; they were understanding the bill as being about regulatory capture of a normal budding technology industry.
...Which makes sense. If people really believed that everyone was gonna die from this stuff, why would they be putting forth a bill that asks only for annual reporting requirements? That’d be downright fishy, and people can often tell when you’re being fishy.
It’s been a tricky disagreement to resolve. My friend and I each saw a way that the evidence supported our existing position, which means it didn't actually discriminate between our views.[1]
But, this friend who helped draft SB 1047? I’ve been soliciting their advice about which of the book endorsements would be more or less impressive to folks in D.C. And as they’ve seen the endorsements coming in, they said that all these endorsements were “slightly breaking their model of where things are in the Overton window.” So perhaps we’re finally starting to see some clearer evidence that the courageous strategy actually works, in real life.[2]
I think many people who understand the dangers of AI are being too cowardly in their communications, especially in D.C. I think that sort of cowardly communication is useless at best (because it doesn’t focus attention on the real problems) and harmful at worst (because it smells fishy). And I think it has probably been harmful to date.
If you need a dose of courage, maybe go back and look at the advance praise for If Anyone Builds It, Everyone Dies again. Recall that the praise came from many people we had no prior relationship with, including some people who were initially quite skeptical. Those blurbs don’t feel to me like they’re typical, run-of-the-mill book endorsements. Those blurbs feel to me like unusual evidence that the world is ready to talk about this problem openly.
Those blurbs feel to me like early signs that people with a wide variety of backgrounds and world-views are pretty likely to adopt a sensible viewpoint upon hearing real arguments made with conviction.
The job isn’t done yet. Society being ready to have the conversation doesn’t mean that society is having the conversation. We’re still somehow in a world where I occasionally have dinner with elected officials who worry about recursive self-improvement leading to superintelligence inside of three years — and yet none of the members of Congress who privately praise our book are willing to endorse it publicly.
There’s work to be done. And now is perhaps the right time to do it.
This possible window of opportunity is why Eliezer and I wrote the book in the first place. Once I started noticing that the message was really working, e.g. in private meetings with elected officials in D.C., I came back and told Eliezer that I thought the time was ripe. Perhaps a clear, convincing, and successful book (with an associated media tour) will do a lot to thrust this conversation into the mainstream. And then maybe humanity will decide to do something else, and maybe we won't all die.
So, at the risk of pandering: If you see any of the hope here that I see, consider helping us make this book cause a big splash.
The time is ripe. Pre-orders can help our book debut on the best-seller lists, and being on the best-seller list will influence whether our book, and this conversation, catches national attention.
Last month we were hitting Amazon best-seller lists in our categories despite still being in pre-order mode (which I’m told is rare). I’m heartened by this support. If we can keep up the pace of the last month for the next two and a half months, we’ll be in good shape. If we can keep up the pace and then get an extra burst of support right as the book launches, we’ll be in great shape. I think that’s a possibility, with support from this community.
We’re doing everything we can on our end, and it’s going well. If you want to coordinate about breaking open the Overton window, now is quite plausibly the best time for it.
If we do our job well here, there’s a real chance that politicians and journalists and your neighbor are taking extinction-level dangers from AI seriously by the end of the year.
And regardless of whether you think our book is worth throwing your weight behind, I hope you consider that you don’t need to hide your beliefs about AI danger, nor be ashamed of them. People respond well when you speak to them with courage.
[1] That wasn't the only case where the evidence was tricky to read. In January, Senator Cruz had Elon Musk on his podcast and asked “How real is the prospect of killer robots annihilating humanity?” in a context where the senator was overall a bit dismissive. Some of my friends read this as a bad sign, on account of the dismissiveness. I read it as a good sign, because he was at least starting to think about the key dangers.
And shortly after J.D. Vance read AI 2027, he was asked “Do you think that the U.S. government is capable in a scenario — not like the ultimate Skynet scenario — but just a scenario where A.I. seems to be getting out of control in some way, of taking a pause?” and he answered:
I don’t know. That’s a good question.
The honest answer to that is that I don’t know, because part of this arms race component is if we take a pause, does the People’s Republic of China not take a pause? And then we find ourselves all enslaved to P.R.C.-mediated A.I.?
Some of my friends read this as a bad sign, because he seemed intent on a suicide race. I read it as a good sign, because — again — he was engaging with the real issues at hand, and he wasn't treating AI as just another industry at the mercy of over-eager regulators.
[2] My friend caveats that they're still not sure SB 1047 should have been more courageous. All we have actually observed is courage working in 2025, which does not necessarily imply it would have worked in 2023.