Published on May 14, 2025 6:15 AM GMT
For the purpose of this post, “getting famous” means “building a large, general (primarily online) audience of people who agree with/support you”.
If you believe that slowing down or pausing AI development is a good idea, and you believe this should be official policy, you're going to need a large number of people (if not most) to also agree that slowing down or pausing AI is a good idea. On this issue, being correct is insufficient; you need to be correct in public, in front of as many people as possible.
In a world where ~70% of people think that AI doom is sci-fi, and that the future will be business as usual but with better healthcare, solar power, and the iPhone 37, getting anything close to an international treaty where nations agree to clip the wings of their economy is a pipe dream.
To get pauses, treaties, and sane policy, you need to convince as many people as possible that there is a significant risk of AI killing everyone. To do that, you need to get famous.
Getting famous is feasible
Becoming (internet) famous is a relatively predictable process in 2025. You simply need to make videos that a large number of people:
- Click on
- Watch for long periods of time
This, while difficult, is a much more tractable problem than solving AI alignment in a five-to-fifteen-year timespan in a culture of catastrophic race dynamics.
Why aren’t AI doomers trying to get famous?
For many good candidates, fame is anathema. Fame involves an audience of people who are largely clueless, because few people have a clue. Therefore, for the sake of creating content, it involves simplifying concepts that don’t simplify neatly.
Fame is ugly, anxiety-inducing, and requires loosening stringent intellectual standards.
If you actually have a high P(doom), it may be wise to suck it up and do it anyway.
The current strategy is obviously bad and is not working well
The prevailing strategy of:
- Making blog posts on esoteric forums to an audience of people who already largely agree
- Having a few reputable representatives go on (mostly niche) podcasts
- Writing to senators imploring them to take AI risk seriously
causes AI safety/notkilleveryoneism to be an obscure, tightly-knit community rather than something that lends itself to viral memetic growth.
If you don’t change direction, you are likely to end up where you’re going.
Maybe you should just try harder to get famous and then worry about the specifics later
Rob Miles is a great example of someone who has tried popularising AI safety in a digestible, virality-accessible form. He’s amassed around 160,000 subscribers on YouTube, and makes content that almost exclusively focuses on alignment. This is a clear signal that this subject has immense potential for widespread awareness and popularity. Looking at his channel, he’s made around 2 videos a year for the past 5 years. How many more people would have been exposed to these ideas if he made 2 videos a month during this period? It’s plausible it would be millions.
It’s possible he has been working on higher-ROI projects during this time, but the ROI would need to be extremely high indeed to justify the opportunity cost of a million-plus people in the United States waking up to the problem of AI risk.
With widespread recognition, a large following, and millions of people brought to awareness, you can leverage your way to much greater influence than you can by making lengthy blog posts preaching to 250 members of the choir.
Reputation matters, but not without reach
The advantage of notkilleveryoneism over accelerationism is the intellectual and reputational calibre of its advocates. Eliezer Yudkowsky, Geoffrey Hinton, Paul Christiano, Ilya Sutskever, and others with a high P(doom) and solid credentials are in a position, with a moderate amount of effort, to grow a general audience in the millions over a one-to-five year span, and will have the advantage of being right and properly credentialed.
Reputation is a multiplier on the influence conferred by reach; it’s not sufficient to be reputable and right, you need to be reputable, comprehensible, and visible.
No more bungling general-audience podcasts
Eliezer had a shot on Lex Fridman, and he botched it horribly. This was a priceless opportunity to win millions of people over, and it was wasted. There are only so many Rogans and Fridmans in the content ecosystem. Do not breed disrepute by lecturing obscurely to general audiences.
Distill the most important concepts into digestible one-liners or die.
The rough strategy for anyone with the necessary reputation
- Make fortnightly (or more frequent) YouTube videos that distill important concepts in AI safety, analyse developments in AI, make mini documentaries, etc.
- Write these videos for a 100 IQ audience, not a 140 IQ audience.
- Write, present, film, and edit these videos engagingly rather than in a dry lecture format.
- Clip out / independently edit segments of these videos and post them as shorts/reels on as many short-form content platforms as possible.
Do everything possible to get featured on podcasts like “Diary of a CEO”, Joe Rogan, Lex Fridman, Tom Bilyeu, and show up with prepared, digestible rhetoric intended to persuade general audiences.
You can either sacrifice clarity for purity, or purity for impact. Choose wrong, and no one will hear your warning until the lights go out.