Published on July 28, 2025 9:14 PM GMT
Outside of niche circles on this site and elsewhere, the public's awareness of AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data being used to train AI, algorithmic bias, and job loss.[1]
Even if people are concerned about current approaches to AI, they don't understand how these systems work. (Most people don't know the basics of LLMs.)[2]
This post does a good job explaining the shortfalls of current approaches to AI Safety. Most of the outreach is very insular and theory-heavy.[3] Even if people could be convinced of AI-related x-risk, they're not going to be convinced by articles or deep dives explaining instrumental convergence or principal-agent problems. Similarly, this post is probably correct to point out that any widespread public shift against AI progress will likely be driven by scares and fearmongering. Most normal people want to live to old age and want to believe their grandchildren will do the same.[4] Anecdotally, the only times I see people outside of our niche circles express (non-joking) fear that AI could kill us all tend to follow the occasional viral moments when x-risk slips into mainstream media (articles like "Godfather of AI predicts AI will kill us all").
I could be wrong, but it seems the "AI safety-expert-driven push" to slow down AI progress is not working. There was no six-month pause. More big labs are pursuing superintelligence and paying billions to be the ones to get there first, and China doesn't appear to be slowing down either.[5] In the US, we came close to legislation that would have blocked much AI regulation for a decade. Even so, the current approach to AI regulation is focused on "deregulation, fast-tracking data centers... and promoting competitiveness due to China."[6]
My view is that a well-made blockbuster focusing on AGI-related x-risk that slowly culminates in the destruction of humanity may be the only way for public opinion to change, which may be a necessary prerequisite for policymakers to enact meaningful legislation.
Examples of what I'm talking about
In the second half of the 20th century, nuclear armageddon was on everyone's minds. And unlike with AI, everyone had justified beliefs that they and their kids could perish in a very real nuclear world war. Many countries had nukes, the century had just seen two back-to-back world wars, and there had been several close calls[7] (both intentional and unintentional).
In 1983, with the nuclear scare at its peak, The Day After was released. It was viewed by 100M+ Americans. People were freaked out by it. It is said to have depressed Ronald Reagan so much that it contributed to his policy reversal towards arms control.[8]
This isn't the only time an emotionally charged topic covered in film swayed public opinion and/or policy. There are others:
- The Day After Tomorrow, covering climate-change-related risks, significantly raised risk perception, policy priorities, and voting patterns.[9]
- An Inconvenient Truth swayed the public's willingness to cut emissions after showing how they causally contribute to climate problems.[10] (Interestingly, the authors here point out that behavior decay is a real issue if the topic isn't persistently covered.)
- Slaughterbots, a film about swarms of AI-driven microdrones, was screened at the UN and led to policy debates about autonomous weapons and subsequent regulations.
Of course, a film showcasing civilizational collapse due to AGI would need to be made correctly, and should not suffer the same fate as much of the current AI safety outreach described above. Sure, there's room for short films about AI-related job loss, or maybe even total technological unemployment. But if you want the public to be really scared about AI risk, the film needs to show "empirical" evidence of how AI led to negative outcomes. And in my opinion, there are ways to do this without having some MIT professor played by Leonardo DiCaprio discussing theoretical topics on a blackboard in front of US national security advisors.[11]
What such a film should cover
The greatest roadblocks preventing the public from understanding AI-related x-risk are the all-too-common inability to grasp concepts like exponential growth (à la COVID), and normalcy bias. People can read headlines saying things like "100 years of technological progress in just 10 years" and shrug, because they can't really grasp their life changing so drastically in such a short period of time. Their own evolutionary protections prevent them from imagining that life will not only change, but will change because of technologies they're watching incubate today.
A film that successfully convinces people that AI-related x-risk is a real threat will need to convey these concepts as efficiently as possible. I'm neither a director nor a screenwriter, but here are some themes that I think would need to be present in such a film to achieve these goals:
- Slow-burn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discuss these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc.).
- Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drive home the point about exponential growth.
- Concrete parallels to real actors: Stand-ins like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting.
- Fear: There are a million ways people could die, but featuring the ones that require the fewest jumps in practicality seems the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure.
- Again, placing this at the very end of the movie is key, because there has to be some gradual, reluctant acceptance of the technologies that get us there. (People are wary but accept governments/corporations deploying drone swarms to do mundane tasks.) These drones then being "hooked up to" the internet, running on the "latest AI models," is not a big stretch, and it's an example the public would definitely understand.
What would a successful result look like?
A successful result from such a film would be whatever proponents of current approaches to AI safety believe it to be. (I have not kept up with that, in all honesty.) But such a film would be directly successful at the following:
- Shifting the Overton window: As discussed earlier, fear is a driver of change. AI x-risk would become a focal point of late-night talk shows, interviews, and lots of opinion and technical coverage by journalists at all major media companies.
- Policy momentum: Protests and social media discussion prompt legislators to look into actual AI Safety-related approaches more seriously. Concrete actions follow: compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!).
- Global coordination: The film primes audiences for international agreements. If China is convinced the US is scared of real risks, it may agree to meet at the bargaining table.
- Lasting legacy: Viewers come away understanding topics like alignment, exponential growth, and real, practical ways AI could kill them and their kids. Weeks- or months-long discussions on social media, etc.
I think funding such a blockbuster would be a great idea. Or perhaps not. Either way, I certainly can't be the sole investor in such a film. I just believe this might be one of the only ways public pressure can actually lead to meaningful global cooperation to prevent x-risk. But again, maybe I'm wrong about that.