About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles, and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we’re still deciding what form her content creation will take.]
We hope to bring our own personalities and perspectives to these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the upcoming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the broader public into that conversation.
We wanted to make a video that conveyed:
- Superintelligence is plausible
- It might be coming soon
- It might largely determine how the coming years and decades play out
- And it’s not on track to go well
Whether or not viewers have encountered the AI 2027 report, we hope this video will give them a new appreciation for the story it tells, what experts think about it, and what the implications are for the world.
We also just think it’s an enjoyable, highly produced video that Forum readers will like watching (even if the material is kind of dark).
Watch the video here!
Strategy and future of the video program
Lots of people started thinking about AI when ChatGPT came out.
The people in our ecosystem, though, know that that was just one point in a broader trend.
We want to trace that trajectory, catch people up, and talk about where things are going.
We also believe a lot of thoughtful, smart people have been loosely following AI progress but aren’t quite sure what they think about it yet. We want to suggest a framework that we think explains what’s happening, and what will happen, better than most of what else is out there (rather than, e.g., describing it all as hype, focusing exclusively on some ethical issues we think don’t encompass the whole story, arguing we should develop AI as fast as possible, etc.).
We’re excited to make more videos that tell important stories and discuss relevant arguments. We’re also leaving room for talking more about relevant news, making more skits about appalling behavior, and creating more short explanations of useful concepts.
Watch this space!
Subscribing and sharing
Subscribe to AI in Context if you want to keep up with what we’re doing there, and share the AI 2027 video if you liked it.
AI 2027 seems to have been unusually successful at communicating AI safety ideas to the broader public and non-EAs, so if you’ve been looking for something to communicate your worries about AI, this might be a good choice.
If you like the video, and you want to help boost its reach, then ‘liking’ it on YouTube and leaving a comment (even a short one) really help it get seen by more people. Plus, we hope to see some useful discussion of the scenario in the comments.
Request for feedback
The program is new and we’re excited to get your input. If you see one of our videos and have thoughts on how it could be better or ideas for videos to make, we'd love to hear from you!
For the AI 2027 video in particular, we'd love it if you filled out this feedback form.
[1] This came after some initial experiments with making problem-profile videos on bioweapons and AI risk (which you may have seen), and after the podcast team expanded into video podcasts! All of those are separate from this video program.