80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!


Published on July 9, 2025 11:58 PM GMT

About the program

Hi! We’re Chana and Aric, from the new 80,000 Hours video program.

For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts.

But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!

80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long- and short-form videos about the risks of transformative AI, and what people can do about them.

[Chana has also been experimenting with making short-form videos, which you can check out here; we're still deciding what form her content creation will take.]

We hope to bring our own personalities and perspectives on these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the upcoming years of potentially rapid change.

Our first long-form video

For our first long-form video, we decided to explore AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction, or in a better outcome, “merely” an unprecedented concentration of power.

Why?

We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don't yet know who we are or trust our viewpoints. (We think a video about "Why AI might pose an existential risk", for example, might depend more on pre-existing trust to succeed.)

We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the broader public into that conversation.

We wanted to make a video that conveyed:

- Superintelligent AI may well be achievable
- It might arrive soon
- It could shape the coming decades more than almost anything else
- We aren't currently on a promising trajectory

Whether viewers have encountered the AI 2027 report or not, we hope this video will give a new appreciation for the story it tells, what experts think about it, and what the implications are for the world.

We also just think it's an enjoyable, highly produced video that Forum readers will like watching (even if the material is kind of dark).

Watch the video here!

Strategy and future of the video program

Lots of people started thinking about AI when ChatGPT came out.

The people in our ecosystem, though, know that ChatGPT was just one point in a broader trend.

We want to talk about that trajectory, catch people up, and talk about where things are going.

We also believe that many thoughtful, smart people have been loosely following the rise of AI but aren't quite sure what they think about it yet. We want to suggest a framework that explains what's happening, and what will happen, better than most of what else is out there (rather than, e.g., dismissing it all as hype, focusing exclusively on a narrow set of ethical issues that doesn't encompass the whole story, or arguing we should develop AI as fast as possible).

We’re excited to make more videos that tell important stories and discuss relevant arguments. We’re also leaving room for covering relevant news, making skits about appalling behavior, and creating short explanations of useful concepts.

Watch this space!

Subscribing and sharing

Subscribe to AI in Context if you want to keep up with what we’re doing there, and share the AI 2027 video if you liked it.

AI 2027 seems to have been unusually successful at communicating AI safety ideas to the broader public and non-EAs, so if you’ve been looking for something to communicate your worries about AI, this might be a good choice.

If you like the video, and you want to help boost its reach, then ‘liking’ it on YouTube and leaving a comment (even a short one) really help it get seen by more people. Plus, we hope to see some useful discussion of the scenario in the comments.

Request for feedback

The program is new and we’re excited to get your input. If you see one of our videos and have thoughts on how it could be better or ideas for videos to make, we'd love to hear from you!

For the AI 2027 video in particular, we'd love it if you filled out this feedback form.

  1. ^ This came after some initial experiments you may have seen, making problem profile videos on bioweapons and AI risk, and the podcast team expanding to video podcasts. All of those are separate from this video program.
