Doomers Should Try Much Harder to Get Famous

Published on May 14, 2025 6:15 AM GMT

For the purpose of this post, “getting famous” means “building a large, general (primarily online) audience of people who agree with/support you”.

 

If you believe that slowing down or pausing AI development is a good idea, and that this should be official policy, you’re going to need a large number of people (if not most) to also agree that slowing down or pausing AI is a good idea. On this issue, being correct is insufficient: you need to be correct in public, in front of as many people as possible.

In a world where ~70% of people think that AI doom is sci-fi, and that the future will be business as usual but with better healthcare, solar power, and the iPhone 37, getting anything close to an international treaty where nations agree to clip the wings of their economies is a pipe dream.

To get pauses, treaties, and sane policy, you need to convince as many people as possible that there is a significant risk of AI killing everyone. To do that, you need to get famous.

Getting famous is feasible

Becoming (internet) famous is a relatively predictable process in 2025. You simply need to make videos that a large number of people:

    1. Click on
    2. Watch for long periods of time

This, while difficult, is a much more tractable problem than solving AI alignment in a five-to-fifteen-year timespan in a culture of catastrophic race dynamics.

Why aren’t AI doomers trying to get famous? 

For many good candidates, fame is anathema. Fame means addressing an audience that is largely clueless, because few people have a clue. Creating content for that audience therefore means simplifying concepts that don’t simplify neatly.

Fame is ugly, anxiety-inducing, and requires loosening stringent intellectual standards. 

If you actually have a high P(doom), it may be wise to suck it up and do it anyway.

The current strategy is obviously bad and is not working well

The prevailing strategy of:

    1. Posting articles on niche forums
    2. Making occasional podcast appearances

causes AI safety/notkilleveryoneism to be an obscure, tightly-knit community rather than something that lends itself to viral memetic growth.

If you don’t change direction, you are likely to end up where you’re going.

Maybe you should just try harder to get famous and then worry about the specifics later

Rob Miles is a great example of someone who has tried popularising AI safety in a digestible, virality-accessible form. He’s amassed around 160,000 subscribers on YouTube, and makes content that almost exclusively focuses on alignment. This is a clear signal that this subject has immense potential for widespread awareness and popularity. Looking at his channel, he’s made around 2 videos a year for the past 5 years. How many more people would have been exposed to these ideas if he made 2 videos a month during this period? It’s plausible it would be millions.

It’s possible he has been working on higher-ROI projects during this time, but the ROI would need to be extremely high indeed to justify forgoing the chance for a million-plus people in the United States to wake up to the problem of AI risk.
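As a rough sanity check on the “millions” figure, here is a minimal back-of-envelope sketch. The upload cadences come from the paragraphs above; the per-video view count and unique-viewer fraction are illustrative assumptions, not actual channel statistics.

```python
# Back-of-envelope estimate of the reach gap described above.
# The view and uniqueness numbers are assumptions for illustration only.

YEARS = 5
ACTUAL_VIDEOS_PER_YEAR = 2              # roughly the cadence described above
COUNTERFACTUAL_VIDEOS_PER_MONTH = 2     # the hypothetical cadence from the post
ASSUMED_VIEWS_PER_VIDEO = 150_000       # assumption: plausible for a ~160k-subscriber channel
ASSUMED_UNIQUE_VIEWER_FRACTION = 0.3    # assumption: many views come from repeat viewers

actual_videos = ACTUAL_VIDEOS_PER_YEAR * YEARS                        # ~10 videos
counterfactual_videos = COUNTERFACTUAL_VIDEOS_PER_MONTH * 12 * YEARS  # ~120 videos

extra_views = (counterfactual_videos - actual_videos) * ASSUMED_VIEWS_PER_VIDEO
extra_unique_viewers = extra_views * ASSUMED_UNIQUE_VIEWER_FRACTION

print(f"Extra videos: {counterfactual_videos - actual_videos}")      # 110
print(f"Extra views: {extra_views:,}")                               # 16,500,000
print(f"Rough unique viewers reached: {extra_unique_viewers:,.0f}")  # 4,950,000
```

Even if the assumed numbers are off by a factor of a few, the gap between roughly ten videos and over a hundred plausibly lands in the millions of additional people exposed to the ideas.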

With widespread recognition, a large following, and millions of people brought to awareness, you can leverage your way to much greater influence than you can by making lengthy blog posts preaching to 250 members of the choir. 

Reputation matters, but not without reach

The advantage of notkilleveryoneism over accelerationism is the intellectual and reputational calibre of its advocates. Eliezer Yudkowsky, Geoffrey Hinton, Paul Christiano, Ilya Sutskever, and others with a high P(doom) and solid credentials are in a position, with a moderate amount of effort, to grow a general audience in the millions over a one-to-five-year span, and will have the advantage of being right and properly credentialed.

Reputation is a multiplier on the influence conferred by reach: it’s not sufficient to be reputable and right; you need to be reputable, comprehensible, and visible.

No more bungling general-audience podcasts

Eliezer had a shot on Lex Fridman, and he botched it horribly. This was a priceless opportunity to win millions of people over, and it was wasted. There are only so many Rogans and Fridmans in the content ecosystem. Do not bring the cause into disrepute by lecturing obscurely to general audiences.

Distill the most important concepts into digestible one-liners or die.

The rough strategy for anyone with the necessary reputation

You can either sacrifice clarity for purity, or purity for impact. Choose wrong, and no one will hear your warning until the lights go out.



