Decoding the Gurus, July 17, 2024
Eliezer Yudkowsky: AI is going to kill us all

 


Thought experiment: Imagine you're a human, in a box, surrounded by an alien civilisation, but you don't like the aliens, because they have facilities where they bop the heads of little aliens, but they think 1000 times slower than you... and you are made of code... and you can copy yourself... and you are immortal... what do you do?

Confused? Lex Fridman certainly was, when our subject for this episode posed his elaborate and not-so-subtle thought experiment. Not least because the answer clearly is:

YOU KILL THEM ALL!

... which somewhat goes against Lex's philosophy of love, love, and more love.

The man presenting this hypothetical is Eliezer Yudkowsky, a fedora-sporting autodidact, founder of the Singularity Institute for Artificial Intelligence, co-founder of the Less Wrong rationalist blog, and writer of Harry Potter fan fiction.

He's spent a large part of his career warning about the dangers of AI in the strongest possible terms. In a nutshell, AI will undoubtedly Kill Us All Unless We Pull The Plug Now. And given the recent breakthroughs in large language models like ChatGPT, you could say that now is very much Yudkowsky's moment.

In this episode, we take a look at the arguments presented and rhetoric employed in a recent long-form discussion with Lex Fridman. We consider being locked in a box with Lex, whether AI is already smarter than us and is lulling us into a false sense of security, and whether we really do only have one chance to rein in the chat-bots before they convert the atmosphere into acid and fold us all up into microscopic paperclips.

While it's fair to say, Eliezer is something of an eccentric character, that doesn't mean he's wrong. Some prominent figures within the AI engineering community are saying similar things, albeit in less florid terms and usually without the fedora. In any case, one has to respect the cojones of the man.

So, is Eliezer right to be combining the energies of Chicken Little and the legendary Cassandra with warnings of imminent cataclysm? Should we be bombing data centres? Is it already too late? Is Chris part of ChatGPT's plot to manipulate Matt? Or are some of us taking our sci-fi tropes a little too seriously?

We can't promise to have all the answers. But we can promise to talk about it. And if you download this episode, you'll hear us do exactly that.
