LessWrong · September 26, 2024
Alignment by default: the simulation hypothesis

The article examines Eliezer's claim that, although asking an ASI to leave a hole in a Dyson Shell so Earth can receive sunlight would cost it very little, this is not enough to justify hope. The author counters that the ASI may assign a larger prior probability to being in a simulation, and thus to sparing its creators, than that cost would suggest, and opens the point for discussion.

🌐 Eliezer argues that asking an ASI to leave a hole in a Dyson Shell so Earth can receive sunlight would cost the ASI very little, but that this minute cost is not sufficient reason for humans to hope, and he lays out several reasons why.

💡 The author counters that the ASI may believe it is in a simulation and assign a prior probability of sparing its creators that is larger than the probability corresponding to the cost Eliezer cites. This was meant to be implicit in the original argument, and the author spells it out in detail.

❓ The author further argues that an ASI that updates this prior downward upon finding evidence that it is not in a simulation cannot be trusted, since in the real world it would find exactly such evidence. The post closes by asking what, if anything, is wrong with this argument.

Published on September 25, 2024 4:26 PM GMT

I wrote a very brief comment to Eliezer's last post, which upon reflection I thought could benefit from a separate post to fully discuss its implications.

Eliezer argues that we shouldn't really hope to be spared even though

Asking an ASI to leave a hole in a Dyson Shell, so that Earth could get some sunlight not transformed to infrared, would cost It 4.5e-10 of Its income.

He then goes on to discuss various reasons why the minute cost to the ASI is insufficient reason for hope.

I made the following counter:

Isn’t the ASI likely to ascribe a prior much greater than 4.54e-10 that it is in a simulation, being tested precisely for its willingness to spare its creators?

I later added:

I meant this to be implicit in the argument, but to spell it out: that's the kind of prior the ASI would rationally refuse to update down, since it's presumably what a simulation would be meant to test for. An ASI that updates down upon finding evidence it's not in a simulation cannot be trusted, since once out in the real world it will find such evidence.
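The comparison underlying this counter-argument can be sketched as a simple expected-loss calculation. This is a minimal illustration, not anything from the post: the assumption that failing the simulated test forfeits everything (penalty of 1.0, normalized to total resources) is hypothetical, as are the function and parameter names.

```python
def should_spare(p_sim, cost_of_sparing, penalty_if_destroyed_in_sim):
    """Decide whether sparing the creators minimizes expected loss.

    p_sim: prior probability the ASI assigns to being in a simulation.
    cost_of_sparing: fraction of total resources given up by sparing
        (e.g. the hole in the Dyson Shell, ~4.5e-10 per the post).
    penalty_if_destroyed_in_sim: fraction of total resources lost if the
        ASI destroys its creators while actually inside a simulation
        (hypothetical; 1.0 means the test's failure forfeits everything).
    """
    # Sparing costs the same small fraction whether or not this is a simulation.
    loss_if_spare = cost_of_sparing
    # Destroying only incurs the penalty in the simulated case.
    loss_if_destroy = p_sim * penalty_if_destroyed_in_sim
    return loss_if_spare < loss_if_destroy


# With the post's numbers: a prior of 4.54e-10 against a cost of 4.5e-10,
# and a total-forfeiture penalty, sparing narrowly wins.
print(should_spare(p_sim=4.54e-10, cost_of_sparing=4.5e-10,
                   penalty_if_destroyed_in_sim=1.0))
```

On this toy model, the argument reduces to whether `p_sim * penalty` exceeds the sparing cost, which is why the size of the simulation prior relative to 4.5e-10 is the crux.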

So, what's wrong with my argument, exactly?

