Eliezer raising awareness about AI safety is not net negative, actually: a thought experiment

Published on July 31, 2024 2:24 AM GMT

Imagine a doctor discovers that a patient of dubious rational abilities has a terminal illness that will almost definitely kill her in 10 years if left untreated.

If the doctor tells her about the illness, there’s a chance that the woman decides to try some treatments that make her die sooner. (She’s into a lot of quack medicine.)

However, if she isn’t told anything, she’ll definitely die in 10 years, whereas if she is told, there’s at least some chance that she tries treatments that cure her.

The doctor tells her.

The woman proceeds to do a mix of treatments, some of which speed up her illness and some of which might actually cure her disease; it’s too soon to tell.

Is the doctor net negative for that woman?

No. The woman would definitely have died if she left the disease untreated.

Sure, she made some dubious treatment choices that sped up her demise, but the only way she could get the effective treatment was to know the diagnosis in the first place.

Now, of course, the doctor is Eliezer and the woman of dubious rational abilities is humanity learning about the dangers of superintelligent AI.

Some people say Eliezer and the AI safety movement are net negative because our raising the alarm led to the launch of OpenAI, which sped up the AI suicide race.

But the thing is - the default outcome is death.

The choice isn’t:

    Talk about AI risk, accidentally speed things up, then we all die, OR
    Don’t talk about AI risk and then somehow we get aligned AGI

You can’t get an aligned AGI without talking about it.

You cannot solve a problem that nobody knows exists.

The choice is:

    Talk about AI risk, accidentally speed up everything, then we may or may not all die, OR
    Don’t talk about AI risk and then we almost definitely all die

So, even if it might have sped up AI development, talking about AI risk is the only way to eventually align AGI, and I am grateful for all the work the AI safety movement has done on this front so far.



Discuss
