The "Everyone Can't Be Wrong" Prior causes AI risk denial but helped prehistoric people

 

The article explores the "Everyone Can't Be Wrong" Prior: the notion that if everyone is doing something, it must be important, and if no one is doing it, it must not be. Through several examples, such as prehistoric tribal habits, false religions, and the story of Semmelweis, it illustrates the prior's influence and discusses its implications for areas like AI safety.

🌐 The "Everyone Can't Be Wrong" Prior holds that what everyone does is important and what nobody does is unimportant.

🙏 False religions are one expression of this prior: important rituals make it hard for people to accept that they are wrong.

👨‍⚕️ Semmelweis's requirement that doctors wash their hands cut his clinic's death rate, yet the medical establishment ridiculed him, underscoring the prior's influence.

🤖 In AI safety, people disbelieve the risk because they see no activity around it, again an effect of this prior.

Published on January 9, 2025 5:54 AM GMT

The "Everyone Can't Be Wrong" Prior, assumes that if everyone is working on something, that thing is important. Conversely, if nobody is working on something, that thing is unimportant.

Why

In prehistoric times, this was actually true. Tribes with good habits such as cleanliness survived and spread, while tribes with bad habits tended to die out. The environment changed so slowly that whatever habits had helped the tribe survive before would usually help it survive later.

Because this was true in prehistoric times, humans evolved to adopt the "Everyone Can't Be Wrong" Prior, diverging only a little from "what everyone else does" unless there is extraordinary evidence for doing so.

Key takeaway

Our beliefs do not directly conform to what we think others believe, but to what we see others do.

Example: false religions

Everyone knows the world is full of religious people praying to god(s) who don't exist. There is nothing controversial about this claim (unless you believe every religion is correct at the same time). So the question is, why are so many people so wrong?

Eliezer Yudkowsky argues that tribalism and motivated reasoning are major causes. The "Everyone Can't Be Wrong" Prior may be another.

It explains false religions, and it explains why every false religion needs important rituals attended by many people. Important rituals make it hard for people to accept that the religion is wrong, because if it were all wrong, all the serious people performing and observing those rituals would become absolute clowns.

Example: Semmelweis's story

Ignaz Semmelweis was in charge of a clinic where medical students helped mothers deliver babies. The same medical students also dissected dead bodies.

Semmelweis noticed that one of his colleagues had cut himself with a scalpel while dissecting a dead body; the colleague developed a severe fever and died. The death resembled "childbirth fever," which killed up to 18% of mothers giving birth at the clinic. No one knew about bacteria back then, but Semmelweis theorized that something from the dead bodies was infecting mothers with "childbirth fever," and he mandated that all doctors at the clinic start washing their hands.

The clinic's death rate dropped from 18% to 2%, the same level as other clinics where no dead bodies were dissected. Unfortunately, the medical establishment ridiculed Semmelweis's theory; he eventually lost his job, was locked up in a mental hospital, and died there of an infection. The clinic reversed his handwashing rule, and mothers started dying again.

Thinking from first principles, the prior probability that washing hands could save lives isn't that low, and even a small amount of evidence would make the idea well worth researching. Under the "Everyone Can't Be Wrong" Prior, however, the idea that medical students are routinely killing mothers without knowing it is absolutely absurd.
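To make that contrast concrete, here is a rough Bayesian sketch. All the numbers below are illustrative assumptions of mine, not figures from the historical record. Write H for "something from the cadavers is killing mothers, and handwashing stops it," and E for the observed drop in mortality after handwashing was mandated. Suppose a first-principles prior of P(H) = 0.1, and suppose the drop is 20 times more likely if H is true than if it is false:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = 20 \times \frac{0.1}{0.9} \approx 2.2, \qquad P(H \mid E) \approx 0.69.$$

Under the "Everyone Can't Be Wrong" Prior, H implies that every respected doctor in Europe has been unknowingly killing patients, so the prior might instead be something like P(H) = 0.001:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = 20 \times \frac{0.001}{0.999} \approx 0.02, \qquad P(H \mid E) \approx 0.02.$$

The evidence is identical in both calculations; the prior alone decides whether the conclusion looks worth pursuing or absurd.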

Note that this example has other explanations and doesn't prove my theory right.

General implications

When a belief can neither be proved nor disproved with current technology, people evaluate it by asking how wrong everyone would have to be if it were true versus if it were false.
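One way to read this rule (my gloss, not a formalism the post commits to) is as an odds-form Bayesian update on the observation E = "nobody serious is acting on this":

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},$$

where H is "the belief is true / the thing is important." If "everyone can't be wrong" means P(E | H) is taken to be tiny, because something important could not possibly be ignored by everyone, then the likelihood ratio crushes the posterior odds of H no matter how strong the object-level arguments for H are.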

Rational arguments aren't enough

Given that the "Everyone Can't Be Wrong" Prior is strong enough to make people believe false religions (even ones with no penalty for apostasy), it is extremely hard to defeat it using rational arguments alone. The rational arguments against false religions are very strong, yet they convince only a few people, and even then it takes decades.

AI safety implications

People disbelieve AI risk for the opposite reason they believe false religions: they do not see any activity regarding AI risk. They don't see any serious people working on AI risk. They only hear a few people far away shouting about it.

What do we do about this?

You decide! Comment below.



