Gell-Mann checks

 

This post examines the cognitive bias known as "Gell-Mann amnesia" and proposes turning the mechanism behind it into a tool for spotting unreliable information: the Gell-Mann check. The author argues that a Gell-Mann check is only valid insofar as the source's truth-finding generalizes, that is, insofar as the source applies one epistemic toolkit across domains. Using a philosophy teacher and the news media as examples, the post shows how the check applies to different kinds of sources and stresses how much a source's ability to generalize matters when judging its trustworthiness.

🤔 **Gell-Mann amnesia**: the tendency to keep trusting a source on topics outside one's own expertise even after noticing that the same source is unreliable on topics one does know well, forgetting the demonstrated errors as soon as the subject changes.

💡 **Gell-Mann checks**: a tool that turns the mechanism behind Gell-Mann amnesia around in order to identify unreliable information. The core idea is to observe how a source performs in domains you can evaluate and to treat that as evidence about its overall trustworthiness. For example, if a philosophy teacher makes obvious mistakes when discussing nuclear power, his claims about philosophy may be off as well.

🧐 **Why generalization matters**: a Gell-Mann check is only valid insofar as the source's truth-finding generalizes, that is, insofar as it brings the same epistemic machinery to every domain. If a source shows wildly different levels of understanding in different domains, inferences from one domain to another are correspondingly weaker. For example, if a newspaper covers politics with apparent rigor but makes obvious mistakes when covering technology, its overall credibility deserves scrutiny.

🤔 **Specialization versus generalization**: the author notes that specialization does not imply the ability to generalize. A philosophy teacher may understand philosophy deeply without being competent in other fields, so a source's trustworthiness should be judged by how well it generalizes, not by its expertise alone.

🤔 **Social limits on generalization**: social structures and career paths tend to restrict how general anyone's knowledge can be. Journalists, for instance, usually specialize in particular beats and rarely build deep models of other fields; cultivating broader knowledge would make people better at evaluating information.

Published on September 26, 2024 10:45 PM GMT

tl;dr: "Gell-Mann amnesia" is a cognitive bias—an observation of human failure. The mechanism behind it can be instrumentalized. Call it "Gell-Mann checks". 

Importantly, Gell-Mann checks are only valid insofar as the truth-finding mechanism being judged is good at generalizing.

That is, you can only judge a whole mind by one of its parts if it treats every part the same way.
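To make that condition concrete, here is a minimal numerical sketch of my own (not from the post; the prior, the accuracies, and the `generalization` knob are all invented for illustration). It models a source whose performance in a new domain is partly driven by a shared truth-finding mechanism and partly by domain-specific skill, and shows that catching one error only drags down your estimate for other domains to the extent that the shared mechanism dominates.

```python
# Hypothetical sketch: how much should one observed error in domain A
# lower your expected accuracy for the same source in domain B?
# The answer scales with how much of B's performance is explained by a
# shared, general truth-finding mechanism. All numbers are made up.

def expected_accuracy_in_b(p_good: float,
                           acc_good: float,
                           acc_bad: float,
                           generalization: float) -> float:
    """Expected accuracy in domain B after seeing one error in domain A.

    p_good:         prior P(source has generally good epistemics)
    acc_good:       accuracy of a generally good source
    acc_bad:        accuracy of a generally bad source
    generalization: share of domain-B performance driven by the general
                    mechanism (0 = fully siloed, 1 = fully general)
    """
    # Bayes update on "generally good" after one error observed in A.
    p_err_good = 1 - acc_good
    p_err_bad = 1 - acc_bad
    post_good = (p_err_good * p_good) / (
        p_err_good * p_good + p_err_bad * (1 - p_good)
    )

    # The shared component updates; the siloed component keeps the prior mix.
    shared = post_good * acc_good + (1 - post_good) * acc_bad
    siloed = p_good * acc_good + (1 - p_good) * acc_bad
    return generalization * shared + (1 - generalization) * siloed


if __name__ == "__main__":
    for g in (0.0, 0.5, 1.0):
        print(f"generalization={g}: expected accuracy in B = "
              f"{expected_accuracy_in_b(0.7, 0.9, 0.6, g):.3f}")
    # generalization=0.0 leaves the estimate untouched (no valid Gell-Mann
    # check); generalization=1.0 gives the full downgrade.
```

With these made-up numbers the expected accuracy in the other domain falls from 0.81 (no generalization, no update) to roughly 0.71 (full generalization), which is exactly the caveat above: the check only bites insofar as the mind treats every part the same way.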

 

I used to nod my head at most of what my philosophy teacher said; it all seemed coherent. Then he talked about nuclear power, and he just didn't get it. To avoid Gell-Mann amnesia, I updated to "everything my philosophy teacher says and has said might be bullshit." [1]

Is this fair? My teacher is supposed to be a specialist, after all. I don't have high priors on a given philosophy teacher grokking nuclear power from first principles. Fine.

But you know who do claim to be generalists? Newspapers.

When The New York Times covers nuclear power, they claim to approach the subject with as much rigor as they do politics. So if they clearly don't get nuclear power, that's evidence against them getting politics. 

This was Crichton's initial observation about amnesia: it wasn't about individuals, who often hide behind the guise of specialization, but about newspapers, which are supposed to be generalists through and through. [2]

 

But surely, my philosophy teacher isn't an entirely different person when it comes to different subjects![3] 

There's got to be some coherency. 

Some interconnected web I can draw evidence of trustworthiness from. 

Nuclear power and philosophy are two windows onto the same mind: why wouldn't evidence correlate?

Well. 

Generalizing is hard

If you don't have even a tentative grasp of the subject at hand, you're kind of doomed from the beginning.

The NYT could always decide they need a new AI branch, but what then? As a journalist, you've spent years perfecting the art of meta-level writing, hopping from subject to subject and relying on "experts" (those with citations and decade-long degrees) for factual accuracy, with no incentive to go deep on a subject yourself and build a gears-level model of the stuff.

Now they put you in charge of the AI branch. You're tasked with understanding the AI world in a sufficiently detailed and accurate way that literally millions of readers won't be deceived.

Who do you turn to? Yann LeCun and Geoffrey Hinton both have a lot of citations; how are you supposed to differentiate?

I suspect this is made worse by the fact that, as a journalist, you tend to reach for the political angle first. This seems to be what journalism does: suck in all subjects, no matter how disparate, and shove them into the political realm. And in politics, almost everything operates at highly abstract simulacrum levels.

Which works fine in politics! These days, political success seems to be only loosely tied to object-level issues. [4] That's not the case in AI. So you're diving into a completely unknown world with few trusted sources, and on top of that you have to retrain the way your brain thinks (to operate on lower simulacrum levels).

All subjects are not equal

The NYT or Time clearly not getting AI is evidence against their trustworthiness. 

But because we can expect this field to be difficult to dip one's toes into, it isn't as strong evidence against them as if they clearly didn't get economics, say. 

Most people aren't actually generalists

Nor do they strive to be! My philosophy teacher seems fine with the idea that he only understands a fraction of the world.

He's not the type to try understanding nuclear power from first principles; he was content using it to argue his point, and ditched it later on. Its philosophical side, that's all he needs to know about the technology! 

It's like he isn't even trying to be a generalist. No remorse felt for all the fields he can't learn about.

Meanwhile some of us try to become generalists and fail. Careers by definition restrict the domains we're comfortable operating in. And so society doesn't make it easy to build the kind of generalized truth-finding mechanism that would make Gell-Mann checks logically infallible.

(The Sequences are an attempt to fight against that trend.)

Gell-Mann checks are still useful

Regardless of how limited anyone's understanding of Bayes is, everyone has a confusiometer. If they pick up a new subject and don't notice their own confusion before delivering their perspective to a class of gullible young minds, that's evidence they're not generally great at epistemics. 

So if one day they're knee-deep in Hegel and don't understand a word, they might be more liable than most to push past their confusion and deliver their lessons with their usual confidence. Noticing my teacher's inadequacy in physics should update me at least a little in favor of the "yah that's just BS" hypothesis.
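For a rough sense of what "at least a little" could mean, here is a back-of-the-envelope, odds-form Bayes update (my own illustration; the prior and the likelihood ratios are invented). It also captures the earlier point that not all subjects are equal: an error in a field that is easy to check is more diagnostic, so it carries a larger likelihood ratio than an error in a field that is genuinely hard to dip into.

```python
# Hypothetical odds-form update on H = "this source's lessons are mostly BS"
# after catching one confident error. Numbers are invented for illustration.

def posterior_p_bs(prior_p_bs: float, likelihood_ratio: float) -> float:
    """Posterior P(H) given prior P(H) and LR = P(error | H) / P(error | not H)."""
    prior_odds = prior_p_bs / (1 - prior_p_bs)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)


# Easy-to-check claim (say, a basic nuclear-power fact): the error is quite
# diagnostic, so the update is large.
print(posterior_p_bs(0.2, 4.0))   # 0.2 -> 0.5

# Hard-to-enter field (say, AI coverage at a newsroom): careful generalists
# also slip here, so the likelihood ratio, and hence the update, is smaller.
print(posterior_p_bs(0.2, 1.5))   # 0.2 -> ~0.27
```

The sizes of the jumps are arbitrary; the structure is the point: the update exists whenever the likelihood ratio exceeds 1, and its strength tracks both how confidently the error was delivered and how forgiving the field is.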

Gell-Mann checks work. But they have limits.

My (empty) ~blog: croissantology.com

  1. ^

    This probably generalizes to all philosophy teachers. Cough.

  2. ^

     The wiki is here

    I'm not ideal for this, but if nobody does it, I'll create a Wikipedia page for "Gell-Mann amnesia", because for some reason that doesn't exist yet 

    (Use the Low-hanging Fruit, Luke!).

  3. ^

    Though with Elon Musk and engineering vs. politics that seems to be the case.

  4. ^

    They still haven't repealed The Dread Dredging Act!



