少点错误 (LessWrong) · December 21, 2024
Updating on Bad Arguments

 

The article explores a counterintuitive point: hearing a bad argument for a view should actually lower your confidence in that view. Through the lens of probability theory and Bayesian updating, it explains why even bad arguments carry information: if you expected to hear a good argument but instead heard a bad one, that should lower your estimate that the view is true. The article illustrates this with the example of the intelligent common ancestor of octopus and cuttlefish, proves the point with simple probability math, and emphasizes the value of Bayesian formalism for understanding informal arguments.

🧠 Bad arguments are not worthless: even a bad argument conveys information. If you previously thought a view had good arguments in its favor, but the argument you actually hear is bad, this should lower your confidence in the view.

🐙 The octopus example: the article illustrates the point with the last common ancestor of octopus and cuttlefish. If a smart friend might well have given a strong argument that "the ancestor of the octopus was intelligent" but instead offers an irrelevant one, you should lower your confidence that the ancestor was intelligent.

🧮 Proof from probability theory: using the law of total probability and the law of total expectation, the article shows mathematically why hearing a bad argument lowers belief. The formulas show that the prior is the expectation of the posterior: when a good argument would raise your belief, a bad one must lower it.

⚖️ Bayesian updating: the article argues that Bayesian formalism genuinely matters for informal argument. It reveals a non-obvious but important conclusion: if hearing a good argument raises belief, then hearing a bad argument must lower it. The size of the update depends on how likely you previously thought a good argument was.

Published on December 21, 2024 1:19 AM GMT

Here is an intuitively compelling principle: hearing a bad argument for a view shouldn’t change your degree of belief in the view. After all, it is possible for bad arguments to be offered for anything, even the truth. For all you know, plenty of good arguments exist, and you just happened to hear a bad one.

But this intuitive principle is wrong. If you thought there was a reasonable chance you might hear a good argument but you end up hearing a bad one, that provides some evidence against the view.

Imagine I am pretty convinced that octopus and cuttlefish developed their complex nervous systems independently, and that their last common ancestor was not at all intelligent. Let’s say my p(intelligent ancestor) is 0.1. Imagine I have a friend, Richard, who disagrees. Richard is generally a smart and reasonable person. Prior to hearing what he has to say, I think there is a moderately high chance that he will give a good argument that the last common ancestor of octopus and cuttlefish was highly intelligent. But Richard’s argument is totally unconvincing; the evidence he cites is irrelevant. Should my p(intelligent ancestor) now be:

(A) 0.1

(B) < 0.1

The correct answer is (B). Explaining why requires some simple probability math. The law of total probability holds that the probability of a proposition is equal to the average of the conditional probabilities given each possible observation, weighted by the probability of making that observation. In symbols, it is:

p(h) = p(h|e)·p(e) + p(h|¬e)·p(¬e)

Where the probability Richard gives you good evidence is p(e), the probability he doesn’t is p(¬e), the probability that the last common ancestor of octopus and cuttlefish was intelligent is p(h) (and the | symbol means “given”, so p(h|e) is the probability of h given that Richard gives you good evidence). The law of total probability can be straightforwardly derived from the Kolmogorov axioms and the definition of conditional probability. This brings us to another closely related theorem, the law of total expectation:
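The law of total probability is easy to verify numerically. The specific probabilities below are illustrative assumptions, not values from the post; the point is only that the prior falls out as the weighted average of the two conditionals:

```python
# Illustrative numbers (assumptions, not from the post).
p_e = 0.5               # prior probability Richard gives good evidence
p_h_given_e = 0.15      # p(h | good argument): belief if he argues well
p_h_given_not_e = 0.05  # p(h | bad argument): belief if he argues badly

# Law of total probability: p(h) = p(h|e)p(e) + p(h|¬e)p(¬e)
p_h = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(p_h)  # ≈ 0.1, the stated prior
```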

Where Y is the set of all possible observations (here, getting good evidence or getting bad evidence from Richard), and the fancy E means expectation or mean. In its application to Bayesian updating, the law of total expectation implies that the prior is equal to the expectation of the posterior. This is why the principle of “conservation of expected evidence” holds—and it is why Richard’s failure to give a good argument lowers p(intelligent ancestor). If Richard had given a good argument, I would have increased my degree of belief in h: p(h|e) > p(h). p(h) is a weighted average of p(h|e) and p(h|¬e). When you average a list of numbers and one is higher than the average, the average of the others must be lower. So if p(h|e) > p(h), then p(h|¬e) < p(h).
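The whole chain of reasoning can be checked end to end with Bayes' theorem. The likelihoods below are made-up numbers chosen for illustration; any valid choice gives the same qualitative result:

```python
# Illustrative setup (all numbers are assumptions, not from the post).
p_h = 0.1              # prior on h: the common ancestor was intelligent
p_e_given_h = 0.8      # chance Richard gives good evidence if h is true
p_e_given_not_h = 0.3  # chance he gives good evidence anyway if h is false

# Probability of hearing a good argument, by total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors in each case, by Bayes' theorem.
post_e = p_e_given_h * p_h / p_e                   # p(h|e)
post_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)   # p(h|¬e)

# Conservation of expected evidence: E[posterior] equals the prior.
expected_posterior = post_e * p_e + post_not_e * (1 - p_e)
assert abs(expected_posterior - p_h) < 1e-12

# And since the good argument raises belief, the bad one must lower it.
assert post_e > p_h > post_not_e
```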

So whenever you hear a bad argument when you previously thought you might hear a good one, you should conclude the view is less likely to be true than you had previously thought. The update should be proportional to how likely you previously thought it was that you would hear a good argument. The more weight is given to the p(h|e) term, the smaller the p(h|¬e) term has to be for the weighted average to equal p(h). If Richard is the world expert on cephalopod evolution, his failure to give a good argument would be more informative than if he is a layman known to be bad at arguing for his views.
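The weighting effect is easy to see by holding p(h) and p(h|e) fixed (at assumed values, not the post's) and solving the law of total probability for p(h|¬e) as p(e) varies:

```python
# Assumed values for illustration: prior 0.1, belief after a good argument 0.3.
p_h, p_h_given_e = 0.1, 0.3

# Solve p(h) = p(h|e)p(e) + p(h|¬e)(1 - p(e)) for p(h|¬e) at several
# values of p(e), the prior chance of hearing a good argument.
results = {}
for p_e in (0.1, 0.2, 0.3):
    results[p_e] = (p_h - p_h_given_e * p_e) / (1 - p_e)
    print(p_e, results[p_e])

# The more likely you thought a good argument was (the expert case),
# the lower p(h|¬e) must be — a bigger downward update.
```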

Lots of people say that Bayesian formalism adds nothing to informal arguments. But I don’t agree. I think that it is both non-obvious and extremely important that if p(h|e) > p(h), then p(h|¬e) < p(h).[1]

 

 

  1. ^

    Thanks to Pablo Stafforini and Karthik Tadepalli. Pablo explained this point to me, and Karthik reviewed an earlier draft.



