LessWrong, September 16, 2024
What's the Deal with Logical Uncertainty?


Published on September 16, 2024 8:11 AM GMT

I notice that reasoning about logical uncertainty does not seem any more confusing to me than reasoning about empirical uncertainty. Am I missing something?

Consider the classic example from the tag's description:

Is the googolth digit of pi odd? The probability that it is odd is, intuitively, 0.5. Yet we know that this is definitely true or false by the rules of logic, even though we don't know which. Formalizing this sort of probability is the primary goal of the field of logical uncertainty.

The problem with the 0.5 probability is that it assigns non-zero probability to a false statement. If I am asked to bet on whether the googolth digit of pi is odd, I can reason as follows. There is a 0.5 chance that it is odd. Let P represent the actual, unknown parity of the googolth digit (odd or even), and let Q represent the other parity. If Q, then anything follows (by the principle of explosion, a false statement implies anything). For example, Q implies that I will win $1 billion. Therefore the value of this bet is at least 0.5 × $1,000,000,000 = $500,000,000, and I should be willing to pay that much to take the bet. This is an absurdity.
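To make the flawed arithmetic explicit, here is the valuation spelled out as a sketch (the variable names are mine, and the point is to exhibit the mistake, not endorse it): the absurd price comes from multiplying the 0.5 credence in the false parity claim Q by the windfall that Q "implies" via explosion.

```python
# The flawed valuation spelled out. Assigning credence 0.5 to Q, the
# (actually false) parity claim, and letting the principle of explosion
# turn Q into "I win $1 billion", prices the bet at half a billion dollars.
p_q = 0.5                     # credence assigned to the false statement Q
windfall = 1_000_000_000      # what Q "implies", since Q implies anything
flawed_value = p_q * windfall
print(flawed_value)           # 500000000.0
```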

I don't see how this case is significantly different from a case of empirical uncertainty:

A coin is tossed and put into an opaque box, without showing you the result. What is the probability that the result of this particular toss was Heads?

Let's assume that it's 0.5. But then, just as in the previous case, we have the same problem: we are assigning non-zero probability to a false statement. By the same logic, if I am asked to bet on whether the coin landed Heads or Tails, I can reason as follows. There is a 0.5 chance that it is Heads. Let P represent the actual, unknown outcome of the toss (Heads or Tails), and let Q represent the other outcome. If Q, then anything follows. For example, Q implies that I will win $1 billion. Therefore the value of this bet is at least 0.5 × $1,000,000,000 = $500,000,000, and I should be willing to pay that much to take the bet. This is the same absurdity.

It's often claimed that an important difference between logical and empirical uncertainty is that in the case of the digit of pi, I could, in principle, calculate whether it's odd or even given an arbitrary amount of computing power, and thereby become confident in the correct answer, whereas in the case of the opaque box, no amount of computing power will help.

First of all, I don't see how this addresses the issue of assigning non-zero credence to false statements anyway. But beyond that, if I had a tool that let me see through the opaque box, I would likewise become confident in the actual state of the coin toss, while that tool would be of no help at all in figuring out the actual parity of the googolth digit of pi.
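The "calculable in principle" claim is easy to demonstrate at small scale. The following sketch (using Machin's formula with scaled-integer arithmetic; the googolth digit itself is of course far out of reach) shows compute resolving the logical uncertainty about a given digit's parity, much as a see-through tool would resolve the coin:

```python
# A sketch: resolving "logical" uncertainty about a digit of pi by
# spending compute. Uses Machin's formula
#   pi = 16*arctan(1/5) - 4*arctan(1/239)
# with plain scaled-integer arithmetic, no external libraries.

def arctan_recip(x: int, scale: int) -> int:
    """Approximately floor(scale * arctan(1/x)), by the Taylor series."""
    total = 0
    term = scale // x          # scale * (1/x)^1
    n = 1
    sign = 1
    x2 = x * x
    while term:
        total += sign * (term // n)
        term //= x2            # next odd power of 1/x
        n += 2
        sign = -sign
    return total

def pi_digit(d: int) -> int:
    """The d-th decimal digit of pi after the decimal point (1-indexed)."""
    scale = 10 ** (d + 10)     # ten guard digits absorb truncation error
    pi_scaled = 4 * (4 * arctan_recip(5, scale) - arctan_recip(239, scale))
    return (pi_scaled // 10 ** 10) % 10

print(pi_digit(5))        # pi = 3.14159..., so this prints 9
print(pi_digit(100) % 2)  # parity of the 100th digit: no longer uncertain
```

The point of the sketch is only that the uncertainty is indexed to available resources: with more compute, the credence in a digit's parity moves to 0 or 1, exactly as a credence about the coin would with better sensors.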

In both cases the uncertainty is relative to my specific condition, be it cognitive resources or perceptual acuity. Yes, obviously, if my condition were different I would reason differently about the problems at hand, and different problems require different modifications of that condition. So what? What is stopping us from treating these two cases as working by the same principles?


