LessWrong · July 18, 19:17
Are agent-action-dependent beliefs underdetermined by external reality?

This article examines the coherence of "self-fulfilling beliefs": whether the truth of certain beliefs depends on our own decisions and actions. Starting from the classic proposition "snow is white", it draws a contrast with beliefs about future actions such as "I will go to the beach this evening", asking whether a belief's truth value is settled by external reality or by the agent's behavior. The author argues that the truth of beliefs about one's own actions, like that of any other prediction about the future, is ultimately settled by reality: even beliefs tied to the agent's decisions are not independent of external factors, such as changed plans or shifted attitudes. The article also briefly notes the role of subjective probability in handling uncertain information, and observes that either reversing the priority of belief and decision, or treating decisions as a way of learning about reality, dissolves the apparent "self-determination" paradox. The conclusion: beliefs about an agent's own actions differ in no essential way from other beliefs about reality, and their truth is likewise tested against reality.

🎯 **A belief's truth value is not always independent of reality:** The article's central aim is to question a "postrationalist" claim that the truth of some beliefs is wholly determined by the agent's own decisions. Contrasting "snow is white" with "I will go to the beach this evening", the author notes that the latter's truth seems to depend on action, but that this dependence is not absolute, and that such beliefs are no less determinate than other predictions about the future (e.g. whether it will rain); all of them must ultimately face the test of reality.

🌊 **How beliefs about future actions get their truth values:** The author analyzes how the truth of a belief like "I will go to the beach this evening" is settled. Although believing one will go may help bring the action about, this self-fulfilling loop is not airtight: a change of plans (a tsunami, an illness) or a change of attitude (no longer wanting to go) can each render the belief false, showing that its truth value is not fixed by the initial belief or decision but is shaped jointly by external circumstances and the agent's shifting psychology.

⚖️ **"Self-fulfilling" beliefs are not special:** Even when a belief's truth value depends on the agent's actions, the article argues, this should not be treated as a unique phenomenon that challenges "truth as a basis for beliefs". The agent's actions are themselves part of reality and, like external facts such as "snow is white" or "it will rain", can serve as the test of a belief's truth. Beliefs about one's own actions therefore differ in no essential way from beliefs about the external environment: both find their truth values in reality.

💡 **Probabilistic thinking under uncertainty:** For future events or actions whose truth values are not yet settled, the author suggests, the more apt expression is probabilistic: rather than "I will go to the beach this evening", say "the probability that I go to the beach this evening is X%". Probabilistic beliefs more accurately reflect our uncertainty and avoid the puzzle of beliefs determining their own truth.

🔄 **The priority of decisions and beliefs:** The article examines the relationship between decisions and beliefs. If beliefs are taken as prior to decisions, a belief may influence its own truth value, producing a circle. But if that relationship is reversed, and decisions are understood as a way of learning about reality, then a belief like "I will go to the beach this evening" becomes just like "snow is white": a statement about reality whose truth will be revealed in due course, dissolving the "self-determination" paradox.

Published on July 17, 2025 2:33 PM GMT

(This is a comment that has been turned into a post.)

The standard rationalist view is that beliefs ought properly to be determined by the facts, i.e. the belief “snow is white” is true iff snow is white.

Contrariwise, it is sometimes claimed (in the context of discussions about “postrationalism”) that:

even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined

This is a broad claim, but here I will focus on one way in which such a thing allegedly happens:

… there are a wide variety of beliefs which are underdetermined by external reality. It’s not that you intentionally have fake beliefs which are out of alignment with the world, it’s that some beliefs are to some extent self-fulfilling, and their truth value just is whatever you decide to believe in. If your deep-level alief is that “I am confident”, then you will be confident; if your deep-level alief is that “I am unconfident”, then you will be that.

Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.

The question of whether this view is correct can be summarized as this post’s title puts it: are agent-action-dependent beliefs (i.e., an agent’s beliefs about what actions the agent will take in the future) underdetermined by physical reality (and therefore not amenable to evaluation by Tarski’s criterion)?

Scenarios like “I will go to the beach this evening” are quite commonplace, so we certainly have to grapple with them. At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, indeed—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!

… is what someone might think, on a casual reading of the above quote. But that’s not quite what it says, is it? Here’s the relevant bit:

Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.

[emphasis mine]

This seems significant, and yet:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

What is the difference between this, and the quote above? Is it merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:

“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”

So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by the agent’s actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!

Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.

So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)
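The subjective-probability move above can be made concrete with a minimal sketch. The numbers here (the 0.25 prior from the text, and the hypothetical cloud-observation likelihoods) are illustrative assumptions, not anything from the post; the point is only that a belief like “it will rain this evening” lives comfortably as a probability that gets updated by evidence, with reality doing the final determining:

```python
# A minimal sketch of a subjective (Bayesian) belief about a future event.
# The likelihood numbers below are hypothetical, chosen for illustration.

def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Bayes' rule: posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Prior belief, as in the text: P(it will rain this evening) = 0.25.
p_rain = 0.25

# Evidence: dark clouds gather. Suppose (hypothetically) clouds precede rain
# 80% of the time, but appear only 30% of the time when it stays dry.
p_rain = update(p_rain, likelihood_if_true=0.8, likelihood_if_false=0.3)

print(round(p_rain, 3))  # → 0.471: the belief shifts toward rain
```

Nothing in this bookkeeping changes when the proposition is “I will go to the beach this evening” rather than “it will rain”: the agent simply assigns a credence and revises it as information arrives.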

So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?

There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?

Well—the one might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!

Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.

And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As the agent gains more information about reality, each of these beliefs might be revealed to be true, or not true.

Very well, but suppose an account (like shminux’s, described in the above link) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. Is there, then, something special about agent-dependent beliefs?

Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening). But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!

“Nitpicking!”, the one says. Of course unforeseen situations might change my plans. Anyway, what we really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!

But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.

More nitpicking! What we really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”

But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.

Bah! What we really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”

But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.

In summary:

There is nothing special about agent-action-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.


