LessWrong · July 17, 22:34
Are agent-action-dependent beliefs underdetermined by external reality?

The article examines whether beliefs about one's own future actions, such as "I will go to the beach," have truth values that external reality cannot determine, since their truth depends on the agent's own decisions, thereby challenging the rationalist view that beliefs should be determined by the facts. By comparison with beliefs like "snow is white," the author argues that beliefs about the future are essentially no different from beliefs held under insufficient current information. The article then analyzes the relationship between belief and action, noting that even if a belief influences an action, the action itself is part of reality, so the belief is not necessarily self-determining. The author concludes that even beliefs about one's own actions are not immune to external factors: like any other belief, they can turn out true or false, and there is nothing special about them, rebutting the "postrationalist" argument built on such beliefs.

💡 Rationalism holds that the truth of a belief should be determined by external facts: "snow is white," for example, is true or false depending on whether snow really is white. However, some have argued that certain beliefs, particularly beliefs about one's own future actions, have truth values determined by the agent, leaving them "underdetermined by external reality."

🌊 Using "I will go to the beach" as an example, the article notes that the truth of such a belief seems to depend on whether the agent actually goes. The author points out that this is not essentially different from beliefs like "it will rain this evening" or "snow is white": when such a belief is held, its truth value is not yet fully settled, but it is ultimately determined by external reality. Where information is lacking, either kind of belief can be expressed as a probability rather than treated as self-determining.

⚖️ The author further analyzes the relation between belief and action: even if a belief may influence an action, and thereby its own truth, the action itself is part of reality and should not be treated as a special category. The supposed circularity of "a belief determining its own truth" typically rests on the unproven assumption that beliefs are prior to decisions.

💭 Even taking the agent's own decisions into account, such beliefs are not unfalsifiable. Unexpected events (a tsunami, falling ill) or a change of heart (suddenly not wanting to go to the beach) can all make the original belief false. The author argues that these failures are no different from "snow is white" turning out false when held under insufficient information.

✅ The conclusion is that agent-action-dependent beliefs are not essentially different from beliefs describing external reality: they can turn out true or false, and their truth is ultimately determined by reality (including the agent's actions and any unexpected events). There is thus no special "underdetermination" problem, and no reason to abandon the principle that beliefs should be determined by the facts.

Published on July 17, 2025 2:33 PM GMT

(This is a comment that has been turned into a post.)

The standard rationalist view is that beliefs ought properly to be determined by the facts, i.e. the belief “snow is white” is true iff snow is white.

Contrariwise, it is sometimes claimed (in the context of discussions about “postrationalism”) that:

even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined

This is a broad claim, but here I will focus on one way in which such a thing allegedly happens:

… there are a wide variety of beliefs which are underdetermined by external reality. It’s not that you intentionally have fake beliefs which are out of alignment with the world, it’s that some beliefs are to some extent self-fulfilling, and their truth value just is whatever you decide to believe in. If your deep-level alief is that “I am confident”, then you will be confident; if your deep-level alief is that “I am unconfident”, then you will be that.

Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.

The question of whether this view is correct can be summarized as this post’s title puts it: are agent-action-dependent beliefs (i.e., an agent’s beliefs about what actions the agent will take in the future) underdetermined by physical reality (and therefore not amenable to evaluation by Tarski’s criterion)?

Scenarios like “I will go to the beach this evening” are quite commonplace, so we certainly have to grapple with them. At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, indeed—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!

… is what someone might think, on a casual reading of the above quote. But that’s not quite what it says, is it? Here’s the relevant bit:

Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.

[emphasis mine]

This seems significant, and yet:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

What is the difference between this, and the quote above? Is it merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:

“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”

So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by the agent’s actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!

Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.

So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)
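The subjective-probability framing above can be made concrete with a one-line Bayesian update. The sketch below is illustrative only (the post gives just the 0.25/0.75 prior; the "dark clouds" evidence and its likelihoods are hypothetical): as information arrives, the belief "it will rain this evening" sharpens toward true or false, with no need for its truth value to be settled at the moment the belief is held.

```python
def bayes_update(prior: float, likelihood_if_true: float,
                 likelihood_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Prior from the post's example: P(it will rain) = 0.25.
p_rain = 0.25

# Hypothetical evidence: dark clouds gather, which we suppose are four
# times as likely if it will rain (0.8) as if it won't (0.2).
p_rain = bayes_update(p_rain, likelihood_if_true=0.8, likelihood_if_false=0.2)

print(round(p_rain, 4))  # → 0.5714
```

The same machinery applies unchanged to "snow is white" for the traveler who has never seen snow, which is the post's point: a belief held under uncertainty is a probability, and reality later resolves it.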

So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?

There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?

Well—the one might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!

Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.

And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As the agent gains more information about reality, each of these beliefs might be revealed to be true, or not true.

Very well, but suppose an account (like shminux’s, described in the above link) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. Is there, then, something special about agent-dependent beliefs?

Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening.) But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!

“Nitpicking!”, the one says. Of course unforeseen situations might change my plans. Anyway, what we really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!

But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.

More nitpicking! What we really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”

But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.

Bah! What we really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”

But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.

In summary:

There is nothing special about agent-action-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.


