There is a large class of true statements which are indistinguishable from nonsense, and this is a problem for rationality. There's also a large class of statements which are obviously true, yet which we don't know. This also poses a problem.
Because if we're not just wrong, but so wrong that we can't even recognize what rightness would look like -- or what our own beliefs are -- how can we recover from that?
"Pain is just information" is one of those statements, which ordinarily gets brushed off as "not true " -- or "Not true in any meaningful way", at least. Yet I was able to dissolve my kid cousin's suffering in a single word when others had failed, simply because I knew that pain is just information. You can watch that knowledge lead to a resolution of suffering in the chronic pain transcript as well, where the conversation ended with the guy with debilitating pain from nerve damage telling me "You helped more with a couple messages than doctors have in over a year".
"You can just decide to not swell" is another one that seems obviously false. "It doesn't matter if your makeup washes off", to someone experiencing anxiety over it, could go either way. They might take it as "Obviously false", or they might take it as "I know I know, but knowing that doesn't make my brain stop". What normally doesn't happen, is "Hm. You're right. I think I'll go swimming now".
Yet that girl did go swimming, and found that it didn't matter if her makeup washed off. And my friend did decide to not swell her injuries, and found that she was in fact capable of deciding such things.
So wtf?
If these things are true, why do they seem so obviously wrong? If we "don't know" these things, then why does it feel so damn obvious that we do?
How many other things are like this, and how can we navigate these things intentionally, so that we don't get stuck with bad outcomes as a result of these false beliefs?
In the posts on attention (1,2) I talked about the part of the solution where we expect people to set aside objections and listen when we know we're right -- and where we expect ourselves to set aside objections and listen when we know they might be. What those posts don't cover, is when and why that process of negotiating attention breaks down.
If we're to imagine we know "It doesn't matter if your makeup washes off", and try to negotiate for attention by "just pointing at what's true" when some stranger is worried about it, what happens?
Maybe your anticipations differ from mine, but I don't anticipate that telling her "It doesn't matter if your makeup washes off" would get a response of the sort "Hm. I disagree. I think it's important enough to worry about". I don't expect any amount of "I disagree"/"I'm still confident" to ever get to "Okay, I agree" because I don't anticipate any Aumann updating happening. Common knowledge of each other's rationality is a prerequisite for Aumann agreement to work, and I would expect her to quite reasonably view me as irrational if I were to expect her to take my word on that.
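For concreteness, the theorem being invoked here, stated loosely and only as a sketch: if two agents share a common prior and their posteriors for some event are common knowledge between them, those posteriors must be equal.

$$\text{common prior} \;+\; \text{common knowledge of } p_1(E) \text{ and } p_2(E) \;\implies\; p_1(E) = p_2(E)$$

Iterated exchange of "I disagree"/"I'm still confident" is one way of driving the posteriors toward common knowledge -- but only between parties who each treat the other as an honest Bayesian, which is exactly the condition that fails here.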
In other words, the response I anticipate is of the sort "The fact that you assert this isn't even evidence, so you don't have a seat at the attention negotiation table".
In other other words, "You aren't an authority on this".
The necessity of appeals to authority
Appeal to authority is generally considered to be fallacious[1], because the veracity of a statement doesn't change depending on who is saying it. False statements don't become true when a smart person says them, and true statements don't become false when a dumb person says them. Therefore, the argument goes, we should look only at whether the statements are true and the arguments valid, and not at who it is that is saying them. Sure, veracity of claims might correlate with authority, but not perfectly. So just look at what's true.
And that works great, when we're competent to figure out what's true. That there's value in this I think goes without saying here. The part that's a little more subtle is that as a general rule we can't actually do this. We cannot, as a general rule, simply look at the object level arguments and determine object level truth to the degree that the origin of the statement is entirely or almost entirely screened off.
Part of this is that even in things where we are quite good at figuring out what is true, often other people are too, and so if there's a disagreement with the right other people it tends to shake our confidence. How many things do you think you know where you can honestly say that if all the smartest people were to tell you that you're wrong, you could confidently tell them "No, you are" without even a niggling doubt of "Am I missing something here? What the heck!?". For how many things would it be rational to hold this extreme confidence? Even if it's something as simple as "Can I move my arm", is it infinitely more likely that the whole world went crazy, or is it possible that you have anosognosia? Even when we're looking at the object level and "thinking for ourselves", we're necessarily supporting this with a degree of social validation in that if the right people were to disagree in the right way -- even content-free -- we'd have a different confidence level. In this sense, we're never purely thinking for ourselves -- and that's a good thing.
The other part of it is that evaluating arguments on the object level is laborious to the extreme. It's not as simple as "I'd just check what's true" because as a general rule, "checking what's true" isn't a one step process like that.
For example, say we're out at a park somewhere and some guy is getting visibly and increasingly agitated and angry with the person he's talking to. You look at him and think "This guy is going to hit someone". I disagree, and we agree to wait for more evidence before concluding. The next thing the dude says is "I'm going to punch you in the face!", and you excitedly say "See! See! The evidence supports me!". Does it really, though? Is it true that relative to the entirety of what we know, his verbal threats make it more likely that he's about to punch someone?
It's certainly possible. People often get increasingly worked up before they get violent, and frequently follow through on threats. The fact that he progressed to the point of yelling "I'm gonna punch you!" could definitely be evidence of that. It's not the only story that could fit though. Maybe I know the guy, and rather than some schizophrenic homeless guy struggling to hold it together, he's a fairly successful manager somewhere who has a reputation for a quick temper kept in check by an uncanny ability to figure out just how far he can go and get away with it. In that case, I might hear the same sensory data as you and think "Oh good, he only thinks he can get away with yelling here. If he thought he could get away with punching he'd have just done it". Which one is right depends on which model better explains why he's yelling, and we can't "just look at the evidence" until we know what to do with the evidence.
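To make that hinge explicit (a toy sketch of my own, not anything from the actual scene), write the update in odds form:

$$\underbrace{\frac{P(\text{punch} \mid \text{yell})}{P(\text{no punch} \mid \text{yell})}}_{\text{posterior odds}} = \underbrace{\frac{P(\text{yell} \mid \text{punch})}{P(\text{yell} \mid \text{no punch})}}_{\text{likelihood ratio}} \times \underbrace{\frac{P(\text{punch})}{P(\text{no punch})}}_{\text{prior odds}}$$

Under the "losing his composure" model, people about to punch usually yell first, so the likelihood ratio is above one and the yell raises the odds of a punch. Under the "calibrated bully" model, someone who thought he could get away with punching would have punched rather than yelled, so the ratio is below one and the very same yell lowers the odds. The observation is shared; the disagreement lives entirely in the likelihoods.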
When our disagreement is at the surface, then yes, we can "just look at the data". If it's only one layer deep, then yes, you can say "I think he will yell on his way to punching", I can say "I think he will yell because he knows he can't punch", and having shared the differences in our models, we can be united in looking for evidence of whether he's more like a socially calibrated bully or a man losing his composure. If it's two or three layers deep, we can conceivably sit down and hash it all out -- if we have the time for that, and sufficient epistemic caution[2].
We usually don't have such clean and shallow differences in perspective though. You can have fairly deep political discussions where the same set of facts are interpreted and justified differently layers and layers deep, past where everyone loses track of their own justifications. You can't "just stick to the facts" with someone who disagrees with you politically, because in their mind all your sources have been debunked, while in your mind all of their debunkings have been debunked and vice versa. Look at the lab leak debate, for example. Not only did that go back and forth without resolving to agreement, there were new layers that came up after the debate that neither person seemed aware of. And that makes the whole debate fairly pointless. Or at least, not nearly as pointful as it could have been.
Even on things with few explicit layers of reasoning, often the reasoning chain is long and illegible. Whether you get a gut sense that the guy is bluffing or losing it depends on your entire history of social interaction -- including internal interactions with yourself, and including watching but not quite being able to put a finger on subtler and non-violent bluffs -- so when it comes to discussing why we predict loud threats instead of violence, we can't always say "Oh, I know that guy and he has a reputation". Sometimes we'd have to say "Okay, here's my whole dataset of all my experiences", which we obviously cannot do.
We're left with respect, or left with nothing
What we're stuck with, is estimating how much it means to us that the opaque box of another person's reasoning (or our own opaque box of reasoning, when our "gut" is telling us something we can't find rational reasons for) is giving this output, and using that to determine how far we're willing to entertain their conclusions. At low levels of respect, communication is exhausting, because every update must by necessity be infinitesimal, and building to anything surprising takes iteration after iteration.
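To put rough numbers on "infinitesimal" (an invented illustration, not a measurement): suppose each assertion from a barely-trusted source is worth a likelihood ratio of about 1.05 -- barely better than noise. Getting from 1% credence (odds of 1:99) to 90% credence (odds of 9:1) requires multiplying the odds by roughly 891, which takes

$$n = \frac{\ln 891}{\ln 1.05} \approx 139$$

separate updates of that strength. Nearly a hundred and forty rounds of "no really, trust me" before a surprising claim lands -- hence the exhaustion.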
What this feels like is when a crackpot tries to explain to you their new theory of everything, citing sources like Crackpot Weekly. It's hard not to laugh at everything they say -- and if they were ever right about something as surprising as the cubic nature of time, if Crackpot Weekly actually pointed to valid arguments, you might think you'd notice. You probably wouldn't.
On the other side of that coin, this is when you try explaining to someone why their political views are wrong and cite science -- only for them to laugh in your face, and ask "Don't you know that study has been debunked!?". And then when you look up the "debunking", it's in Crackpot Weekly and is utter trash. In order to get through to them with this level of respect, you have to do the leg work of finding all of their sources of support, how they justify them as valid, exactly where the debunkings go wrong according to their own principles, etc etc etc. You can be entirely right, with completely valid arguments, and they'd never notice. Because the disagreement runs deep enough that they won't notice that they're evaluating the validity of your arguments incorrectly. In some cases, if you're right you can just build the perpetual motion machine and start selling energy back to the grid. Often there's no clear-cut proof in the pudding.
Look at Mormons, for example. South Park's caricature of them is as great people, happy, and quite successful -- e.g. the kid is a state champion wrestler. In my limited experience, South Park is eerily accurate; the Mormon family I knew were great people, happy, and while the kid wasn't literally a state champion wrestler... he was one brain fart away from becoming one, in double overtime, in one of the toughest states in the country. Pulling that kind of thing off often enough to be caricatured as such in a fairly anti-religious South Park sure looks like real world success to me. Yet I don't believe in magic underpants.
On the high respect side of the spectrum, these constraints are all relaxed. You can say whatever you want, knowing you won't be written off. You may be disagreed with. The person might not find your perspective persuasive, after trying it on. And you might be wrong. But you can get your fair shake, if you want. "Turns out you can just decide not to swell injuries" was met with skepticism, but an intent to actually test "I can decide to not swell this", not just "I can feel like I'm deciding and fail".
Schroedinger's Koan
"Koans" make a good illustration of what it's like to work with high degrees of respect. In western culture we like to laugh at the idea of some Supremely Wise monk sitting at the top of mountain, offering Great Wisdom like "The journey of a thousand miles begins with a single step", "Before enlightenment, chop wood, carry water. After enlightenment, chop wood, carry water", and "There is no self".
No shit, sounds pointless bro, and I'm pretty sure I exist -- respectively. So when Yoda of the mountain tops expects me to ponder the sound of one hand clapping, no thanks. You need two hands to clap, next question.
But that's kinda foolish.
Sure, it's easy to pose as "deeply wise" and just say shit without having great wisdom. Eckhart Tolle comes off that way to me. So far as I can tell he just suffered so much so pointlessly that he broke his ability to give a shit, and overshot in the other direction. So he spent a bunch of time sitting on a park bench doing nothing but giggling blissfully. That's not wisdom, that's poor regulation. That's him being slow to learn something everyone else got by default. Or maybe I'm wrong, and I'm missing his deep wisdom; I haven't read him carefully. But that's my point: I could be wrong here. The fact that superficial markers of deep wisdom can be faked doesn't mean that wisdom calling for that kind of respect can't exist. It means that we can't know whether it's deep wisdom. At least, not until we observe the meaning -- which can sometimes take years, if it happens at all.
Surely it was good that I listened to my parents about not playing in the street as a kid. I'm obviously smart enough to know that even long journeys start with a single step, but have I been smart enough to weight this fact appropriately? Would attending more to this fact lead me to just start stepping instead of getting overwhelmed by trying to take on the whole thing at once when it's not necessary? Maybe. Might there be something desirable about "enlightenment", even if it doesn't negate the need for chopping wood and carrying water? Of course. Might there be a purpose for focusing on seemingly absurd questions like the one about one hand clapping? I dunno about that one. Maybe if I pondered it long enough I'd find something. I decline to find out in this specific case.
Which could be a mistake I won't recognize for years. One moment that stands out vividly is from an early LW meetup, where one of the other attendees floated the idea that the Bible, despite not being a literal "Word of God", might contain legitimate deep wisdom as the consequence of a long chain of evolved tradition. I laughed at the idea at the time, and said something like "What, like 'don't murder'!?", expecting her to explain with an example of the type of potential wisdom she had in mind, or else maybe laugh with me and say "Yeah maybe not. I haven't really thought about it much". Instead though, she just seemed kinda hurt, like she was actually serious about it, and I immediately regretted laughing -- not out of mere politeness, but out of recognition that I didn't expect that, and if this person was serious about it then I was certainly missing something and didn't know what. Sure enough, I've found more and more things where morality is just less obvious than I thought and more fit to Christian morality than I had realized[3], to the point where I'm rescuing concepts like "faith" in this sequence, and using several others in contexts where they don't provoke memetic allergic reactions[4].
The point isn't that this claim is true, or that you should accept it. The point is that relative to everything I knew at the time, I was underestimating it, and my current more informed perspective rates it more highly -- meaning that I could have more quickly gotten to results that are more correct in expectation by offering more respect to such claims -- even if you have reason to believe that my current perspective rates it too highly.
So when the monk atop the mountain asks about the sound of one hand clapping... I mean, it's not supposed to be a question that has an obviously meaningful solution. It's supposed to be hard to figure out, and to sound like nonsense at first glance. So it's not surprising that it does. What happens if I laugh at the monk? Does he say "Oh, yeah, I guess I'm just too high and stopped making sense"? Does he get all offended at the idea that I wouldn't validate his supposed wisdom? Or does he just smile at me and look at me like I'm an idiot?
I might still laugh at the idea, like I kinda laugh at Tolle sometimes. Because I do think he's a bit silly, even taking into account when I've underestimated things. But I also don't think it's quite so simple, and have been humbled enough to mostly just think "Thanks, and good luck to you, but I'm not interested" -- at least until I see something that surprises me, and suggests I pay attention.
Integration with attention
With attention, there's a spectrum where on one side I'm essentially hypnotizing people and directly dictating their experience of reality, and on the other I'm essentially going into hypnosis myself with empathy and experiencing their world directly without objection. With respect, there's a similar spectrum, with its own considerations.
When the resident homeless guy preaches to "Be the vegetable your DNA is", that's a koan. It's not obvious to me what it even means, so I'm in no position to evaluate whether it's good advice -- just whether I want to spare the attention to find out. When your doctor advises you to get a flu shot, it's much less koan-like, but it's not completely un-koanlike ("what do you mean 'should', exactly?"). It's fairly rare that statements are made sufficiently concrete that you can instantly know that you've properly understood them and can reject them. If your doctor says "The flu shot will reduce your chance of getting the flu by X%", then that sounds pretty concrete, but even if you look it up and the studies seem to contradict his statement, are you sure you're looking at the right studies? Are you sure you're analyzing them properly? Is "X%" before or after conditioning on age and other factors? There is a very persistent gray area between "I am definitely understanding correctly and this is wrong" and "Maaaaybe I just don't understand as much as I think I do".
The question, therefore, isn't just "Is this objectively correct" or even "Do I have respect for this doctor", but "How much respect do I have for this doctor". If things were to not add up, at what point are you going to laugh him off and say "Look, I know you think you're an authority here, but what you're saying doesn't make sense, doesn't match the data, and if it did I think I'd know".
And at what point are you going to swallow your pride and consider whether maybe you wouldn't.
Trusting someone else's authority does imply that we're unable to look at the object level facts and figure it out for ourselves. It's foolish if we can, and fallacious if we try to use appeals to authority to position ourselves as "someone who knows", so that we can reject object level arguments instead of saying "I dunno man, that's above my pay grade". This can be humbling -- or humiliating, if the realization is forced on us -- but it is also often true, and the best we can do.
Leaning on our own authority does imply that we're unable to convey an understanding on the object level directly, in a way that makes sense relative to our interlocutor's models of the world. It's fallacious if the reason we can't is that they have objections we can't grapple with without softening our own stance and changing our own minds. But sometimes the inferential distance is too large to cross in itty bitty marginal steps from a mistaken worldview, even though we can predictably take each step without encountering evidence that we're wrong. In these cases it can make a lot of sense to help people take bigger steps into the unknown-to-them, and leverage earned respect to allow strong bids for attention towards the truth, to the exclusion of what to them looks like truth.
[1] Fallacies are often Bayesian evidence, and we have finite attention.

[2] If I tell you that this man won't punch anyone, and you reason "Well, the guy is screaming about punching people now, so the evidence is against you", then you're not being careful enough: the yelling is exactly in line with my prediction, and you're mistakenly counting it against my prediction being good.

[3] Seeing people's response to the UHC CEO assassination makes me think "Don't murder" is actually "deep wisdom", these days.

[4] I'm not Christian, mind you, and this isn't a defense of Christianity as a whole. Nor do I read the Bible. It was just "Oh. There's signal among that noise, and I was arrogant to dismiss that possibility. Oops".