Published on July 21, 2025 6:49 PM GMT
I have been hesitant to post on Less Wrong, because, while I appreciate a lot of the projects of Rationalism, I have been concerned about some of its proponents' approaches to things including emotions. So I deeply appreciated the post on Generalized Hangriness. It seems like an in to something that I have wanted to discuss for some time.
However, where the previous author was describing a stance that could be generally found among Rationalists, I am not making that claim. Rather, I want to put forward some refinements and expansions to this base position that I believe are valuable for improving people's capacity for rational thought and collective problem-solving.
In addition, as an outsider, I am trying to adapt to the local discourse standards, but I am very open to feedback about my rhetoric, structure, or argumentation.
Refinement One. Emotions aren't semantic claims; they are more like signals or data that trigger resource allocation.
Semantic claims typically arise so quickly from emotions that it's easy to conflate them.
Emotions are important sources of information.
They also re-allocate physical resources to help us meet challenges or threats, or move toward things that benefit us, and those biochemical shifts can sweep us quickly into a mindset. However, the embodied impulse itself is not a claim, or even semantic. For some people, it never takes that shape.
Instead, a great deal of what language and culture do is provide heuristics and programming to help us quickly translate an embodied experience into a semantic framework that we can more easily access. This process happens incredibly, incredibly fast, in a highly conditioned way, so it can be very difficult to see.
In fact, the danger processing portion of your brain can terrify you and move your body before your prefrontal cortex even registers that you saw a snake. (Personal experience supported by neuroscience.) At the same time, being afraid of snakes is very often the result of acculturation, not direct experience. (I have never been bitten by a snake; I was taught to be afraid.)
When you are getting 'bad claims' from your emotions, it's important to recognize that the place to troubleshoot is likely one or more elements of the interpretive matrix you are operating through, not the emotion.
This might seem like a distinction without a difference, since johnswentworth actually gives a decent starting process for troubleshooting. But I think opening it up to include more steps provides more points at which to intervene. Also, for those who don't experience emotions as semantic claims at all, it can make the process more transparent.
To use a computer analogy, there are multiple levels of OS, drivers, and programs whose job is to take data from sensors and present it to the User Interface. In this analogy, your conscious self is the user, and your mind, the things you are currently aware of or experiencing, are the desktop and UI. Conditioned habits of thoughts and thinking tools are programs; acculturation, narratives, and language are OS; and information from your body is handled by drivers that have to interface with all of that.
The metaphor breaks down, of course, but you get that there are multiple levels and potentially problematic sources involved in turning signals into something more. The point is, you are also a programmer in this analogy. You have the ability to look at what you have been given by your linguistic and cultural inheritance, and to change it. And you can analyze it for fitness at so many levels.
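To make the analogy concrete, here is a toy sketch of the layered pipeline it describes. This is purely illustrative: the function names, the lookup tables, and the signal format are all invented for this example, and nothing here is a claim about how the brain actually works. The point it demonstrates is the "you are also a programmer" move: the raw signal stays fixed, but editing an interpretive layer changes the semantic claim that comes out the other end.

```python
# Toy sketch of the layered analogy: a raw bodily signal passes through
# "driver", "OS", and "program" layers before reaching the conscious "UI".
# All names and tables here are invented for illustration.

def driver(raw_signal):
    # Body -> felt sense: fast and pre-semantic; no label yet.
    return {"arousal": raw_signal["intensity"], "valence": raw_signal["valence"]}

def os_layer(felt_sense, culture):
    # Language/culture supplies a label for the felt sense.
    key = (felt_sense["valence"], felt_sense["arousal"] > 0.5)
    return {"label": culture.get(key, "unclear"), **felt_sense}

def program(labeled, habits):
    # Conditioned habits of thought turn the label into a semantic claim.
    return habits.get(labeled["label"], "Something is happening.")

# The interpretive layers are editable tables, not fixed machinery.
culture = {("negative", True): "anger"}
habits = {"anger": "That person wronged me."}

signal = {"intensity": 0.9, "valence": "negative"}
claim = program(os_layer(driver(signal), culture), habits)
print(claim)  # -> "That person wronged me."

# The "programmer" move: swap the interpretive layer; the signal is unchanged,
# but the claim that reaches the "UI" is different.
habits["anger"] = "Something important to me is blocked; investigate."
claim2 = program(os_layer(driver(signal), culture), habits)
print(claim2)  # -> "Something important to me is blocked; investigate."
```

The design point is that the same embodied input yields different claims depending on which tables it passes through, which is where the troubleshooting described above gets its leverage.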
This refinement also recognizes that there is a chemical, biological change in your capacity to think that needs to be acknowledged, and sometimes processed and metabolized.
This isn't bad. It's useful within a framework.
It also tells you what is important.
(And also, personally I believe that a cultural tendency to de-legitimize emotions, to act as if they aren't an inextricable part of our mind and experience, to strip cultural capital or legitimacy from people who experience emotions, exacerbates the problem rather than resolving it. Sometimes, the person you are talking to really has hurt you; the problem really is a threat; your emotional investment derives from the stakes.
If you feel bad about having emotions impacting your thinking, that's just another layer of emotions impacting your thinking. If you dissociate from your emotions because they are shameful, they can more fully drive you than when you consciously work with them.)
Fear is useful when you need to run faster, punch harder, or see a small detail more precisely. It tells you that there is some kind of danger, and de-prioritizes everything else. But fear doesn't, itself, protect you, and it can also prevent you from doing the thinking you need to do, or drain your physical resources. Also, as fear is learned, it can be unlearned.
Anger does something similar in terms of knowing you have been harmed or your goals blocked. It also provides energy at long-term cost, cuts off some sources of information, and gives you willpower to protect yourself. Guilt is useful for signaling that your behavior doesn't match up with your standards, and letting you know that you need to change your behavior or your standards, or put something wrong to right.
However, often you need to get the information and then release the resource reallocation, the physical activation, like someone acknowledging a pop-up warning, but clearing it so they can continue to work. Except, let's be honest, it's a lot more difficult than a mouse click. There are biological consequences to cortisol and threat response activation, not just to our digestion but also to our neural pathways.
Sometimes you need to find a way to reason from within a mind experiencing the emotion, acknowledging the emotion as important information and as part of what is shaping the space within which you reason. After all, there is no human mind that isn't impacted by the chemicals and electricity of the body from which it arises. You don't reason in spite of these things; these things are what you reason through.
These are skills one practices, or problems one troubleshoots, when not emotionally activated, as any skill to be practiced while in pain or threat must be.
My perception is that the culture that I grew up in didn't give me a good starting place for working with my emotions. I am about to give you some of the current ways I have been reworking my beliefs about them. People will disagree with them, and I am still thinking about how to think about them.
For me, my inherited information was especially inadequate around reward and punishment, as provided by both religion and science.
I have come to believe that my entire starting framework for reward and punishment is based on someone hacking the underlying system, not giving me tools to work with it. Pleasure/dopamine/satisfaction is the biological resource I need in order to pay attention. Food is the biological resource I need in order to keep moving. These things should be thought of as prerequisites for action, important resources, not prizes for good behavior or worthiness. I feel good so I can focus on the next task; thus helping myself feel good about what I have done isn't lazy or self-indulgent, it's generating resources for further action.
Likewise, guilt is not a sign I deserve punishment. Deserving, in fact, might be bad code.
When people are terrified of guilt or shame or being wrong, in part because of beliefs about deserving punishment, it doesn't help them behave better or think more clearly. They frequently interpret important information as a threat, and go into threat response mode: for example, first sensing guilt, then pivoting to anger, going into fight mode, creating a rationale for the Other's badness, and launching into some kind of counterattack, all without being aware of it. There are distinct steps, all predicated on belief systems, cognitive architecture, and modeled behavior, that the mind is protected from having to acknowledge. The claim is the output of an extensive process.
I think of the Semmelweis reflex often, here, but there are thousands of examples of similar behavior. In fact, I think a lot of the difficulty that people have in conceptualizing and addressing AI or climate threats is based on a similar protective move, though perhaps to protect from existential terror, or fear of losing resources, and less often guilt.
Regardless, until we can actively start troubleshooting these emotional processes as both valuable and problematic, as both invaluable information and something that prevents us from getting information, I don't think we can make meaningful progress towards solving a lot of problems.
The problems are just too threatening to think about.
A key conclusion: In this model, a robust and varied set of tools and approaches for understanding and working with and through emotions and non-semantic thought is a prerequisite for high-stakes rational problem-solving.
Refinement Two. Personifying your emotions, instead of your drives or some other system, might not be the most useful approach in the long run.
Again, I appreciate the previous author's provision of a troubleshooting methodology. It seems to draw from Internal Family Systems (IFS), and I think there's a lot of value there. If we draw a little more closely on IFS, though, we might see that the element we call Angie isn't just anger, at least not for everyone.
Perhaps Angie also remembers several times when this kind of exhaustion harmed us. Perhaps that part is actively reallocating enough energy and breaking our focus so we can get up and grab something to fix our blood sugar.
Limiting Angie to anger might backfire when the part of our mind that is concerned about exhaustion or food is content, but the part of our mind that is concerned about social connections is angry or perceiving a threat. It might be more useful to split things differently, depending on a person's mental architecture.
Developing a system that allows the physical resource allocation/emotion to communicate and then clear (or even to communicate without shifting your body), instead of making the emotion central to identity, seems like a valuable goal when you are trying to shift how you process and understand your emotions. Again, though, people's minds vary.
To go back to the computer analogy: you are troubleshooting your own code. It makes sense to me to be careful, and aware of long-term goals, when doing that.
Also, in my personal experience, I cannot emphasize enough how important it is to be kind and appreciative to every part of yourself. On that note, I am not sure where to put this, but this is not meant to be dismissive of people with alexithymia. Rather, I feel like, especially for them, the message that your culture has not given you good tools that match your experiences, but that you can start developing your own, and that your emotions are not threats but signals, can clear a barrier. Perhaps what your body is signalling doesn't make sense as an emotion word, because there's something going on that doesn't map cleanly to a word in your language. Luckily, you don't have to be limited by that inheritance.
In Closing
I want to say, again, that I appreciate the initial post; seeing it made me feel less apprehensive about posting on Less Wrong altogether. I don't want to provoke a threat or criticism response in an author I very much appreciated.
I'm not sure how far out of the LessWrong mainstream my claim about the central role of processing emotion in high-stakes thinking and problem-solving is. I know some individual writers have thought about this too. However, personally, I locate it as the most pressing and rate-limiting barrier to solving extinction level threats that I can do anything about. I am still working on how to understand it, phrase it, particularize it, or make it palatable. Since this is a community focused on helping people think better, I hope this is a project people are amenable to co-existing with, at the very least.