Expectation = intention = setpoint

Published on June 9, 2025 5:33 PM GMT

When I was first learning about hypnosis, one of the things that was very confusing to me is how "expectations" relate to "intent". Some hypnotists would say "All suggestion is about expectation; if they expect to have an experience they will", and frame their inductions in terms of expectation (e.g. "Your eyelids will become heavy"). The problem with this is that "I don't think it's gonna work". Other hypnotists would avoid this issue entirely by saying "I don't care if you think it will work. Follow my instructions, and you will get the results regardless of what you believe" and then say things like "Make your eyelids heavy". The problem with this is that "I don't know how to do that!", which would be avoided by saying "You don't have to 'do' anything; I'm telling you what is going to happen, and your job is simply to notice when it does" -- back to square one. This dual path thing always confused me, because "Sure, we can equivocate and play sleight of mouth in order to trick people into getting the results[1], but which is it really?".

It's both. 

They're the same thing. 

In the same way that "running away from a tiger" and "running to catch a bus" are both the same object level behavior being used for a somewhat different purpose.

Initially, it seemed weird to me that humans are designed so that expectation and intent are implemented in the same way. Strange design choice, given that it creates massive vulnerabilities, where if we can be tricked into expecting we won't be able to remember something, it becomes self-fulfilling.[2]

Turns out, it has to be this way - not just for humans, but for any system that tries to control anything. To see why, let's start with something simpler than a human mind.

Consider a thermostat.

When anthropomorphizing a thermostat, you might be tempted to model this controller as having "beliefs" and "desires". The output of the thermometer is the "belief" the system has about the temperature of the room, and the set point we dial to is the "desire" of the system, to which it attempts to regulate external reality. Under this model, it kinda seems like intentions are different kinds of things from expectations, because expectations are like beliefs which are about what reality is while intentions are like desires which are about what we want it to be, but isn't. But a thermometer reading isn't really an expectation, so where's that come in? What does a thermostat "expect"?

Let's model with a bit more detail, including some necessary facts of reality that we skipped over at first. This is a little weird and confusing, so bear with me here.

Say we want to do better than a thermostat on the wall that either flicks the heater on or off depending on the most recent reading of the thermometer. That works "well enough", for simple systems and lax standards, but say we want to get it right. Say we want to put out the exact amount of heat to get the temperature exactly right, to the best it is possible to do.

Now, all of a sudden, we have a lot of complications to think about. To simplify it we'll still ignore most of them, but the following two are important.

    Thermometer data are noisy. One of the better ways to measure temperature is with a thermistor, which is essentially a resistor whose resistance changes with temperature. But now we have thermal noise of electrons polluting our signal. How much noise is there relative to the signal? Well here's the equation for thermal noise:

    v_noise = √(4 · k_B · T · R · Δf)

    where k_B is Boltzmann's constant, T is the absolute temperature, R is the resistance, and Δf is the bandwidth over which we measure.

    If you try to calculate the noise for a single instant in time, that annoying little "delta f" term becomes infinite. There is no such thing as an instantaneous measure of temperature (or anything), because we need to accumulate signal over a finite window before it can stand out above the noise. The more time, the more accurately we can measure. Thermometer readings aren't belief. They're data, and can only approach belief in absence of disturbance.

    Our own output disturbs the temperature![3] By design! We can't just take forever to measure our temperature and get perfect accuracy, because if we ever try to change the temperature we now have to do it infinitely slowly or else we screw up our measurement!
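To put rough numbers on the thermal noise from the first point, here's a quick sketch. The resistance, temperature, and bandwidth values are illustrative assumptions, not from any particular thermostat:

```python
import math

def thermal_noise_rms(resistance_ohms, temp_kelvin, bandwidth_hz):
    """RMS Johnson-Nyquist voltage noise: sqrt(4 * kB * T * R * df)."""
    kB = 1.380649e-23  # Boltzmann's constant, J/K
    return math.sqrt(4 * kB * temp_kelvin * resistance_ohms * bandwidth_hz)

# A 10 kOhm thermistor at roughly room temperature (300 K):
wide = thermal_noise_rms(10e3, 300.0, 1000.0)  # 1 kHz bandwidth (fast reading)
narrow = thermal_noise_rms(10e3, 300.0, 1.0)   # 1 Hz bandwidth (slow averaging)

print(f"fast reading: {wide * 1e9:.0f} nV, slow reading: {narrow * 1e9:.0f} nV")
```

The square-root dependence on bandwidth is the whole game: averaging a thousand times longer only buys about a 32x cleaner reading, and a truly instantaneous reading (infinite bandwidth) would be all noise.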

When setting the AC thermostat, or checking the internal temperature of the turkey in the oven, these things don't really matter. The thermometer responds on the order of seconds, the turkey responds on the order of hours, so the predictable error here is "small enough" and we can ignore it. The moment you try to make an optimal controller of anything (maybe temperature control for a more sensitive chemical process in the face of external disturbance), this becomes a real issue.

For example, imagine we're trying to do optimal temperature control, and we get to cycle the relay on and off once every ten seconds. This means that every ten seconds we have to decide for how long we want to keep the heater on, and for how long to keep it off. At each time step, we have ten seconds of new thermometer data to update on. What do we do?

Bayes' Law, right? The most probable temperature is the one you get by starting with your priors -- i.e. your last temperature estimate, itself built from the estimate before it, older thermometer data, etc -- and updating that with the latest thermometer data and whatever else you might know.

But now we have to decide how much to trust priors and how much to trust new measurements. "Look at the [new] data! Don't bias your results!" is one way of going about it, but to the extent that the new data is noisy it makes sense to look back at your priors built on old data, and average over more data. If we do that for long enough, and have no unpredictable disturbance, we can get perfect measurements! If there's a disturbance though (maybe the wind is blowing), then we're back to the issue of the actual temperature changing faster than we can measure it -- so who cares what temperature it was yesterday, we need to look at what the thermometer is saying now. If we have any disturbance at all, and any noise at all -- which we always do -- then we can never know the actual temperature perfectly; we can only calculate what it will be in expectation.
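That trust-the-priors vs. trust-the-new-data tradeoff is exactly the scalar Kalman filter update. A minimal sketch, with all the numbers made up for illustration: unpredictable disturbance inflates the prior's variance each step (so old data decays), and the posterior is an inverse-variance-weighted blend of prior and measurement.

```python
def fuse(prior_mean, prior_var, measurement, meas_var, disturbance_var):
    """One Bayesian update step for a noisy thermometer reading.

    With zero disturbance, variance shrinks toward zero and the estimate
    becomes perfect given enough time. With any disturbance and any
    measurement noise, the posterior variance never reaches zero -- we
    only ever know the temperature in expectation.
    """
    prior_var = prior_var + disturbance_var   # reality may have drifted
    k = prior_var / (prior_var + meas_var)    # how much to trust new data
    mean = prior_mean + k * (measurement - prior_mean)
    var = (1 - k) * prior_var
    return mean, var

mean, var = 60.0, 4.0  # initial guess in F, with some uncertainty
for reading in [61.2, 59.5, 60.8, 60.1]:
    mean, var = fuse(mean, var, reading, meas_var=1.0, disturbance_var=0.25)
print(f"estimate: {mean:.2f}F, variance: {var:.3f}")
```

Note that the variance settles at a positive floor rather than vanishing: that floor is the "can only calculate what it will be in expectation" part made concrete.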

Once we have our temperature estimate, we can decide how much heat to add, and that messes with the temperature estimate again. We might expect, based on previous thermometer data and the latest batch, that the thermometer reading at the next timestep will be 60.0F. But if we turn the heater on full blast, it might increase the temperature 10 degrees, so do we expect 60.0F, or 70.0F, or somewhere in between? We can predict 60F and be right, or 70F and be right, or anywhere in between. It's underspecified, whatever shall we predict?

So long as you desire it to be somewhere between 60F and 70F, you predict it to be exactly what you want it to be. If you want to get to 65F, no more no less, then you add exactly 50% heat. This brings the temperature up 5 degrees from where it would have been, and shifts the expected temperature up 5 degrees. Not only do you expect that the temperature will be approximately 65F eventually in the future sometime, 65F becomes your exact proximal expectation. For if you expected any less at your next measurement, you would output more heat, and if you expected any higher you would output less.
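That "output exactly enough heat that your expectation equals your setpoint" move can be sketched in a few lines, using the illustrative numbers from the paragraph above (60F with no heat, 70F at full blast, 65F desired):

```python
def heater_duty(predicted_unheated, setpoint, full_blast_gain):
    """Fraction of the cycle to run the heater so that the *expected*
    next temperature equals the setpoint (when the setpoint is in reach)."""
    duty = (setpoint - predicted_unheated) / full_blast_gain
    return min(1.0, max(0.0, duty))  # can't output negative or >100% heat

duty = heater_duty(predicted_unheated=60.0, setpoint=65.0, full_blast_gain=10.0)
expected = 60.0 + duty * 10.0  # expectation follows the chosen output
print(duty, expected)  # 0.5 duty -> the expected temperature is exactly 65.0F
```

The clipping is the only caveat: anywhere between 0% and 100% heat, choosing an output and choosing an expectation are the same act.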

You don't know what the exact temperature is at any moment in time, and the exact temperature depends on what you're putting out. If you last got to read the thermometer five seconds ago and it said 64.5F, do you think you're a half degree low? No! Because you put out enough heat that you expect that to be compensated for, so you have no idea whether you're still below 65F or not. The act of optimally controlling to a set point -- which could usefully be termed "actually trying to reach the setpoint" -- sets the expectation of what external reality is to be equal to your setpoint. To the extent you can be said to have any intention whatsoever, your expectation is necessarily the same thing as your intention, or else it's not a real intention. Because you aren't doing what you need to do in order to realize that desire, even in expectation.[4]


And this is why expectation is equivalent to intent. 

You have a variety of output behaviors you can choose from, and with them the corresponding set of expectations you can have. Choosing what you expect to happen goes hand in hand with choosing to act so as to realize the corresponding expectation, and this is what intent is; deliberately acting as to expect that result. We can expect without deliberate intention, but we can't intend without expectation, and regardless of how deliberate we are about setting our intentions, we will always behave in ways that realize our expectations -- in expectation, by definition.

Returning to our simpler thermostat model (or our turkey model), where the thermometer really does read out a good estimate of the system temperature which is predictably below the desired set point for minutes on end, this is the result of limited control. Here, we can't expect the temperature to be equal to the set point -- without closing our eyes to the data and refusing to take it into account. We can expect it to become equal, but no turn of the dial will cook a turkey as fast as we can orient to incoming data.

If I blast full power to my homemade heat treat oven, the heating elements will melt in seconds. As a result, I programmed it to limit the output power to what I expect to be safe (even temperature controllers have insecurity issues), and it is better modeled as having an intention to control the temperature to as close to desired as is safely possible -- because that's what determines the heat output. It controls to a trajectory, which is the optimal trajectory given the limitations. But when I first turn it on, and it's 2000F too cold, it has no expectation of being at that temperature, no intention of being at that temperature -- only an intention of getting there, if it's safe.

My heat treat oven cannot, and we cannot, intend to do anything which we cannot expect. What we can do instead, is expect and intend to be on a trajectory that leads to the desired set point. This distinction between "where we want to end up" and "where we intend and expect to be right now, and at our next time step", is where the confusion comes from.

So what's this look like, as applied to humans? How does this expectation/intention equivalency and "end state vs trajectory" thing confuse people in practice?

 

Sleep talking, on purpose

For a slightly weird example, one time I tested this by asking my wife to remind me to turn the oven off when I got in bed, knowing she'd be asleep. Her initial response, quite understandably, was to object "I'm going to be asleep!"

I told her that I understood that, and that I wouldn't be upset if she failed, but I wanted her to promise me anyway. Go through the motions, make the promise, and if you don't do it I will forgive you. She said okay, she'll remind me, and went to sleep. 

When I got into bed, she reminded me to turn the oven off, and in the morning she had no idea she did so. She had been asleep. 

I could have dressed this up as a "hypnotic induction" and rambled about how "it's not her, but her unconscious mind" which would enact the suggestion -- using that to try to trick her into not refusing my request. But I could also just tell her that if she fails it's fine and let's find out if she can do it. It's still her, no need to "wooo wooo!" about it, and no need to deny reality in order to find the truth. Even if she was literally unconscious at the time of fulfilling that expectation.[5]

 

You can just... decide that?

For an even weirder example, imagine trying to "decide to not swell" an injury you have. It's so bizarre it feels like "Where would I even start!?", because it's not clear where those levers are and that makes it hard to expect it to work. Which necessarily makes it hard to intend for it to.

I met a hypnotist once who told me about how his friend had hypnotized him to help him with his elbow swelling, and I was super skeptical. I didn't write it off completely, but I sure didn't feel like that's something I could expect to work. As a result, I never intentionally tried, because I don't like telling people to expect things I couldn't expect myself, and I certainly couldn't expect it to work. That seems crazy.

And then one day at Jiu Jitsu, someone ankle locked me before I could tap, spraining my ankle pretty badly. I was quite frustrated, and I really didn't want to have to deal with a swollen ankle keeping me from doing even things that wouldn't overstress a healing ankle. I found myself indignantly refusing to swell the injury. I have no idea where that came from. I did not feel like "I chose to do that", I felt like "You can't just decide these things you crazy person!!!"[6]. At the same time, I didn't care. "Don't care, doing it anyway. It's not going to swell. I'm not gonna do it. I just won't take advantage of it, I'll take care of my injury, and it doesn't need to swell". Okay, I thought. We'll see what happens, I guess.

It didn't swell. Took a whole month to heal, but never swelled up like my previous ankle sprains had.

The next time I expected an injury to swell came when I dropped a small boat on my finger, smashing it between the boat and the concrete at the water's edge. I knew from experience that it was definitely going to swell. It wasn't a borderline case. Even knowing "Hey, so, these things are apparently decidable" didn't make it feel any more decidable than you'd expect. It's like telling someone "Oh, you don't have to keep an irrational fear around. You can just decide not to fear". Yeah, sure, buddy.

But I knew it was likely true, since my previous experience was compelling, so I put some work into it. Yes, it certainly feels like it's definitely going to swell up. That's certainly a reasonable expectation to have and it's backed by a lot of... confirmation bias, at least. I had a lot of experience confirming that jammed fingers would swell when I expected them to, but not a lot of experience with things swelling when I expected them not to. What if it didn't have to go this way? What would it be like, if somehow, it were to not swell? Could I imagine that? Could I imagine that being real?

With some effort... actually, yeah. I could imagine what that would be like, and given my previous experience I could imagine that actually being true. So I went with that, and sure enough it didn't swell; I could still bend my fingers just fine. Which was kinda a surreal experience too, given how smashed my finger got. Since then, it hasn't even felt like something I shouldn't have significant control over. I just think about what I want to do, and expect that.

I told my friend about this "Apparently you can just decide to not swell injuries" thing shortly after my first experience with it, and she was understandably skeptical. At the same time, she knew from experience that when I say something that sounds hard to believe it's because I have better reason to believe it's true than she does to think it's false -- and that these things tend to prove to be true. So she knew she couldn't actually rule it out, and so next time she got injured, she tried to expect it to not swell. And did expect it to not swell. And it didn't swell. All it took to give her this same ability to decide when to swell things was to casually mention that she already can, and tell her about the one experience I had.[7]

 

You [don't] have to believe!

You know how high school sports coaches like to go on about how "You have to believe you will win!"? And how the standard rationalist response is "Nonsense, of course you don't. Beliefs are supposed to track reality, not be wishful thinking. Believe what looks to be true, try your best, and find out if you win"?

The coach does have a point though, and there's a reason he's so adamant about what he's saying. If you expect to lose -- if you're directing attention towards the experience of your upcoming loss -- then you are intending to lose, and good luck winning if you aren't gonna even try. The problem is that he's expecting on the level of "Will we win this game?", which, according to the data, isn't looking like it's something we can control. He doesn't know what else to do, and he doesn't want to just give up, so of course he's going to engage in motivated thinking. Fudging the data until he can expect success is the only way he can hope to succeed. It's a load bearing delusion.[8][9]

One way to do better is to deliberately trade correctness of expectation for effort without letting delusion spread to infect the rest of your thinking. "Yeah, I'm probably going to lose. I don't care. I intend to win anyway". Or, in other words "Do or do not. There is no 'try'". That means setting yourself up for failure, expecting success knowing that you aren't likely to have that expectation realized. It's not pleasant, and that gap between your expectations and the data coming from reality is what suffering is. But with suffering comes hope, and sometimes the tradeoff is worthwhile.

The other option, which tends to be preferable when you have time to compute it, is to shift from trying to predict instant success to predicting that you will stay on an optimal trajectory. Instead of coding your temperature controller to try so hard that it melts down if there's ever a large gap between the belief and desire, limit the output power. Code it such that when too cold, it will heat as fast as it safely can, and predict that the temperature will therefore rise as quickly as is safely possible -- which may or may not reach the desired set point.
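A sketch of that power-limited, trajectory-following controller (the setpoint, gain, and safe-power limit are illustrative assumptions, loosely inspired by the heat treat oven example): the output is clipped at a safe maximum, and the *expectation* follows the clipped output -- the fastest safe approach to the setpoint, not the setpoint itself.

```python
def safe_step(estimate, setpoint, gain_per_unit_power, max_safe_power):
    """Return (power, expected_next_temp) for one control step.

    Power is whatever reaching the setpoint in one step would take,
    clipped to the safe limit. The expectation follows the actual
    output, so when the gap is large we expect the fastest safe
    trajectory rather than immediate arrival.
    """
    wanted = (setpoint - estimate) / gain_per_unit_power
    power = min(max_safe_power, max(0.0, wanted))
    return power, estimate + power * gain_per_unit_power

temp = 70.0  # starting way below the setpoint, like a cold oven
trajectory = []
for _ in range(5):
    power, temp = safe_step(temp, setpoint=2070.0,
                            gain_per_unit_power=10.0, max_safe_power=50.0)
    trajectory.append(round(temp))
print(trajectory)  # climbs a fixed 500F per step until the setpoint is in reach
```

When the gap is large, the controller's expectation at the next time step is "500F hotter than now", and only once the setpoint comes within one safe step do expectation and setpoint coincide.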

In other words, "Believe what looks to be true, try your best, and find out if you can win".

To give a concrete example, in the finals of the most important tournament of my high school sports career, I went in knowing it'd be a tough match and very quickly realized that I definitely wasn't going to win. Not "Wah wahh, boohoo, I'm not gonna win!", dramatizing a fear that I might not win, just a sober recognition that I got second place. Which hey, could be worse. 

It's one thing to talk the talk of "Just try your best and find out what happens", but a true 100% effort is hard, and risky[10], and what's the point if not to maybe succeed? Normally, I fought with an intent to win. My motivation that justified all the effort was in expectation of potentially securing a win, even if somewhat unlikely. This match started so badly that I quickly recognized that it wasn't gonna happen. I couldn't even imagine a scenario where I could win, and as a result I had no ability to control towards a success. Without that hopeful expectation of success I couldn't motivate a real effort. I couldn't try.

I grieved my loss for a moment, pondered for a bit, and ended up deciding that five more minutes of maximal effort was worthwhile just out of principle. I decided that I would intend to do as well as I possibly could, giving up absolutely nothing that wasn't taken completely against my will. With this alternate intention, it didn't matter that I was definitely going to lose. I knew it was true, and so there was nothing there to see. No reason to look at "Might I win?", and therefore no reason to expect "No". And nothing keeping me from denying my opponent every point I possibly could while scoring every point on him that he let me. With about thirty seconds left I realized "Oh shit, I might actually win", which was scary because that meant I might lose (which, amusingly enough, isn't a fear so long as you know it's reality).

Surprisingly enough I didn't. Turns out, "Denying every point possible" was all of them, and "Scoring every point I could" was more than he had scored already.[11]

If I had expected on the level of "Will I win?", even attempting to fudge the scales in effort to predict and therefore aim towards "Yes", I would have predicted failure, behaved in line with that prediction, and never figured out that it didn't have to be true. Even my coach admitted after the match that he didn't have that optimism in him. 

Motivated thinking often does this. We try to believe what we want to happen, and struggle against reality telling us we're failing, only to miss the chance to do something better. Give in completely, take your thumb off the scale, submit to the reality of what you cannot expect to change, and you get to notice the question of "What next? Given that I'm [probably] going to lose, what do I want?". While likely not enough to achieve everything you ever wanted (by your own estimation), more nuanced models that predict optimal trajectories tend to map reality better than those which predict immediate and unconditional success, and work better as a result.

 

Pay attention

To what you expect, because that is what you intend, and that is the world towards which you are aiming. Not to what you would expect, if you were to attend to a particular question -- to what you are actively expecting in each moment.

To the difficulty with which you can expect to succeed, for that is the warning sign that even trying is going to require submitting more to reality, and finding something you can expect less ham-fistedly -- something that is as good as you can possibly achieve, and therefore you might actually achieve.

  1. ^

    I sat in on the last class of a hypnotherapy training course once, and the instructor (whom I respect) was saying that we are essentially con men, conning people into getting better.

  2. ^

    It's not quite self fulfilling, technically. No one is ever unable to remember their names in hypnosis. People are unable to try because they've been convinced they can't succeed -- generally without recognizing that they've been convinced and that the apparent inability stems from the lack of intent. The end result is functionally the same, until you figure out that you can stop imagining that any time you like -- at which point the vulnerability is patched. 

  3. ^

    Goodhart's law is a bitch

  4. ^

    This isn't a "No True Scotsman" because it literally wouldn't fit the definition of "intention" otherwise. 

    in·ten·tion: a thing intended; an aim or plan.

    If we're actively updating our behavior to maintain expectation of something else, that something else is the thing we're aiming at. If we're updating our expectations and not changing our behavior in response, we're not aiming at anything.
     

  5. ^

    This is the opposite of hypnotic "challenge suggestions" like name amnesia or sticking hands to tables.

  6. ^

    Having one's inner monologue replaced with a legit dialogue involving accusations of insanity does not help one feel sane.

  7. ^

    It did take a little more work to help her keep it, several injuries down the line.

    Eventually, someone offered her ibuprofen when she broke her thumb, and she casually said that she didn't need it because she just decided it wouldn't swell. This was such an unlikely thing to him that his brain autocorrected over it, as if he had seen "the the" and reasoned that it couldn't possibly be what was said/meant. She felt self conscious about it, started doubting herself, and her injury swelled up. I had to talk to her that night about "Who cares if it sounds crazy? Find out if it's true", and the swelling was most of the way back down in the morning.

  8. ^

    It also applies to the makeup washing off example. Her friends wanted her to get into the pool, but couldn't look at reality and expect her to go swimming, so they didn't. In order to preserve their expectation that she goes swimming, they dissociated from her feedback and cut off any chance of anyone learning anything that could change their minds -- and because this is visible, it means their beliefs don't track reality and can't be used to Aumann update.

    This is where the apparent "other sense" of the word "expect" comes from -- people can and do choose to expect things (direct attention and attempt control towards outcomes) while simultaneously expecting to fail at realizing those expectations, leaving a sense of social pressure without necessarily expecting that you will yield to it -- just that you "should". As strange as it sounds, her friends were genuinely expecting her to get in the pool, while simultaneously expecting her to not do as they expected her to.

    Yes, this is fundamentally irrational, and it's for a reason; they had no idea how to even try if they were to be rational, and it's better to choose a slim chance at success than it is to choose no chance of success -- what is rationality for if not for winning? 

    My thesis here is that no, full rationality really is better. Of course you don't know what to do with the discouraging facts right away; you have to figure that out!

  9. ^

    Another important corollary is that not only is motivated cognition fundamental, so is confirmation bias. When we teach kids that the scientific method is "come up with a hypothesis, and then test it", that's simply wrong, and a recipe for confirmation bias -- because by focusing on one predetermined outcome you are programming your brain to confirm it. This is fine as an engineering hypothesis where you test "I can make a cool thing work", but good science is going to come when you don't know what will happen when you do a certain thing, and run tests to find out. The "hypothesis", in good science, is "This experiment is going to teach me something useful!".

  10. ^

    Because of this match some of my ribs pop out further than they're supposed to, to this day. Oops.

    More dramatically, Eddie Hall's record breaking 500kg lift had some scary effects on him, and to get himself to output that level of effort he had to feed his brain false data to get himself to treat it like a life or death issue.

  11. ^

    I didn't have an explicit understanding of how expectations shape intentions, or how to consciously navigate from using my attention in an ineffective way to one which was more effective. But I did have a stubborn rejection of thumbing the scale, and a stubborn rejection of the idea of not trying.

    The reason this works as an example despite preceding my explicit understanding is that it's not explicit understanding that is doing the work. We already make these decisions when we notice them, and our results follow our decisions.


