
We've spent years trying to get large language models to obey instructions, but what happens when they start obeying our feelings instead?
Most conversations around GPT-4.5 (and whatever other model we're currently fussing about) obsess over functionality: code debugged, emails written, summaries summarized, or benchmarks outperformed. Basically, we've been building really fancy, really expensive software to automate stuff we already know how to do—just faster and cheaper.
But maybe we're missing what's actually interesting here. What if the real magic isn't in getting AI to flawlessly execute tasks, but in a more subtle, powerful ability to read between the lines of a prompt and pick up on the intent behind it? Previous generations of software were already great at making us more productive at tasks with well-defined, step-by-step processes and clear answers. Excel power users, for example, can whiz through financial modeling. I see little reason to fix that. Perhaps our mistake was in trying to make LLMs automate these things, instead of focusing on what new capabilities could be unlocked.
This shift is just now gathering steam: Computing is moving away from explicit instruction-following toward intuitive preference-reading.
This wasn’t deliberate. Research labs have spent years trying to replicate existing digital workflows. But this happy side effect is what I’m calling “vibe compute.” On the frontier, AI models are already climbing layers of abstraction toward more intuitive reasoning. They still struggle with tasks human children find easy (try asking one to count the number of Rs in the word “strawberry”). And yet these same models intuit my tastes and preferences in ways that continue to shock me.
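The irony is worth making concrete: the letter-counting task that trips up frontier models is a one-liner in ordinary deterministic code, which is exactly the kind of well-defined work classical software has always owned.

```python
# Deterministic code handles the "strawberry" test trivially.
# LLMs often fumble it because tokenization hides individual letters.
word = "strawberry"
r_count = word.lower().count("r")
print(r_count)  # 3
```

That contrast is the whole point: explicit instruction-following was never the hard part for computers.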
I’ve had this idea for a while but have so far lacked clear examples to prove my point. With GPT-4.5’s release last week, this latent capability finally feels legible to the average user. Vibe compute is here.
The implications of this shift feel wildly under-discussed. When talking about the role of probabilities and emotions in our relationship with computers, most people post a screenshot of Joaquin Phoenix in the movie Her and call it a day. Instead, I want to truly interrogate the idea of an intuitive machine. What does it mean when LLMs are bad at math, but great at feelings?
The three breakthroughs behind vibe compute
Before we assume that I’m right, it’s worth going through the three technical breakthroughs that have made this leap possible:
Become a paid subscriber to Every to unlock this piece and learn about:
- The three technical developments that are powering this new era
- How LLMs encode subtle emotional context
- Products that are harnessing LLMs' subtle, powerful new capabilities