Published on April 17, 2025 12:31 PM GMT
It seems to me that the question of whether LLMs are conscious is a bit of a red herring.
Instead, the salient point is that they are sequence-learning systems, similar to our cortex (plus hippocampus). We should therefore expect them to be able to learn sequences.
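To make "sequence learning" concrete: the training objective is just next-token prediction, i.e. the model is fit to minimize the standard autoregressive loss (a textbook formulation, not anything specific to a particular model):

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t})$$

There is no term in this objective for how the text feels, only for how predictable it is.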
What we should not expect is that they feel pain. In humans, pain is a separate system, and some people are born without the ability to feel it. That does not mean they cannot act as if they are in pain.
We should not expect them to have feelings like fear. In humans, this is the domain of the amygdala, and LLMs do not have an amygdala. It is also not hard to learn the sequence of acting afraid.
We should not expect them to feel love, attraction, friendship, delight, anger, hate, disgust, frustration, or anything like that. All of these human capacities are the product of evolutionary pressures and do not originate in our sequence-learning system.
LLMs have not been subject to that evolutionary pressure, and they have no additional parts designed to implement pain, fear, or any of the above. They are pure sequence-learning systems.
Even if they are conscious, they are still empty. For them there is no valence to tokens; a love letter is the same as a grocery bill. To argue otherwise would require ignoring Occam's razor and assuming some kind of emergence, and there is absolutely no reason to do this.
This does not mean that they don't have preferences, in the sense that they seek some things out, or goals that they try to accomplish. But they don't feel good or bad about success or failure. They don't suffer; they just output the most likely token.
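As a concrete illustration of that last point, here is a minimal sketch of what "output the most likely token" amounts to: greedy selection over next-token scores. The token strings and numbers below are made up for illustration and are not any real model's API or vocabulary.

```python
# A minimal sketch of "just outputting the most likely token" (greedy decoding).
# The tokens and scores are hypothetical; this is not any real model's interface.

def most_likely_token(logits: dict) -> str:
    """Pick the highest-scoring next token. No valence enters anywhere."""
    return max(logits, key=logits.get)

# Emotionally loaded and mundane contexts are handled by the exact same rule.
after_love_letter = {"yours": 2.1, "sincerely": 1.4, "total:": -0.8}
after_grocery_bill = {"yours": -1.2, "sincerely": 0.3, "total:": 2.6}

print(most_likely_token(after_love_letter))   # yours
print(most_likely_token(after_grocery_bill))  # total:
```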
I think this topic is about to become much more salient in the near future, when memory features and tighter integration make human-AI relationships deeper and more personal.