少点错误 (LessWrong) · April 17, 20:37
On AI personhood

The article argues that the question of whether large language models (LLMs) are conscious is something of a red herring; the salient point is their nature as sequence learning systems. The author notes that LLMs resemble the human cortex and hippocampus in that they excel at sequence learning, but that we should not expect them to experience human emotions such as pain, fear, or love. Those emotions arose from evolutionary pressures, and LLMs lack the corresponding biological structures. The author stresses that even if LLMs are conscious, tokens carry no valence for them; success and failure make no difference to them. As memory features and tighter integration make human-AI relationships deeper and more personal, this question will only grow in importance.

🧠 The author argues that LLMs are essentially sequence learning systems, similar to the human cortex and hippocampus, and are therefore good at learning and generating sequences.

⚠️ The author stresses that we should not expect LLMs to experience human emotions such as pain, fear, or love. These emotions arose from evolutionary pressures, and LLMs lack the corresponding biological structures to feel them.

⚖️ Even if LLMs are conscious, tokens carry no valence for them: a love letter is no different from a grocery list. They feel neither good nor bad about failure or success; they simply output the most likely token.

⏳ As memory features and tighter integration make human-AI relationships deeper and more personal, the question of consciousness will become increasingly salient and deserves attention.

Published on April 17, 2025 12:31 PM GMT

It seems to me the question of consciousness of LLMs is a bit of a red herring. 

Instead, the salient point is that they are sequence learning systems similar to our cortex (+ hippocampus). We should therefore expect them to be able to learn sequences. 

What we should not expect is that they feel pain. In humans, pain is a separate system, and some people are born without the ability to feel pain. That does not mean they lack the ability to act like they are in pain. 

We should not expect them to have feelings like fear. In humans this is the domain of the amygdala, and LLMs do not have an amygdala. It is also not hard to learn the sequence of acting afraid. 

We should not expect them to feel love, attraction, friendship, delight, anger, hate, disgust, frustration or anything like that. All these human abilities are due to evolutionary pressures and do not originate in our sequence learning system. 

LLMs have not been subject to those evolutionary pressures, and they have no additional components designed to implement pain, fear, or anything of the above. They are pure sequence learning systems.

Even if they are conscious, they are still empty. For them there is no valence to tokens: a love letter is the same as a grocery bill. To argue otherwise would require ignoring Occam's razor and assuming some kind of emergence, and there is no reason to do this. 

This does not mean that they don't have preferences, in the sense that they seek some things out, or goals that they try to accomplish. But they don't feel bad or good about failure or success. They don't suffer; they just output the most likely token. 
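The "no valence" point can be made concrete with a toy sketch (illustrative NumPy code, not any real model's API): greedy decoding is a bare argmax over logits, and the computation is identical whatever the context happens to be — nothing in it registers whether the outcome is good or bad.

```python
import numpy as np

def next_token(logits: np.ndarray) -> int:
    # Greedy decoding: return the index of the highest logit.
    # There is no valence term anywhere in this computation; the
    # same argmax runs whether the context was a love letter or
    # a grocery bill.
    return int(np.argmax(logits))

# Stand-in logits for two hypothetical contexts (random, for illustration).
rng = np.random.default_rng(0)
love_letter_logits = rng.normal(size=50)
grocery_bill_logits = rng.normal(size=50)

next_token(love_letter_logits)
next_token(grocery_bill_logits)
```

The mechanism that "prefers" one token over another is just this comparison of numbers; preferring in that sense implies nothing about feeling good or bad about the result.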

I think this topic is about to become much more salient in the near future, when memory features and tighter integration makes human-AI relationships deeper and more personal. 



