LessWrong
How do AI researchers define AI sentience? Participate in the poll

This article explores the question of how AI consciousness should be defined, focusing on the differing views of AI researchers and neuroscientists. It notes that although philosophers and neuroscientists frequently discuss AI consciousness, it is AI researchers and policymakers who will actually decide the direction of AI development. The article stresses the importance of AI researchers' views on AI consciousness and cites Anthropic researchers' estimate that Claude may be sentient. It also mentions neuroscientific theories of consciousness, such as Global Workspace Theory, and notes that AI researchers may need different criteria for judging whether an AI is conscious.

🧠 The core of the article is the definition of AI consciousness, emphasizing the importance of AI researchers in defining it, since they will lead AI development and experimentation.

🤔 The article points out that AI researchers may have intuitive definitions of AI consciousness that differ from those of neuroscientists. For example, although neuroscience offers theories of consciousness such as Global Workspace Theory, AI researchers may not consider satisfying such a theory sufficient evidence that an AI is conscious.

💡 The article calls on AI researchers and policymakers to share their definitions of AI consciousness, and encourages them to take the poll so that the criteria AI researchers actually use can be better understood.

Published on July 4, 2025 12:29 PM GMT

TLDR: AI researchers may have a different intuitive definition of sentience than neuroscientists do; if you are an AI researcher (or a policymaker, also important), please consider suggesting your definition in the poll here.

The question of whether AI is sentient, and what criteria could establish this, has been getting more and more attention lately. Usually, the people who discuss this question approach it either from a general philosophical perspective or from a neuroscience perspective. However, philosophers and neuroscientists will not be the ones who make the decisions about AI development: how models may be trained, which experiments may be conducted, and so on. These decisions will mostly be made inside the AI labs, and policymakers will potentially also have a say. It is therefore important to see what AI researchers think about AI sentience, since the decision will be theirs.

It is reasonable to assume that AI researchers do not completely dismiss the possibility of AI sentience. For example, some Anthropic researchers estimate the probability that the current version of Claude is sentient at around 15%.

Should the definition used by AI researchers differ from that of neuroscientists or philosophers? After all, won't AI researchers who worry about this question simply adopt the current agenda? This is a valid assumption, but studying a theory does not mean agreeing with it. As an example, one of the common theories of consciousness in neuroscience is Global Workspace Theory. Inspired by this model, a group of researchers built a Perceiver architecture that satisfies the minimal criteria of consciousness according to the theory, yet nobody seems to treat it as a sentient being. This suggests that, from the point of view of most AI researchers, satisfying Global Workspace Theory is not enough for an AI to be sentient.
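To make the Global Workspace idea concrete, here is a toy caricature of its core loop: independent specialist modules compete for access to a shared workspace, and the winning content is broadcast back to every module. This is a sketch for intuition only; the module names, the keyword-counting salience rule, and the `global_workspace_step` function are illustrative inventions, not the actual Perceiver implementation the researchers built.

```python
from dataclasses import dataclass

@dataclass
class Module:
    """A specialist processor competing for the global workspace."""
    name: str
    keywords: frozenset
    last_broadcast: str = ""

    def propose(self, stimulus: str):
        # Salience: how many stimulus words this module recognizes.
        score = sum(w in self.keywords for w in stimulus.split())
        return score, f"[{self.name}] interpretation of '{stimulus}'"

    def receive(self, broadcast: str) -> None:
        # In GWT, the winning content becomes globally available.
        self.last_broadcast = broadcast

def global_workspace_step(modules, stimulus):
    # 1. Competition: the most salient proposal wins the workspace.
    _, winner = max(m.propose(stimulus) for m in modules)
    # 2. Broadcast: the winning content is sent back to every module.
    for m in modules:
        m.receive(winner)
    return winner

modules = [
    Module("vision", frozenset({"flash", "color"})),
    Module("audio", frozenset({"pitch", "loud"})),
]
print(global_workspace_step(modules, "a flash of color"))
# -> [vision] interpretation of 'a flash of color'
```

The point of the sketch is that the "minimal criteria" of GWT (competition, a bottleneck, global broadcast) are architecturally cheap to satisfy, which is exactly why many AI researchers do not treat satisfying them as evidence of sentience.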

I think it would be very interesting to see what the actual minimal criteria of consciousness/sentience are according to AI researchers, i.e., the criteria such that, if you saw them satisfied in your model, you would treat it as a sentient being. So if you are an AI researcher or policymaker, please take the poll, and I will summarize the results in a later post.


