Arthropod (non) sentience

This article examines the question of consciousness in arthropods, particularly shrimp. The author argues that shrimp consciousness may be too faint to measure, so there is no conscious self on which pain could be inflicted and no need to intervene for their welfare. The article works through the nature of consciousness, the concept of pain, and the relation between consciousness and neural-network complexity, drawing on research such as Integrated Information Theory to argue that any measure of consciousness is closely tied to the complexity of the underlying network. The author concludes that extending the moral circle must rest on a broad theory of consciousness, and that for now we know very little about the conscious experience of creatures such as shrimp.

🤔 **The nature of consciousness:** The author holds that consciousness is the most real thing in the Universe, yet impossible for others to observe, and endorses physicalist epiphenomenalism: the mind accompanies the physical world without affecting it.

🧠 **Pain and penalty:** Pain is penalty felt by a conscious self. Arthropods such as shrimp may lack a complex conscious self, so their avoidance behavior is evidence of penalty rather than of pain.

📊 **Consciousness and neural-network complexity:** The author argues that consciousness is closely tied to neural-network complexity; research such as Integrated Information Theory suggests that the intensity of consciousness scales with a network's capacity to integrate information. Shrimp and other arthropods have extremely small brains, far less complex than human ones, so their level of consciousness is likely very low.

📈 **Super-additivity of consciousness:** Measures of consciousness may be super-additive: the complexity of two connected networks can far exceed the sum of their separate complexities. Comparing species by raw neuron counts may therefore underestimate the differences in consciousness between them.

🌍 **Extending the moral circle:** Extending the moral circle must rest on a broad theory of consciousness; since we still know little about the conscious experience of many creatures, caution is needed in deciding how to treat them.

Published on November 25, 2024 4:01 PM GMT

Matthew Adelstein has recently published a post on arthropod (specifically shrimp) sentience. He defends a comparable degree of pain between shrimp and humans (shrimp = 20% of human). My position is that arthropod consciousness is “too small to measure”, so there is no conscious self on which pain can be inflicted, and there is no point in any intervention for their welfare, no matter how cheap. I have argued in this direction before, so I will freely reuse my previous texts without further (self-)citation.

The “pretty hard” problem of consciousness

In “Freedom under naturalistic dualism” (forthcoming in the Journal of Neurophilosophy) I have argued that consciousness is radically noumenal: it is the most real thing in the Universe, yet totally impossible for others to observe.

Under physicalist epiphenomenalism the mind is super-impressed on reality, perfectly synchronized with it and parallel to it. Physicalist epiphenomenalism is the only philosophy compatible with both the autonomy of matter and my experience of consciousness, so it has no competitors as a cosmovision. Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem” of consciousness) or finding a procedure that quantifies the intensity of the consciousness emerging from a physical system (the so-called “pretty hard problem” of consciousness) is not directly possible: the most Science can do is build a Laplacian demon that replicates and predicts reality. But even the Laplacian demon (the most phenomenally knowledgeable being possible) is impotent to assess consciousness. In fact, regarding Artificial Intelligence we are in the position of Laplace's demon: we have the perfectly predictive source code, but we don’t know how to use this (complete) scientific knowledge of the system to assess consciousness.

Matthew suggests in his post that there is strong “scientific evidence” of fish consciousness, but of course there is no scientific evidence of any sentience beyond your (my!) own. Beyond your own mind, consciousness is neither “proven” nor “observed” but postulated: we have direct access to our own stream of consciousness, and given our physical similarity to other humans and the existence of language, we can confidently accept the consciousness of other humans and their reports of their mental states.

Even if you are a generous extrapolator and freely consider both dogs and pigs to be conscious beings (I do), they cannot report their experience, so they are of limited use for empirical work on sentience. All promising research programs on the mind-body problem (which collectively go by the name “neural correlates of consciousness”) are based on a combination of self-reporting and neurological measurement: you must simultaneously address the two metaphysically opposite sides of reality, relying on trust in the reporting of mental states.

I am an external observer of this literature, but in my opinion empirical Integrated Information Theory (IIT) had a remarkable success with the development of a predictive model (“Sizing Up Consciousness” by Massimini and Tononi) that was able to distinguish conscious states (wakefulness and dreams) from non-conscious ones (dreamless sleep) by neurological observation, using a (crude) measure of information integration.
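
The measure behind that work is the perturbational complexity index, computed from the brain's response to a magnetic perturbation. As a rough illustration of the underlying idea only (a toy sketch of mine, not the published pipeline; the signal shapes and thresholds are invented), here is a Lempel-Ziv-style phrase count applied to a binarized response: a differentiated, wake-like pattern compresses poorly and scores high, while a stereotyped, sleep-like pattern scores low.

```python
# Toy sketch: a crude Lempel-Ziv-style complexity score, in the spirit of (but
# far simpler than) the perturbational complexity index of Massimini & Tononi.
# The "wake" and "sleep" signals below are invented for illustration.
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a simple sequential Lempel-Ziv-style parse."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

rng = np.random.default_rng(0)

# Wake-like evoked response: spatiotemporally differentiated, hard to compress.
wake = "".join(str(b) for b in (rng.random(2000) > 0.5).astype(int))

# Dreamless-sleep-like response: a stereotyped repeating burst, easy to compress.
sleep = "".join(str(b) for b in np.tile([1, 1, 1, 1, 0, 0, 0, 0], 250))

print("wake-like complexity :", lz_phrase_count(wake))   # high
print("sleep-like complexity:", lz_phrase_count(sleep))  # low
```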

Pain and penalty

Matthew devotes a few pages to piling up evidence of behavioral similarity between humans and arthropods, and obviously there is a fundamental similarity: we are both neural networks trained by natural selection. We avoid destruction and pursue reproduction, and we are effective and desperate in both goals. The (Darwinian) reinforcement-learning process that has produced our behavior implies strong rewards and penalties, and since we are products of the same process (animal-kingdom evolution), external similarity is inevitable. But to turn the penalty in a neural network's utility function into pain, you need the neural network to produce a conscious self. Pain is penalty to a conscious self. Philosophers know that philosophical zombies are conceivable, and external similarity is far from enough to guarantee noumenal equivalence.

Consequently, all the examples of pain avoidance and neural excitation described by Matthew are irrelevant: they prove penalty, not pain. Other “penalty reduction” behavior (such as the release of amphetamines) is equally irrelevant for the same reason.
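
To make the distinction concrete, here is a minimal sketch (my own toy example, nothing from Matthew's post or the cited studies): a tabular Q-learning agent that learns to flee a “noxious” state purely from a scalar penalty in its update rule. It reproduces the avoidance behavior those studies report, yet nothing in it amounts to a conscious self that suffers.

```python
# Toy example: avoidance behavior learned from a bare scalar penalty.
# The environment, states and parameters are invented for illustration.
import random

STATES = ["safe", "noxious"]
ACTIONS = ["stay", "flee"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Fleeing the noxious state works; staying there costs -1 (the penalty)."""
    if state == "noxious":
        return ("safe", 0.0) if action == "flee" else ("noxious", -1.0)
    # From the safe state, the world occasionally pushes the agent into danger.
    return ("noxious", 0.0) if random.random() < 0.3 else ("safe", 0.0)

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state = "safe"
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < EPSILON
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# The learned policy avoids the penalized state, i.e. "flee" when noxious.
print("policy in the noxious state:", max(ACTIONS, key=lambda a: Q[("noxious", a)]))
```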

On the other hand, complex and flexible behavior is more suggestive of the kind of complexity we associate with the existence of a self, and Matthew cites a long list of papers. Many of them are openly bad because they are “checklist based”: you take a series of qualitative properties and tick whichever are present. For example, if you compare me with John von Neumann you can tick “Supports American hegemony” and “Good at mathematics”: that is the magic of binarization. It is true that shrimp and humans both “integrate information”, but of course it matters how much. Checklists are the ultimate red flag of scientific impotence, and look how many of them there are in the Rethink Priorities report on moral weights and in Matthew's selected papers.
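
A trivial sketch of that “magic of binarization” (illustrative only: the human neuron count is the commonly cited ~86 billion, and the shrimp figure simply applies the ~0.01% ratio used later in this post, not a measured count): once quantitative properties are turned into yes/no ticks, four orders of magnitude disappear.

```python
# How a checklist hides magnitudes. Counts are illustrative, not measurements.
human_neurons = 86_000_000_000                # commonly cited figure
shrimp_neurons = int(human_neurons * 0.0001)  # ~0.01% of human, per the post's ratio

neurons = {"human": human_neurons, "shrimp": shrimp_neurons}

# The binarized checklist: identical rows for both species.
checklist = {name: {"has a nervous system": n > 0, "integrates information": n > 1}
             for name, n in neurons.items()}

print(checklist)                                                # both tick every box
print({name: f"{n:,} neurons" for name, n in neurons.items()})  # the hidden gap
```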

Matthew also describes many cases of brain damage that are compatible with active behavior, to support the claim that no concrete part of the brain can be considered a necessary condition for consciousness. I do not have a very strong opinion on this: information processing is the biological function that can adopt the widest variety of external forms; while for locomotion form and size are extremely important, computation can be done in many shapes and formats. But in the end, you need the neural basis to have the conscious experience. That a brain can be conscious with 90% fewer neurons (which ones matters!) is massively different from being conscious with 99.9% fewer neurons.

Super-additivity of consciousness

Of course, we do not measure computers by mass, but by speed, number of processors, and information integration. But if you simply do not have enough computing capacity, your neural network is small and its information processing is limited. Shrimp have ultra-tiny brains, with fewer than 0.1% of a human's neurons. The most important theories of consciousness are based on the integration of information: Integrated Information Theory (IIT) is the leader of the pack, but close contenders such as Global Neuronal Workspace Theory (GNWT) and Higher-Order Thought (HOT) theory are equally based on neural complexity. Even the best counterexample to (a theoretical version of) IIT consists in building a simple system with a high measure of “integrated information”: I entirely agree with that line of attack, which is fatal both for large monotonous matrices and for tiny shrimp brains.

I am not a big fan of the relative lack of dynamism of the more theoretical IIT models (and of the abuse of formalism over simulation!), but in the end, while it is the dynamics of the network that creates consciousness, you need a large network to support the complex dynamics. If you are interested in the state of the art of consciousness research, you should read Erik Hoel (see here his discussion of a letter against IIT), and probably his books rather than his Substack.

As a rule, measures of information integration are super-additive (that is, the complexity of two neural networks that are connected to each other is far bigger than the sum of the complexities of the original networks), so neuron-count ratios (shrimp = 0.01% of human) are likely to underestimate differences in consciousness. The ethical consequence of super-additivity is that, ceteris paribus, a given pool of resources should be allocated in proportion not to the number of subjects but to the number of neurons (in fact, even more steeply than that, because the “super” in super-additivity can be substantial). Of course, this is only a Bayesian prior: behavioral complexity, neuron speed, or connectome density could change my mind, but, if I am to decide, better to bring me multi-panel graphs than checklists.
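
A toy numerical illustration of this super-additivity (my own sketch: total correlation of a Gaussian is used as a crude stand-in for “information integration”, not as IIT's Φ, and the network sizes and correlations are invented): coupling two small networks yields more integration than the sum of what each has on its own.

```python
# Toy illustration of super-additive integration. Total correlation of a
# zero-mean Gaussian with correlation matrix R is TC = -0.5 * log(det(R)).
# All sizes and correlation values are invented for illustration.
import numpy as np

def total_correlation(R):
    """Multi-information (in nats) of a Gaussian with correlation matrix R."""
    return -0.5 * np.log(np.linalg.det(R))

def block(n, r):
    """n units with uniform within-network correlation r."""
    return (1 - r) * np.eye(n) + r * np.ones((n, n))

A = block(4, 0.3)   # network A
B = block(4, 0.3)   # network B
zeros = np.zeros((4, 4))
cross = 0.2 * np.ones((4, 4))   # weak connections between the two networks

disconnected = np.block([[A, zeros], [zeros, B]])   # two isolated networks
connected = np.block([[A, cross], [cross, B]])      # the same networks, coupled

print("TC(A) + TC(B)      :", total_correlation(A) + total_correlation(B))
print("TC(disconnected)   :", total_correlation(disconnected))  # equals the sum
print("TC(connected whole):", total_correlation(connected))     # strictly larger here
```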

In any case, the main problem is inescapable: a broad extension of the moral circle must be based on a broad theory of consciousness. For the time being we don’t know “what it is like to be a bat”, and shrimps are like bats for the bats.



Discuss
