You Can't Objectively Compare Seven Bees to One Human

Published on July 7, 2025 6:11 PM GMT

One thing I've been quietly festering about for a year or so is the Rethink Priorities Welfare Range Report. It gets dunked on a lot for its conclusions, and I understand why. The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts". Most people who argue like this are doing so in bad faith and should just be ignored.

I'm writing this as an attempt to crystallize what I think are the serious problems with this report, and with its line of thinking in general. I'll start with

Unitarianism vs Theory-Free Models

No, not the church from Unsong. From the report:

    Utilitarianism, according to which you ought to maximize (expected) utility.
    Hedonism, according to which welfare is determined wholly by positively and negatively valenced experiences (roughly, experiences that feel good and bad to the subject).
    Valence symmetry, according to which positively and negatively valenced experiences of equal intensities have symmetrical impacts on welfare.
    Unitarianism, according to which equal amounts of welfare count equally, regardless of whose welfare it is.

Now unitarianism sneaks in a pretty big assumption here when it says 'amount' of welfare. It leaves out what 'amount' actually means. Do RP actually define 'amount' in a satisfying way? No![1]

You can basically skip to "The Fatal Problem" from here, but I want to go over some clarifications first.

Evolutionary Theories Mentioned in The Report

I ought to note that the report does mention three theories about the evolutionary function of valenced experience, but these aren't relevant here, since they still don't make claims about what valence actually is. If you think they do, consider the following three statements:

    It is beneficial for organisms to keep track of fitness-relevant information
    It is beneficial for organisms to have a common currency for decision making
    It is beneficial for organisms to label states as good or bad, so they can learn

Firstly, note that these theories aren't at all mutually exclusive and seem to be three ways of looking at the same thing. And none of them give us a way to compare valence between different organisms: for example, if we're looking at fitness-relevant information, there's no principled way to compare +5.2 expected shrimp-grandchildren with +1.5 expected pig-grandchildren.[2]
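To make the unit problem concrete, here's a toy sketch (my construction, not the report's): each species' fitness is its own currency, and any exchange rate between currencies is an extra free parameter that the evolutionary story never supplies. The `Fitness` class and its numbers are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Fitness:
    """Valence denominated in a species-specific fitness currency."""
    species: str
    expected_grandchildren: float

    def __add__(self, other: "Fitness") -> "Fitness":
        # Within one species, expected grandchildren add up fine...
        if self.species != other.species:
            # ...but across species any exchange rate is a free parameter.
            raise TypeError(f"no principled conversion between "
                            f"{self.species}- and {other.species}-grandchildren")
        return Fitness(self.species,
                       self.expected_grandchildren + other.expected_grandchildren)

shrimp_gain = Fitness("shrimp", 5.2)
pig_gain = Fitness("pig", 1.5)

try:
    total = shrimp_gain + pig_gain
except TypeError as e:
    print(e)  # the sum is undefined until you pick an arbitrary exchange rate
```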

All of this is fine, since the evolutionary function of valence is a totally different issue to the cognitive representation of valence.

This is called the ultimate cause/proximate cause distinction and crops up all the time in evolutionary biology. An example is this:
Question: why do plants grow tall?
Proximate answer: two hormones (auxins and gibberellins) cause cells to divide and elongate, respectively
Ultimate answer: plants can get more light by growing above their neighbors, so plants which grow taller are favoured

The Fatal Problem

The fact that the authors of the report don't give us any proximate theories of consciousness, unfortunately, damns the whole project to h∄ll, which is where poor technical philosophies go when they make contact with reality (good technical philosophies stick around if they're true, or go to h∃aven if they're false).[3]

If I could summarize my biggest issue with the report, it's this:

Unitarianism smuggles in an assumption of "amount" of valence, but the authors don't define what "amount" means in any way, not even to give competing theories of how to do so.

This, unfortunately, makes the whole thing meaningless. It's all vibes! To reiterate, the central claim being made by the report is:

    There is an objective thing called 'valence' which we can assign to four-volumes of spacetime using a mathematical function (but we're not going to even speculate about the function here)
    Making one human brain happy (as opposed to sad) increases the valence of that human brain by one arbitrary unit per cubic-centimeter-second
    On the same scale, making one bee brain happy (as opposed to sad) increases the valence of that bee brain by fifteen thousand arbitrary units per cubic-centimeter-second

I don't think there's a function I would endorse that behaves in that way. 
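To see how strange the required function is, here's a back-of-the-envelope sketch. The arithmetic is mine, not the report's: the brain volumes are rough textbook figures, and the 7-15% range is the report's headline conclusion.

```python
# Rough illustrative volumes; both are my assumptions, not the report's.
HUMAN_BRAIN_CM3 = 1300.0  # adult human brain, roughly
BEE_BRAIN_CM3 = 0.01      # honeybee brain, roughly 10 mm^3

def implied_bee_density(welfare_ratio: float) -> float:
    """Valence density (units per cm^3-second) a bee brain must have,
    if a happy human brain is pegged at 1 unit per cm^3-second."""
    human_total_per_second = 1.0 * HUMAN_BRAIN_CM3
    bee_total_per_second = welfare_ratio * human_total_per_second
    return bee_total_per_second / BEE_BRAIN_CM3

# The report's headline range: a bee's welfare is 7-15% of a human's.
for ratio in (0.07, 0.15):
    print(f"bee at {ratio:.0%} of a human: "
          f"{implied_bee_density(ratio):,.0f} units per cm^3-second")
# -> roughly 9,000 to 20,000 units per cm^3-second, vs 1 for the human
```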

My Position

Since I've critiqued other people's positions, it's only polite to state my own:

- I don't think there's an objective way to compare valence between different minds at all.
- You can anchor on neuron count and I won't criticize you, since that's at least proportional to information content, but that's still an arbitrary choice. You can claim that what you care about is a particular form of self-modelling and discount anything without a sophisticated self-model.[4]
- All choices of moral weighting are somewhat arbitrary. All utilitarian-ish claims about morality are about assigning values to different computations, and there's no easy way to compare the computations in a human vs a fish vs a shrimp vs a nematode.
- The most reasonable critiques are critiques of the marginal consistency of different worldviews. For example, a worldview which values the computations going on inside all humans except those with red hair is fairly obviously marginally less consistent than one which makes no reference to hair colour.
- Whether a worldview values one bee as much as 1 human, 0.07 humans, or 1e-6 humans is primarily a matter of choice and, frankly, aesthetics.
- Just because we're throwing out objectivity, we need not throw out 'good' and 'bad' as judgements on actions or even people. A person who treats gingers badly based on an assumption like the one above can still be said to be evil.[5]
- How much of the world you write off as evil is also an arbitrary judgement, and do not make that judgement lightly.

  1. ^

    What would it even mean to do that? Suppose you were into free-energy-minimization as a form of perceptual control. You could think of the brain as carrying out a series of prediction-update cycles, where each prediction was biased by some welfare-increasing term. Then you could define the total amount of suffering in the universe as the sum over all cycles of the prediction error. You'd end up a negative utilitarian, but you could do it, and it would give you an objective way of comparing between individuals. Even if this particular example is incoherent in some ways, it does at least contain a term which can be compared between individuals.
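    A minimal sketch of this construction, assuming squared prediction error as the suffering term (everything here is my toy model, not RP's):

```python
import random

# Toy free-energy-flavoured model (my construction): each mind runs
# prediction-update cycles, with predictions biased upward by a welfare
# term; total suffering is the summed prediction error over all cycles.

def mind_suffering(n_cycles: int, welfare_bias: float, noise: float = 1.0) -> float:
    state = 0.0
    total_error = 0.0
    for _ in range(n_cycles):
        prediction = state + welfare_bias         # optimistic, welfare-biased prediction
        observation = state + random.gauss(0.0, noise)
        total_error += (prediction - observation) ** 2
        state = observation                       # update on what actually happened
    return total_error

# The cross-individual comparison is well-defined, because every mind's
# contribution is in the same units (squared prediction error).
universe_suffering = mind_suffering(10_000, 0.5) + mind_suffering(100, 0.5)
print(universe_suffering)
```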

  2. ^

    Also, consider the normative statements we get if we start talking about moral weight:

      You should care about anything which keeps track of fitness-relevant information
      You should care about anything which has a common currency for decision making
      You should care about anything which labels states as good or bad for learning

    Now to me these are incorrect moral statements. 

    This doesn't actually change the previous statement, but I do find it useful when talking about morality to check every now and then with the question 'What does this imply I should care about?'

  3. ^

    I've read chunks of the rest of the report, and it gives me an eyes-glazing-over feeling which I have recently come to recognize as a telltale sign of unclear thinking. Much of it just cites different theories with no real integration of them. I will make an exception for "Does Critical Flicker-Fusion Frequency Track The Subjective Experience of Time?" which raises a very interesting point and is worth a read, at least in part.

  4. ^

    I currently think in terms of some combination of the two.

  5. ^

    I think there's a Scott Alexander piece which discusses moral disagreements of this form. The conclusion was that some worldviews can be considered evil even if they're in some sense disputes about the world, if they're sufficiently poorly-reasoned.



