It's Better To Save Infinite Shrimp From Torture Than To Save One Person

This article examines a highly controversial philosophical claim: that sparing an infinite number of shrimp from extreme torture matters more than saving one person from death. Through a series of arguments, including a reinterpretation of what it means to save a life, an argument from sheer numbers, an appeal to evolutionary and moral continuity, and an analysis of moral risk in decision-making, it challenges anthropocentric intuitions. The author contends that when astronomical amounts of welfare or suffering are at stake, their moral weight can exceed that of an individual human life. The article also discusses the limits of our intuitions about infinity, encourages readers to think through and debate these ethical questions, and even suggests supporting animal welfare projects.

✨ A philosophical argument against anthropocentrism: The article proposes that sparing an infinite number of shrimp from extreme torture may be morally more valuable than saving one human life. The view rests on a reframing of life-saving as life extension rather than genuine rescue, together with an appeal to numbers: astronomical quantities of welfare or suffering carry overwhelming moral weight.

📈 Evolutionary and moral continuity: By tracing humanity's evolutionary descent from shrimp-like early life forms, and combining this with a principle about escalating moral importance across generations, the article argues for a continuity of moral value running from shrimp to humans. Across a long enough evolutionary chain, the aggregated welfare of early life forms such as shrimp can come to outweigh the life of an individual human.

⚖️ Moral risk and the limits of intuition: The article considers decision-making under moral uncertainty, noting that when an action would have infinite value on some credible moral theory, we have a strong reason to perform it. The author also stresses that human intuitions are unreliable when handling the concept of infinity, and that intuition-based judgments should therefore be treated with caution in favor of rigorous reasoning.

💡 Generality of the arguments and the plausibility of counterintuitive conclusions: The author further notes that similar arguments apply to other cases, for example replacing "shrimp spared from torture" with "people spared from headaches"; the core question is how to handle the aggregation of many small harms. A seemingly counterintuitive conclusion may therefore reflect a deeper moral truth, and readers are urged to re-examine how they weigh numbers and value.

💰 Practical action and ethical stakes: The article closes by noting that, in practice, a relatively modest amount of money can spare a very large number of shrimp from suffering. The author encourages readers who accept the arguments to consider supporting animal welfare projects, showing that abstract philosophical reasoning can lead to concrete moral action.

Published on August 5, 2025 4:10 PM GMT

1 Introduction

Crosspost of my blog post

Science often discovers surprising things. Who would have thought that your great^30-billion grandfather was a prokaryote? Philosophy does too. Any particular example of something that philosophers have discovered will be controversial, but I think pretty undeniable examples of discovered philosophical truths are that avoiding the repugnant conclusion is surprisingly tricky, that worlds containing infinitely many happy and miserable people are mostly incomparable, and that every way of making decisions about low probabilities requires giving up some obvious-seeming principle.

Many philosophical judgments are surprising at first, but turn out to be nearly unavoidable. Here’s one of them: sparing infinite shrimp from extreme torture is vastly better than saving one human from death. This is a surprising judgment, but I think it’s supported by pretty irrefutable philosophical argument. Friend of the blog Flo Bacus recently kicked up a huge firestorm by arguing for this, and unsurprisingly, I think she is right.

 

Now, there’s some uncertainty about whether shrimp are actually conscious. So here I will argue that given our uncertainty about shrimp consciousness, we should save some number of shrimp from torture over one human. If shrimp aren’t conscious, then in fact saving shrimp over people won’t turn out for the best, but I claim that given our uncertainty in the matter, it’s expectationally worth it.

 

2 Life extension

 

I have bad news: no life has ever been saved!

Destruction of life is never really averted, merely delayed. Lives aren’t saved, they’re merely extended. What we colloquially refer to as saving a life merely means preventing someone from dying of one particular cause, so their life is lengthened until they eventually succumb to something else. If someone’s life is saved when they’re 40, and then they eventually die at the age of 80, then really all we’ve done is extend their life by 40 years.

Okay, with that out of the way, let me appeal to the following obvious principle:

It is better to prevent Graham’s number shrimp from being tortured than to extend a person’s life by a single millisecond.

Graham’s number is an astronomically large number—far too big to ever write down in the observable universe even if you wrote down each digit on a single atom. It’s a lot more than 100!

The principle says that preventing that many shrimp from being tortured—so averting vastly more suffering than has existed in human history—would be better than extending one person’s life by one millisecond. This premise seems extremely obvious!

Now, imagine that there is some person who is currently 40. There are 1,577,847,600,000 keyboards in front of you (conveniently, the number of milliseconds in 50 years, what a coincidence). Each of them has two buttons, one labeled “shrimp” and the other labeled “life extension.” For each keyboard, if you press “shrimp,” then you spare Graham’s number shrimp from extreme torture. If you press “life extension,” then you add an extra millisecond to the person’s life. Note: each keyboard functions independently.

Clearly, it’s better to press the shrimp button than the life extension button. This follows from the earlier principle: it’s better to save Graham’s number shrimp from torture than to extend a person’s life by one millisecond. No matter how many other buttons you’ve pressed of each kind, it’s better to press the button that spares Graham’s number shrimp than the button that adds an extra millisecond to life! Thus, you should press every shrimp button and none of the life extension buttons.

But if you do that, rather than saving a person’s life (i.e. extending it by 50 years), you’ll spare 1,577,847,600,000 × Graham’s number shrimp from extreme torture! Thus, saving some number of shrimp is much better than saving one person’s life!
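
To sanity-check the arithmetic, here is a minimal Python sketch (illustrative only; the figures are the ones stated above):

    # Back-of-the-envelope check: milliseconds in 50 years, using the average
    # Gregorian year of 365.2425 days.
    MS_PER_YEAR = 365.2425 * 24 * 60 * 60 * 1000
    keyboards = round(50 * MS_PER_YEAR)
    print(keyboards)  # 1577847600000 -- the number of keyboards in the thought experiment

    # Each "shrimp" press dominates its paired "life extension" press (Graham's
    # number shrimp spared vs. one extra millisecond), so pressing "shrimp" on
    # every keyboard forgoes 50 years of life in total while sparing
    # keyboards * (Graham's number) shrimp from torture.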

This argument strikes me as decisive!

3 Flo’s argument

 

In her piece, Flo Bacus presents another argument, which I’ll give here in slightly abridged form.

 

Imagine that you were an immortal being. Every 100 years, you could go on a five-minute drive to a nearby store with a button: if pressed, the button would spare 10^100 shrimp from extreme torture. Seems like quite a good idea!

But here’s the catch: people sometimes get into car accidents. If you are an immortal being, even if you are an extremely good driver, at some point you will get into a car accident that kills someone. Now, by that time you’ll probably have saved a very large number of shrimp from torture (the odds per drive that you’ll kill someone are quite low!). But if human lives are infinitely more ethically significant than shrimp lives, then driving for five minutes every 100 years to save 10^100 shrimp would be morally wrong; it would predictably cost a human life in exchange for merely sparing shrimp.

Thus, to think that a human life matters infinitely more than shrimp, you have to think it would be wrong to drive for five minutes to save 10^100 shrimp from extreme torture. That’s obviously wrong!
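
To see why the eventual accident is all but guaranteed, here is a minimal Python sketch; the per-drive fatality probability is a made-up figure for illustration, not anything from Flo’s piece:

    # Assumption for illustration: a tiny chance of causing a fatal accident on
    # any given five-minute drive. Over enough drives, the chance of never having
    # killed anyone shrinks toward zero, while the shrimp spared grow without bound.
    P_FATAL_PER_DRIVE = 1e-7
    for drives in (10**6, 10**8, 10**10):
        p_some_fatality = 1 - (1 - P_FATAL_PER_DRIVE) ** drives
        print(f"{drives:.0e} drives: P(at least one fatality) ~ {p_some_fatality:.4f}, "
              f"shrimp spared = {drives} * 10^100")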

4 Love on the spectrum

 

The theory of evolution implies that we evolved from creatures that were relatively like shrimp, at least with respect to their moral importance. Your great^10 billion grandfather was a creature with about the moral importance of a shrimp.

Now, here is a very plausible principle:

There is no organism in any generation between us and shrimp-like creatures for which sparing one from torture is more important than sparing any number of the previous generation’s members from torture.

For example, it isn’t true that it’s better to spare one person from the current generation from torture than to spare infinite people from our parents’ generation from torture. Nor is it true that sparing one person from our parents’ generation is better than sparing infinite people from our grandparents’ generation.

Now, there’s one more controversial premise called the transitivity of the better than relation. This is simply the idea that if A is better than B, and B is better than C, then A is better than C. It’s supported by many powerful arguments, and also it seems pretty obvious on its face.

These premises collectively imply that saving infinite shrimp from torture is better than saving one human from torture.

To simplify, let’s call our current generation generation one, the generation before generation two, the generation before that generation three, and so on. Assume that the shrimp-like creatures were generation 1 billion.

Saving one person from torture in generation one is less valuable than saving, say, 1,000 people in generation two. For each of those 1,000 in generation two, saving 1,000 in generation three is more valuable (which is to say, saving 1 million in generation three is more valuable). For each of those 1 million in generation three, saving 1,000 in generation four is more valuable. You can see where I’m going with this! If you extrapolate this out to generation 1 billion, you end up with the conclusion that saving some number of shrimp-like creatures (members of generation 1 billion) from torture is better than saving one person from torture.

If saving one person in generation N is less valuable than saving some number of people in generation N+1, then saving a sufficiently large number in any earlier generation is more valuable than saving one person in the current generation. Because very distant generations contain shrimp-like creatures, saving some number of shrimp from torture is more important than saving one person from torture.
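
Here is a minimal Python sketch of how the step-wise trade compounds; the 1,000-per-generation exchange rate is just the illustrative figure used above:

    import math

    RATE = 1_000          # generation-(N+1) creatures traded against one generation-N creature
    GENERATIONS = 10**9   # generations back to the shrimp-like ancestors, as assumed above

    # Chaining the comparisons: sparing RATE**(GENERATIONS - 1) members of the
    # last generation outweighs sparing one member of generation one.
    digits = (GENERATIONS - 1) * math.log10(RATE)
    print(f"about 10^{digits:.0f} shrimp-like creatures")  # roughly 10^(3 billion): vast, but finite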

You might worry that this generalizes to bacteria, but bacteria aren’t conscious. There was a first conscious organism, and that organism mattered infinitely more than the one before.

5 Moral risk

 

The ethics of moral risk is notoriously fraught! It’s unclear how to make decisions under ethical uncertainty, particularly when the units of value across different ethical theories aren’t really comparable. But there’s a pretty plausible principle:

If some action might, on some plausible ethical theory, be the best thing that anyone ever did by infinite orders of magnitude, you have a very strong reason to perform the action—vastly stronger than your reasons to perform an action that’s only somewhat good on other theories.

For example, if I have non-trivial credence in utilitarianism, and some action is infinitely good according to utilitarianism, then I should regard that action as vastly better than actions that are just pretty good on other theories—e.g. an action that saves a single person. But on any view according to which welfare matters, sparing infinite shrimp from torture would be the best thing anyone ever did, by infinite orders of magnitude, so given our uncertainty, there are strong reasons to do it.
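
Setting aside the intertheoretic-comparison worries flagged above, here is a minimal sketch of the expected-value reasoning; every figure is a made-up stand-in, and a huge finite number stands in for “infinite” so the arithmetic stays well-defined:

    # All figures are illustrative assumptions, not claims from the post.
    CREDENCE_WELFARE_THEORY = 0.05   # modest credence that shrimp welfare matters
    VALUE_SPARE_SHRIMP = 10.0**100   # stand-in for "best thing anyone ever did, by far"
    VALUE_SAVE_PERSON = 1_000.0      # stand-in value of saving one person, counted at full credence

    ev_spare_shrimp = CREDENCE_WELFARE_THEORY * VALUE_SPARE_SHRIMP
    print(ev_spare_shrimp > VALUE_SAVE_PERSON)   # True: the comparison is not remotely close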

6 The Huemer argument

 

(For a paper presenting this argument in a different context, see Michael Huemer’s paper here, and his blog post here.)

 

Suppose you think it’s better to prevent one person from dying than to spare any number of shrimp from torture. Presumably it’s better to save some number of shrimp than to have a very low chance of saving a person from dying—saving 10^100 shrimp is better than reducing a person’s risk of dying one second from now by one googolth.

Thus, there must be some critical probability threshold X, such that an X percent chance of saving a life is better than saving any number of shrimp. For instance, maybe X is 10%, so a 10% chance of saving a person is more valuable than saving any number of shrimp, but saving infinite shrimp is better than a 5% chance of saving a life.

Here’s the problem: this view violates the following plausible constraint on rationality. Suppose you should perform action A, and that, after performing action A, you should perform action B. Then you should perform actions A and B together.

But now suppose that action A saves a googol shrimp rather than reducing a person’s risk of death by 8%. Action B also saves a googol shrimp rather than reducing a person’s risk of death by 8%. Each individual action is worth taking, since each is below the 10% threshold. But together the actions incur more than a 10% risk of death to save two googol shrimp. Thus, taken together the actions would be impermissible, even though each would individually be right.
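
Checking the arithmetic (a minimal sketch using the 8% and 10% figures from the example above):

    THRESHOLD = 0.10        # critical probability below which the shrimp win
    PER_ACTION_RISK = 0.08  # risk-of-death reduction forgone by each action

    added_up = 2 * PER_ACTION_RISK                     # 0.16 if the forgone reductions simply add
    combined = 1 - (1 - PER_ACTION_RISK) ** 2          # 0.1536 if treated as independent chances
    print(added_up > THRESHOLD, combined > THRESHOLD)  # True True: jointly over the threshold either way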

Now, you could hold that actions A and B both stop being permissible if you’ll have the option to take the other one—so you should only take action A if you won’t later be able to take action B. But this is very odd. Why would the value of saving a bunch of shrimp versus reducing someone’s risk of death have anything at all to do with whether you will later, at some other time, save a totally separate bunch of shrimp or reduce the risk of death of someone totally different? We could imagine actions A and B being spaced millions of years apart—in such a scenario, it seems utterly bizarre that each would affect the worthwhileness of the other.

Now, you could just bite the bullet and hold that it’s never worth risking human life to save any number of shrimp, but this would imply that you shouldn’t drive for one minute to save infinite shrimp, because driving would impose a tiny risk on other people. That’s pretty crazy! Thus, you’re committed to one of the following:

    1. Thinking that you shouldn’t drive for a minute to save infinite shrimp from excruciating torture.
    2. Thinking that whether actions that save shrimp rather than reducing people’s risk of death are worth performing has to do with other, totally unrelated actions that affect completely different people and shrimp.
    3. Thinking that sometimes each of two actions is right to do, but performing them jointly is wrong. But even that’s not quite enough, because if you think each individual action is worth taking, then you’ll inevitably think that one should perform a sequence of actions that imposes risks on people above the critical threshold to save a bunch of shrimp.

7 This isn’t actually such a weird result!

 

So far I’ve given arguments that strike me as extremely strong. But you might think that the conclusion is just so weird that you shouldn’t accept it. I’ll try to disabuse you of this here! I think this conclusion isn’t so surprising or revisionary.

I know I’ve said these things many times, but extreme suffering is bad! It’s bad to be in lots of pain. Preventing an infinite amount of something very bad is infinitely good. So sparing infinite shrimp from intense pain is infinitely good. And it’s better to do something infinitely good than to save a human life.

Every plausible view will hold that well-being and suffering are among the things in the world that matter. So when astronomical quantities of suffering are on the line, they ought to provide overriding considerations.

There are also good reasons not to trust our direct intuitions on this matter. People just don’t have very good intuitions about infinitely big numbers! People will pay nearly the same amount to save 2,000, 20,000, and 200,000 birds. If our intuitions aren’t even sensitive to the difference between 2,000 and 200,000, why in the world would we trust them when it comes to literally infinite quantities?

Certainly our intuitions don’t closely track value—we don’t intuitively grok that sparing infinite shrimp is infinity times better than sparing a million shrimp. And, as I’ve argued at length elsewhere, there are reasons to distrust our direct intuitions about shrimp because they’re unsympathetic and weird-looking! Certainly behind the veil of ignorance, we’d prioritize the interests of infinite shrimp over one person.

So on the one side we have a single intuition that’s untrustworthy many times over, and on the other, we have many different extremely obvious intuitions. I know which one I’m going with!

Now, a person might object that if it’s a conflict between thinking that shrimp matter at all or that infinite shrimp matter more than saving a person, they’d rather give up the premise that shrimp matter at all. But I don’t think this is reasonable.

First of all, the arguments above were mostly reasons to think shrimp mattered—only some took shrimp mattering at least a tiny amount for granted, and then argued that they had great aggregate weight.

Second, as already discussed, you shouldn’t trust your direct intuitions on this matter.

Third—and most importantly—the arguments I gave generalize to other cases. Similar arguments establish that preventing infinite headaches is better than preventing a single death (in the arguments above, just replace “spare shrimp from torture” with “spare people from headaches,” and change around the spectrum in argument four). Really, the deeper problem is with the idea that no number of small harms aggregates to a big harm, and you can’t save that idea just by not caring about shrimp! Just as it would be silly to conclude that headaches don’t matter at all to avoid the result that some number of headaches are worse than a death, the analogous reasoning with respect to shrimp is similarly in error.

8 So…

 

I think the arguments that infinite shrimp matter more than a single person are pretty decisive! Sparing enough shrimp really is better than sparing one human. Now, it turns out that in practice, for five thousand dollars, you can save about 75 million shrimp from agonizing torture. I don’t know exactly how many shrimp being tortured is as bad as one human dying, but it’s probably fewer than 75 million! So if you bought the arguments I’ve given, I’d encourage you to give some money to the shrimp welfare project!
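
For what it’s worth, the cited figures work out as follows (this just restates the post’s own numbers, which I have not verified independently):

    DOLLARS = 5_000
    SHRIMP_SPARED = 75_000_000          # the post's estimate for that donation
    print(SHRIMP_SPARED / DOLLARS)      # 15000.0 shrimp spared per dollar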


 


