1 Introduction
Crosspost of my blog post.
Science often discovers surprising things.
Who would have thought that your great^30-billion grandfather was a prokaryote? Philosophy discovers surprising things too. Any particular example of something that philosophers have discovered will be controversial, but I think pretty undeniable examples of discovered philosophical truths are that avoiding the repugnant conclusion is surprisingly tricky, that worlds with infinitely many happy and infinitely many miserable people are mostly incomparable, and that every way of making decisions about low probabilities requires giving up some obvious-seeming principle.
Many philosophical judgments are surprising at first, but turn out to be nearly unavoidable. Here’s one of them: sparing infinite shrimp from extreme torture is vastly better than saving one human from death. This is a surprising judgment, but I think it’s supported by pretty irrefutable philosophical argument. Friend of the blog Flo Bacus recently kicked up a huge firestorm by arguing for this, and unsurprisingly, I think she is right.
Now, there’s some uncertainty about whether shrimp are actually conscious. So here I will argue that given our uncertainty about shrimp consciousness, we should save some number of shrimp from torture over one human. If shrimp aren’t conscious, then in fact saving shrimp over people won’t turn out for the best, but I claim that given our uncertainty in the matter, it’s expectationally worth it.
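To make the expectational claim concrete, here’s a minimal Python sketch. The credence that shrimp are conscious and the exchange rate between sparing a shrimp and saving a human are placeholder numbers I’ve invented purely for illustration, not estimates:

```python
# Minimal sketch of the expectational claim. The credence and the exchange
# rate below are placeholders for illustration, not estimates.
p_shrimp_conscious = 0.2         # hypothetical credence that shrimp can suffer
value_per_shrimp_spared = 1e-6   # hypothetical value of sparing one shrimp from torture,
                                 # in units where saving one human from death = 1
value_of_saving_human = 1.0

n_shrimp = 10**9                 # some large number of shrimp spared from torture

# Expected value of sparing the shrimp, given uncertainty about consciousness:
ev_shrimp = p_shrimp_conscious * n_shrimp * value_per_shrimp_spared
print(ev_shrimp > value_of_saving_human)   # True for these placeholder numbers
```

The exact numbers don’t matter; the point is just that so long as both placeholders are nonzero, some number of shrimp makes the expected value come out ahead.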
2 Life extension
I have bad news: no life has ever been saved!
Destruction of life is never really averted, merely delayed. Lives aren’t saved, they’re merely extended. What we colloquially refer to as saving a life merely means preventing someone from dying of one particular cause, so their life is lengthened until they eventually succumb to something else. If someone’s life is saved when they’re 40, and then they eventually die at the age of 80, then really all we’ve done is extend their life by 40 years.
Okay, with that out of the way, let me appeal to the following obvious principle:
It is better to prevent Graham’s number shrimp from being tortured than to extend a person’s life by a single millisecond.
Graham’s number is an astronomically large number—far too big to ever write down in the observable universe even if you wrote each digit on a single atom. It’s a lot more than 100!
The principle says that preventing that many shrimp from being tortured—so that you avert vastly more suffering than has existed in human history—would be better than extending one person’s life by one millisecond. This premise seems extremely obvious!
Now, imagine that there is some person who is currently 40. There are 1,577,847,600,000 keyboards in front of you (conveniently, the number of milliseconds in 50 years, what a coincidence). Each keyboard has two buttons, one labeled “shrimp” and the other labeled “life extension.” For each keyboard, if you press “shrimp,” you spare Graham’s number shrimp from extreme torture. If you press “life extension,” you add an extra millisecond to the person’s life. Note: each keyboard functions independently.
Clearly, it’s better to press the shrimp button than the life extension button. This follows from the earlier principle: it’s better to save Graham’s number shrimp from torture than to extend a person’s life by one millisecond. No matter how many other buttons you’ve pressed of each kind, it’s better to press the button that spares Graham’s number shrimp than the button that adds an extra millisecond to life! Thus, you should press every shrimp button and none of the life extension buttons.
But if you do that, rather than saving a person’s life (i.e. extending it by 50 years), you’ll spare 1,577,847,600,000 × Graham’s number shrimp from extreme torture! Thus, saving some number of shrimp is much better than saving one person’s life!
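(If you want to check the keyboard count, here’s a quick Python sketch. I’m assuming the figure uses the 365.2425-day Gregorian year; that assumption is mine, not stated above.)

```python
# Check of the keyboard count: milliseconds in 50 years, assuming a
# 365.2425-day Gregorian year (my assumption).
ms_per_year = 3652425 * 86400 * 1000 // 10000   # 365.2425 days/year, exact integer arithmetic
keyboards = 50 * ms_per_year
print(keyboards)                                # 1577847600000, matching the count above

# Pressing every "shrimp" button spares keyboards * (Graham's number) shrimp.
# Graham's number itself is far too large to compute, so the multiplier is the point.
```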
This argument strikes me as decisive!
3 Flo’s argument
In her piece, Flo Bacus presents another argument, which I’ll give in slightly abridged form.
Imagine that you were an immortal being. Every 100 years, you could go on a five-minute drive to a nearby store with a button: if pressed, the button would spare 10^100 shrimp from extreme torture. Seems like quite a good idea!
But here’s the catch: people sometimes get into car accidents. If you are an immortal being, even if you are an extremely good driver, at some point you will get into a car accident that kills someone. Now, by that time you’ll probably have saved a very large number of shrimp from torture (the odds per drive that you’ll kill someone are quite low!). But if human lives are infinitely more ethically significant than shrimp lives, then driving for five minutes every 100 years to save 10^100 shrimp would be morally wrong; it would predictably cost a human life merely to save shrimp.
Thus, to think that a human life matters infinitely more than shrimp, you have to think it would be wrong to drive for five minutes to save 10^100 shrimp from extreme torture. That’s obviously wrong!
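To see the shape of the tradeoff, here’s a rough Python sketch; the per-drive fatality probability is a number I made up for illustration:

```python
# Rough sketch of the driving argument. The per-drive probability of killing
# someone is a made-up illustrative figure, not a real traffic-safety estimate.
p_fatal_per_drive = 1e-8        # hypothetical chance a five-minute drive kills someone
shrimp_per_trip = 10**100       # shrimp spared per trip to the store

expected_drives_per_fatality = 1 / p_fatal_per_drive
expected_shrimp_per_fatality = shrimp_per_trip / p_fatal_per_drive

print(f"{expected_drives_per_fatality:.0e} drives per expected fatality")
print(f"{expected_shrimp_per_fatality:.0e} shrimp spared per expected fatality")
```

However small you make the per-drive risk, the view on which human life matters infinitely more than shrimp still says the driving is wrong; that’s the bullet being bitten.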
4 Love on the spectrum
The theory of evolution implies that we evolved from creatures that were relatively like shrimp, at least with respect to their moral importance. Your great^10 billion grandfather was a creature with about the moral importance of a shrimp.
Now, here is a very plausible principle:
There is no organism in any generation between us and shrimp-like creatures for which sparing one from torture is more important than sparing any number of the previous generation’s members from torture.
For example, it isn’t true that it’s better to spare one person from the current generation from torture than to spare infinite people from our parents’ generation from torture. Nor is it true that sparing one person from our parents’ generation is better than sparing infinite people from our grandparents’ generation.
Now, there’s one more, somewhat controversial premise: the transitivity of the “better than” relation. This is simply the idea that if A is better than B, and B is better than C, then A is better than C. It’s supported by many powerful arguments, and it also seems pretty obvious on its face.
These premises collectively imply that saving infinite shrimp from torture is better than saving one human from torture.
To simplify, let’s call our current generation generation one, the generation before generation two, the generation before that generation three, and so on. Assume that the shrimp creatures were generation 1 billion.
Saving one person from torture in generation one is less valuable than saving, say, 1,000 people in generation two. For each of those 1,000 in generation two, saving 1,000 in generation three is more valuable (which is to say, saving 1 million in generation three is more valuable than saving the 1,000 in generation two). For each of those 1 million in generation three, saving 1,000 in generation four is more valuable (so saving 1 billion in generation four is more valuable still). You can see where I’m going with this! If you extrapolate this out to generation 1 billion, you end up with the conclusion that saving some number of shrimp-like creatures (members of generation 1 billion) from torture is better than saving one person from torture.
If saving one person in generation N is less valuable than saving some number of people in generation N+1, then, by transitivity, saving a sufficiently large number of members of any earlier generation is more valuable than saving one member of the current generation. Because very distant generations contain shrimp-like creatures, saving some number of shrimp from torture is more important than saving one person from torture.
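Here’s how the compounding works out, as a short Python sketch using the illustrative factor of 1,000 per generation from above:

```python
# How the spectrum argument compounds, using the illustrative factor of 1,000
# per generation: sparing 1 member of generation 1 from torture is outweighed by
# sparing 1,000 of generation 2, which is outweighed by 1,000,000 of generation 3, etc.
FACTOR = 1000

def outweighing_count(generation: int) -> int:
    """Number of members of the given generation whose sparing, by the chained
    comparisons, outweighs sparing one member of generation 1."""
    return FACTOR ** (generation - 1)

for g in range(1, 5):
    print(g, outweighing_count(g))    # 1, 1000, 1000000, 1000000000

# For generation one billion (the stipulated shrimp-like creatures), the count is
# 1000**(10**9 - 1): a number with about three billion digits, but still finite.
digits = 3 * (10**9 - 1) + 1          # 1000**k = 10**(3k), which has 3k + 1 digits
print(digits)                         # 2999999998
```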
You might worry that this generalizes to bacteria, but bacteria aren’t conscious. There was a first conscious organism, and that organism mattered infinitely more than the one before.
5 Moral risk
The ethics of moral risk is notoriously fraught! It’s unclear how to make decisions under ethical uncertainty, particularly when the units of value across different ethical theories aren’t really comparable. But there’s a pretty plausible principle:
If some action might, on some plausible ethical theory, be the best thing that anyone ever did by infinite orders of magnitude, you have a very strong reason to perform the action—vastly stronger than your reasons to perform an action that’s only somewhat good on other theories.
For example, if I have non-trivial credence in utilitarianism, and some action is infinitely good according to utilitarianism, then I should regard that action as being vastly better than actions that are just pretty good on other theories—e.g. an action that saves a single person. But on any view according to which welfare matters, saving infinite shrimp from torture is the best thing anyone ever did by infinite orders of magnitude, so given uncertainty, it seems there are strong reasons to perform it.
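One crude way to put the moral-risk point, with made-up credences and with “infinitely good” modeled as an infinite payoff (this also assumes the units are comparable across theories, which, as noted above, is itself contentious):

```python
# Crude model of the moral-risk claim. Credences and values are made up for
# illustration, and intertheoretic comparability is simply assumed.
credence_welfare_matters = 0.1           # hypothetical credence in a welfare-counting theory
value_if_welfare_matters = float("inf")  # sparing infinite shrimp from torture, on that theory
value_of_saving_one_person = 1.0         # finite value on the rival theories

expected_value_shrimp_option = credence_welfare_matters * value_if_welfare_matters
print(expected_value_shrimp_option)                               # inf
print(expected_value_shrimp_option > value_of_saving_one_person)  # True: any nonzero credence suffices
```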
6 The Huemer argument
(For the paper presenting this argument in a different context, see Huemer’s paper here, and his blog post here.)
Suppose you think it’s better to prevent one person from dying than to spare any number of shrimp from torture. Presumably it’s better to save some number of shrimp than to have a very low chance of saving a person from dying—saving 10^100 shrimp is better than reducing a person’s risk of dying one second from now by one googolth.
Thus, there must be some critical probability threshold X, such that an X percent chance of saving a life is better than saving any number of shrimp. For instance, maybe X is 10%, so a 10% chance of saving a person is more valuable than saving any number of shrimp, but saving infinite shrimp is better than a 5% chance of saving a life.
Here’s the problem: this view violates the following plausible constraint on rationality. Suppose you should perform action A, and then you should perform action B after performing action A. Then you should perform actions A and B together.
But now suppose that action A saves a googol shrimp rather than reducing a person’s risk of death by 8%. Action B likewise saves a googol shrimp rather than reducing a person’s risk of death by 8%. Each individual action is worth taking, since each risk is below the 10% threshold. But together the actions incur a more than 10% risk of death to save two googol shrimp. Thus, together the actions would be impermissible, even though each would individually be right.
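(To check that the pair crosses the threshold: assuming the two 8% risks are independent, which is my assumption, the combined chance of at least one death is about 15.4%.)

```python
# Each action imposes an 8% risk of death, below the 10% threshold; jointly
# (assuming the risks are independent, which is my assumption) they cross it.
p_each = 0.08
threshold = 0.10

p_combined = 1 - (1 - p_each) ** 2
print(round(p_combined, 4))       # 0.1536
print(p_each < threshold)         # True: each action individually permissible
print(p_combined > threshold)     # True: the pair jointly crosses the threshold
```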
Now, you could hold that actions A and B each stop being permissible if you’ll have the option to take the other—so you should only take action A if you won’t later be able to take action B. But this is very odd. Why would the value of saving a bunch of shrimp vs. reducing someone’s risk of death have anything at all to do with whether you will later, at some other time, save a totally separate bunch of shrimp or reduce the risk of death of someone totally different? We could imagine actions A and B being spaced millions of years apart—in such a scenario, it seems utterly bizarre that each would affect the worthwhileness of the other.
Now, you could just bite the bullet and hold that it’s never worth risking human life to save any number of shrimp, but this would imply that you shouldn’t drive for one minute to save infinite shrimp, because this would impose a tiny risk to other people. That’s pretty crazy! Thus, you’re committed to one of the following:
- Thinking that you shouldn’t drive for a minute to save infinite shrimp from excruciating torture.
- Thinking that whether actions that save shrimp rather than reducing people’s risk of death are worth performing has to do with other, totally unrelated actions that affect completely different people and shrimp.
- Thinking that sometimes each of two actions is right to do, but performing them jointly is wrong. But even that’s not quite enough, because if you think each individual action is worth taking, then you’ll inevitably think that one should perform a sequence of actions that imposes risks on people above the critical threshold to save a bunch of shrimp.
7 This isn’t actually such a weird result!
So far I’ve given arguments that strike me as extremely strong. But you might think that the conclusion is just so weird that you shouldn’t accept it. I’ll try to disabuse you of this here! I think this conclusion isn’t so surprising or revisionary.
I know I’ve said these things many times, but extreme suffering is bad! It’s bad to be in lots of pain. Preventing an infinite amount of something very bad is infinitely good. So sparing infinite shrimp from intense pain is infinitely good. And it’s better to do something infinitely good than to save a human life.
Every plausible view will hold that well-being and suffering are among the things in the world that matter. So when astronomical quantities of suffering are on the line, they ought to provide overriding considerations.
There are also good reasons not to trust our direct intuitions on this matter. People just don’t have very good intuitions about infinitely big numbers! People will pay nearly the same amount to save 2,000, 20,000, and 200,000 birds. If our intuitions aren’t even sensitive to the difference between 2,000 and 200,000, why in the world would we trust them when it comes to literally infinite quantities?
Certainly our intuitions don’t closely track value—we don’t intuitively grok that sparing infinite shrimp is infinity times better than sparing a million shrimp. And, as I’ve argued at length elsewhere, there are reasons to distrust our direct intuitions about shrimp because they’re unsympathetic and weird-looking! Certainly behind the veil of ignorance, we’d prioritize the interests of infinite shrimp over one person.
So on the one side we have a single intuition that’s untrustworthy many times over, and on the other, we have many different extremely obvious intuitions. I know which one I’m going with!
Now, a person might object that if it’s a conflict between thinking that shrimp matter at all or that infinite shrimp matter more than saving a person, they’d rather give up the premise that shrimp matter at all. But I don’t think this is reasonable.
First of all, the arguments above were mostly reasons to think shrimp mattered—only some took shrimp mattering at least a tiny amount for granted, and then argued that they had great aggregate weight.
Second, as already discussed, you shouldn’t trust your direct intuitions on this matter.
Third—and most importantly—the arguments I gave generalize to other cases. Similar arguments establish that preventing infinitely many headaches is better than preventing a single death (in the arguments above, just replace “spare shrimp from torture” with “spare people from headaches,” and change around the spectrum in argument four). Really the deeper problem is with the idea that no number of small harms can aggregate to a big harm, and you can’t save that idea just by not caring about shrimp! Just as it would be silly to conclude that headaches don’t matter at all in order to avoid the result that some number of headaches is worse than a death, the analogous reasoning with respect to shrimp is similarly in error.
8 So…
I think the arguments that infinite shrimp matter more than a single person are pretty decisive! Sparing enough shrimp really is better than sparing one human. Now, it turns out that in practice, for five thousand dollars, you can spare about 75 million shrimp from agonizing torture. I don’t know exactly how many shrimp being tortured is as bad as one human dying, but it’s probably fewer than 75 million! So if you buy the arguments I’ve given, I’d encourage you to give some money to the Shrimp Welfare Project!
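Taking those figures at face value, here’s the implied cost-effectiveness (the numbers are just the ones quoted above, not an independent estimate):

```python
# Implied cost-effectiveness, taking the quoted figures at face value.
dollars = 5_000
shrimp_spared = 75_000_000

print(shrimp_spared / dollars)    # 15000.0 shrimp spared per dollar
print(dollars / shrimp_spared)    # ~6.7e-05 dollars (a small fraction of a cent) per shrimp
```

However you set the exchange rate between shrimp torture and human welfare, that’s a lot of suffering averted per dollar.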