Objective Morality (cont.)

One of the more common negative responses to the idea of consequentialism/utilitarianism is to ask: OK, if everything really does just come down to a calculation of utility tradeoffs, then wouldn’t the resulting lack of respect for human dignity as an end in itself lead to some disturbingly exploitative and unjust outcomes?

The short answer is that it wouldn’t, simply because any consequence that would be an overall negative (including a breakdown of respect for human dignity) would, by definition, not be prescribed by consequentialism. Arguing that a particular course of action would lead to worse outcomes isn’t an argument against consequentialism; it’s an argument within consequentialism, about which course of action will actually lead to the best consequences. As Alexander writes:

There’s a hilarious tactic one can use to defend consequentialism. Someone says “Consequentialism must be wrong, because if we acted in a consequentialist manner, it would cause Horrible Thing X.” Maybe X is half the population enslaving the other half, or everyone wireheading, or people being murdered for their organs. You answer “Is Horrible Thing X good?” They say “Of course not!”. You answer “Then good consequentialists wouldn’t act in such a way as to cause it, would they?”

By way of elaboration, he breaks down the slavery example:

[Q]: Wouldn’t utilitarianism lead to 51% of the population enslaving 49% of the population?

The argument goes: it gives 51% of the population higher utility. And it only gives 49% of the population lower utility. Therefore, the majority benefits. Therefore, by utilitarianism we should do it.

This is a fundamental misunderstanding of utilitarianism. It doesn’t say “do whatever makes the majority of people happier”, it says “do whatever increases the sum of happiness across people the most”.

Suppose that ten people get together – nine well-fed Americans and one starving African. Each one has a candy. The well-fed Americans get +1 unit utility from eating a candy, but the starving African gets +10 units utility from eating a candy. The highest utility action is to give all ten candies to the starving African, for a total utility of +100.

A person who doesn’t understand utilitarianism might say “Why not have all the Americans agree to take the African’s candy and divide it among them? Since there are 9 of them and only one of him, that means more people benefit.” But in fact we see that that would only create +10 utility – much less than the first option.

A person who thinks slavery would raise overall utility is making the same mistake. Sure, having a slave would be mildly useful to the master. But getting enslaved would be extremely unpleasant to the slave. Even though the majority of people “benefit”, the action is overall a very large net loss.

(if you don’t see why this is true, imagine I offered you a chance to live in either the real world, or a hypothetical world in which 51% of people are masters and 49% are slaves – with the caveat that you’ll be a randomly selected person and might end up in either group. Would you prefer to go into the pro-slavery world? If not, you’ve admitted that that’s not a “better” world to live in.)
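
(To make the arithmetic explicit, here's a quick sketch of both calculations in Python. The candy numbers are the ones Alexander uses above; the per-person utilities in the slavery comparison are invented purely to illustrate the same point about summing utility rather than counting who benefits.)

```python
# A quick check of the arithmetic in the candy example above.
# Utilities per candy eaten: +1 for each well-fed American, +10 for the
# starving African. Nine Americans, one African, ten candies.

# Option A: give all ten candies to the starving African.
option_a = 10 * 10                  # +100 total utility

# Option B: the nine Americans split the African's candy among themselves.
option_b = 9 * 1 + 1 * 1            # their own nine candies plus the shared one = +10

assert option_a > option_b          # the "majority benefits" option is far worse

# The slavery example has the same structure. The per-person numbers here are
# invented for illustration: a modest gain per master, a huge loss per slave.
masters, slaves = 51, 49
gain_per_master, loss_per_slave = 5, -100
total_utility = masters * gain_per_master + slaves * loss_per_slave   # 255 - 4900 = -4645

# Equivalently, from behind the "randomly selected person" veil in the aside above:
expected_utility_per_person = total_utility / (masters + slaves)      # -46.45

print(option_a, option_b, total_utility, expected_utility_per_person)
```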

He also addresses the organ-harvesting example (AKA “the transplant problem”):

[Q]: Wouldn’t utilitarianism lead to healthy people being killed to distribute their organs among people who needed organ transplants [assuming there was no other way to save them], since each person has a bunch of organs and so could save a bunch of lives?

We’ll start with the unsatisfying weaselish answers to this objection, which are nevertheless important. The first weaselish answer is that most people’s organs aren’t compatible and that most organ transplants don’t take very well, so the calculation would be less obvious than “I have two kidneys, so killing me could save two people who need kidney transplants.” The second weaselish answer is that a properly utilitarian society would solve the organ shortage long before this became necessary […] and so this would never come up.

But those answers, although true, don’t really address the philosophical question here, which is whether you can just go around killing people willy-nilly to save other people’s lives. I think that one important consideration here is the heuristic-related one mentioned [earlier]: having a rule against killing people is useful, and what any more complicated rule gained in flexibility, it might lose in sacrosanct-ness, making it more likely that immoral people or an immoral government would consider murder to be an option (see David Friedman on Schelling points).

[…]

But note that [this] still is a consequentialist argument and subject to discussion or refutation on consequentialist grounds.

Just as an aside, the transplant problem is particularly interesting to me, because although I share the immediate intuition that, say, a surgeon killing one unsuspecting bystander to save five lives (as in Judith Jarvis Thomson’s original formulation of the problem) would be acting immorally, Julia Galef points out that it’s possible to tweak the parameters of the problem so that the answer is much less intuitively obvious (or equally obvious but in the opposite direction). For instance, what if the surgeon couldn’t just save five lives by killing the bystander, but fifty lives, or five thousand lives, or five billion lives, or every life in existence? Would it still be more moral to let everyone in the world die than to kill one bystander? The fact that you can raise or lower the utility value on one side of the equation, and thereby change whether the action on the other side of the equation would be acceptable or not, seems like a pretty good indication that utility is in fact the key variable even here. That is to say, the fundamental question isn’t so much whether the bystander has a right not to be unexpectedly killed, but whether the benefit of upholding that right outweighs the alternative.

Assuming it does, though – i.e. assuming it’s right to think that killing the bystander to save only a few patients would be worse than sparing the bystander and letting the patients die – what’s the utilitarian reason for this conclusion? I think Alexander has it right in his explanation here: we have an extremely strong heuristic against sudden unprovoked murder, and that heuristic produces a ton of utility in the world specifically because it’s so strong – so maintaining the authority of that rule for use in future situations has a high (if indirect) utility value in itself, on top of the basic object-level considerations of how many lives are lost or saved in the immediate situation. We also have a strong rule against using another person merely as an unconsenting means to some end, as Kant famously put it – even if that end is helping other people – and maintaining the authority of that heuristic likewise has a high utility value, due to the better outcomes it’ll tend to produce over the long run. Undermining these rules, then, requires an extremely high utility gain on the other side of the scale to justify it; and although saving five billion lives would surely be enough to do so, it’s not quite as obvious that saving only one or two extra lives would. True, it would be good that those lives had been saved – but considering the extent to which word of the surgeon’s actions would undoubtedly spread, and the extent to which it would make everyone afraid to ever go to the hospital again (not to mention all the other negative social effects it would have), it seems clear that it would make the broader society worse off overall.
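
(For concreteness, here's a schematic version of that two-level calculation in Python. Every number in it is an invented placeholder rather than a real estimate; the point is only to show the structure of the argument: the object-level gain from the transplants has to outweigh not just the one life taken but also the indirect damage done to the rule itself.)

```python
# Schematic sketch of the two-level calculation in the transplant case.
# All numbers are invented placeholders, not real estimates.

UTILITY_PER_LIFE = 1.0

# Suppose undermining the "don't murder" / "don't use people as mere means"
# heuristics costs the long-run equivalent of a thousand lives' worth of
# utility (fear of hospitals, erosion of the norm, and so on).
RULE_DAMAGE = 1000 * UTILITY_PER_LIFE

def net_utility_of_killing(lives_saved: int) -> float:
    """Object-level gain from the transplants, minus the life taken,
    minus the higher-order damage to the rule."""
    object_level = (lives_saved - 1) * UTILITY_PER_LIFE
    return object_level - RULE_DAMAGE

for lives_saved in (2, 5, 50, 5_000_000_000):
    verdict = "kill the bystander" if net_utility_of_killing(lives_saved) > 0 else "spare the bystander"
    print(f"{lives_saved:>13,} lives at stake -> {verdict}")
```

With these made-up numbers the answer flips somewhere between fifty and five billion lives at stake; the genuinely hard question is where the real threshold lies, which is exactly the sense in which utility remains the key variable.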

Of course, there are other factors in play here too – and there are trickier variations on this thought experiment that we might imagine (What if it happened in secret? What if it was the last act of the surgeon’s life?) – so we’ll be coming back to it again later. For now though, the point is just that any interpretation of utilitarianism that claims it would allow for things like mass nonconsensual organ-harvesting (or some other such terrible outcome) makes the basic mistake of only applying the utilitarian calculus to the object level of the situation in question (e.g. lives saved vs. lives lost), rather than recognizing that if there were negative higher-order effects at the level of the broader society, the utilitarian calculus would necessarily take those into account as well.

It’s the same kind of mistake creationists make when they argue that life on earth couldn’t have evolved, because the second law of thermodynamics prevents entropy from ever decreasing within a closed system (that is, such a system can’t ever spontaneously grow more ordered, only more chaotic). What this argument overlooks is that the earth isn’t a closed system: there’s a huge external factor – namely the massive amount of energy constantly being added into the system by the sun – which tips the entropic scales and does in fact provide the necessary fuel for life to emerge and evolve. And it’s the same story with utilitarianism: sure, a particular harm-benefit calculation might not appear to make sense if you’re only considering an isolated object-level situation as a closed system – but once you recognize that it’s not a closed system at all, and that there are all kinds of external factors to account for, the conclusions suddenly make a lot more sense.

It’s true that sometimes you’ll find that allowing for exceptions to widespread moral rules would in fact be better for the world in the long run – and in those cases, that’s what utilitarianism would prescribe. But in other cases, you’ll find that it would be better for the world in the long run to set an extremely high bar for breaking certain moral rules, even if it means accepting some utility losses in the short run – and in those cases, that’s what utilitarianism would prescribe. The key is just that in whatever scenario you’re considering, you have to make sure to apply the utilitarian calculus to the whole big-picture situation – the whole state of the universe – rather than to only one small part of it.

(As for the question of how exactly to decide when to abide by moral rules and when to break them, it seems to me that a pretty good meta-heuristic – not a universal rule, of course, but just a general guideline – is to follow Alexander’s “be nice, at least until you can coordinate meanness” approach: that is, don’t make an individual habit of breaking moral rules in isolated situations where doing so seems like it would increase utility; only allow moral rules to be broken when the exception is implemented collectively, in an officially sanctioned way, by the broader society.)
