In my last post, I talked about why I’m not religious and why I don’t think religion provides a good basis for morality. Whenever this topic comes up, one of the most common responses from believers is to ask: Well, if you don’t think morality comes from God, where do you think it comes from? If God isn’t defining which actions are good and which are bad, then who does? Does everybody just use their own definitions of good and bad? Are we stuck with a system of moral relativism, in which there’s no basis for saying anyone’s definition is more legitimate than anyone else’s, and the conceptions of morality promoted by the Nazis and the Taliban are considered to be no less valid than those promoted by Martin Luther King and Mahatma Gandhi? Is morality really nothing more than a matter of cultural convention or personal opinion or preference – like a favorite ice cream flavor or something – with no objectively correct answer?
It’s a reasonable concern, because there actually are quite a few people who do believe in this kind of relativism. Some of them are well-meaning progressive types, who start from the premise that it’s good to respect other cultures (which is certainly true!) but then take that premise as such an absolute that they extend it far beyond its reasonable limits, to the point that they’ll readily accept even the most brutal and inhumane practices in the name of universal tolerance. This attitude can lead to some ugly results – as when, for instance, the government of Brunei recently attempted to justify its draconian penal code (which imposes punishments like amputation and stoning to death for offenses like adultery, theft, and homosexual behavior) with the assertion that “it must be appreciated that the diversities in culture, traditional and religious values in the world means that there is no one standard that fits all.” Not so progressive after all, it turns out.
But in addition to the “all cultural practices are equally respectable” crowd (AKA “normative relativists”), there are also plenty of people – including professional philosophers – who are deeply opposed to the idea of tolerating all practices equally, yet can’t find any way of grounding that stance objectively, and so feel compelled to bite the bullet and admit that it’s all subjective (AKA “meta-ethical relativists”).
Personally, I share the intuition held by most people that moral relativism can’t be the right answer. But what’s the alternative? If people’s various conceptions of goodness and badness are totally subjective, how can we say that statements like “Torturing innocent people for fun is wrong” or “It’s immoral to enslave other people for profit” are somehow objectively correct? On what basis can we claim that objective moral truths exist? Is such a thing even possible?
I actually think it is. But before I explain why, I should point out that there are actually two distinct questions we need to answer here. The first question, of course, is how we can objectively say what’s good and what’s bad. But even if we manage to answer that, it doesn’t automatically mean that we’ve solved all of morality. There’s also the second question, which is: Even if we can objectively define good and bad, why should we then do what’s good rather than what’s bad? How do we ground the assertion that we ought to do what’s right rather than simply doing what’s best for ourselves?
There’s a lot of overlap between these questions; but they do require two distinct answers, and answering one won’t necessarily answer the other. Ultimately, I’ll try to answer both in this post. But let’s take them one at a time, starting with the first one.
The first thing I think we have to recognize is that the relativists are actually right about one point: When we talk about goodness and badness (not merely in the specific moral sense of “right” and “wrong,” but in the general sense of “desirable” and “undesirable” more broadly), we aren’t talking about some inherent property that the universe has, or that particular acts have. Properties of goodness and badness don’t just exist “out there” in the world, independent of anyone’s will. Acts can only have positive or negative value if that value is ascribed to them by sentient beings. If the world hypothetically consisted of nothing but rocks, and the wind caused one rock to roll down a hill and land on another rock, that event wouldn’t have any value one way or the other; it would just be something that happened. (If, by contrast, a sentient being were present and the rock crushed their foot, they would certainly consider that a bad outcome.) The only way an event can be called good or bad (i.e. positive-value or negative-value) is if there’s at least one sentient being around to consider it good or bad. In other words, you can’t have value without a valuer.
That’s the part I agree with the relativists on: These kinds of valuations are necessarily subjective. The word “goodness” is basically just a synonym for “the positive value that we ascribe to things.” But does that then mean that we can never objectively say which actions are more good (i.e. have higher value) and which are more bad (i.e. have lower value)? On the contrary – as odd as it might sound, I think that subjective valuations are actually the very thing in which the objective goodness or badness of our actions can be grounded. Let me explain.
See, the key point here is that just because something is based on subjective valuations or experiences doesn’t mean that we can’t make objective statements about it. This is something we can see clearly enough when it comes to things other than goodness/badness, like our emotions and tactile sensations. Let’s say, for instance, that you burn your hand on a hot stove. The pain you feel from that experience is entirely subjective, true. But the fact that the pain exists is not. It’s not just a matter of opinion whether burning your hand causes you pain; it’s an objective fact that the universe in which you burn your hand contains more pain than an otherwise identical universe in which you don’t burn your hand. That’s not because painfulness is some inherent physical property of hot stoves; the subjective experience of hurting is what constitutes pain. But it’s because of that subjectivity-based definition that we can objectively say that some things are more painful than others. When we say that hot stoves are objectively more painful than room-temperature stoves, what we’re saying is that they objectively produce more of that subjective experience known as pain.
Similarly, take something like disgust. We can all agree that disgust is an inherently subjective valuation, not something that exists independently “out there” in the universe. The only way something can be called disgusting is if someone thinks it’s disgusting. And yet, it’s also possible to say that a hypothetical universe in which (say) everyone was asked to examine a bowl of vomit would objectively contain more disgust than an otherwise-identical universe in which everyone was asked to examine a bowl of roses. Despite the fact that disgust is a purely subjective valuation, we can nonetheless say objectively that certain universe-states will contain more or less disgust than others. Again, just because something is based on subjective valuations doesn’t mean that we can’t make objective statements about it.
And we can apply this same framework to the concept of goodness. Moral philosophers will often suggest that the only way to objectively ground the concept of goodness is to somehow find a way around the inconvenient fact that goodness seems to be nothing but a subjective valuation that we ascribe to things. But instead of trying to find a way around that premise, I think we can actually accept it as obviously true, embrace it, and run with it – and ironically, it’s precisely by doing so that we can form an objective basis for saying what’s good and what’s bad.
To illustrate what I mean, here’s one more thought experiment to go with the others above: Imagine a hypothetical universe in which (say) a mother noticed that her son was sad and gave him a hug; and then imagine a second otherwise-identical universe in which she stabbed him instead. In these scenarios, we can safely say that the inhabitants of the first universe (particularly the mother and son themselves) would ascribe a higher level of positive value (i.e. goodness) to the hug than the inhabitants of the second universe would ascribe to the stabbing. (Maybe if the mother in the second universe was a psychotic sadist or something, she might consider stabbing her son to be a good thing; but however much positive value she might ascribe to it, her son would certainly ascribe a whole lot more negative value to it – not to mention whatever negative value might be ascribed to it by the other inhabitants of their universe – so on net, the overall level of goodness ascribed to the stabbing would be significantly negative.) All of these valuations would be totally subjective, of course; the level of goodness ascribed to the mother’s actions by one person might be totally different from the level of goodness ascribed to them by another person. But what wouldn’t be subjective is whether the hugging universe ultimately contained a higher level of this subjectively-ascribed goodness than the stabbing universe. It inarguably would. What this shows, then, is that by accepting that goodness is nothing but a subjective valuation that we ascribe to things, we actually make it possible to make objective statements about which acts have higher levels of goodness than others.
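To make the weighing explicit, here's a toy sketch of the kind of tally I'm describing. The numbers and the `total_goodness` function are my own invention, purely to show the mechanics – this is not a claim about how real valuations could actually be measured:

```python
# Toy model: represent a universe-state as the list of subjective
# valuations its inhabitants ascribe to an event. All numbers invented.
hug_universe = [+50, +80, +5]             # mother, son, a bystander
stabbing_universe = [+30, -10_000, -200]  # sadistic mother, son, bystander

def total_goodness(valuations):
    # Each individual valuation is subjective, but their sum is an
    # objective fact about the universe-state.
    return sum(valuations)

# The hugging universe objectively contains more ascribed goodness.
assert total_goodness(hug_universe) > total_goodness(stabbing_universe)
```

The point of the sketch isn't the particular numbers; it's that once the (subjective) valuations are given, which universe-state contains more goodness is a matter of straightforward, objective arithmetic.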
Another way of saying this is that we don’t have an affinity toward certain things because they’re innately good and an aversion to other things because they’re innately bad; rather, our affinities and aversions toward these things are what constitute their goodness or badness. If we give something a higher level of subjective valuation, we can objectively say that it has a higher level of goodness, and if we give something a lower level of subjective valuation, we can objectively say that it has a lower level of goodness – because those subjective valuations are what goodness and badness are. Again, the only value that can exist is value that is ascribed to something by sentient beings like us. So if one action has a higher level of subjectively-ascribed value than another action, it’s necessarily better by definition, in the same sense that a triangle necessarily has three sides or a circle is necessarily round.
This might sound superficially similar to moral relativism; but it couldn’t be more different in its conclusions. Under moral relativism, there would be no basis for saying that (for instance) a society which routinely sacrificed children against their will was acting badly. Under this alternative system, though, if the society’s affinity for sacrificing its children was outweighed by the children’s aversion to being sacrificed (which it certainly would be, assuming the children were normal humans), we’d be able to objectively say that the society’s practice of child sacrifice was morally bad. (And assuming there were also negative effects on the broader society, this would be even more the case.) Weighing all the subjective valuations against each other would yield a total level of goodness or badness that we could objectively identify as positive or negative. And that’s the bottom line here: We can evaluate moral goodness as an objective quantification of individuals’ subjectively-held valuations and interests.
This is essentially what I see as the basis for a kind of utilitarianism. If you’ve studied moral philosophy before, you’ll already be familiar with utilitarianism – but if not, its core idea is that the morality of an action is defined by how much that action benefits sentient beings like us and how little it harms them. (For this reason, it’s considered a branch of consequentialism, which simply says that the morality of an action is defined by its consequences.) This isn’t the only way of trying to define morality, of course; there are also philosophies like deontology – utilitarianism’s biggest rival – which say that the consequences of an action aren’t actually what determine its morality at all, but that morality instead consists solely of following certain unconditional rules at all times, like “don’t lie” and “don’t steal” and so on, regardless of the consequences. Under the deontological model, if an axe murderer shows up at your front door and asks you where your best friend is, you should tell him exactly where she is – even if you know he’s going to go kill her – because lying is immoral, period, in every context. But according to utilitarianism, morality is more than just unconditional rule-following; the conditions actually matter. You have to weigh the harms against the benefits – and whichever choice produces the least amount of harm and the greatest amount of benefit (AKA utility, AKA the kind of subjectively-ascribed goodness I’ve been talking about), that’s the moral choice.
So in the above example, for instance, if you imagined using a numerical scale to rate the amount of utility that would accrue to everyone involved, you might conclude something like: Telling the axe murderer where your friend is would give a +20 utility boost to the murderer (since it would help him in his mission to kill her), a +10 boost to yourself (since you’d get the satisfaction of being honest), and a +500 boost to the broader society (since it would help reinforce a social norm of everyone being honest all the time), but it would also result in a reduction of -10,000,000 utility for your friend (since she would be killed and lose everything), a -10,000 reduction for all her loved ones (since they’d be devastated by her death), a -300 reduction for yourself (since you’d have to live with the guilt of knowing you abetted your friend’s murder), and a -500 reduction for the broader society (since it would help reinforce a social norm of everyone readily abetting axe murderers upon request) – so on net, telling the axe murderer where your friend is would be a significant negative in terms of total utility (a loss of -10,010,270 overall) relative to the alternative choice of lying to him. These numerical utility ratings, of course, would be based entirely on each person’s subjective valuation of the possible outcomes; the utility boost that the murderer would get from killing your friend might be higher or lower than +20 depending on how much he wanted to kill her, the utility reduction that your friend would get from dying might be higher or lower than -10,000,000 depending on how much she wanted to live, and so on. 
But knowing the subjective valuations of everyone involved would give us all the information we’d need to objectively say whether the act of lying to the murderer would ultimately be a good thing (positive utility) or a bad thing (negative utility) – because again, goodness and badness are nothing but subjective valuations themselves, and it is in fact possible to make objective statements about such things.
(Obviously, here in the real world we can’t know the utility values of every situation with the kind of exact numerical precision used in the example above; I just made up the numbers there for illustrative purposes. But we can at least approximate. And who knows, maybe at some point in the near future, we’ll develop such advanced brain scanning and computing technology that we actually will be able to determine exactly how much utility people ascribe to things. Either way though, the fact that we might only ever have an imperfect understanding of the objective moral truth of any given situation doesn’t change the fact that there is an objective truth there to be found, whether our estimations of it are perfect or not.)
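The illustrative arithmetic above can be written out as a short calculation, just to make the bookkeeping explicit. The figures are the made-up ones from the example, not measurements of anything:

```python
# Made-up utility changes for the choice of telling the axe murderer
# the truth, taken from the illustrative example above.
tell_the_truth = {
    "murderer (helped in his mission)": +20,
    "you (satisfaction of honesty)": +10,
    "society (honesty norm reinforced)": +500,
    "your friend (killed)": -10_000_000,
    "her loved ones (devastated)": -10_000,
    "you (guilt of abetting murder)": -300,
    "society (murder-abetting norm reinforced)": -500,
}

def net_utility(changes):
    # Sum everyone's subjective valuations into one objective total.
    return sum(changes.values())

print(net_utility(tell_the_truth))  # -10010270
```

Again, each entry in the table is subjective, but once the entries are fixed, the net total is simple arithmetic.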
Needless to say, I think utilitarianism/consequentialism makes a lot more sense than deontology as a foundation for morality – not only because it just seems more intuitively plausible (Would a deontologist refuse to tell a lie even if the consequence was that the entire universe would be destroyed?), but also because the fundamental basis for deontology just doesn’t seem like something you can actually pin down once you probe into its internal logic. For instance, in the example of the axe murderer, you might refuse to tell a lie because you consider “tell the truth” to be an inviolable moral duty; but who’s to say that you couldn’t just as easily consider something like “protect the innocent” or “don’t abet murder” to be an inviolable moral duty (which would lead you to take the opposite action and lie to the murderer)? Which rules are the ones you should consider absolute, and how should you make that determination? The guideline given by Immanuel Kant, history’s most famous deontologist, is that you should only consider something to be moral if it’s universalizable – meaning that (according to its most popular interpretation) you should only do something if you’d be willing to live in a world where everyone did it. Hence, telling lies would be considered immoral, since you wouldn’t want to live in a world where everyone lied whenever they thought they could get away with it. But by that same token, refusing to protect an innocent person from being murdered would also have to be considered immoral, since you wouldn’t want to live in a world where people readily abetted murderers upon request. The moral duty of telling the truth and the moral duty of protecting innocent life would fundamentally conflict with each other in this scenario – and since (for the purposes of this thought experiment) you wouldn’t be able to uphold both at the same time, how would you then figure out what to do?
One approach, of course, would be to try and avoid the contradiction by formulating rules that were more narrowly tailored to particular circumstances – i.e. instead of having broad rules like “don’t lie” and “protect the innocent,” you could have much more specific rules like “don’t lie to firefighters when they ask you where the occupants of a burning building are” or “do go ahead and lie to axe murderers when they ask you where their would-be victims are hiding.” But even then, there are always so many nuances of circumstance in every decision, and accordingly so many potential moral tradeoffs to account for, that no two decisions are exactly alike; so if you wanted rules that were specific enough to never conflict with any other rules, you’d have to get extremely specific, to the point where you’d practically be creating a new rule for every individual situation on a one-by-one basis. And at that point, you’d be defeating the whole purpose of deontology, because you’d no longer be using broadly generalizable rules of moral behavior at all; you’d essentially just be doing a more convoluted version of what utilitarians do, weighing different moral considerations against each other according to the circumstances of each situation.
At any rate, it seems to me that the whole idea of trying to have a rule-based system of morality that’s separate from all consequentialist considerations is futile from the start, regardless of how you resolve such conflicts between rules. After all, the only way of determining whether a rule can be considered universalizable in the first place is to ask whether you’d rationally want to live in a hypothetical world where the rule had been made standard for everyone to follow – and how can you answer that question without basing it on your subjective judgment of what the consequences of that hypothetical scenario would be? How can you determine whether it would be desirable to live in a world where everybody lied, or everybody stole, or everybody killed, without considering what the consequences of those actions would be if they were universalized?
This was the argument that Jeremy Bentham, the founder of utilitarianism, made against deontology. According to him (and his successor John Stuart Mill), despite deontology’s claim to be a rival theory to consequentialism, once you drilled down far enough, it turned out to be based on consequentialist considerations itself, even if its proponents didn’t realize it. In fact, not only was deontology subsumed by consequentialism; so was every other serious ethical theory on the market. As Michael Sandel puts it:
Bentham’s argument for the principle that we should maximize utility takes the form of a bold assertion: There are no possible grounds for rejecting it. Every moral argument, he claims, must implicitly draw on the idea of maximizing [utility]. People may say they believe in certain absolute, categorical duties or rights. But they would have no basis for defending these duties or rights unless they believed that respecting them would maximize human [utility], at least in the long run.
In other words, even if you had a system of morality that tried to define the goodness of an act by something other than its consequences – like whether the person doing it had good motives, or whether their actions adhered to a certain set of rules, etc. – once you actually dug a little deeper and asked what the basis was for those criteria (What is it that makes good motives good? What is it that makes adherence to certain rules good? Etc.), you’d ultimately have to either arrive at some consequentialist/utilitarian justification for what you were calling good, or else find yourself stuck in a tautology. That isn’t to say, of course, that it wouldn’t even be theoretically possible to have a normative system that had nothing to do with utility – you could, for instance, have a system that said something like “what constitutes goodness is always wearing green on Sundays” – but at that point it seems fair to say that what you’d then have would no longer be a real system of morality at all, but something else entirely.
(Speaking of which, I should mention for the sake of completeness that the brand of deontology promoted by Kant himself was actually somewhat different from the more popular formulation of deontology I’ve been addressing here, and did in fact make more of an effort to define goodness in a non-consequentialist way. I don’t think he was ultimately successful in this effort; as G. W. F. Hegel points out, the standard for goodness Kant uses seems more like a test of non-contradiction than a true test of morality. Still, a lot of Kant’s ideas are genuinely valuable even within a consequentialist framework, and I’ll be bringing a few of them back into the discussion later on.)
I could keep going on about the whole consequentialism vs. deontology debate, but others have already covered it exhaustively and it’s not really my main focus here, so I don’t want to spend too much more time on it. I’ll just add that if you’re still on the fence about it (or even if you aren’t), I’d highly recommend Scott Alexander’s post on the subject here; I’ll be quoting him quite a bit in this discussion. (See also his brief response here to T.M. Scanlon’s “incommensurability” counterargument.)
Now of course, none of this is to say that consequentialism and utilitarianism don’t raise some tricky questions of their own. For one thing, if morality is always a matter of having to weigh various interests against each other (as opposed to just having black-and-white rules which say that certain actions are always right and others are always wrong), does that mean that nothing is truly sacrosanct? Are there really no duties or rights – like people’s lives, or their freedom – that are so unconditional that they couldn’t be violated if doing so would provide a big enough net utility gain?
Any time you start considering the idea of potential moral tradeoffs, it’s easy to fall back onto absolutist principles like “there are certain rights that must never be violated for any reason,” and “life must be protected at all costs.” But the thing is, we disregard these principles all the time, and for good reasons. We restrict people’s right to freedom of speech any time we prohibit them from falsely shouting “fire!” in a crowded theater. We endanger people’s lives any time we allow the speed limit to be higher than 20 miles per hour. If we really considered these rights to be totally inviolable, we could guarantee total freedom of speech by allowing people to say whatever they wanted at any time; we could guarantee maximal public safety by confining everyone to protective bubbles at all times and never letting anyone do anything that might endanger them in any way; and so on. The fact that virtually none of us think these are good ideas reveals that, although it’s generally good to think of certain rights and liberties as being absolutely inviolable, there are in fact real limits to them that have to be recognized. The utility value of these rights and liberties is immensely high, no doubt – and should be treated accordingly – but it isn’t infinite. And so if the harm of unconditionally guaranteeing them in some specific situation would outweigh any possible benefit, it would be better not to guarantee them in that situation.
This is especially true considering that in most such situations, the thing that shifts the balance of the utility calculations in favor of violating some particular right or liberty is the fact that doing so would be the only way to protect some other equally important right or liberty. We have a habit here in the US of treating freedom as some kind of binary thing that you either have or don’t have; you’re either totally free, or your rights are being infringed (and therefore you’re being oppressed). But in truth, it isn’t possible to have absolute freedom at all times, simply because different freedoms often conflict with each other. One person’s right to use their property as they see fit (say, by building a factory) might conflict with another person’s right to life and health (if the pollution from the factory would damage their lungs). A newspaper’s right to freely publish news stories might conflict with their subjects’ right to privacy (if the newspaper published details of those people’s personal lives). In order to protect people’s rights, other people’s rights sometimes have to be impinged upon. Again, it all comes down to the utilitarian process of weighing the various interests against each other. And although things like rights and duties and generalized rules of conduct absolutely can (and should) play a role within this utilitarian framework, they don’t form the foundation themselves; they’re more like helpful tools for ensuring that utility is maximized.
Here’s an excerpt from Alexander’s Q&A that explains:
[Q]: So what about all the usual moral rules, like “don’t lie” and “don’t steal”?
Consequentialists accord great respect to these rules. But instead of viewing them as the base level of morality, we view them as heuristics (“heuristic” – a convenient rule-of-thumb which is usually, but not always true).
For example, “don’t steal” is a good heuristic, because when I steal something, I deny you the use of it, lowering your utility. A world in which theft is permissible is one where no one has any incentive to do honest labor, the economy collapses, and everyone is reduced to thievery. This is not a very good world, and its people are on average less happy than people in a world without theft. Theft usually lowers utility, and we can package that insight to remember later in the convenient form of “don’t steal.”
[Q]: But what do you mean when you say these sorts of heuristics aren’t always true?
In the example with the axe murderer […] above, we already noticed that the heuristic “don’t lie” doesn’t always hold true. The same can sometimes be true of “don’t steal”.
In Les Miserables, Jean Valjean’s family is trapped in bitter poverty in 19th century France, and his nephew is slowly starving to death. Valjean steals a loaf of bread from a rich man who has more than enough, in order to save his nephew’s life. Although not all of us would condone Jean’s act, it sure seems more excusable than, say, stealing a PlayStation because you like PlayStations.
The common thread here seems to be that although lying and stealing usually make the world a worse place and hurt other people, in certain rare cases they might do the opposite, in which case they are okay.
[Q]: So it’s okay to lie or steal or murder whenever you think lying or stealing or murdering would make the world a better place?
Not really. Having a hard-and-fast rule “never murder” is, if nothing else, painfully clear. You know where you stand with a rule like that.
There’s a reason God supposedly gave Moses a big stone with “Thou shalt not steal” and not “Thou shalt not steal unless you have a really good reason.” People have different definitions of “really good reason”. Some people would steal to save their nephew’s life. Some people would steal if it helped defend their friends from axe murderers. And some people would steal a PlayStation, and think up some bogus moral justification for it later.
We humans are very good at special pleading – the ability to think that MY situation is COMPLETELY DIFFERENT from all those other situations other people might get into. We’re very good at thinking up post hoc justifications for why whatever we want to do anyway is the right thing to do. And we’re all pretty sure that if we allowed people to steal if they thought there was a good reason, some idiot would abuse it and we’d all be worse off. So we enshrine the heuristic “don’t steal” as law, and I think it’s probably a very good choice.
Nevertheless, we do have procedures in place for breaking the heuristic when we need to. When society goes through the proper decision procedures, in most cases a vote by democratically elected representatives, the government is allowed to steal some money from everyone in the form of taxes. This is how modern day nation-states solve Jean Valjean’s problem without licensing random people to steal PlayStations: everyone agrees that Valjean’s nephew’s health is more important than a rich guy having some bread he doesn’t need, so the government taxes rich people and distributes the money to pay for bread for poor families. Having these procedures in place is also probably a very good choice.
[Q]: So is it ever okay to break laws?
I think civil disobedience – deliberate breaking of laws in accord with the principle of utility – is acceptable when you’re exceptionally sure that your action will raise utility rather than lower it.
To be exceptionally sure, you’d need very good evidence and you’d probably want to limit it to cases where you personally aren’t the beneficiary of the law-breaking, in order to prevent your brain from thinking up spurious moral arguments for breaking laws whenever it’s in your self-interest to do so.
I agree with the common opinion that people like Martin Luther King Jr. and Mahatma Gandhi who used civil disobedience for good ends were right to do so. They were certain enough in their own cause to violate moral heuristics in the name of the greater good, and as such were being good utilitarians.
[Q]: What about human rights? Are these also heuristics?
Yes, and political discussion would make a lot more sense if people realized this.
Everyone disagrees on what rights people do or do not have, and these disagreements about rights mirror their political positions only in a more inscrutable and unsolvable way. Suppose I say people should get free government-sponsored health care, and you say they shouldn’t. This disagreement is problematic, but it at least seems like we could have a reasonable discussion and perhaps change our minds. But if I assert “People should have free health care because everyone has a right to free health care,” then there’s not much you can say except “No they don’t!” The interesting and potentially debatable question “Should the government provide free health care?” has turned into a purely metaphysical question about which it is theoretically impossible to develop evidence either way: “Do people have a right to free health care?”
And this will only get worse if you respond “And you can’t raise my taxes to fund universal health care, because I have a right to my own property!”
Whenever there’s a political conflict, both parties figure out some reason why their natural rights are at stake, and the arbitrator can do whatever [they feel] like. No one can prove [them] wrong, because our common notion of rights is an inherently fuzzy concept created mainly so that people who would otherwise say things like “I hate euthanasia, but I guess I have no justification” can now say things like “I hate euthanasia, because it violates your right to life and your right to dignity.” (I actually heard someone use this argument a while ago.)
Consequentialism allows us to use rights not as a way to avoid honest discussion, but as the outcome of such a discussion. Suppose we debate whether universal health care will make our country a better place, and we decide that it will. And suppose we are so certain about this decision that we want to enshrine a philosophical principle that everyone should definitely get free health care and future governments should never be able to change their mind on this no matter how convenient it would be at the time. In this case, we can say “There is a right to free health care” – i.e. establish a heuristic that such care should always be available.
Our modern rights – free speech, free religion, property, and all the rest – are heuristics that have been established as beneficial over many years. Free speech is a perfect example. It’s very tempting to get the government to shut up certain irritating people like racists, neo-Nazis, cultists, and the like. But we’ve realized that we’re not very good at deciding who genuinely ought to be silenced, and that once we give anyone the power to silence people they’ll probably use it for evil. So instead we enforce the heuristic “Never deny anyone their freedom of speech.”
Of course, it’s still a heuristic and not a universal law, which is why we’re perfectly willing to prevent people from speaking freely in cases where we’re very sure it would lower total utility; for example, shouting “Fire!” in a crowded theater.
[Q]: So consequentialism is a higher level of morality than rights?
Yes, and it is the proper level on which to think about cases where rights conflict or in which we are not certain which rights should apply.
For example, we believe in a right to freedom of movement: people (except prisoners) should be allowed to travel freely. But we also believe in parents’ rights to take care of their children. So if a five year old decides he wants to go live in the forest, should we allow the parents to tell him he can’t?
Yes. Although this is a case of two rights conflicting, once we realize that the right to freedom of movement only exists to help mature reasonable people live in the sort of places that make them happy, it becomes clear that allowing a five year old to run away to the forest would result in bad consequences like him being eaten by bears, and we see no reason to follow it.
But what if that child wants to run away because his parents are abusing him? Everyone has a right to dignity and to freedom from fear, but parents also have a right to take care of their children. So if a five year old is being abused, is it okay for him to run away to a foster home or somewhere?
Yes. Although two rights once again conflict, and even though “right to dignity and freedom from fear” might not be a real right and I kinda just made it up, it’s more important for the child to have a safe and healthy life than for the parents to exercise their “right” to take care of him. In fact, the latter right only exists as a heuristic pointing to the insight that children will usually do better with their parents taking care of them than without; since that insight clearly doesn’t apply here, we can send the child to foster care without qualms.
The proper procedure in cases like this is to change levels and go to consequentialism, not shout ever more loudly about how such-and-such a right is being violated.
Rules that are generally pretty good at keeping utility high are called moral heuristics. It is usually a better idea to follow moral heuristics than to calculate utility of every individual possible action, since the latter is susceptible to bias and ignorance. When forming a law code, use of moral heuristics allows the laws to be consistent and easy to follow. On a wider scale, the moral heuristics that bind the government are called rights. Although following moral heuristics is a very good idea, in certain cases when you’re very certain of the results – like saving your friend from an axe murderer or preventing someone from shouting “Fire!” in a crowded theater – it may be permissible to break the heuristic.
(An even simpler variation on this conception of rights might be to just regard them as those interests held by every individual which have such a high utility value that it would require a genuinely huge countervailing interest to override them. This might not be a perfect definition compared to Alexander’s – it might be better suited as a definition for “needs” or something – but you understand what I’m getting at. The point here, again, is just that everything must necessarily come down to utility once you get down to the most fundamental level.)
Now, having said all this, there is one sense in which I actually do think that moral rights and duties are fundamental – namely, I think the principle of maximizing global utility can itself be regarded as a moral duty (or more accurately, as a kind of meta-duty), and not just as something that’s good. And I think that the violation of that principle would likewise constitute an infringement on a real moral right (or meta-right). I’ll explain later why I think this is the case. But before we get to that, let me just finish addressing some of the more immediate questions about defining goodness in utilitarian terms, and what the implications of this system are.
One of the more common negative responses to the idea of consequentialism/utilitarianism is to ask: OK, if everything really does just come down to a calculation of utility tradeoffs, then wouldn’t the resulting lack of respect for human dignity as an end in itself lead to some disturbingly exploitative and unjust outcomes?
The short answer is that it wouldn’t, simply because any consequence that would be an overall negative (including a breakdown of respect for human dignity) would, by definition, not be prescribed by consequentialism. Arguing that a particular course of action would lead to worse outcomes isn’t an argument against consequentialism; it’s an argument within consequentialism, about which course of action will actually lead to the best consequences. As Alexander writes:
There’s a hilarious tactic one can use to defend consequentialism. Someone says “Consequentialism must be wrong, because if we acted in a consequentialist manner, it would cause Horrible Thing X.” Maybe X is half the population enslaving the other half, or everyone wireheading, or people being murdered for their organs. You answer “Is Horrible Thing X good?” They say “Of course not!”. You answer “Then good consequentialists wouldn’t act in such a way as to cause it, would they?”
By way of elaboration, he breaks down the slavery example:
[Q]: Wouldn’t utilitarianism lead to 51% of the population enslaving 49% of the population?
The argument goes: it gives 51% of the population higher utility. And it only gives 49% of the population lower utility. Therefore, the majority benefits. Therefore, by utilitarianism we should do it.
This is a fundamental misunderstanding of utilitarianism. It doesn’t say “do whatever makes the majority of people happier”, it says “do whatever increases the sum of happiness across people the most”.
Suppose that ten people get together – nine well-fed Americans and one starving African. Each one has a candy. The well-fed Americans get +1 unit utility from eating a candy, but the starving African gets +10 units utility from eating a candy. The highest utility action is to give all ten candies to the starving African, for a total utility of +100.
A person who doesn’t understand utilitarianism might say “Why not have all the Americans agree to take the African’s candy and divide it among them? Since there are 9 of them and only one of him, that means more people benefit.” But in fact we see that that would only create +10 utility – much less than the first option.
A person who thinks slavery would raise overall utility is making the same mistake. Sure, having a slave would be mildly useful to the master. But getting enslaved would be extremely unpleasant to the slave. Even though the majority of people “benefit”, the action is overall a very large net loss.
(If you don’t see why this is true, imagine I offered you a chance to live in either the real world, or a hypothetical world in which 51% of people are masters and 49% are slaves – with the caveat that you’ll be a randomly selected person and might end up in either group. Would you prefer to go into the pro-slavery world? If not, you’ve admitted that that’s not a “better” world to live in.)
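The arithmetic in the quoted examples above is simple enough to spell out in a few lines of Python. The candy figures come straight from the quote; the magnitudes in the slavery case (a small gain for each master, a large loss for each slave) are invented for illustration, since the quote only says the loss vastly outweighs the gain:

```python
# Candy example: nine well-fed Americans (+1 utility per candy) and one
# starving person (+10 utility per candy), with ten candies to distribute.
candies = 10
option_all_to_starving = candies * 10   # all candies to the starving person
option_split_among_nine = candies * 1   # Americans eat all ten candies
assert option_all_to_starving > option_split_among_nine  # +100 vs. +10

# Slavery example, as the veil-of-ignorance expected-utility check: would
# you enter a world with a 51% chance of being a master and a 49% chance
# of being a slave? (These two magnitudes are made up for illustration.)
master_gain, slave_loss = 2, -50
expected = 0.51 * master_gain + 0.49 * slave_loss
print(expected)   # well below zero, so you'd decline to enter that world
```

Note that the expected value stays negative for any assignment where the slave's loss is much larger than the master's gain, which is the whole point of the "net loss" argument.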
He also addresses the organ-harvesting example (AKA “the transplant problem”):
[Q]: Wouldn’t utilitarianism lead to healthy people being killed to distribute their organs among people who needed organ transplants [assuming there was no other way to save them], since each person has a bunch of organs and so could save a bunch of lives?
We’ll start with the unsatisfying weaselish answers to this objection, which are nevertheless important. The first weaselish answer is that most people’s organs aren’t compatible and that most organ transplants don’t take very well, so the calculation would be less obvious than “I have two kidneys, so killing me could save two people who need kidney transplants.” The second weaselish answer is that a properly utilitarian society would solve the organ shortage long before this became necessary […] and so this would never come up.
But those answers, although true, don’t really address the philosophical question here, which is whether you can just go around killing people willy-nilly to save other people’s lives. I think that one important consideration here is the heuristic-related one mentioned [earlier]: having a rule against killing people is useful, and whatever a more complicated rule gained in flexibility, it might lose in sacrosanctness, making it more likely that immoral people or an immoral government would consider murder to be an option (see David Friedman on Schelling points).
But note that [this] still is a consequentialist argument and subject to discussion or refutation on consequentialist grounds.
Just as an aside, the transplant problem is particularly interesting to me, because although I share the immediate intuition that, say, a surgeon killing one unsuspecting bystander to save five lives (as in Judith Jarvis Thomson’s original formulation of the problem) would be acting immorally, Julia Galef points out that it’s possible to tweak the parameters of the problem so that the answer is much less intuitively obvious (or equally obvious but in the opposite direction). For instance, what if the surgeon couldn’t just save five lives by killing the bystander, but fifty lives, or five thousand lives, or five billion lives, or every life in existence? Would it still be more moral to let everyone in the world die than to kill one bystander? The fact that you can raise or lower the utility value on one side of the equation, and thereby change whether the action on the other side of the equation would be acceptable or not, seems like a pretty good indication that utility is in fact the key variable even here. That is to say, the fundamental question isn’t so much whether the bystander has a right not to be unexpectedly killed, but whether the benefit of upholding that right outweighs the alternative.
Assuming it does, though – i.e. assuming it’s right to think that killing the bystander to save only a few patients would be worse than sparing the bystander and letting the patients die – what’s the utilitarian reason for this conclusion? I think Alexander is right with his explanation here; we have an extremely strong heuristic against sudden unprovoked murder, and that heuristic produces a ton of utility in the world specifically because it’s so strong – so maintaining the authority of that rule for use in future situations has a high (if indirect) utility value in itself, on top of the basic object-level considerations of how many lives are lost or saved in the immediate situation. We also have a strong rule against using another person merely as an unconsenting means to some end, as Kant famously put it – even if that end is helping other people – and maintaining the authority of that heuristic likewise has a high utility value due to the better outcomes it’ll tend to produce over the long run. If you want to undermine these rules, then, there has to be an extremely high utility gain on the other side of the scale to justify it; and although saving five billion lives would surely be enough to do so, it’s not quite as obvious that saving only one or two extra lives would. True, it would be good that those lives had been saved – but considering the extent to which word of the surgeon’s actions would undoubtedly spread, and the extent to which it would make everyone afraid to ever go to the hospital again (not to mention all the other negative social effects it would have), it seems clear that it would make the broader society worse off overall.
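One way to make this weighing concrete is a toy threshold model of Galef's parameter-tweaking point: hold the indirect cost of undermining the no-murder heuristic fixed, and vary how many patients the surgeon could save. Both numbers here are invented purely for illustration; nothing in the argument depends on their exact values, only on the direct stakes eventually swamping the indirect cost:

```python
# Assumed utility lost (indirectly) by eroding the no-murder heuristic,
# and assumed direct utility of one life saved. Both values are made up.
HEURISTIC_COST = 1000
UTILITY_PER_LIFE = 1

def killing_justified(lives_saved: int) -> bool:
    # Direct gain (lives saved, minus the one bystander killed) weighed
    # against the indirect cost of breaking the heuristic.
    return (lives_saved - 1) * UTILITY_PER_LIFE > HEURISTIC_COST

print(killing_justified(5))            # False: the heuristic wins
print(killing_justified(5_000_000))    # True: direct stakes swamp it
```

The model captures why the intuition flips as the parameters grow: utility is the variable doing the work on both sides of the comparison.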
Of course, there are other factors in play here too – and there are trickier variations on this thought experiment that we might imagine (What if it happened in secret? What if it was the last act of the surgeon’s life?) – so we’ll be coming back to it again later. For now though, the point is just that any interpretation of utilitarianism that claims it would allow for things like mass nonconsensual organ-harvesting (or some other such terrible outcome) makes the basic mistake of only applying the utilitarian calculus to the object level of the situation in question (e.g. lives saved vs. lives lost), rather than recognizing that if there were negative higher-order effects at the level of the broader society, the utilitarian calculus would necessarily take those into account as well. It’s the same kind of mistake creationists make when they argue that life on earth couldn’t have evolved, because the second law of thermodynamics prevents entropy from ever decreasing within a system (that is, a system can’t ever spontaneously grow more ordered, only more chaotic). What this argument overlooks is that there’s actually a huge external factor – namely the massive amount of energy being added into the system by the sun – which tips the entropic scales and does in fact provide the necessary fuel for life to emerge and evolve. And it’s the same story with utilitarianism: Sure, a particular harm-benefit calculation might not appear to make sense if you’re only considering an isolated object-level situation as a closed system – but once you recognize that it’s not a closed system at all and that there are all kinds of external factors to account for, the conclusions suddenly make a lot more sense. It’s true that sometimes you’ll find that allowing for exceptions to widespread moral rules would in fact be better for the world in the long run – and in those cases, that’s what utilitarianism would prescribe. 
But in other cases, you’ll find that it would be better for the world in the long run to set an extremely high bar for breaking certain moral rules, even if it means accepting some utility losses in the short run – and in those cases, that’s what utilitarianism would prescribe. The key is just that in whatever scenario you’re considering, you have to make sure to apply the utilitarian calculus to the whole big-picture situation – the whole state of the universe – rather than to only one small part of it.
(As for the question of how exactly to decide when to abide by moral rules and when to break them, it seems to me that a pretty good meta-heuristic (not a universal rule, of course, but just a general guideline) is to follow Alexander’s “be nice, at least until you can coordinate meanness” approach – that is, don’t make an individual habit of breaking moral rules in isolated situations where it would seem to increase utility more; only allow for moral rules to be broken when it’s collectively implemented in an officially-sanctioned way by the broader society.)
So all right, we’ve established that the function of consequentialist/utilitarian morality is, as Alexander summarizes it, to “try to make the world a better place according to the values and wishes of the people in it” – and we’ve established that the way to tell whether an act meets that standard is to evaluate it in the broadest possible terms, making sure to account for all the various side effects and long-term implications.
The next issue we have to deal with, then, is the fact that trying to “make the world a better place according to the values and wishes of the people in it” depends on what those people’s values and wishes actually are. If those people happen to have values and wishes that are irrational or destructive or otherwise terrible (as a lot of people certainly do), then isn’t that a problem for our system? What if people hold desires that are sadistic or bigoted – or even more problematic, what if people hold desires that don’t even maximize their own utility? How do we deal with that?
Well, the first part of the question – what if people hold desires that are sadistic or bigoted – is one that we’ve somewhat addressed already. It’s true that the simplest version of utilitarianism counts the satisfaction of these harmful desires just as positively as the satisfaction of more benign ones. If someone feels revulsion whenever they encounter an interracial couple, for instance, then that person’s discomfort is in fact real, and does count as a reduction in their utility. And if they could get a law passed to ban interracial marriage, then the alleviation of their discomfort would in fact register as an increase in their utility (just as the axe murderer from before would get a utility boost from murdering his victim). But of course, acknowledging that one person might derive some positive utility from a particular act doesn’t mean that the act is an overall positive in the broadest sense. Whatever utility our hypothetical racist might derive from banning interracial marriage, it would be far outweighed by the decrease in utility for all the interracial couples who would no longer be able to get married – so in global terms, the ban would still be a major utility reduction. In other words, you wouldn’t be able to call it morally good; at most, all you could say is that it would be good for the racist who no longer had to experience as much discomfort.
That being said, though, we can make this example more difficult. Let’s imagine, for instance, that we aren’t talking about just one racist individual, but a whole society. Say there’s an entire country that forbids interracial marriage, because its populace mistakenly fears that legalizing it would destroy their society. What’s more, let’s imagine that there’s no one within this country who actually wants to marry outside their race – so no one’s utility is being reduced by the ban (at least not directly). So far, then, it seems like maintaining the ban would meet our definition of being morally good, right? Everyone’s preferences are being met and no one’s utility is being reduced. But is it really right to say that encouraging racism would be good, even in a society where everyone approves of it? After all, as much benefit as they might derive from banning interracial marriage, maybe they’d be even better off if they learned not to feel revulsion towards people of different races in the first place. Their conscious, explicit preference might still be to maintain their racism, but wouldn’t they actually gain more utility from not having such toxic feelings at all? Wouldn’t the real utilitarian thing to do here be to go against the action that the people have ascribed the most subjective value to?
I think this is a valid question. So let’s imagine that one day, the leader of our hypothetical country (who let’s assume has been legitimately democratically elected and entrusted with making these kinds of judgments for the good of the populace) decides to go against the ostensible will of the people and launch a campaign pushing for the legalization of interracial marriage – which ultimately succeeds, and to everyone’s surprise (except the leader’s), doesn’t end up destroying their society at all, but actually improves it significantly. Lonely single people are suddenly able to find spouses, extended families are enriched by the opportunity to discover new perspectives from their new in-laws, and so on. The thing that everyone had regarded as bad for them actually ended up providing a significant increase in their utility. In other words, they were wrong about what was best for them.
(If you want to consider a different example of this, you might instead imagine a misguided populace wanting to start a disastrous war with a neighboring country or something (in a way that would be totally self-destructive), with their leaders being the only ones who realize this and want to avoid it.)
How, then, does that fit into our system? This is the second part of our question from before – what if people hold desires that don’t actually maximize their own utility? Utilitarianism is supposed to be all about satisfying people’s preferences, right? So what if their preferences are flawed, and the amount of goodness they ascribe to a potential outcome doesn’t actually correspond with the amount of goodness that that outcome would produce for them? On what basis could our hypothetical political leader justify legalizing interracial marriage as a moral act if nobody actually considered it a good thing? Sure, we could say that it was a good thing after the fact, once everybody came around and started ascribing a positive value to it rather than a negative one – but what about before then? How could we say that it would be good to take a certain action if doing so would go against everyone’s subjective valuations? Is goodness defined by those subjective valuations, or isn’t it?
This isn’t just a one-off problem, either, nor one exclusive to national governance; it can apply to all kinds of different situations. If you see someone about to cross the street, oblivious to the bus that’s heading straight for them, would it be good to stop them from crossing even though their explicit preference is to cross? If you see someone about to drink a beverage, oblivious to the fact that it’s actually poison, would it be good to stop them from drinking it even though they’ve ascribed more goodness to drinking it than to remaining thirsty? If goodness is nothing but a subjective valuation that people ascribe to things, then how can going against those subjective valuations be considered good for them?
As you might have already figured out, this question isn’t actually as problematic as it seems. Yes, it’s true that goodness is defined by people’s subjective valuations; an act can’t be good unless someone judges it to be good. And it’s true that people’s judgments of what’s good – their conscious preferences – don’t always match up with what they’d actually prefer if they knew better. But the thing is, the explicit object-level preferences that people have toward their immediate situations aren’t the only preferences that people hold. They also have meta-preferences – i.e. preferences about their preferences – and in situations like the ones above, those higher-order meta-preferences can supersede their object-level preferences. So for instance, if you were about to cross the street, your object-level preference might be to cross freely without anyone stopping you; but at the same time, you’d have a meta-preference that if you were wrong in your assumption that crossing freely would be the most utility-maximizing outcome for you (like if there were an oncoming bus about to run you over), you’d actually prefer for someone to step in and stop you from crossing. Similarly, if you were about to drink a beverage, you might have an object-level preference to drink it, but you’d also have a higher-order preference that if your object-level preference would actually be harmful in some way you didn’t anticipate (like if the beverage were poisoned), then someone should intervene and stop you. 
In other words, whatever your object-level preference is toward the specific situation you’re in, your meta-preference will always be that if there’s an alternative outcome that would maximize your utility more than your object-level preference would, then your misguided object-level preference should be disregarded; and if there’s someone around who has a better understanding of the situation and is in a position where they can overrule your misguided preference, then your meta-preference will be to defer to them and let them act on your behalf. The outcome of having your misguided preferences overruled, even if it frustrates you at the object level, is the one that you actually ascribe the greatest value to overall. So in this way, your meta-preference is a kind of Master Preference – a preference over all other preferences – which simply says that no matter what, you will always prefer outcomes that maximize your utility, even if you aren’t explicitly aware beforehand that they will do so.
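This override rule can be sketched as a tiny decision function. The outcomes and utility numbers below are hypothetical, chosen just to mirror the street-crossing and beverage examples: the stated object-level choice stands unless some other available outcome would actually give the agent strictly more utility.

```python
# Toy model of the "Master Preference": a stated object-level choice is
# overruled whenever another available outcome yields strictly more
# actual utility. All outcomes and utilities here are hypothetical.
def master_preference(stated_choice, actual_utilities):
    best = max(actual_utilities, key=actual_utilities.get)
    if actual_utilities[best] > actual_utilities[stated_choice]:
        return best            # the misguided preference is overruled
    return stated_choice       # the stated preference already maximizes utility

# Street-crossing case: the pedestrian prefers to cross, unaware of the bus.
crossing = {"cross now": -100, "wait at the curb": 5}
print(master_preference("cross now", crossing))        # "wait at the curb"

# No hidden danger: the stated preference is left alone.
drinking = {"drink the water": 3, "stay thirsty": -2}
print(master_preference("drink the water", drinking))  # "drink the water"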
(If this all seems trivially obvious to you, well, I’m glad you think so, because this concept will do quite a bit of heavy lifting for us later on.)
One of the crucial elements of this Master Preference (which, I should note, has been proposed in a few slightly different forms before – e.g. John Harsanyi’s concept of “manifest” preferences vs. “true” preferences, Eliezer Yudkowsky’s concept of coherent extrapolated volition, etc.) is that, like many other preferences, you can hold it without ever explicitly affirming that you hold it – or, for that matter, even consciously realizing that you hold it. At no point is it necessary for you to acknowledge (even to yourself) that you’d prefer an outcome like “being deprived of a beverage against your will” to one like “dying because you didn’t realize your beverage was poisoned;” all that’s necessary for such a preference to exist is that you actually would get more value out of the first outcome happening than the second outcome happening. The mere fact that you’d derive more value from the first situation than the second one is what would constitute the first one being preferable to you in the first place. And in that sense, simply the fact that you have any preferences at all (whether conscious or unconscious) means that you must necessarily hold this Master Preference, because all other preferences – merely by virtue of being preferences – are automatically subsumed by it. Even if you were the kind of stubborn contrarian who would explicitly deny having the Master Preference, claiming that you’d never want any of your object-level preferences overruled for any reason, you still wouldn’t be able to get away from it – because the principle of never wanting your object-level preferences overruled is itself a preference that might turn out to be misguided; and if there were ever a situation in which having that preference overruled would be preferable to you, then the Master Preference would supersede it. 
(Granted, you might never encounter such a situation – maybe by sheer luck, your valuations really would always be perfect – but that wouldn’t change the fact that the Master Preference would still remain in effect for you; it simply wouldn’t ever require anyone to act on it.) To deny that you’d ever want your object-level preferences overruled if doing so would give you greater utility would be an incoherent self-contradiction – it’d be like saying that what was preferable to you wasn’t preferable to you.
It’s for this same reason, by the way, that if someone’s judgment is being impaired somehow, utilitarianism doesn’t necessarily require that all their object-level preferences be met. That is, if someone is (say) in the grip of a crippling drug addiction, it isn’t automatically a good thing to give them more drugs – even if they say that’s what they want – because they’ll be far better off if they can break that addiction and come out of their impaired state of consciousness. That’s the outcome that would actually be preferable to them, even if they don’t realize it at the time. Similarly, if someone suffers brain damage and loses the ability to make good decisions for themselves, the best thing to do isn’t just to satisfy whatever misguided preference they might assert; it’s to do what actually maximizes their utility. This is also true for people who are born with certain cognitive disabilities, who never have the capacity for making fully informed decisions in the first place; merely the fact that they’re capable of considering some states of their existence better than others is all it takes for the Master Preference to apply to them, and to therefore make it morally good for them to be treated well (even if they never consciously understand that that’s what they want). And in fact, this is even true for animals, despite the fact that they lack the biological capacity for human-level judgment altogether; again, merely the fact that they’re capable of preferring some outcomes to others is enough to mean that the Master Preference applies to them, and that maximizing their utility is morally better than not doing so – regardless of whether they can ever fully understand what would be best for them or why. 
In other words, even if your family dog really wanted to run out into busy traffic – maybe because she saw a rabbit on the other side of the highway or something – it would be morally better not to let her, because getting hit by a car would be such a terrible outcome for her (and would certainly be an outcome she’d want to avoid if she were aware of it). Likewise, even if she didn’t understand the nutritional content of the food you were feeding her – and didn’t know that it would be in her best interest to prefer the healthy kibble over the kibble-flavored asbestos – it would still be morally better to feed her the healthy kibble than the asbestos. Merely her ability to instinctively want what’s best for herself is all it would take to reify that preference as a moral consideration.
I want to talk a little more about how these ideas can apply to non-humans; but before I do, I should clarify a few more conceptual points. First of all, if it wasn’t clear already, when we talk about people having preferences and wanting to maximize their utility, that doesn’t necessarily just mean wanting to selfishly do things that are only good for themselves. A lot of people derive immense satisfaction from things like helping others and abstaining from worldly pleasures – so for those people, the thing that would maximize their utility might not necessarily be a purely hedonistic lifestyle; it might be to spend their lives doing charity work or some other form of public service instead. Similarly, there are some instances in which a person’s preference isn’t to enhance their own well-being at all, but to sacrifice it (if necessary) for the sake of something they value more. (Think about a parent sacrificing their time or their health – or even their life – for their children, for example.) So when we talk about calculating net utilities and figuring out what would best meet people’s interests, we have to account for the fact that people don’t always prefer to just benefit themselves alone. Sometimes, the best way to help a particular person is actually to help others.
On that same note, because some people’s preferences end up producing greater benefits beyond themselves than other people’s preferences do, we have to make sure to incorporate those second-order effects into our utilitarian calculus when we weigh people’s preferences against each other. For example, if we were in some kind of scenario where we had to choose between releasing a sadistic serial killer from prison and releasing a compassionate aid worker who had also been imprisoned, then even though both of them might value their freedom equally highly, it’s clear that freeing the aid worker would produce more goodness overall due to all the positive utility they’d bring to others later on – so the utilitarian choice would be to free the aid worker. Or to take a slightly less obvious example, if we decided to donate a bunch of money to charity, it wouldn’t be right to assume (as a lot of people unfortunately seem to do) that our donation would be equally good regardless of which charity we actually donated to or how much utility the charity produced relative to other charities; the most utilitarian thing to do would be to figure out exactly which charities were doing the most good per dollar, and donate to one of those. As I’ve been emphasizing all along, the best actions within utilitarianism are those that produce the highest level of goodness within the most all-inclusive possible context – the full state of the universe and all the sentient beings within it.
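The prisoner example above can be sketched as a toy calculation. All the utility numbers here are invented purely for illustration: each option’s score is the prisoner’s own utility from being freed plus the second-order utility their freedom would produce for others.

```python
# Toy sketch of the utilitarian calculus described above, with made-up
# utility numbers. Each option's score is the direct utility to the
# person freed plus the second-order utility their freedom produces
# for everyone else.

options = {
    # (direct utility of freedom, utility later produced for others)
    "serial killer": (10, -50),  # values freedom, but would harm others
    "aid worker":    (10, 40),   # values freedom equally, helps others
}

def total_utility(direct, second_order):
    """Net goodness = direct benefit + downstream effects on others."""
    return direct + second_order

best = max(options, key=lambda name: total_utility(*options[name]))
print(best)  # → aid worker
```

Even though both prisoners attach the same direct utility to their freedom, the second-order term flips the result decisively once it’s included.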
With all these clarifications out of the way, then, let’s take a step back and return to our original question of whether it’s possible to say that things can be objectively good or bad – and let’s apply it to the whole global state of affairs. Is it possible to coherently talk about “moral progress” in the world? Can we really say that our species’ moral behavior has improved (or can improve) in any way over the course of history? A moral relativist might argue that it’s impossible to do so – that morality is entirely relative to its cultural context, and that what’s good in one era might be bad in another era. But within the utilitarian system we’ve set up here, we can say decisively that moral progress is possible. States of the world that have higher levels of utility are objectively better than states of the world that have lower levels of utility – regardless of era, and regardless of context; and to whatever extent we manage to bring our actions into alignment with what would bring about the highest level of global goodness, that’s the extent to which we’re making moral progress as a species. If we implement some new policy or program that makes everyone happier, that’s obviously a good thing. If we implement a policy that has some drawbacks and makes some people unhappy, but still does more good than harm overall, then that may not be ideal, but it can still be considered a positive. But even if we implement a policy that everyone seems to be against (at least at the object level), like legalizing interracial marriage in our hypothetical country from before, it can still be a good thing overall if it ultimately increases the people’s utility, because then it would still satisfy their Master Preference, which is the true measure of their well-being in the end.
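As a toy illustration of comparing whole world-states, here is a minimal sketch with all utility figures invented: a state is just each being’s utility level, and one state counts as moral progress over another if its total is higher, even when some individuals end up worse off.

```python
# Toy model of "moral progress": a world state is a collection of
# individual utilities, and one state is objectively better than
# another if its total utility is higher. Numbers are illustrative.

def world_utility(state):
    """Total goodness of a world state = sum of everyone's utility."""
    return sum(state.values())

before = {"alice": 5, "bob": 3, "carol": 4}
after_policy = {"alice": 6, "bob": 2, "carol": 7}  # bob loses, more gain

progress = world_utility(after_policy) > world_utility(before)
print(progress)  # → True: net gain despite bob being worse off
```

This is the policy-with-drawbacks case from above: bob is unhappier at the object level, but the state of the world as a whole is better.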
Of course, I should caution here against abusing this idea of the Master Preference – because it really is easy to abuse if not understood properly. History is tragically rife with examples of people harming and oppressing others and claiming that it’s “for their own good.” But just because people have a meta-preference that their object-level preferences should be overruled if it would better increase their utility doesn’t mean that it would naturally be a good thing for us (or anyone else) to forcefully impose our preferred way of life on everyone else just because we think it would be better for them. The only way we could call such an intervention good would be if it actually were better for them (which forceful actions hardly ever are); otherwise, it would most definitely be a bad thing. (And I should point out that one of the biggest reasons of all why an intervention might not be morally acceptable is the simple fact that respecting people’s explicit wishes is such an important norm in itself – so even if a particular intervention seemed otherwise worthwhile, the mere fact that it would undermine this norm might still keep it from being justifiable.) In short, then, while it is in fact possible to make moral progress in the world, there’s a difference between actually making moral progress in the world, and causing things to become worse in the name of making moral progress. It’s critically important, when making any kind of major moral decision, to keep this distinction in mind – and to have enough humility to recognize that if you think somebody’s object-level preferences should be disregarded for the sake of upholding their Master Preference, it’s always possible that you may actually be the one whose preferences are misguided and ought to be overruled, both for your own sake and for the sake of those who will be affected by your actions.
Speaking of acting responsibly toward those who are less powerful, though, let’s quickly finish up that point about applying these ideas to non-human species before we move on to anything else. For simplicity, I’ve mostly been referring to humans up to this point – but really, when I talk about fulfilling individuals’ preferences and maximizing their utility and so on, I’m talking about sentient beings in general, which can include both human and non-human species.
A lot of moral theories don’t leave any room for non-human species at all. As Tyler Cowen writes:
Some frameworks, such as contractarianism, imply that animal welfare issues stand outside the mainstream discourse of ethics, because we have not and cannot conclude agreements, even hypothetical ones, with (non-domesticable) animals. There is no bargain with a starling, even though starlings seem to be much smarter than we had thought.
Likewise, many religious conceptions of morality don’t include any consideration for animals, on the basis that animals lack souls and are therefore entitled to no more moral consideration than plants or rocks.
But under the system I’ve been laying out here – in which the goodness or badness of an action is defined simply by how much value is ascribed to it by subjective agents – animals do have to count for something, because just like humans, they too are capable of preferring some outcomes to others. Granted, those preferences might be purely implicit; a cat might never explicitly articulate that it prefers one brand of cat food to another, or that it prefers playing with a ball of yarn to being mauled by a larger animal. But as we’ve already established, preferences don’t have to be articulated in order to exist; a human infant, for instance, doesn’t have to be capable of speech (or even particularly complex thought) in order to prefer being fed to going hungry. Merely the fact that a being (whether human or non-human) is capable of preferring some outcomes to others means that its interests must necessarily count for something in the moral calculus.
I think most of us understand this intuitively; it’s hard to find someone who thinks there’s nothing wrong with torturing kittens, for instance. Still, a lot of people will try to selectively deny their intuitions on this point (usually as a means of defending their carnivorous diets) by arguing that animals aren’t actually capable of holding mental states at all, but are basically just automata, and that their behavior is purely robotic. Sam Harris provides a good response to this argument:
What is it like to be a chimpanzee? If we knew more about the details of chimpanzee experience, even our most conservative use of them in research might begin to seem unconscionably cruel. Were it possible to trade places with one of these creatures, we might no longer think it ethical to so much as separate a pair of chimpanzee siblings, let alone perform invasive procedures on their bodies for curiosity’s sake. It is important to reiterate that there are surely facts of the matter to be found here, whether or not we ever devise methods sufficient to find them. Do pigs led to slaughter feel something akin to terror? Do they feel a terror that no decent man or woman would ever knowingly impose upon another sentient creature? We have, at present, no idea at all. What we do know (or should) is that an answer to this question could have profound implications, given our current practices.
All of this is to say that our sense of compassion and ethical responsibility tracks our sense of a creature’s likely phenomenology. Compassion, after all, is a response to suffering – and thus a creature’s capacity to suffer is paramount. Whether or not a fly is “conscious” is not precisely the point. The question of ethical moment is, What could it possibly be conscious of?
Much ink has been spilled over the question of whether or not animals have conscious mental states at all. It is legitimate to ask how and to what degree a given animal’s experience differs from our own (Does a chimpanzee attribute states of mind to others? Does a dog recognize himself in a mirror?), but is there really a question about whether any nonhuman animals have conscious experience? I would like to suggest that there is not. It is not that there is sufficient experimental evidence to overcome our doubts on this score; it is just that such doubts are unreasonable. Indeed, no experiment could prove that other human beings have conscious experience, were we to assume otherwise as our working hypothesis.
The question of scientific parsimony visits us here. A common misconstrual of parsimony regularly inspires deflationary accounts of animal minds. That we can explain the behavior of a dog without resort to notions of consciousness or mental states does not mean that it is easier or more elegant to do so. It isn’t. In fact, it places a greater burden upon us to explain why a dog brain (cortex and all) is not sufficient for consciousness, while human brains are. Skepticism about chimpanzee consciousness seems an even greater liability in this respect. To be biased on the side of withholding attributions of consciousness to other mammals is not in the least parsimonious in the scientific sense. It actually entails a gratuitous proliferation of theory – in much the same way that solipsism would, if it were ever seriously entertained. How do I know that other human beings are conscious like myself? Philosophers call this the problem of “other minds,” and it is generally acknowledged to be one of reason’s many cul de sacs, for it has long been observed that this problem, once taken seriously, admits of no satisfactory exit. But need we take it seriously?
Solipsism appears, at first glance, to be as parsimonious a stance as there is, until I attempt to explain why all other people seem to have minds, why their behavior and physical structure are more or less identical to my own, and yet I am uniquely conscious – at which time it reveals itself to be the least parsimonious theory of all. There is no argument for the existence of other human minds apart from the fact that to assume otherwise (that is, to take solipsism as a serious hypothesis) is to impose upon oneself the very heavy burden of explaining the (apparently conscious) behavior of zombies. The devil is in the details for the solipsist; his solitude requires a very muscular and inelegant bit of theorizing to be made sense of. Whatever might be said in defense of such a view, it is not in the least “parsimonious.”
The same criticism applies to any view that would make the human brain a unique island of mental life. If we withhold conscious emotional states from chimpanzees in the name of “parsimony,” we must then explain not only how such states are uniquely realized in our own case but also why so much of what chimps do as an apparent expression of emotionality is not what it seems. The neuroscientist is suddenly faced with the task of finding the difference between human and chimpanzee brains that accounts for the respective existence and nonexistence of emotional states; and the ethologist is left to explain why a creature, as apparently angry as a chimp in a rage, will lash out at one of his rivals without feeling anything at all. If ever there was an example of a philosophical dogma creating empirical problems where none exist, surely this is one.
What really drives home Harris’s argument is the fact that (as Richard Dawkins points out) although it’s easy for us humans to imagine that we’re morally distinct from other animals because there’s such a wide gap between our relative levels of intelligence, there’s no reason why this gap necessarily had to have appeared in the first place. It’s merely an accident of evolutionary history that all the intermediate species between humans and chimps happen to have gone extinct; and it’s perfectly possible to imagine a scenario in which they’d instead all survived to this day. Imagine if that actually had been the case: What if our modern-day world included humans and Neanderthals and australopiths and chimpanzees all living alongside one another? How would we think differently about the moral status of animals if there were no clear taxonomic cutoff between them and our own species? Here’s Dawkins:
[There is a popular] unspoken assumption of human moral uniqueness. [But] it is harder than most people realise to justify the unique and exclusive status that Homo sapiens enjoys in our unconscious assumptions. Why does ‘pro life’ always mean ‘pro human life?’ Why are so many people outraged at the idea of killing an 8-celled human conceptus, while cheerfully masticating a steak which cost the life of an adult, sentient and probably terrified cow? What precisely is the moral difference between our ancestors’ attitude to slaves and our attitude to nonhuman animals? Probably there are good answers to these questions. But shouldn’t the questions themselves at least be put?
One way to dramatize the non-triviality of such questions is to invoke the fact of evolution. We are connected to all other species continuously and gradually via the dead common ancestors that we share with them. But for the historical accident of extinction, we would be linked to chimpanzees via an unbroken chain of happily interbreeding intermediates. What would – should – be the moral and political response of our society, if relict populations of all the evolutionary intermediates were now discovered in Africa? What should be our moral and political response to future scientists who use the completed human and chimpanzee genomes to engineer a continuous chain of living, breathing and mating intermediates – each capable of breeding with its nearer neighbours in the chain, thereby linking humans to chimpanzees via a living cline of fertile interbreeding?
I can think of formidable objections to such experimental breaches of the wall of separation around Homo sapiens. But at the same time I can imagine benefits to our moral and political attitudes that might outweigh the objections. We know that such a living daisy chain is in principle possible because all the intermediates have lived – in the chain leading back from ourselves to the common ancestor with chimpanzees, and then the chain leading forward from the common ancestor to chimpanzees. It is therefore a dangerous but not too surprising idea that one day the chain will be reconstructed – a candidate for the ‘factual’ box of dangerous ideas. And – moving across to the ‘ought’ box – mightn’t a good moral case be made that it should be done? Whatever its undoubted moral drawbacks, it would at least jolt humanity finally out of the absolutist and essentialist mindset that has so long afflicted us.
Of course, just because we acknowledge that animals can in fact hold legitimate moral interests doesn’t mean that all animal interests necessarily count the same. The preferences of a mosquito don’t automatically carry as much weight as those of a human simply because the mosquito is sentient; we still have to account for how strongly those preferences are held, just as we did when we were weighing human interests against other human interests. Given that mosquitos lack the mental capacity to hold their interests particularly strongly (at least not as strongly as humans do), their preferences will always count for relatively little in the utilitarian calculus. By that same token, though, if you move a bit up the sentience scale to a species with a somewhat greater mental capacity, like a turtle or a lizard, its preferences will accordingly carry more weight. And if you move even further up the scale to a species with a greater mental capacity still, like a chimpanzee, its preferences will count for even more than the preferences of those other species. Once you get to humans, which are capable of holding the strongest preferences of any species, those preferences will count the most of all. Ultimately, then, the general trend here will be that the moral weight carried by species’ preferences will scale with their degree of sentience. (Or to phrase it slightly differently, the value of an animal’s overall well-being will be proportional to its degree of sentience.) That’s not necessarily because some species are just innately superior to others in some metaphysical way, mind you; after all, as Dawkins’s point reminds us, the whole concept of distinct species is a fairly fuzzy one to begin with, and there’s a lot of overlap and gray area between species. 
Rather, the reason why some species’ preferences count for more than others’ is simply because their greater mental capacity allows them to ascribe more goodness to those preferences, and they accordingly derive greater utility from having their preferences satisfied. In other words, it’s not even really necessary to consider a particular being’s species at all when weighing preferences against each other; all that matters in the utilitarian calculus is the amount of utility attached to each of those preferences.
And to be clear, this doesn’t mean that every human preference automatically outweighs every preference held by a non-human; it’s perfectly possible for a chicken, for instance, to derive more utility from continuing to live than a human would derive from eating that chicken for dinner. That being said, though, if there were a situation in which the chicken’s preference to continue living was being weighed against the same preference in a human – like if the human was starving or something and had no other source of food – then the human’s preference to continue living would outweigh the chicken’s, and it would be morally permissible for the human to eat the chicken. The human’s survival would produce a greater level of utility than the chicken’s survival, so that would be the better outcome. As always, the more subjective value is ascribed to an outcome, the more morally good it is – regardless of which species are involved.
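Here’s a minimal sketch of that chicken-and-human weighing, with made-up utility numbers; notice that species never appears in the comparison at all, only the utility each being attaches to its preference.

```python
# Sketch of cross-species preference weighing, with invented utility
# numbers. Species never enters the comparison directly; only the
# utility each being attaches to its preference matters.

def better_outcome(a, b):
    """Return the name of whichever (name, utility) pair weighs more."""
    return a[0] if a[1] >= b[1] else b[0]

chicken_lives = ("chicken lives", 30)   # chicken's stake in staying alive
tasty_dinner  = ("human eats",    5)    # well-fed human's stake in dinner
survival_meal = ("human eats",    500)  # starving human's stake in eating

print(better_outcome(chicken_lives, tasty_dinner))   # → chicken lives
print(better_outcome(chicken_lives, survival_meal))  # → human eats
```

The same comparison function gives opposite answers in the two cases, purely because the stakes on the human side of the scale change.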
Now you might be thinking, wait a minute, if the goodness of an action is defined by how much utility it gives living beings, and if the amount of utility a particular being can derive from a particular action depends on its degree of sentience or mental capacity, does that mean that it would be morally better to let a human infant die than to let a chimpanzee die? After all, chimps are by all accounts more intelligent and self-aware than human infants, so presumably they’d derive greater utility from staying alive, right? The thing is, though, despite the fact that human infants can’t hold their immediate preferences as strongly as chimps can, the things being desired in this scenario – namely, the human’s expected remaining life and the chimp’s expected remaining life – aren’t equivalent. Assuming an average life course for both, 80-odd years of human experiences will be better than 30-odd years of chimp experiences. So accordingly, it’s possible that a preference for the former could be valued more highly than a preference for the latter, even if the individual desiring the former wasn’t as sentient as the individual desiring the latter. It’s basically the chicken-vs.-human scenario all over again – only with the infant’s “experience a human lifetime” preference standing in for the chicken’s “experience a chicken lifetime” preference, and the chimp’s “experience a chimp lifetime” preference standing in for the hungry-but-not-starving human’s “enjoy a tasty chicken dinner” preference. Despite the human infant’s lower level of sentience, the more moral thing to do would be to spare its life instead of the chimp’s, because the utility value of its desire would outweigh the utility value of the chimp’s desire. 
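Here’s the infant-versus-chimp comparison as a back-of-the-envelope calculation. The richness-per-year figures are pure assumptions on my part; the point is only that a longer, richer expected lifetime can outweigh a higher present level of sentience.

```python
# Crude sketch of the infant-vs-chimp comparison: the value of a
# remaining lifetime is modeled as richness of experience per year
# times expected years left. All figures are invented assumptions.

def lifetime_value(richness_per_year, remaining_years):
    """Total utility of a being's expected remaining lifetime."""
    return richness_per_year * remaining_years

infant_life = lifetime_value(richness_per_year=10, remaining_years=80)
chimp_life  = lifetime_value(richness_per_year=6,  remaining_years=30)

print(infant_life > chimp_life)  # → True: spare the infant
```

Even though the infant holds its immediate preferences less strongly than the chimp does, the thing it stands to lose is worth more in total.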
And just to make this point clear, if we imagined a slightly different scenario in which this was no longer the case – like if, say, the world was going to end in a week (so the infant would never get the opportunity to grow up and experience human childhood and human adulthood and everything else that would make its continued existence preferable to the chimp’s continued existence) – then I actually think that sparing the chimp rather than the human infant would in fact be the better choice, because the chimp’s valuation of its remaining one week of life would now outweigh the infant’s. But more on this topic later. For now, let me just address one more thing regarding non-human species.
Up to this point, all the examples of sentient beings I’ve been using have been either humans or animals – i.e. the only kinds of sentient beings that we actually know to exist at the moment. But when I talk about sentient beings and their preferences, everything I’m saying could just as easily apply to any kind of being capable of valuing things as good or bad – and that could include anything from aliens to AIs to deities (assuming such things can exist). Because of this, it’s important for us not only to think about how to weigh the preferences of species less mentally developed than us humans, but also about how to weigh the preferences of those possible beings that might very well be more mentally developed and more sentient than us. The way I’ve been describing things so far, we humans are at the top of the sentience scale, so our interests will always count for more than those of other species (all else being equal). But what if some hypothetical being existed that was even more sentient and more capable of ascribing value to its preferences than we are? Would that being’s preferences have to take priority over everyone else’s, even if it caused some degree of suffering for the rest of us? Alexander phrases the question this way: “Wouldn’t utilitarianism mean if there was some monster or alien or something whose feelings and preferences were a gazillion times stronger than our own, that monster would have so much moral value that its mild inconveniences would be more morally important than the entire fate of humanity?”
Well… maybe so, actually. Alexander continues:
Imagine two ant philosophers talking to each other about the same question. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”
But I think humans are such a being! I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn’t just human chauvinism either – I think I could support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants (presumably) do.
I can’t imagine a creature as far beyond us as we are beyond ants, but if such a creature existed I think it’s possible that if I could imagine it, I would agree that its preferences were vastly more important than those of humans.
You might think that this idea of a “utility monster” (as Robert Nozick famously called it) is sufficient grounds for rejecting utilitarianism – but if you do think that, then by the same logic, you’d also have to reject the idea that the preferences of human beings must carry more weight than the preferences of ants. It might be uncomfortable to think about, but there isn’t technically any logical reason why humans have to be the be-all end-all of morality. Honestly, I think that most of our instinctive reluctance to accept the idea just comes from the fact that, although we might easily be able to imagine a being whose level of sentience was two or three times our own, the idea of a being trillions of times more mentally developed than we are is simply beyond our ability to imagine.
That being said, of course, it might have occurred to you that the idea of such a being does in fact already exist – in the idea of God – and that what’s more, billions of people do in fact accept the idea that the most moral thing to do is to satisfy this being’s preferences, even if it means lesser beings have to suffer. Does that mean, then, that divine command theory – the religious idea that the moral good is whatever God says it is – is actually valid? Well, in my last post on religion, I mentioned that if a god did exist (and that’s a big “if”), then it would in fact make sense to try and satisfy its preferences. Still though, I should stress that that’s not necessarily the same thing as saying that those preferences would represent any kind of absolute goodness in the sense that divine command theory claims. (That is, a god’s preferences wouldn’t just automatically be 100% good no matter what they were.) Under a utilitarian system, a god’s preferences would be just like any other kind of preferences – they’d still have to be weighed against the interests of every other being in the universe – and if there was enough negative utility on the other side of the scale to outweigh the positive utility that the god would gain from having its preferences satisfied, then the most morally good thing to do in that situation would be to deny the god’s preferences. What’s more, like every other preference in the universe, the preferences of a god would still be subject to the Master Preference – so for instance, if there were a sadistic god whose preference was to torture humans for fun, but it turned out that this preference wasn’t what actually maximized that god’s utility, it would once again be morally good to disregard the god’s explicit preference and instead do what produced the highest level of utility overall. 
At any rate, the very idea of such a cruel god whose sadism has to be indulged might not even be morally coherent in the first place; there’s good reason to believe that the higher a level of sentience a particular being attains, the more of an obligation it has to treat other sentient beings humanely. But there’s still a lot of ground to cover before I can properly explain the reasons for this, so for now I’ll just leave it at that and move on.
Before I do, though, I should note that all this talk about hypothetical beings and their interests raises a potentially interesting implication for the idea of moral truth in general. I mentioned at the start of this post that goodness and badness aren’t just inherent properties of the universe that exist “out there” somewhere; the only way something can be good or bad is if it’s judged to be so by a sentient being. In other words, goodness and badness aren’t “mind-independent,” as philosophers call it – they’re exclusively the product of sentient minds. That being said, though, the fact that we can make objective statements about what would be morally true if certain hypothetical beings existed suggests that it’s possible for there to be moral statements that are true regardless of whether the beings they describe actually exist. We can make conditional statements like “If sentient beings with properties A, B, and C existed, and if they held preferences X, Y, and Z, then the best moral course of action would be such-and-such” – and those conditional statements really can be true, in the same sense that mathematical statements are true, even if no sentient beings actually exist at all. So in that sense, it’s possible for moral truths to exist which are objective and mind-independent, even if goodness and badness themselves aren’t. In other words, it’s possible for objective definitions of good and bad, for every imaginable circumstance, to be fundamentally built into the logic of existence itself.
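One way to picture such a conditional moral truth is as a pure function: it maps a hypothetical population of sentient beings (each preference tagged with a utility) to the best action, and it is well-defined whether or not any such beings actually exist. The beings and numbers here are invented placeholders.

```python
# Sketch of a "conditional moral truth" as a pure function from a
# hypothetical population of sentient beings to the best action. The
# function is well-defined even if it's never called on real beings.

def best_action(beings):
    """beings: {being: {action: utility}} -> action with highest total."""
    totals = {}
    for prefs in beings.values():
        for action, utility in prefs.items():
            totals[action] = totals.get(action, 0) + utility
    return max(totals, key=totals.get)

hypothetical = {
    "being_A": {"action_X": 3, "action_Y": 1},
    "being_B": {"action_X": 2, "action_Y": 5},
}
print(best_action(hypothetical))  # → action_Y
```

The truth of “given these beings and these preferences, action_Y is best” holds in the same timeless way a mathematical identity does, regardless of whether being_A and being_B are ever instantiated.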
Of course, in saying all this, I don’t want to give the impression that good and bad are just two discrete black-and-white categories, because (as I hope I’ve made clear by now) that’s not really how utilitarianism quantifies things. Utilitarianism doesn’t frame morality in binary terms, with actions either being flat-out right or flat-out wrong; it’s more of a scalar system, with goodness and badness existing in varying degrees along a spectrum. That is, actions and their outcomes aren’t just sorted into either the “good” bucket or the “bad” bucket and left at that; they’re ranked from better to worse according to how much utility they produce, with some options being ranked moderately better than others, some options being ranked significantly better, and most of them falling into the gray area between “pretty good but not optimal” and “pretty bad but not the absolute worst.” (That’s how we can say things like “Killing a mouse would be more egregious than killing a mosquito, and killing a human would be even more egregious than that, and killing an angel would be more egregious still” – as opposed to just saying “Killing is wrong” and not providing any more nuance than that.)
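The mosquito/mouse/human/angel parenthetical can be made concrete with a small sketch, using invented utility scores: the options aren’t sorted into good and bad buckets, they’re ranked along a continuum.

```python
# Sketch of the scalar framing: each option gets a (made-up) utility
# score, and options are ranked comparatively rather than labeled
# flatly "right" or "wrong".

outcomes = {
    "kill a mosquito": -1,
    "kill a mouse":    -5,
    "kill a human":    -1000,
    "kill an angel":   -5000,
}

# Rank from least bad to most egregious.
ranked = sorted(outcomes, key=outcomes.get, reverse=True)
print(ranked)  # → ['kill a mosquito', 'kill a mouse', 'kill a human', 'kill an angel']
```

Every option here is bad; the system’s output isn’t a verdict on each one in isolation but an ordering of how bad they are relative to each other.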
To use an analogy, it’s a bit like how we talk about size. At the extremes, of course, we can say categorically that the universe is big, and that quarks are small. But in most other contexts, it’s not really possible to say that things are big or small in any kind of absolute sense – only that they’re big or small relative to other specific things. For instance, Shaquille O’Neal is commonly referred to as big, because he’s being compared to other humans; but if you stood him next to Godzilla, then all of a sudden it would seem silly to call him big – and if you put both of them next to the planet Jupiter, it would be ridiculous to call either of them big. Bigness and smallness, in other words, aren’t just two well-defined categories; size is a continuum. And the same is true for moral goodness and badness. As Alexander writes:
If we ask utilitarianism “Are drugs good or bad?” it returns: CATEGORY ERROR. Good for it.
Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That’s because it’s a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
When people say “Utilitarianism says slavery is bad” or “Utilitarianism says murder is wrong” – well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is “In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so” and possibly “and the same would be true of any broadly similar situation”.
But why in blue blazes can’t we just go ahead and say “slavery is bad”? What could possibly go wrong?
Ask an anarchist. Taxation of X% means you’re forced to work for X% of the year without getting paid. Therefore, since slavery is “being forced to work without pay” taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.
([Granted], reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)
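Alexander's point that utilitarianism only ever returns comparative judgments can be made concrete with a toy sketch. This is just an illustration of the idea, not anyone's actual formalism; the state names and utility numbers are invented for the example.

```python
# A toy illustration: utilitarianism compares states or actions;
# it has no notion of a single state being "good" in isolation.
# The state names and utility numbers below are invented.

def better_state(utility, state_a, state_b):
    """Return whichever of two states has the higher utility."""
    return state_a if utility(state_a) > utility(state_b) else state_b

def is_good(state):
    """Asking whether one state is flatly 'good' has no answer here."""
    raise TypeError("CATEGORY ERROR: utility judgments are comparative")

# Invented utilities for the trolley problem's two options:
utility = {"divert": -1, "do_nothing": -5}.get

print(better_state(utility, "divert", "do_nothing"))  # → divert
```

The asymmetry is the whole point: `better_state` always has an answer, while `is_good` can only fail, just as the quote says.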
Naturally, some advocates of other moral systems like deontology, which do operate by labeling actions as unequivocally good or bad, might consider this kind of scalar approach to be a flaw in utilitarianism. But the fact that utilitarianism conceptualizes goodness and badness in this scalar way isn’t a bug, it’s a feature – and in fact, it’s one of the features that makes utilitarianism more functional than deontology, as Cowen points out:
When people say, “Oh, I’m a deontologist. Kant is my lodestar in ethics,” I don’t know how they ever make decisions at the margin based on that. It seems to me quite incoherent that deontology just says there’s a bunch of things you can’t do, or maybe some things you’re obliged to do, but when it’s about more or less – “Well, how much money should we spend on the police force?” – try to get a Kantian to have a coherent framework for answering that question, other than saying, “Crime is wrong,” or “You’re obliged to come to the help of victims of crime.” It can’t be done.
So again, the advantage of conceptualizing goodness in a scalar way is that it actually allows you to capture the subtle gradations between better and worse courses of action, in a way that just labeling things as flatly good or flatly bad doesn’t. Having said all this, though, the fact that this scalar approach does account for such subtleties also means that instead of just determining whether something is good or bad according to some well-defined rule (which is as simple as asking whether it follows the rule or doesn’t), you often have to determine how good or bad it is, in terms of the amount of utility it produces, relative to all the possible alternatives – and that’s not always easy. You can’t always have perfect knowledge of what the consequences of your actions will be, so it’s not always possible to quantify the exact amount of utility they’ll produce.
What this means for our system, then, is that when you’re trying to figure out which action will do the most good, the metric you have to use isn’t actually utility, but expected utility. That is, because it’s impossible to know with 100% certainty what the outcome of any particular action will be, you can’t just assign a utility value to a particular action based on how much utility it would produce if it led to some specific outcome; you have to think about all the possible different outcomes it might lead to, and how likely they are, and weight your utility estimation accordingly (based on the probability-weighted average utility across all those hypothetical outcomes). You won’t always be able to get this estimation exactly right – sooner or later, you’re bound to miscalculate by assuming that a particular outcome is more likely or less likely than it really is – but that’s where deontology-style heuristics actually can come in handy, as discussed earlier. If you’re not always able to maximize the utility of your actions by evaluating every situation on a case-by-case basis (due to flawed expectations or an imperfect understanding of your circumstances or whatever), then the real utilitarian thing to do can often be to just commit yourself to following a general rule of thumb that you know will have a low failure rate and will produce a high level of utility over the long run. Even if it’s not perfect on a case-by-case basis either, it can at least help you reach a higher level of utility on average than what you might otherwise reach on your own. Kelsey Piper illustrates this point with an analogy:
I used to drive myself to school; it was a 13 minute drive if I sped, and a 15 minute drive if I followed the law. If I got caught I got a $120 ticket, my insurance rates went up, my mother yelled at me, and I would be late to school.
A very naive utilitarian might say ‘ah, the utilitarian thing to do is to speed every time except when you’ll get caught, whereas those deontologists will tell you to never speed’. Except I don’t know which times I’ll get caught, and as long as there’s a substantial risk of it, the actual utilitarian thing to do is to compare the risk of getting caught to the benefits and come up with a rule to follow consistently. If ‘speed whenever I feel like I can get away with it’ turns out to do worse than ‘don’t speed’, DON’T SPEED.
No one makes this mistake with speeding, because people are way better at normal reasoning than at ethical reasoning, even when the actual dilemmas are exactly the same type. But they make this mistake with ethical problems all the time, and end up deciding the ethical equivalent of ‘speed if you have a good feeling you won’t get caught’, which shows up as ‘doxx people if you have a good feeling they totally deserve it’ or as ‘silence people with violence if they’re saying hateful things’.
And we can look back over those decisions and say ‘wow, often when people decide they’re doing a morally good thing in these circumstances, they aren’t, people are bad at evaluating this, the rule that would have served them well was ‘never violently silence people’ or ‘never doxx people’ and since I’m not any cleverer than they are, that rule will serve me as well.’
If following the rule every time results in better outcomes than following the rule sometimes, then the utilitarian thing to do is to always follow the rule.
In other words, although utilitarianism is fundamentally built on the idea of evaluating individual situations on their own merits, the fact that we always have to deal with some measure of uncertainty means that there’s also plenty of room for rule-following heuristics within this framework, as a kind of meta-technique for maximizing expected utility. The fact that we have to deal with uncertainty in such ways doesn’t mean that utility itself is just a fuzzy approximation of goodness, mind you – only that our understanding of it often is. There’s still a specific course of action, for every situation, that would objectively produce the maximal possible utility for that situation; it’s just that we imperfect mortals can’t always know in advance exactly what that optimal course of action will be.
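The expected-utility comparison behind Piper’s speeding example can be sketched numerically. The two-minute time saving and the $120 ticket come from her description; the probability of being caught, the dollar value of saved time, and the extra costs (the insurance hike, being late, getting yelled at) are assumptions invented for the illustration.

```python
# Expected utility of one trip under the rule "speed whenever I feel
# like I can get away with it", versus the rule "never speed".
# Only the 2 minutes saved and the $120 ticket come from the example;
# every other number here is an assumption for illustration.

TIME_SAVED = 2.0     # dollar-equivalent value of the 2 minutes saved (assumed)
TICKET_COST = 120.0  # the ticket itself
EXTRA_COST = 80.0    # insurance hike, lateness, being yelled at (assumed)
P_CAUGHT = 0.02      # chance of being caught on any given trip (assumed)

def expected_utility(p_caught):
    """Weight each possible outcome of speeding by its probability."""
    caught = TIME_SAVED - TICKET_COST - EXTRA_COST
    not_caught = TIME_SAVED
    return p_caught * caught + (1 - p_caught) * not_caught

speed_when_i_feel_lucky = expected_utility(P_CAUGHT)  # negative here
never_speed = 0.0

# Under these assumed numbers, the blanket rule wins on average:
if speed_when_i_feel_lucky < never_speed:
    print("the rule 'don't speed' has higher expected utility")
```

With these particular numbers, even a 2% chance of a $200-equivalent loss swamps a $2-equivalent gain, which is exactly the structure of Piper’s point: commit to whichever rule does better in expectation, not whichever act feels better case by case.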
And this distinction highlights the key point of all this uncertainty talk: When we talk about an act being moral within this system, that isn’t necessarily the same thing as saying that its consequences must automatically be good; in order to qualify as moral, they need only be reasonably expected to be good. Up until now, I’ve more or less been using the terms “moral” and “good” interchangeably, but a lot of the time there can be a real difference between them. It’s perfectly possible for an action to be completely moral, yet unexpectedly produce an outcome that’s bad, or to be totally immoral, yet unexpectedly produce an outcome that’s good. Consider, for instance, a criminal who hits his victim in the head intending to harm her (obviously an immoral act due to its negative expected utility) but whose violent act inadvertently ends up curing her crippling neurological condition instead (a good outcome due to the actual positive utility produced). Or imagine someone saving the life of an innocent child (clearly a moral act) only for the child to grow up and become a genocidal dictator (clearly a bad outcome). The fact that an act can still be morally justified, even if its effects later turn out to be bad, shows that consequentialist morality, despite the name, isn’t actually defined by the ultimate consequences of actions – that is, an action won’t somehow retroactively become moral if there’s a good outcome and immoral if there’s a bad outcome – it’s based simply on what the consequences could reasonably be expected to be at the time the action was taken. The easiest way of conceptualizing it might be to just say that the term “goodness” refers to the utility level of ultimate outcomes, while the term “morality” refers to the expected utility level of the choices that lead to those outcomes. Whether you conceptualize it in this way or not, though, it’s evident that goodness and morality can’t necessarily be considered equivalent in every case.
And if that still isn’t clear, consider one more point: It’s entirely possible for certain outcomes to be good or bad (relative to the alternatives) even if those outcomes don’t involve anyone acting morally or immorally at all. (The philosophical term for this is “axiology.”) To revisit an earlier example, if the wind blows a rock off a cliff and that rock crushes someone’s foot, that would obviously be a bad outcome, since it would decrease that person’s utility; but it wouldn’t be right to call it immoral, or to accuse the rock of acting immorally (How can an inanimate object act immorally?) – it would just be a bad thing that happened. To actually say that something is moral or immoral, it has to involve some kind of intent. Again, you can’t just be referring to outcomes; you have to be referring to the choices, made by sentient actors, which lead to those outcomes. And because such choices always involve some degree of uncertainty, they can’t be judged as praiseworthy or blameworthy solely on the basis of the actual results they produce; they have to be judged according to what results they were expected to produce at the time they were made. In other words, motives and intentions may not be relevant to judging how much goodness or badness a person’s actions produced, but they are relevant to judging how morally or immorally the person was acting when they chose to take those actions. As Edmund Burke put it, “guilt resides in the intention.”
Steven Pinker expands on this point:
We blame people for an evil act or bad decision only when they intended the consequences and could have chosen otherwise. We don’t convict a hunter who shoots a friend he has mistaken for a deer, or the chauffeur who drove John F. Kennedy into the line of fire, because they could not foresee and did not intend the outcome of their actions. We show mercy to the victim of torture who betrays a comrade, to a delirious patient who lashes out at a nurse, or to a madman who strikes someone he believes to be a ferocious animal. We don’t put a small child on trial if he causes a death, nor do we try an animal or an inanimate object, because we believe them to be constitutionally incapable of making an informed choice.
Consider the process of deciding whether to punish someone who has caused a harm. Our sense of justice tells us that the perpetrator’s culpability depends not just on the harm done but on his or her mental state – the mens rea, or guilty mind, that is necessary for an act to be considered a crime in most legal systems. Suppose a woman kills her husband by putting rat poison in his tea. Our decision as to whether to send her to the electric chair very much depends on whether the container she spooned it out of was mislabeled DOMINO SUGAR or correctly labeled D-CON: KILLS RATS – that is, whether she knew she was poisoning him and wanted him dead, or it was all a tragic accident. A brute emotional reflex to the actus reus, the bad act (“She killed her husband! Shame!”), could trigger an urge for retribution regardless of her intention. [But there has to be a] crucial role played by the perpetrator’s mental state in our assignment of blame.
Of course, none of this is to say that just because someone thinks they’re doing the right thing, that’s enough to automatically make them morally blameless in every instance. After all, someone might commit an action with the honest expectation that it will produce a utility-positive outcome, but if they formed that positive expectation in a way that was utterly negligent or insensitive to the facts, then they could still be blamed for acting immorally on that basis – because although they might not have expected their object-level actions to be harmful, they surely would have known that forming one’s expectations in such an irresponsible way does tend to lead to harmful results. So the fact that they formed their expectations in such a manner anyway means that they did still commit an immoral act, even if the expectations they ultimately formed via that process were positive. (If we imagine a military commander, for instance, launching a disastrous military campaign that ends in massive utility reductions for everyone involved, that commander might very well have gone into the campaign with the honest expectation that it would produce utility-positive results – so at the object level, he might seem blameless – but if he’d only formed his positive expectations by surrounding himself with yes-men and stubbornly ignoring good intelligence – actions which he knew perfectly well to have a negative expected utility – then he would still be responsible for the negative consequences that resulted.) Again, even if you think you’re doing the right thing at the object level, it’s still possible that you might be acting immorally on the meta level if you fail to do your due diligence in making sure that your expectations are actually well-founded in the first place.
With all that being said, though, the important point here is that even in such cases, the morality or immorality of your actions is still defined by whether you’re maximizing expected utility, not whether you’re maximizing actual realized utility. Motives do matter; you just have to make sure that you’re maximizing expected utility at every level of your decision-making, not merely at the object level. As always, the most important frame for judgment is the big-picture one.
So all right, now that we’ve established this definition of what morality is in the descriptive sense, let’s turn to the elephant in the room that’s been looming over this whole discussion: Why, exactly, should any of us do what’s moral, as opposed to just selfishly doing what’s best for ourselves? After all, even if we can objectively quantify goodness and say that some actions are better than others, that still doesn’t explain why we therefore ought to do those actions. How could it be possible, even in theory, to derive normative statements (i.e. statements about what should be done) solely from descriptive statements (i.e. statements about the way things are)? This is one of the most famous problems in moral philosophy, known as the “is-ought problem.” Alexander summarizes:
David Hume noted that it is impossible to prove “should” statements from “is” statements. One can make however many statements about physical facts of the world: fire is hot, hot things burn you, burning people makes their skin come off – and one can combine them into other statements of physical fact, such as “If fire is hot, and hot things burn you, then fire will burn you”, and yet from these statements alone you can never prove “therefore, you shouldn’t set people on fire” unless you’ve already got a should statement like “You shouldn’t burn people”.
It is possible to prove should statements from other should statements. For example, “fire is hot”, “hot things burn you”, “burning causes pain”, and “you should not cause pain” can be used to prove “you should not set people on fire”, but this requires a pre-existing should statement. Therefore, this method can be used to prove some moral facts if you already have other moral facts, but it cannot justify morality to begin with.
Of course, not everyone agrees that this is really a problem for morality. Some people don’t make any distinction at all between what’s good and what ought to be done – so that once you’ve successfully defined one in objective terms, the other naturally follows. According to this view, the statement “We should do X” means nothing more than “It would be more desirable for us to do X than to not do X;” so by objectively establishing that doing X would be good – the “is” problem – we’ve totally solved the “ought” problem as well. The question “Why should we do what’s good?” is just a tautology, essentially the equivalent of asking “Why would it be good for us to do what’s good?” It’s like asking “Why does a triangle have three sides?” – the question answers itself.
And if that’s how you feel about the subject, then great – the is-ought problem is a moot point, and we’re basically done here. But personally, I don’t find this approach entirely satisfying. It still seems to me that the question “Why should we do what’s good instead of what’s bad?” is a coherent one, and that the tautological definition of “should” doesn’t quite address what it’s really asking.
So what would be a better definition of “should”? Another popular approach – one that might be endorsed by Hume himself – is to say that the word “should” can only really be used in a contingent, instrumental kind of way. That is, you can only properly say that someone should do X if you also specify the goal or aim that they would better achieve if they did X – like, “You should run quickly if your goal is to win a footrace,” or “You should drink a beverage if your goal is to quench your thirst,” etc. You can’t just say that someone should do something and leave it at that; you have to have an “if” clause attached to your “should” clause (even if it’s an unspoken one) in order to have it make sense. Or in philosophical terms, you have to frame it as a “hypothetical imperative” rather than a “categorical imperative” – a conditional statement rather than an unconditional one. What this means for morality, then, is that you can’t just say that people should behave morally, period, regardless of their goals. The most you can say is that if they desire to do the most good, or if they desire to maximize the utility of sentient beings, then they should behave morally. Commenter LappenX sums it up:
Saying “You shouldn’t kill people” is different from “If you want world peace, then you shouldn’t kill people.” The latter is just another way of saying “If less people go around killing others, then we’re closer to world peace”, which is a statement that can be true or false, and can be tested and verified. I have no idea how to interpret the former.
Of course, the problem with this instrumental view (as LappenX points out) is that it doesn’t exactly help us with morality very much. After all, if the strongest moral statement you can make is that someone ought to behave morally if they desire to maximize global utility, then what happens if they don’t desire to maximize global utility, but instead just want to selfishly maximize their own utility? If that’s all they want, then this instrumental definition of “should” wouldn’t tell them to act morally at all – it would tell them that they should act selfishly.
And in fact, this isn’t even just a problem of some people potentially having selfish motives; according to the Master Preference mentioned earlier, everyone’s most fundamental preference is that their own utility be maximized. That’s what it means to prefer an outcome in the first place – that it would give you more utility if it happened than if it didn’t. So in that sense, there’s a strong case to be made that self-interest is at the core of everything we do. And that means that on top of all these other possible definitions of “should,” we now have to add one more: Maybe, when we ask why we should do what’s good, what we’re really asking is “How would doing what’s good maximize my utility?” (or more bluntly, “What’s in it for me?”).
Luckily, there’s almost always an immediate response we can give to this question; some of the biggest reasons to do good simply boil down to practical considerations like avoiding personal guilt and reprisal from others. In fact, these kinds of considerations were probably the biggest reasons why moral behavior originally came to exist in the first place, as Pinker explains:
The universality of reason is a momentous realization, because it defines a place for morality. If I appeal to you to do something that affects me – to get off my foot, or not to stab me for the fun of it, or to save my child from drowning – then I can’t do it in a way that privileges my interests over yours if I want you to take me seriously (say, by retaining my right to stand on your foot, or to stab you, or to let your children drown). I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.
You and I ought to reach this moral understanding not just so we can have a logically consistent conversation but because mutual unselfishness is the only way we can simultaneously pursue our interests. You and I are both better off if we share our surpluses, rescue each other’s children when they get into trouble, and refrain from knifing each other than we would be if we hoarded our surpluses while they rotted, let each other’s children drown, and feuded incessantly. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one where we both are unselfish.
Morality, then, is not a set of arbitrary regulations dictated by a vengeful deity and written down in a book; nor is it the custom of a particular culture or tribe. It is a consequence of the interchangeability of perspectives and the opportunity the world provides for positive-sum games. This foundation of morality may be seen in the many versions of the Golden Rule that have been discovered by the world’s major religions, and also in Spinoza’s Viewpoint of Eternity, Kant’s Categorical Imperative, Hobbes and Rousseau’s Social Contract, and Locke and Jefferson’s self-evident truth that all people are created equal.
In other words, even if we were entirely self-interested, it wouldn’t take much reasoning to realize that others were just as self-interested as we were, and that therefore the best way to serve that self-interest would actually be to create cooperative social norms rather than just letting everyone behave selfishly. Not only would this kind of cooperative approach be the best way to ensure widespread safety and success; it would also be the only defensible position we could hold in purely logical terms. Pinker continues:
The assumptions of self-interest and sociality combine with reason to lay out a morality in which nonviolence is a goal. Violence is a Prisoner’s Dilemma in which either side can profit by preying on the other, but both are better off if neither one tries, since mutual predation leaves each side bruised and bloodied if not dead. In the game theorist’s definition of the dilemma, the two sides are not allowed to talk, and even if they were, they would have no grounds for trusting each other. But in real life people can confer, and they can bind their promises with emotional, social, or legal guarantors. And as soon as one side tries to prevail on the other not to injure him, he has no choice but to commit himself not to injure the other side either. As soon as he says, “It’s bad for you to hurt me,” he’s committed to “It’s bad for me to hurt you,” since logic cannot tell the difference between “me” and “you.” (After all, their meaning switches with every turn in the conversation.) As the philosopher William Godwin put it, “What magic is there in the pronoun ‘my’ that should justify us in overturning the decisions of impartial truth?” Nor can reason distinguish between Mike and Dave, or Lisa and Amy, or any other set of individuals, because as far as logic is concerned, they’re just a bunch of x’s and y’s. So as soon as you try to persuade someone to avoid harming you by appealing to reasons why he shouldn’t, you’re sucked into a commitment to the avoidance of harm as a general goal. And to the extent that you pride yourself on the quality of your reason, work to apply it far and wide, and use it to persuade others, you will be forced to deploy that reason in pursuit of universal interests, including an avoidance of violence.
Alexander makes a similar point:
If entities are alike, it’s irrational to single one out and treat it differently. For example, if there are sixty identical monkeys in a tree, it is irrational to believe all these monkeys have the right to humane treatment except Monkey # 11. Call this the Principle of Consistency.
You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case.
You want to satisfy your own preferences.
So by Principle of Consistency, it’s rational to want to satisfy the preferences of all humans.
Of course, logical consistency can only take us so far; a person who’s only interested in benefiting themselves might not care all that much about how logically consistent they’re being, as long as they come out ahead. But there are also good reasons to want to do what’s moral aside from just the logical ones. I think it’s fair to say that most of us feel just as compelled to do what’s right by the emotional dictates of our own conscience as we do by the practical considerations of whether others will retaliate against us for mistreating them, or by the rational considerations of what’s the most logically consistent. Aside from flat-out sociopaths, we’re all born with an innate sense of empathy that makes us feel good when we help others and bad when we harm them – and that means that in a lot of cases, the kinds of actions that we think of as selfless can actually be more satisfying than the ones we think of as self-serving, while a lot of the things that might seem to benefit us most, if they come at others’ expense, can actually fill us with so much guilt that the burden of it outweighs whatever immediate benefit we might be getting. In that sense, it’s often the case that the thing that gives us the most utility isn’t actually serving ourselves alone, but trying to maximize others’ utility. And the academic research on the subject bears out this conclusion, as Tenzin Gyatso (AKA the Dalai Lama) and Arthur C. Brooks point out:
In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. […] Americans who prioritize doing good for others are almost twice as likely to say they are very happy about their lives. In Germany, people who seek to serve society are five times likelier to say they are very happy than those who do not view service as important. Selflessness and joy are intertwined.
Aside from all the social benefits that come with being a moral person, then, there’s a very real sense in which morality is its own reward. Even if you’re starting from a place of pure self-interest, you’ll almost always find that the thing that produces the best outcomes for you is to try and produce the best outcomes for others as well.
Now, having said all that, you might notice my use of the word “almost” there. Unfortunately, it’s not always the case that everyone maximizes their own utility by acting morally all the time – there are plenty of cases where it’s perfectly possible to derive more utility from acting immorally than from doing the right thing – so we can’t just point to things like personal guilt and reprisal from others and declare that they completely solve the is-ought problem. It’s possible to imagine all kinds of scenarios in which someone might be able to benefit themselves at others’ expense, knowing full well that they’ll be able to get away with it and will never have to deal with any kind of punishment or retribution, or even any kind of personal guilt (maybe they’re just not a very empathetic person and don’t care about others very much). In those kinds of cases, it doesn’t seem like the “what’s in it for me?” definition of “should” gives us any basis for saying that they should act morally instead. So on what basis can we say that they should act morally? After all, it seems intuitively clear that there must be some kind of invisible moral law (or something) being broken here when they act immorally; it really does feel like there must be some reason why they’re obligated to do what’s good, and that they’d be breaking this obligation by acting selfishly instead. So are there any plausible definitions of “should” left that we can turn to, which can actually capture what we’re trying to get at here in an adequate way? Is it possible to ground morality in a way that’s not only objective, but binding as well?
Well actually, I think there might be – and funny enough, it turns out that the definition of “should” that best captures this idea of obligation is in fact the original historical definition of the word.
See, when the word “should” first came into use, it was as the past tense of the word “shall” (“from Middle English scholde, from Old English sceolde, first and third person preterite form of sculan (‘owe’, ‘be obliged’), the ancestor of English shall”). That is to say, if at some point in the past you had made a pledge like “I shall do X,” then here in the present we could say that doing X was something you “shalled.” So if you signed a contract promising to repay a loan, for instance, and you said something like, “I vow that I shall repay this loan,” then you “shalled” (i.e. should) repay the loan. Saying that you shalled/should repay it, under this original definition, was just another way of saying that you’d made a prior commitment to do so, and that you must now meet that commitment or else you’ll be breaking your pledge. In other words, you’ve placed yourself under a contractual obligation to do it; and so doing it is now something that you owe to your creditor. (Similarly, the word “ought” comes from “owed” – so if you said that you ought to do something for someone else, it meant that your doing that thing was owed to that person.) Being contractually bound to do something, of course, doesn’t mean that you must do it in the same literal sense that you must obey the laws of physics – there’s no invisible force physically compelling you to do what you’ve pledged to do – but it does mean that you’ve committed yourself to doing it in principle. In other words, it’s essentially the same kind of “invisible moral law” that we were grasping for a moment ago; it may not be binding in the absolute sense that you literally cannot violate it, but it is binding in the sense that you should not (“shalled not”) violate it. If you were to go against it, you’d be breaking an invisible obligation that you were pledged to keep.
And it seems to me that that’s the only way that an obligation can be binding; after all, if obligations were binding in the same way that the laws of physics were binding – where we literally had no choice but to abide by them – then our world would look very different, to say the least.
At any rate, this historical definition of “should” has a couple of features that are extremely relevant to our discussion about morality. First, notice that under this definition, there’s no incompatibility between “is” and “ought.” The statement “I have made a commitment to do this thing and have thereby created and placed an obligation upon myself in principle to do it” is a fundamentally descriptive statement, not a normative one. All it’s saying is that I “shalled” do some action (and therefore have a commitment to do that action), which is an objective fact about the world in the same way that statements like “Person A promised to water Person B’s houseplants this week” and “Person C owes Person D five hundred dollars” are objective facts about the world.
By that same token, this definition of “should” neatly avoids the whole hypothetical imperative problem. When you say “I shalled do X,” you aren’t making a conditional statement like “If I desire to keep my pledge, then I should do X;” all you’re saying is “I have pledged to do X and therefore have a commitment to do X,” which is unconditionally true without any need for an “if” qualifier.
That being said, though, you might notice that this whole commitment-making mechanism does seem to have a hint of the same flavor as the hypothetical imperative mechanism, just in the sense that it seems more or less amoral. That is, it seems like it could apply just as easily to immoral pledges as to moral ones; you could have a hit man agree to a contract to murder somebody, for instance, and committing that murder would therefore be something he “shalled” do. After all, he did make a pledge to murder the target, and he’s contractually bound to keep his pledge. So under the historical definition of “should,” he should commit murder.
How, then, does this definition of “should” help us with morality at all? It seems like the only way it could serve as an adequate ground for morality would be if we’d all signed a literal social contract, at the moment we first came into existence, pledging to always do what was moral, and to never enter into any subsequent immoral contracts – and we definitely didn’t do that (at least not explicitly). Is it possible that we all entered into such a contract implicitly? Maybe – but how would that even work? And for that matter, even if we were given the opportunity to enter into such a social contract, why would we think that everyone – including even the most self-interested among us – would agree to it? Why would we all want to precommit to restricting our future actions by agreeing to such a contract, when we could just as easily reject the contract and thereby retain the option of always being able to act immorally when it suited us? Why would it ever be in anyone’s self-interest to precommit to restricting their own actions in such a way?
Well, this last question is easy enough to answer, so let’s start with it first. As you’ll know if you’ve ever entered into any kind of contractual agreement yourself (even something as simple as signing a contract for a loan that you couldn’t have gotten if you hadn’t agreed to repay it), there are actually plenty of situations in which making a self-restricting precommitment can be advantageous. The most famous example of this is probably the legend of Odysseus binding himself (literally!) to the mast of his ship so he could hear the song of the Sirens. If he had insisted on remaining free to walk around the deck, he would have immediately been lured to his death; but because he denied himself this freedom of movement (or more specifically, because he allowed his crew to deny him this freedom by tying him up), he was able to hear the beautiful song that no one had ever survived hearing before.
Another memorable example comes from the game of chicken – you know, that game in which two drivers careen toward each other at full speed, and whoever swerves away first is the loser. Imagine being caught up in a high-stakes version of this game, in which you felt like you had to win no matter what. (Let’s say the other driver was holding your loved ones hostage or something.) How could you win without getting killed? Well, some game theorists have suggested that your best bet might actually be to make a visible and irrevocable precommitment to not swerving, like removing your steering wheel and tossing it out the window in full view of the other driver. By making it clear that you’ve literally removed your ability to swerve, you show the other driver that they can no longer win the game, and so their only choice (if they want to survive) is to swerve themselves. You’ve restricted your freedom to act, but in doing so you’ve made yourself better off than you might have been if you’d kept all your options available. Granted, this strategy does depend on the other driver being rational – which, in a vacuum, is not guaranteed – but if you could be sure that the other driver was rational, then the self-restricting precommitment of discarding your steering wheel would be the best move you could make.
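The logic of that steering-wheel move can be sketched in a few lines of Python. The payoff numbers below are my own illustrative assumptions (nothing in the classic game-theory story fixes them exactly); all that matters is their ordering: winning beats chickening out, and crashing is catastrophic for both drivers. The point is just that once your only remaining move is “straight,” a rational opponent’s best response flips to “swerve”:

```python
# A toy payoff matrix for the game of chicken. The numbers are my own
# illustrative assumptions -- only their ordering matters: winning beats
# chickening out, and a head-on crash is catastrophic for both drivers.
# Each entry is (your_payoff, their_payoff).
PAYOFFS = {
    ("swerve", "swerve"):     (0, 0),        # both chicken out
    ("swerve", "straight"):   (-1, 1),       # you lose face, they win
    ("straight", "swerve"):   (1, -1),       # you win
    ("straight", "straight"): (-100, -100),  # head-on crash
}

def best_response(their_options, my_move):
    """The other (rational) driver's best reply, given the move
    I'm visibly locked into."""
    return max(their_options, key=lambda m: PAYOFFS[(my_move, m)][1])

# Once you've thrown the steering wheel out the window, "straight" is
# your only possible move -- and the rational response is to swerve.
print(best_response(["swerve", "straight"], "straight"))  # -> swerve
```

Notice that the precommitment only works because it’s visible and irrevocable: the other driver has to know that “straight” really is your only remaining option.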
And this leads me to one more example in this vein – a variation on a game devised by Douglas Hofstadter, based on the classic Prisoner’s Dilemma. Imagine that you and a thousand other randomly-selected people are led into separate rooms and cut off from all communication with one another. An experimenter presents each of you with two buttons: a green one labeled “COOPERATE,” and a red one labeled “DEFECT.” You’re told that if you press the green button, you’ll be given $10 for every person who also presses the green button (including yourself). So if 387 people press it, you’ll get $3870; if nobody presses it, you’ll get $0; etc. But if you press the red button, you’ll get $1000 plus however much each of the green-button-pressers receives – so if 529 people press the green button, you’ll get $6290; if nobody presses the green button, you’ll get $1000; etc.
Which button do you press? From a purely self-interested perspective, the answer might at first seem obvious. No matter what happens, you’re guaranteed to get more money if you press the red button – so you should press the red button, right? But of course, all the other participants have that exact same incentive themselves – they’ll all want to press the red button too – and if they do, the result will be that nobody will press the green button, and everybody will go home with just a measly $1000. This hardly seems like the optimal solution, considering that if everybody pressed the green button instead, they could all go home with ten times that much. But how could you get everyone to cooperate and go along with that plan? The obvious solution, naturally, would be to have everyone get together and jointly precommit to pressing the green button. But all communication between participants has been totally cut off – and even if it weren’t, there’d be no way for each participant to know whether the others were truly precommitting or were just pretending to – so it doesn’t seem like there’s any plausible way to win here. There’s just no way of knowing what the other participants intend to do, much less influencing their decisions.
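Hofstadter’s payoff structure is simple enough to check directly. Here’s a minimal sketch (the `payoff` helper is my own framing, but the dollar amounts follow the rules just described):

```python
def payoff(pressed_green, n_green):
    """Payout in the 1001-person button game: green pays $10 per
    green-presser (yourself included); red pays a flat $1000 plus
    whatever each green-presser receives."""
    return 10 * n_green if pressed_green else 1000 + 10 * n_green

# Red strictly dominates green: whatever number k of *other* people
# press green, pressing red nets you exactly $990 more.
for k in (0, 387, 529, 1000):
    assert payoff(False, k) - payoff(True, k + 1) == 990

# ...and yet if everyone follows that dominance logic, everyone walks
# away with $1000, while universal cooperation pays $10,010 apiece.
print(payoff(False, 0))    # everyone defects -> 1000
print(payoff(True, 1001))  # everyone cooperates -> 10010
```

That guaranteed $990 edge for defecting, no matter what anyone else does, combined with the far better all-cooperate outcome, is exactly what makes this a many-player Prisoner’s Dilemma.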
But what if it were somehow possible for all the participants to come to a mutual understanding without ever directly communicating with each other? Let’s imagine, for instance, that each participant was given access to an ultra-advanced supercomputer capable of calculating the answer to any question in the universe. Would it then be possible, even in theory, to figure out some way of coming to a mutually beneficial agreement without any of the participants ever directly communicating? Well, yes, actually – it would be trivially easy to do so, thanks to a process called acausal trade. Alexander explains:
Acausal trade (wiki article) works like this: let’s say you’re playing the Prisoner’s Dilemma against an opponent in a different room whom you can’t talk to. But you do have a supercomputer with a perfect simulation of their brain – and you know they have a supercomputer with a perfect simulation of yours.
You simulate them and learn they’re planning to defect, so you figure you might as well defect too. But they’re going to simulate you doing this, and they know you know they’ll defect, so now you both know it’s going to end up defect-defect. This is stupid. Can you do better?
Perhaps you would like to make a deal with them to play cooperate-cooperate. You simulate them and learn they would accept such a deal and stick to it. Now the only problem is that you can’t talk to them to make this deal in real life. They’re going through the same process and coming to the same conclusion. You know this. They know you know this. You know they know you know this. And so on.
So you can think to yourself: “I’d like to make a deal”. And because they have their model of your brain, they know you’re thinking this. You can dictate the terms of the deal in their head, and they can include “If you agree to this, think that you agree.” Then you can simulate their brain, figure out whether they agree or not, and if they agree, you can play cooperate. They can try the same strategy. Finally, the two of you can play cooperate-cooperate. This doesn’t take any “trust” in the other person at all – you can simulate their brain and you already know they’re going to go through with it.
(maybe an easier way to think about this – both you and your opponent have perfect copies of both of your brains, so you can both hold parallel negotiations and be confident they’ll come to the same conclusion on each side.)
It’s called acausal trade because there was no communication – no information left your room, you never influenced your opponent. All you did was be the kind of person you were – which let your opponent bargain with his model of your brain.
So in our button-pressing scenario, all you’d need to reach a mutually favorable outcome for everybody would be an ability to accurately model other participants’ behavior – which brain-simulating supercomputers would certainly allow you to do. That being said, though, brain-simulating supercomputers aren’t the only way in which individuals might be able to coordinate without directly communicating. Imagine, for instance, if you and the rest of the participants were all advanced robots instead of humans, and you knew that you all shared the same programming. In that case, you wouldn’t need to simulate any of the other participants’ brains at all; all you’d need to do was commit to pressing the green button yourself, and you’d know that all the other participants, because they were running the exact same decision-making algorithms you were, would reach the same conclusion and press the green button as well.
Or imagine that you weren’t all robots, but were simply all identical copies of the same person. In that case, it’d be the same deal; you wouldn’t have any way of simulating the other participants’ decision-making process, but you wouldn’t have to, because you’d know that it was identical to your own – so whatever decision you ultimately made, that’d be the decision they would all ultimately make as well. If you decided to press the green button, you’d know that all the identical copies of you had also decided to press the green button. If you only pretended that you were going to press the green button, but then actually decided to press the red button, you’d know that all the identical copies of you had also attempted the same fake-out and had ultimately ended up pressing the red button as well. There wouldn’t be any way of outsmarting the game; the only way to win would be to actually cooperate.
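The identical-copies version can be made concrete too. Since every copy runs the same decision procedure, the only outcomes actually on the table are “everyone presses green” and “everyone presses red” – so the procedure only needs to compare those two. (This is my own sketch, reusing the payoff rules from the button game above:)

```python
N = 1001  # total number of participants, all identical copies

def payoff(pressed_green, n_green):
    return 10 * n_green if pressed_green else 1000 + 10 * n_green

def decide():
    """The one decision procedure every identical copy runs. Because
    every copy returns the same answer, the only reachable outcomes
    are all-green and all-red -- so just pick the better of the two.
    Being the lone defector isn't an option the procedure can reach."""
    all_green = payoff(True, N)  # everyone cooperates
    all_red = payoff(False, 0)   # everyone defects
    return "green" if all_green > all_red else "red"

print(decide())  # -> green
```

The “fake-out” strategy fails for exactly the reason the paragraph above describes: any attempt to sneak a defection into `decide()` would be replicated by every copy, collapsing the outcome back to all-red.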
So all right, that’s all well and good – but what does any of this have to do with morality or the social contract? Well, as you might have guessed, the whole button-pressing scenario is meant to represent the broader dilemma of whether or not to live a moral life in general. “Pressing the green button” – i.e. cooperating with your fellow sentient beings and acting morally toward them – clearly produces the best outcomes on a global basis. But on an individual basis, it seems more profitable to “press the red button” – i.e. to enjoy all the benefits produced by the cooperators, but then to also benefit a little more on top of that by acting immorally when it suits you. Yet if everyone did this, it would lead to sub-optimal outcomes across the board; so the best outcome – even in purely self-interested terms – is for everyone to jointly precommit to acting morally (i.e. to make a social contract).
But how does this button-pressing analogy actually reflect the reality we live in? After all, it’s not like any of us actually have perfect brain-simulating supercomputers in real life, and we aren’t all identical robots or clones; each of us has our own unique motives and biases and personality quirks and so on. So it doesn’t seem like there would be any way for us to actually commit to morality in such a way that we could know that everyone else was doing the same. We’re more like the helpless people in the first version of the analogy who have no way of coordinating their decision-making.
Well, let’s consider one more thought experiment. (And I know you’re probably wondering where all these thought experiments are going, but hang in there – it’ll all make sense in a minute.) Imagine going back to a time before you even existed – or maybe not that far back, but at least all the way back to the very first moment of your existence as a sentient being (like when you were still in the womb), before you ever had any complex thoughts or any interactions with other sentient beings or anything like that. In this pre-“you” state, which John Rawls calls the “original position,” you have no knowledge of the person you’ll eventually become, or even the kind of person you are in the moment. You don’t know which gender you are, which race you are, or even which species you are. You don’t know whether you’ve got genes for good health or bad health, for high intelligence or low intelligence, or for an attractive appearance or an unattractive appearance. You don’t know whether you’re going to grow up in an environment of wealth and comfort, or one of poverty and squalor. You don’t even know what your personality traits will end up being – whether you’ll be someone who’s callous or kind, shy or outgoing, lazy or hardworking, etc. Nor do you know what kind of preferences you’ll end up having; all you can know (if only on an implicit level) is that whatever your preferences and characteristics turn out to be, your ultimate meta-preference will be for your utility to be maximized. (As per our earlier discussion of the Master Preference, this is what it means to be able to have preferences in the first place.) In short, as Rawls puts it, you’re essentially positioned behind a “veil of ignorance” regarding your own identity; you’re just a blank slate whose only characteristic is the fact that you’re sentient (and therefore have the capacity to experience utility and disutility).
Viewing things from this position, then, we can notice a couple of things. The first insight – and the one for which this thought experiment is usually cited – is that stepping behind this veil of ignorance is a great way of testing whether our preferred policies and social norms are genuinely fair, or whether our thinking is being unduly biased by whatever position we might personally happen to occupy in the world. (You might recall Alexander doing this earlier with his slavery example.) Thinking About Stuff provides a great two-minute explanation of this point:
For this insight alone, this thought experiment (which I should mention was actually originally introduced by Harsanyi, even though everyone now most associates it with Rawls) is incredibly valuable. But there’s actually another part of it that’s just as valuable to our current discussion – namely, the fact that when you go back in time to that pre-“you” original position, the featureless blank-slate being that you become is exactly the same featureless blank-slate being that everyone else becomes when they go back in time to the original position. All the personal identifiers that make you a unique individual – all those motives and biases and personality quirks that we were talking about before – are wiped away behind the veil of ignorance. That’s the whole point of the thought experiment; in the original position, everyone is functionally indistinguishable from one another. And what this means for our purposes is that in the original position, because everyone is an identical copy, it becomes possible to make acausal trades.
So imagine that you’ve gone back in time to before your birth, all the way back to the original position, and you’re evaluating your situation. (I realize that fetuses aren’t capable of complex thoughts, but bear with me here.) You don’t know what kind of being you are; you don’t know what kind of world you’re about to be born into; and you don’t know if there will be any other sentient beings out there aside from yourself. But what you can say for sure is that if you end up being born into a world like the one we’re living in now, and if there are other sentient beings in this world with whom you might interact – who themselves start off their existences from the same original position you’re starting from – then that would make the situation you’re presently facing analogous to the situation faced by the identical clones in the button-pressing game above. That is to say, if you precommit right now (conditional on the world being as described above) to living morally and trying to maximize global utility after you’re born, rather than just trying to maximize your own utility, then you can know that every other being in the original position will make that same precommitment, since they’re all identical to you and what’s rational for you will be rational for them as well. Likewise, if you decide not to pledge to maximize global utility, and instead just resolve to maximize your own utility after you’re born, then you’ll know that every other being in the original position will also make that same resolution. Given this fact, plus the fact that you don’t have any awareness of what your station in life will actually be after you’re born, it’s clear that the decision that would give you the greatest expected benefit would be to precommit to maximizing global utility, thereby ensuring that everyone else precommits to that choice as well – the equivalent of pressing the green button in the identical-clones version of the button-pressing game.
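If it helps, the expected-benefit calculation behind the veil can be sketched numerically. The utility figures below are pure inventions for illustration; the only structural assumption (which is really Harsanyi’s) is that from behind the veil you treat every possible birth position as equally likely, and so evaluate a candidate world by its average utility:

```python
# Two candidate worlds, as lists of utilities by birth position. The
# numbers are invented for illustration. Note that the lone defector
# in the contract-free world does better than anyone in the contract
# world (95 > 60) -- that's the individual temptation -- but the
# *average* across positions is far worse.
world_with_contract = [50, 60, 55, 45]     # everyone bound to act morally
world_without_contract = [95, 20, 15, 10]  # defection pays, for one

def expected_utility(world):
    """Expected utility behind the veil: equal odds of each position."""
    return sum(world) / len(world)

print(expected_utility(world_with_contract))     # -> 52.5
print(expected_utility(world_without_contract))  # -> 35.0
```

Since you don’t get to choose which position you’ll land in – only which world everyone precommits to – the world with the universal contract wins.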
Admittedly, it might be better for you in theory if you could somehow be the sole defector from this social contract; but because everyone in the original position is identical and makes the same choice as you, being the sole defector simply isn’t one of the available options. Either everyone presses the green button, or everyone presses the red button. Either everyone precommits to living morally, or no one does. Considering that you’d never want to live in a world where no one was obligated to treat you morally, then, your decision essentially makes itself; simply by preferring a universe in which everyone has pledged to treat you morally, you are endorsing a universal social contract and thereby buying into it yourself. You’re choosing the universe in which there’s an all-encompassing social contract – which applies to everyone, including yourself – over the one in which there is no such social contract. In other words, you – and everyone else in the original position – are jointly ratifying an implicit pledge to always do what’s moral after you’re born.
But wait a minute, you might be thinking – we just kind of glossed over the fact that fetuses aren’t actually capable of complex thought and therefore can’t consider any of this; but that’s not just a minor detail. I mean, it’s one thing for us fully-developed humans to imagine ourselves in the original position and conclude that the rationally self-interested thing to do in that situation would be to precommit to a universal social contract; but it’s another thing to actually be in that original position, as a barely-sentient fetus who can’t even understand basic concepts. Such a simple-minded being wouldn’t be capable of anticipating and weighing all their different options like we’ve just done, so how would they be capable of making the kind of pre-birth precommitment to morality that we’re talking about here? Even if doing so was in their best interest, wouldn’t it just be a moot point if they weren’t capable of knowing that and agreeing to it in the first place?
But that’s the thing; they wouldn’t have to be capable of understanding it. Remember the Master Preference – the one preference that necessarily applies to every sentient being, regardless of their mental capacity, simply by virtue of the fact that they’re capable of having preferences at all. What it says is that no matter what, you’ll always prefer outcomes that maximize your utility, even if you aren’t explicitly aware beforehand that they will do so. Your true preference if you’re about to step in front of a speeding bus (i.e. the preference extrapolated from your Master Preference) will be to avoid doing so, even if you aren’t aware on an object level that the bus exists at all. Your true extrapolated preference if your beverage is poisoned will be to avoid drinking it, even if you aren’t aware on an object level that it’s poisoned. Your true extrapolated preference if your friend has the chance to buy a winning lottery ticket on your behalf will be that they buy it on your behalf, even if you aren’t aware on an object level that there’s even a lottery going on in the first place. And so on. And this Master Preference is one that you can hold without ever having to realize or understand that you hold it; it’s simply an emergent by-product of the concept of preference itself. As we discussed earlier, even barely-sentient animals have it. Even barely-sentient brain-damage patients have it. And yes, even barely-sentient fetuses have it – meaning that in theory, it’s entirely possible for them to endorse a particular outcome (such as becoming part of a social contract) without ever explicitly realizing it.
To illustrate this point with an analogy, let’s go back to the identical-clones version of our button-pressing game – only this time, instead of identical human participants, let’s imagine that the participants are all, say, identical dogs. Instead of being rewarded with dollars, they’re rewarded with dog toys or treats or whatever; and instead of having to manually press a button to indicate whether they’re a cooperator or a defector, they’re hooked up to a supercomputer that can perfectly simulate how their brains would react to each of the two possible scenarios (“all cooperate” or “all defect”), then measure whether the “all cooperate” scenario would give them more utility than the “all defect” scenario, or vice-versa. If the “all cooperate” scenario would give them more utility (which it would), then that would mean that their extrapolated preference was to cooperate – so the computer would mark them down as cooperators. But if the “all defect” scenario would give them more utility (which it wouldn’t), then that would mean that their extrapolated preference was to defect – so the computer would mark them down as defectors. Obviously, the dogs would have no real understanding of what was happening or how the game worked – but they wouldn’t have to understand; simply the fact that they’d prefer the results of the “all cooperate” choice would register in the computer as an endorsement of that choice over the “all defect” alternative – and so all the dogs would end up jointly committing to the “all cooperate” choice without even realizing that they were doing so.
Likewise, while it’s true that fetuses aren’t capable of understanding that it would be in their best interest to precommit to a universal social contract, they are capable of meta-preferring outcomes that are in their best interest, even if they don’t know what any of that actually entails. As far as they’re concerned, having some indirect mechanism opt them into the social contract (in accordance with their Master Preference) works just as well as if they directly opted themselves into the social contract via their own conscious decision-making. So what does that indirect mechanism consist of, exactly? After all, it’s not like every fetus is hooked up to a brain-simulating supercomputer that can extrapolate their meta-preferences and then register the corresponding precommitments on their behalf. Well, one answer we could give might be that when they enter into the social contract, it’s not because they’ve explicitly entered themselves into it, or because some supercomputer is acting on their behalf, but because the people already existing out in the world are playing that role (like someone signing a literal paper contract on behalf of a friend who wished to sign it themselves but couldn’t), entering the pre-born individuals into the social contract simply by virtue of their own preference (as already-born individuals) that everyone act morally toward them. That is to say, any time someone thinks (even in just a vague, subconscious way) that others ought to do what’s good, or that they have an obligation to behave morally – which, of course, is something that people think all the time, since they desire to be treated morally themselves – that person is essentially doing the equivalent of pressing the green button on behalf of everyone in the original position who would want someone to do that for them. 
They’re opting them into the social contract merely by thinking it (since after all, thoughts are all that’s necessary to reify a social contract; there’s no physical action necessary like pressing a button). On this account, it’s possible that the first time a conscious being became intelligent enough to conceive of the idea that individuals ought to act morally (even if they didn’t think of it in those exact words but only conceptualized it abstractly), that thought alone would have been enough to opt every sentient being into the social contract from that point forward. It would have been the equivalent of declaring, “Any sentient being in the original position who agrees with this statement, that they would prefer (or meta-prefer) to be precommitted to acting morally after they’re born, is hereby precommitted to acting morally after they’re born.” And of course, since every sentient being in the original position would have a Master Preference agreeing with that statement, then they would all be precommitted to acting morally after their birth. It would be like when Odysseus’s crew bound him to the mast so he could enjoy the greater pleasure of hearing the Sirens’ song – only with everyone being figuratively bound to the mast at once, so they could all enjoy the benefits.
That’s one way of thinking about how people could be opted into the social contract without ever being conscious of it: It might be that we’re all simply born into it. Another idea, though, which I find more interesting, is that maybe it’s not necessary to have some outside agent (like a supercomputer or another person) opt you into the social contract at all; maybe, merely by the fact that your Master Preference is to cooperate, you’ve already implicitly opted yourself into it. That is, maybe the very logic of meta-preference itself necessitates you being precommitted to the social contract, simply by virtue of your implicitly endorsing that outcome over the alternative. Here’s how we might think of it: When you’re in the original position, you have an extrapolated preference which says, “If I’m currently in the original position, and if I’m about to be born into a world full of other beings who also started their lives from the original position, then (for acausal trade reasons) my preference is to act to maximize global utility from this point forward, even if it later comes at the expense of my own object-level utility.” True, you aren’t yet able to consciously understand that you have this preference; but that doesn’t really matter, because your awareness of it has no bearing on whether it actually applies to you or not. Given the forward-facing nature of its terms (“from this point forward”), it remains in effect over you for the rest of your life. And that in itself is enough to make it a kind of precommitment, in the same way that a preference to, say, never visit a particular restaurant under any circumstances (even if you later think it’d be a good idea) would constitute an implicit precommitment to never visit that restaurant. In other words, the preference is the precommitment, simply because it includes terms that apply unconditionally to your extended-future behavior. 
And although that doesn’t mean that you’ll be physically compelled to maintain your precommitment throughout the rest of your life (again, you don’t have to do anything in the literal sense – you can always go back on any precommitment you might have), it does mean that if you do later violate your precommitment (by trying to maximize your own object-level utility rather than global utility), you won’t just be going against the preferences of those others whose utility you’re reducing; you’ll also, in a sense, be going against your own preferences – maybe not at the object level, but at the meta level. You’ll be “getting your actions wrong,” in the same sense that someone might get a math problem wrong by saying that 2+2=5 or something – because in this framework, morality really is like a math problem that has a right answer and a wrong answer; and the right answer is that everyone be implicitly committed to maximizing global utility from the moment they first begin existing, and that no one ever go back on that commitment.
Of course, I should include an important qualification here, because this answer isn’t quite an absolute in every case. After all, there are plenty of individuals, like animals and severely brain-damaged people, who simply don’t have the mental capacity to understand and uphold such moral obligations – so it hardly seems fair to expect that those individuals should be able to act just as morally as those who can fully grasp the concept of morality. Where, then, do they fit into this system? Can we really consider them to be acting wrongly if they harm others without realizing that what they’re doing is bad? Are they just left out of the social contract on the basis that they can’t hold up their end of the bargain? Well, let’s think back to the original position again. When you’re behind the veil of ignorance, you don’t know whether you’re going to ultimately turn out to be a cognitively healthy human, or a human with cognitive disabilities, or an animal of some other species with a lower mental capacity. Considering that fact, it wouldn’t exactly make sense for your Master Preference to precommit you to an absolutist social contract that would exclude you if you ended up being born as one of those individuals with a lower mental capacity; by definition, the extrapolated preference that would most maximize your expected utility would have to be one that accounted for your different possible levels of cognitive ability. What this means, then, is that when you enter the social contract, you aren’t just making an implicit precommitment to always do what’s right, full stop; what you’re really doing is making an implicit precommitment to always do what’s right to the greatest extent that your cognitive ability allows for. 
And that means that if you happen to be born as, say, a crocodile or a boa constrictor, then you wouldn’t actually be breaking any kind of moral law or social contract if you regularly killed your prey slowly and painfully, despite the fact that your actions were a net negative in utilitarian terms. Your inability to comprehend the morality of what you were doing would exempt you from the kind of moral judgment we apply to most humans. That’s the reason why we don’t hold animals as morally accountable for their actions; and it’s also the reason why we consider mentally ill criminals to be less culpable than mentally healthy ones, and why we’re more forgiving toward children than toward adults, and so on. Moral obligation is something that can only really exist in proportion to a being’s ability to genuinely understand and consider the utility of other beings – because after all, as we discussed earlier, the morality of an action is based on its expected utility, not its actual utility; and if a particular being is only able to consider its own utility, then we can’t accuse it of acting immorally for failing to account for others’. We have to grade these individuals on a curve, so to speak. Even as we do so, though, the important thing to remember is that even though they have a lower capacity to judge the morality of their actions, that doesn’t entitle them to any less moral treatment from us – because after all, they are still part of the social contract, and we more cognitively developed individuals are capable of understanding that and treating them accordingly.
(Incidentally, this point also raises an interesting implication for our earlier discussion regarding the possibility of a “utility monster” like a deity or a super-advanced alien. Assuming such a being had a greater mental capacity than we do – and a greater ability to understand morality – it would accordingly also have an even greater obligation to act morally than we do. That is, even though its most strongly-held preferences would count for more than ours would in the utilitarian calculus, this being would also have a greater obligation than we do to satisfy the preferences of sentient beings other than itself. So although we might still worry that its interests would take priority over our own in incidental ways (like if it needed something important and we were in the way), we might not need to worry so much that it would be morally entitled to, say, torture us for fun (at least not if its nature was otherwise comparable to ours) – because in the same way that it’s immoral for us to torture animals for fun, it would be even less moral for a more advanced being to torture us for fun.)
So all right, with that last caveat out of the way, let’s recap: Goodness is a subjective valuation that sentient beings ascribe to particular outcomes – but it’s a subjective valuation that can be quantified objectively, in the same way that a subjective emotion like disgust can objectively exist in greater or lesser quantities. The more fully a particular action satisfies the subjective preferences of sentient beings (once you weigh all their various preferences against each other), the more objectively good it is. Of course, satisfying individuals’ preferences isn’t always as simple as just satisfying their immediate object-level preferences; individuals can also have (conscious or unconscious) meta-preferences that supersede their object-level preferences. Most fundamentally, every individual has an innate Master Preference in favor of outcomes that produce the highest level of goodness for them, whatever that might entail; that’s what it is to prefer an outcome in the first place. But that can sometimes mean that in order to get your best overall outcome, you have to forgo your preferences at the object level. And the implications of this are most significant before you’re even born, when you’re in the original position. At this point, your Master Preference dictates that what would be best for you would be if you were precommitted to always do what was moral after you were born – because by being precommitted to this goal, you’ll ensure that everyone else who comes into existence in the same way that you do will also be precommitted to it, and that they’ll be obligated to treat you morally throughout their lives. What this means, then, because you don’t need to be conscious of this Master Preference for it to apply to you, is that you do in fact assume an obligation to do what’s right from the moment you first begin existing – as does everyone else – in an implicit, universal social contract. 
And if you ever defect from this obligation, you’ll have violated your end of the contract; what you’re doing will be objectively wrong.
That’s the basic idea in a nutshell. As far as I can tell, it’s about as close as we can get to an objective, binding morality (or at least one that doesn’t ultimately rely on just saying, “Well, this is what seems intuitively true” to ground its claims). I’m not 100% sure how airtight it is, of course, but it does feel to me like it’s at least groping in the right direction. If nothing else, it seems to offer a way of addressing the is-ought problem that works for every applicable definition of “ought” (or “should”); that is, we can say something like “If you desire to maximize your own utility – which everyone does, due to their Master Preference – then you should be precommitted to an implicit social contract,” and that statement can fit both the instrumental (“if-then”) and self-interested (“what’s in it for me?”) definitions of “should.” Plus, the whole part about being obligated to meet your precommitment means that it can fit the historical “shalled” definition as well. The statement “Everyone has an implicit obligation to act morally because their Master Preference has precommitted them to it” is fundamentally an “is” statement, despite fully functioning as an “ought” statement. So that’s one reason why I like this framework: the way it addresses the question of should-ness and the is-ought problem.
In that same vein, another reason I like it is because it brings together so many other philosophical approaches toward morality – which might superficially seem to contradict each other – into one unified whole. Utilitarianism plays a major role, of course, as do contractarianism and Rawlsianism – but there’s even a role for things like natural-rights theories and Kantian deontology, despite some of my earlier criticisms of those systems. After all, some of Kant’s most central ideas were things like moral duty, universalizability, and the categorical imperative – and all of those ideas actually fit perfectly into this framework. The principle of universalizability, most famously encapsulated in Kant’s dictum that you should “act only according to that maxim whereby you can, at the same time, will that it should become a universal law,” appears in the acausal trade part of the original position scenario, in which the kind of behavior you precommit to practicing after you’re born is literally shared by everyone else who begins life from that same original position. The idea of a categorical imperative – an unconditional moral rule that must always be followed regardless of context – appears in the rule that we should always seek to maximize global utility no matter what. (Actually, Kant’s version of the categorical imperative was the rule of universalizability I just quoted, but the rule to always maximize global utility also functions as a categorical imperative in this system.) And our universal obligation to uphold this rule reflects Kant’s ideas about moral duty; we all have a moral duty to abide by the social contract that we’ve precommitted to, and this duty supersedes any object-level desires we might have. So ultimately, despite our system being rooted in utilitarianism, there’s plenty of room for duty-based ethics as well – it’s just that our only duty is to maximize global utility. 
And by that same token, there’s plenty of room for the idea of moral rights (i.e. the idea that we’re entitled to be treated in certain ways) – it’s just that we really only have one all-encompassing right: the right to be treated in a way that accords with what’s globally moral.
Now that we’ve got the basic skeleton of our moral system in place, then, let’s flesh it out a bit with some potential implications and applications. In particular, now that we’ve straightened out exactly what we mean by terms like “goodness” and “morality” and “expected utility,” let’s turn to the question of what exactly we mean when we say we want to maximize these things. This might seem like a question with a fairly obvious answer; maximizing utility just means creating the highest level of preference satisfaction across the whole universe of sentient beings, right? But defining “the highest level,” as it turns out, isn’t actually as straightforward as you might think. When we say we want to maximize the utility of sentient beings, do we mean that we want to produce the highest sum total of utility, in aggregate terms? (And if so, does that mean we should be trying to have as many children as possible, even if their average quality of life isn’t all that high, just so we can have a greater raw amount of utility in the world?) Or do we mean that we want to produce the highest average level of preference satisfaction for our universe’s inhabitants? (And if so, does that mean we should kill all but the most ecstatic people on the grounds that they’re bringing down the average utility level?)
Well, that last question is easy enough to answer first: No, we obviously shouldn’t kill everyone whose satisfaction level is less than perfectly optimal. Killing people who want to live represents a significant reduction in utility, not an increase. True, in a vacuum, a universe containing (say) a billion extremely satisfied people plus a billion moderately satisfied people would have a lower average utility level than a universe containing just the billion extremely satisfied people alone. But for the purposes of our question here, taking those two universes in a vacuum wouldn’t be the right comparison; the actual choice we’d be making would be between a universe containing a billion extremely satisfied people plus a billion moderately satisfied people, or a universe containing those same two billion people except that half of them get killed. Killing those people wouldn’t remove their utility functions from the equation; it would just reduce their utility levels to zero, and would thereby reduce their universe’s overall utility level as well, both in aggregate terms and in average terms. Granted, the fact that they no longer existed would remove their utility functions from any subsequent utility calculations in future moral dilemmas; but there’s no basis for leaving their preferences out of the utility calculation determining whether to kill them or not, because their preferences would be very much affected by that decision. You can’t just wait an arbitrary amount of time after you commit your actions before you start weighing people’s utility levels; you have to count any and all preferences that your actions affect – whether those effects happen sometime in the future or right away. And that means you can’t kill people off just because you think the world’s average satisfaction level would be higher without them. 
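To make the arithmetic concrete, here’s a minimal sketch (in Python, with invented utility numbers on an arbitrary scale – none of these figures come from the text) of why counting the killed people’s utility functions at zero lowers both the total and the average:

```python
# A toy illustration; all utility numbers are invented stand-ins.
def avg(utilities):
    return sum(utilities) / len(utilities)

extremely_satisfied = [10] * 5   # stand-in for "a billion extremely satisfied people"
moderately_satisfied = [6] * 5   # stand-in for "a billion moderately satisfied people"

# The "in a vacuum" comparison (the wrong one): the smaller universe
# wins on average utility.
universe_both = extremely_satisfied + moderately_satisfied
universe_small = extremely_satisfied
assert avg(universe_small) > avg(universe_both)

# The actual choice: killing people doesn't remove their utility functions
# from this decision's calculus – it reduces their utility levels to zero.
after_killing = extremely_satisfied + [0] * len(moderately_satisfied)
assert sum(after_killing) < sum(universe_both)   # aggregate utility drops
assert avg(after_killing) < avg(universe_both)   # and so does the average
```

The point of the sketch is just that the comparison has to include everyone the decision affects: once the victims’ zeroed-out utility levels stay in the calculation, the killing loses on both the aggregate and the average measure.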
Maybe, if you could find some people whose utility levels were so low that they were actually negative, and there was little hope that they could ever turn positive again – like if they were suffering from an agonizing terminal illness and were begging to die – then killing them could in fact be a good thing overall. But killing people who want to live (even if their satisfaction levels are less than optimal) obviously counts as a reduction in utility – so it can’t be considered moral, for the simple reason that you can’t just judge the morality of an action based on its later aftermath; you have to account for its immediate effects as well.
Now, having said that, it’s important not to overcorrect and start thinking that the immediate effects are all that matter either; the long-term effects still count just as much. This is another issue that sometimes comes up in these ethical discussions – whether we should consider our obligations to future generations to be just as strong as our obligations to each other here in the present, or whether our moral obligations diminish over time. But for the same reason that you can’t just wait an arbitrary amount of time after you commit your actions before you start weighing people’s utility levels, you also can’t just wait an arbitrary amount of time after you commit your actions and then stop weighing people’s utility levels. Even if you don’t expect the effects of your actions to happen until far into the future – in fact, even if the people who will be affected by your actions haven’t been born yet – that doesn’t mean you can simply disregard them or pay them less attention. Again, you have to count any and all preferences that your actions will affect, no matter when those effects happen. So that means your utility estimations have to account not only for whether your actions might cause some immediate harm or benefit, but also for whether they might lead to potential harms or benefits in the distant future. Even some act that might not seem to have any immediate effect at all could still be extremely good or extremely bad depending on what long-term effects it might have on future generations.
Of course, a complicating factor here is the fact that morality is based on expected utility, and you can hardly ever be as certain about the future effects of your actions as you are about their immediate effects. When the effects of your actions are instantaneous, you can usually make a pretty decent estimate of how good or bad they’ll be, who will be most affected by them, and so on. But when the effects extend far into the future, it’s harder to make those kinds of estimations accurately – and the further into the future you look, the more uncertainty there is. If you’re planning on dumping a bunch of toxic chemicals into a nearby river, for instance, and those chemicals will instantly render the local water supply undrinkable, then it’s pretty obvious how negative the outcome of that action will be. But if the chemicals might not spoil the water supply for another hundred years, then it becomes harder to say for certain that the effects will still be just as bad. Maybe water purification technology will have advanced enough in that intervening hundred years that the chemicals will no longer pose a danger; or maybe we’ll all be living on Mars in a hundred years and no one will want to drink the water from Earth anyway; or maybe we’ll all have been destroyed by a meteor or something in the meantime and it’ll all be a moot point. Whatever negative utility you might expect an action (like spoiling the water supply) to produce in a vacuum, then, you have to discount that estimation in proportion to the odds that such a negative outcome might not actually happen. And the further into the future a particular outcome is projected to be, the more likely it is that some unforeseen event will stop it from happening; so all else being equal, the average amount of negative utility you might expect from a harmful action will typically be less if it’s further in the future (and likewise for the amount of positive utility you might expect from a beneficial action). 
That being said, of course, in some cases it’s entirely possible that something might happen in the future that would actually make the outcome more extreme than if the action were taken right now – like in the river-polluting example, maybe some new microbe might emerge in the water supply that would exacerbate the chemicals’ negative effects or something – so in that case, you’d have to adjust your estimations of the future effects to be more drastic than your estimations of the present effects. But for the most part, pushing an action’s effects further into the future tends to bring its expected outcome closer to neutral, simply because of the ever-increasing chance that the world might end or something in the meantime. (At least, that’s my personal intuition; I could certainly be wrong.) Either way, the point here is just that when we discount the expected utility of a far-future outcome, that’s solely because of the increased amount of uncertainty associated with it, not because individuals living in the future are in their own category of moral status, or because future preferences somehow constitute their own category of moral consideration. After all, every moral decision is fundamentally a question of satisfying future preferences; whether those preferences will be satisfied in the immediate future (i.e. right after you make your decision) or in the distant future is just a matter of degree, not of kind. As Derek Parfit sums up: “Later events may be less predictable; and a predictable event should count for less if it is less likely to happen. But it should not count for less merely because, if it happens, it will happen later.”
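The discounting idea can be sketched numerically. This is just an illustration under assumed probabilities (the raw utilities and odds below are invented, not drawn from any real estimate): the harm of spoiling the water supply “in a vacuum” gets multiplied by the probability that the harm actually materializes, and that probability tends to shrink as the outcome moves further into the future:

```python
# A minimal sketch of discounting expected utility by uncertainty.
# The raw utility and probabilities below are invented for illustration.
def expected_utility(raw_utility, p_outcome_occurs):
    """Weight a raw outcome by the probability that it actually happens."""
    return raw_utility * p_outcome_occurs

raw_harm = -100.0                                # spoiled water supply, in a vacuum
immediate = expected_utility(raw_harm, 0.95)     # near-certain if it happens right away
century_out = expected_utility(raw_harm, 0.40)   # purification tech, Mars, meteors...

# Pushing the harm into the future pulls its expected value toward neutral,
# but never flips its sign – future people aren't in a lesser moral category,
# their outcomes are just less certain.
assert immediate < century_out < 0
```

Per Parfit’s point quoted above, the only thing doing any work here is `p_outcome_occurs`; the lateness of the outcome carries no weight of its own.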
In short, we should try to take actions that maximize the utility of future individuals in the same way that we should try to take actions that maximize the utility of present-day individuals, provided that they actually come into existence and are affected by our actions; and if we aren’t sure whether they will come into existence and be affected by our actions, then we should discount our utility estimates in proportion to that uncertainty. But now this raises another question – or more specifically, it brings us back around to the question I raised at the start of this section: If maximizing utility really is our goal, then does that mean that in addition to satisfying the preferences of future beings that we already expect will exist, we’re also obligated to bring new future beings into existence, so as to maximize their utility as well? Are we obligated to have as many children as possible, on the basis that they’d surely prefer existence to nonexistence, and that therefore the only way to truly maximize future utility would be to maximize the number of future people? Is this what we’re actually agreeing to when we enter the social contract? Or are we agreeing to some other definition of utility maximization?
Well, this is where things get a little trickier. See, when we talk about maximizing utility within the framework I’ve been outlining here, we aren’t just talking about maximizing abstract self-contained units of happiness; we’re talking about satisfying the preferences of sentient beings. As Parfit writes:
[The] principle [that the goal of morality is to maximize the aggregate quantity of utility in the universe] takes the wrong form, since it treats people as the mere containers or producers of value. Here is a passage in which this feature is especially clear. In his book The Technology of Happiness, [James] MacKaye writes:
Just as a boiler is required to utilize the potential energy of coal in the production of steam, so sentient beings are required to convert the potentiality of happiness, resident in a given land area, into actual happiness. And just as the Engineer will choose boilers with the maximum efficiency at converting coal into steam, Justice will choose sentient beings who have the maximum efficiency at converting resources into happiness.
This Steam Production Model could be claimed to be a grotesque distortion of this part of morality. We could appeal [instead] to
The Person-affecting Restriction: This part of morality, the part concerned with human well-being, should be explained entirely in terms of what would be good or bad for those people whom our acts affect.
This is the view advanced by [Jan] Narveson. On his view, it is not good that people exist because their lives contain happiness. Rather, happiness is good because it is good for people.
Given this premise, we can definitively say that if some being’s preferences exist or will exist in the future, then we’re obligated to include them in our moral calculus; that much is clear enough. But what’s not so clear is whether an as-yet-nonexistent being’s hypothetical preference that they be brought into existence could also be included in those considerations – because it’s hard to see how such a preference could even exist in reality at all. Sure, once a particular being came into existence, then they’d have a preference to continue existing; but it wouldn’t really make sense to talk about a nonexistent being having a preference to begin existing in the first place – because after all, such a being wouldn’t exist, and therefore couldn’t hold any such preference. The only way a preference can exist is if there’s a sentient being there to hold it; if there’s no sentient being, there can’t be any preference, much less an obligation to satisfy it. The rule of thumb that I keep coming back to here is that the morality of an action is determined by any and all preferences that it affects (and by “affects,” I mean “satisfies or violates”) – but if you go by that rule, then a hypothetical person’s preferences can’t be affected by your decision to bring them into existence, because they don’t have any preferences until after the fact. Obviously, if you do decide to bring them and their preferences into existence, then those preferences will be affected by whatever situation you bring them into – so in that context, you will have to consider how they’ll be affected – but you can’t put the cart before the horse. Again, preferences have to actually be satisfied or violated by your action in order for them to count toward your decision to take that action. 
What this suggests, then (as Johann Frick puts it), is that “our reasons to confer well-being on people are conditional on their existence.” And what this means is that when we talk about our obligation to maximize global utility, we’re not just talking about an obligation to maximize the raw quantity of utility in our universe; what we’re really talking about is an obligation to maximize the utility of the beings that exist or will exist in our universe. It’s a subtle difference, but a crucial one – because while the former commitment would require us to create and satisfy as many new people and preferences as we could, the latter requires no such thing. As Narveson phrases it, the goal is simply “making people happy, not making happy people.” Or in William Shaw’s words, “Utilitarianism values the happiness of people, not the production of units of happiness. Accordingly, one has no positive obligation to have children. However, if you have decided to have a child, then you have an obligation to give birth to the happiest child you can.”
This might not seem quite right, given everything else we’ve been considering here. After all, even if there can be no such thing as a nonexistent person having a preference to start existing, wouldn’t bringing a new person into existence still be a utility-positive act (assuming their life was worth living) simply because of all the other preferences that they’d develop and then satisfy after they were born? And yes, it’s true that creating and then satisfying new preferences would in fact produce a greater quantity of utility in aggregate terms. But again, that’s not really the question here; the real question is whether it’s morally obligatory to give a nonexistent person the opportunity to create and satisfy all those hypothetical preferences in the first place. That is, when you tacitly agree to enter into the social contract, does that contract only include real people, or does it include hypothetical people as well? Is it even possible for a hypothetical person to enter into a social contract at all, or is a social contract conditional on the existence of its participants?
As another way of answering this, let’s imagine going back to the original position again. Behind the veil of ignorance, you have no idea who you’ll turn out to be after you’re born, or what your traits or preferences will be, or anything like that. Your only givens are that you exist, you’re sentient, and you accordingly have a Master Preference that your utility be maximized. We also know that whatever social contract you agree to in this position (as an extension of your Master Preference), every other being that comes into existence will also be precommitted to the same social contract, since every other being also starts its existence from the original position, and since all three of the above conditions (existence, sentience, Master Preference) must necessarily be met before a sentient being can do or think or prefer anything else. Given this situation, then, what kind of social contract would your Master Preference commit you to? Well, first and foremost, it would commit you to a social contract that accommodated the preferences of other sentient beings existing at the same time as you, since that would ensure that those other beings would likewise be committed to accommodating your preferences. Additionally, it would commit you to a social contract that accommodated the preferences of sentient beings that would exist in your future, since that would ensure that you’d be coming into a world where those who had lived before you had likewise been committed to accommodating your preferences (since you’d exist in their future). But would your Master Preference also compel you to enter into a social contract that included hypothetical beings along with these real ones? Would it be in your best interest to have yourself and everyone else committed to continually bringing new beings into existence, even at your own expense, as long as these new beings’ utility gains outweighed your utility losses? 
Even behind the veil of ignorance, where you’d have no idea who you might end up becoming, it’s hard to see how this could be the case – because the one thing you would know about yourself in the original position was that you already existed. In terms of your own self-interest, you wouldn’t have any reason whatsoever to want everyone to be committed to bringing new hypothetical beings into existence – because you’d know with 100% certainty that you wouldn’t be one of those hypothetical beings, but would be one of the already-existing ones who might be made worse off by bringing those new beings into existence. In other words, there’s no way that being committed to this kind of social contract could be in your best interest as an individual – so consequently, your Master Preference would not commit you to it. Granted, you would still be obligated to bring new beings into existence if doing so would increase the utility of those who already existed or would exist in the future, since you would be committed to improving the utility of that latter group. But for that same reason, if bringing new beings into existence decreased those existing beings’ utility, you’d actually be obligated not to bring them into existence, since your obligations would be toward the real beings, not the hypothetical ones. (It would be immoral, for instance, for you to create a utility monster that decreased everyone else’s utility, even if that utility monster enjoyed much greater positive utility itself.)
So all right, hopefully the premise here should be clear enough; we don’t have any obligations toward hypothetical people, so we aren’t morally required to bring them into existence, even if their lives would be utility-positive overall. Having accepted this premise, though, we’re now forced to grapple with another question: Does its logic apply in the other direction as well? That is, if we don’t have any obligations toward hypothetical people, does that mean there would be nothing wrong with bringing someone into existence whose life would be utility-negative (i.e. so utterly miserable that death would be preferable)? If we felt no need to recognize the preferences of people who didn’t exist, then would that mean we’d be free to ignore this person’s expected preference not to live? I think it’s fair to say that most of us find this idea a lot less intuitive; as Katja Grace writes, the popular consensus around this question seems to be that “you are not required to create a happy person, but you are definitely not allowed to create a miserable one.” In fact, this question has become such a sticking point in moral philosophy that it has come to be known simply as “the Asymmetry,” and has earned a reputation as one of the field’s more stubborn puzzles. That being said, though, I do think that the distinction we’ve made between hypothetical people and real people can help us navigate this issue in a coherent way. See, if you aren’t bringing anyone new into existence, then no one is actually affected by your actions; the only beings whose preferences come into play here are hypothetical ones, to whom you have no moral obligations. But if you are bringing someone new into existence, then that new person is very much going to be affected by your actions, so you are obligated to account for their preferences. (It’s Shaw’s point again: “One has no positive obligation to have children. 
However, if you have decided to have a child, then you have an obligation to give birth to the happiest child you can.”) The moment you decide to bring that new person into existence, you convert their status from “person not expected to exist” to “person expected to exist” – and accordingly, you also convert their moral status from “entitled to no moral consideration” to “entitled to full moral consideration.” Now you might be thinking, “But wait a minute, wouldn’t this go both ways? If I decided to bring a new person into existence whose life was expected to be happy, then wouldn’t the same thing apply to them? Wouldn’t my decision to bring them into existence also turn them from a hypothetical person into a real expected future person whose preferences had to be recognized?” And you’d be right – it absolutely would. But the crucial point here is that making that decision to bring them into existence – not even actually bringing them into existence, mind you, just deciding to do so – is itself an action that could only have been morally justifiable if it didn’t reduce the utility of the real people who already existed or were expected to exist in the future. In other words, the only moral way to introduce a new term into the utility calculus (representing a new expected person) is if doing so wouldn’t reduce the utility levels that are already part of the calculus; it’s only once you’re past this threshold that you can consider whether taking the next step of actually bringing the new person into existence would subsequently allow for enough of their preferences to be satisfied that the act as a whole wouldn’t be utility-negative.
I realize this is kind of a weird point, so let me clarify what I mean here. Let’s say you’ve got a married couple trying to decide whether to have a child. You might regard their decision-making process as happening in two stages: In the first stage, they’ll have to consider the morality of bringing a new set of preferences (i.e. those belonging to a new child) into existence – and at this point, the child will be purely hypothetical, so morality will obligate them to only consider how their choice will affect the real people who already exist or will exist (including themselves). So if they expect that having a child will greatly enrich their own lives and won’t detract from others’ lives too much (or will greatly enrich others’ lives and won’t detract from their own lives too much), then they’ll be morally justified in bringing that new set of expected preferences into existence. The moment they pass this threshold, though, they’ll be introducing the child’s utility function into the equation – and at that point, the question of satisfying its expected preferences will now have to be considered as part of the utilitarian calculus. Once this happens, if they expect the child’s life to be utility-positive (or even utility-neutral) overall, then they’ll still be morally justified in going ahead and having it (although if they change their minds and don’t have it after all, then no preferences will have been violated – so no harm done). But if they realize that the child’s life will actually be utility-negative, so much so that its negative utility will outweigh whatever positive utility its life will bring to others – like for instance, if the child is expected to be born with an incurable and agonizing medical condition worse than death – then they won’t be morally justified in having it. 
Their only moral option will be to abandon their intention to have the child, and to thereby remove its utility function from the moral calculus again, effectively converting it back from a real expected future person into a purely hypothetical person (i.e. reducing its probability of existing from some positive value down to zero).
Of course, in practice, they won’t have to literally go through the whole decision-making process step-by-step in this way; in most cases, they’ll simply be able to recognize in advance what the final result would be, and then form their intention to have a child (or not) based on that anticipated result. (So if they knew they could only ever have a miserable child, for instance, they’d recognize in advance that the negative utility of the calculation’s second stage would compel them to abandon whatever intention they might have formed to have the child based on the first stage, so they’d never form that intention in the first place.) From the inside, then, this decision-making process wouldn’t feel like it was split into two stages; it would just feel like one self-contained act of simultaneously imagining and weighing all the outcomes at once. But in terms of the morality underlying their choice, this is how the “order of operations” of the utility calculus would be working, whether they were consciously aware of it or not. The key point here is just that, as far as the utility calculus is concerned, the act of bringing a new set of preferences (representing a new person) into the world, and the act of subsequently causing those preferences to be satisfied, are two separate acts, each with its own set of moral variables; and while the utility function of the would-be person would count as an input in the latter decision, it wouldn’t count as an input in the former, because at that point the person wouldn’t yet fit into the category of “someone who exists or is expected to exist” – and only individuals that fit into that category are entitled to moral consideration. 
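The two-stage “order of operations” can be made explicit in a short sketch. To be clear, this is my own hedged rendering of the logic above – the function names, the simple “net change ≥ 0” thresholds, and the utility numbers are all illustrative assumptions, not anything the framework itself specifies:

```python
# A sketch of the two-stage decision described above; names and
# numbers are invented for illustration.

def may_form_intention(delta_for_existing_people):
    # Stage 1: the child is still hypothetical, so only the effect on
    # people who exist (or are already expected to exist) counts.
    return delta_for_existing_people >= 0

def may_have_child(delta_for_existing_people, childs_expected_lifetime_utility):
    # Stage 2: once the intention is formed, the child becomes an expected
    # person, and its own utility function enters the calculus.
    if not may_form_intention(delta_for_existing_people):
        return False
    total = delta_for_existing_people + childs_expected_lifetime_utility
    # A utility-negative life obliges abandoning the intention, converting
    # the child back from an expected person to a purely hypothetical one.
    return total >= 0

# A child expected to enrich the parents' lives and have a good life: permitted.
assert may_have_child(delta_for_existing_people=5, childs_expected_lifetime_utility=20)

# A child whose expected suffering outweighs everything else: stage 2 forbids it.
assert not may_have_child(delta_for_existing_people=5, childs_expected_lifetime_utility=-50)
```

Note that the child’s own utility never appears in `may_form_intention` – that’s the “order of operations” point: its preferences only enter the calculus after the stage-1 decision has already been justified on the basis of the people who exist or are expected to exist.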
(Or to use a better framing, maybe I should say that the person’s expected preferences wouldn’t yet fit into the category of “preferences that exist or are expected to exist,” and only preferences that fit into that category can legitimately receive moral consideration – because after all, the preferences are what are actually being weighed in the utility calculus, not the specific individuals holding them.) In the case of having children, of course, we don’t typically notice that creating and satisfying new preferences are two separate acts, because the two are always so tightly coupled that they seem to just be one single act; whenever we introduce a new set of preferences (representing a new child) into the world, it’s always within the context of some specific situation that automatically causes most of those preferences to be immediately satisfied or violated. But just as violating a preference (e.g. by killing someone) isn’t the same as removing it from the utility calculus – despite one immediately preceding the other – adding a preference to the calculus isn’t the same as satisfying it. They’re two separate things, and they have to be evaluated accordingly.
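To make this two-stage "order of operations" concrete, here's a toy sketch in Python. All the numbers, names, and thresholds are my own invention, purely for illustration of the structure of the calculus, not a claim about how real utilities would be measured:

```python
def decide_to_have_child(existing_people_delta, child_expected_utility):
    """Toy model of the two-stage utility calculus described above.

    existing_people_delta: expected utility change for people who
        already exist (the would-be parents, siblings, society, etc.)
    child_expected_utility: the child's own expected lifetime utility.
    """
    # Stage 1: the child is still purely hypothetical, so only the
    # effect on already-existing people can count in the calculus.
    intention_formed = existing_people_delta > 0
    if not intention_formed:
        return False  # no intention formed; the child stays hypothetical

    # Stage 2: having formed the intention, the child is now an
    # *expected* future person, so its utility function joins the calculus.
    if existing_people_delta + child_expected_utility < 0:
        # Abandon the intention, converting the child back into a purely
        # hypothetical person (its probability of existing drops to zero).
        return False
    return True

# A miserable child's negative utility vetoes the plan at stage two...
decide_to_have_child(existing_people_delta=5, child_expected_utility=-100)   # False
# ...but a merely hypothetical child's positive utility can't, by itself,
# get the plan past stage one:
decide_to_have_child(existing_people_delta=-5, child_expected_utility=1000)  # False
```

Note the asymmetry the two calls illustrate: the expected child's utility can *block* a plan once the intention exists, but it can never *generate* the intention on its own.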
Like I said, this is a pretty subtle distinction to make; but its implications are wide-ranging. The most obvious implication, of course, is that you shouldn’t necessarily feel obligated to have kids if you don’t want to. (For the record, I do think that all else being equal, having children is something that increases global utility on average; history has shown that the more people there are in the world, the more successful and prosperous it is, since additional people don’t just take resources from others – they also produce and innovate and make life better for others. But that’s certainly not always the biggest factor; so in situations where having children would be more of a burden than a blessing, this framework would say it’s perfectly fine not to have them.) On a larger scale, this approach also implies that even if all the humans on the planet voluntarily decided to stop reproducing all at once, because for some reason it would reduce their utility if they reproduced, this would be perfectly OK in moral terms – because in such a situation, no future generations would actually exist, so no one’s preferences would be violated. (Granted, it would surely take some kind of extreme circumstance to force humanity into this position – like some new worldwide plague that caused childbearing couples to suffer unbearable pain for the rest of their lives or something – but still.)
Aside from our own reproductive choices, though, we can also apply this approach to other areas, such as our interactions with other species. One of the more common arguments in favor of breeding animals for food, for instance – probably the strongest moral argument for it – is that by bringing these new animals into existence, we’re giving them the chance to live utility-positive lives (assuming their living conditions are humane), which they otherwise would never have gotten. So even though we’re killing them after only a few months of life, that’s still better for them than if they’d never been born at all, right? But aside from the obvious intuitive objections to this line of thinking (Would it still seem as persuasive if we replaced the word “animals” with “people of a minority race” or something like that?), the argument also fails to hold up under the preference-satisfaction framework described above. The fact that a hypothetical turkey would enjoy its life if it were brought into existence isn’t a valid reason for doing so – because until you’ve actually justified the act of bringing the turkey’s preferences into existence in the first place, its utility function isn’t part of the utility calculus. You could still try to justify that choice for reasons other than the turkey’s utility – like the utility boost you’d get from eating the turkey’s meat later – but if you did, then as soon as you ostensibly justified bringing the turkey into existence, its expected preferences suddenly would count for something. In particular, the turkey’s positive expected quality of life – which hadn’t previously counted as a valid reason for bringing it into existence – would now count as a valid reason for keeping it alive after it was hatched. 
And assuming its desire to stay alive outweighed your desire to eat it, that would mean that the moral thing to do would be to keep it alive – so you’d no longer be able to enjoy the utility boost of eating its meat after all (unless you waited until the end of its natural lifespan to do so).
To illustrate this point a little more clearly, here’s what the whole scenario might look like in numerical terms (albeit with the disclaimer that these numbers, like all the others I’ve been making up here, are purely illustrative and aren’t intended to reflect real-life utility scales):
- Outcome A: You don’t bring any new turkeys into existence. The net change in global utility is zero.
- Outcome B: You bring a new turkey into existence, spend all the necessary time and money to raise it to full size, then release it into the wild on Thanksgiving Day to live out its natural lifespan. You lose 20 utility from the experience (since you do all the work of raising the turkey but don’t get to eat it), while the turkey derives 1000 utility from being allowed to live past Thanksgiving (not counting whatever utility it had already derived from its life up to that point), for a total global utility increase of +980.
- Outcome C: You bring the turkey into existence, spend all the necessary time and money to raise it to full size, then kill and eat it. You gain 10 utility from the experience (since your +30 gain from eating the turkey outweighs the -20 loss from having to raise it), but the turkey misses out on the 1000 utility it would have derived from being allowed to live past Thanksgiving, so global utility rises by a mere +10.
As you can see here, if you decide to bring the turkey into existence, you’ll be morally obligated to keep it alive, since killing it (Outcome C) would cause a 970-point reduction from what the global utility levels would otherwise be (in Outcome B). Outcome C, in other words, is off the table no matter what. That means that when you initially decide whether to bring the turkey into existence or not, your only real choices are between Outcome A – which would leave your baseline utility level unchanged – and Outcome B – which would reduce your utility by 20. And although Outcome B would create plenty of utility for the hypothetical turkey, you have no obligations toward as-yet-hypothetical beings – only toward real ones. So that means that, when you make your decision, you’re morally obligated to maximize your own utility and select Outcome A over Outcome B. In short, the best outcome here is to never bring the new turkey into existence in the first place. You wouldn’t be doing yourself any favors by reducing your utility without any positive benefit, and you wouldn’t be doing the turkey any favors by bringing it into existence – but you would be doing it tremendous harm by killing it prematurely. Whether we’re talking about humans or animals, then, the bottom line here is worth reiterating: You are obligated to satisfy the preferences of sentient beings, but you aren’t obligated to create new preferences to be satisfied. Whenever the latter can only come at the expense of the former, your obligation is to satisfy the real preferences, not the hypothetical ones.
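The same reasoning can be run as a quick Python sketch, using the made-up numbers from the list above:

```python
# Utility deltas for each outcome, split by whose preferences they affect.
# (Numbers are the illustrative ones from the bullet list, not real scales.)
outcomes = {
    "A": {"you": 0,   "turkey": 0},     # never bring the turkey into existence
    "B": {"you": -20, "turkey": 1000},  # raise it, then let it live
    "C": {"you": 10,  "turkey": 0},     # raise it, then kill and eat it
}

def global_utility(name):
    return sum(outcomes[name].values())

# Once the turkey exists, its preferences are real: killing it (C) costs
# 970 points of global utility relative to sparing it (B), so C is ruled out.
assert global_utility("B") - global_utility("C") == 970

# That leaves the initial choice between A and B -- and at that point the
# turkey is still hypothetical, so only *your* utility counts:
initial_choice = max(["A", "B"], key=lambda name: outcomes[name]["you"])
print(initial_choice)  # prints "A": don't bring the turkey into existence
```

The two-step filter is the whole point: Outcome C gets eliminated by the turkey's real preferences, and then the turkey's hypothetical preferences are barred from tipping the remaining A-vs-B choice.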
(And just to drive this idea home, consider one final point: If we really wanted to test the limits of this principle, we could do away with the complication of introducing new beings into the world entirely, and just look at individual preferences alone – and the same logic would still apply. In fact, we could even narrow our scope down so far as to just consider the introduction of new preferences within one single person, and the moral reasoning would scale perfectly. Imagine, for instance, if you as an individual could choose to give yourself some new preference (say, by taking a magical pill or something). Would you take the pill? Well, the answer would obviously depend on what the new preference was. If the pill would give you a new preference for, say, eating healthier, then you might rightly consider taking it to be in your self-interest, since it would help you with your already-existing preferences. (This would be analogous to bringing a new person into existence because their existence would be a net positive for all the people who already existed.) On the other hand, though, if you were offered a pill that would give you a new preference for, say, obsessively counting and re-counting your own eyelashes in the mirror all day, then it would most definitely not be in your self-interest to take it – because regardless of how much utility you might get from satisfying this new preference, doing so would undermine your ability to satisfy your already-existing preferences. (This would be analogous to bringing a new utility monster into existence, which would detract from the rest of the world’s utility despite enjoying immense positive utility itself.) 
It would be silly to claim that the utility you’d get from satisfying the eyelash-counting preference – even if it was a substantial amount of utility – would be a valid argument in favor of creating that new preference in yourself, if creating it would involve even the slightest reduction in the satisfaction of your existing preferences. That would be like if a government, in the name of trying to achieve maximal national security, went out and created as many new enemies as possible so it could spend even more money and resources protecting itself; it would be a fundamental misunderstanding of what “maximal national security” meant. True, I’ve talked a lot about how your Master Preference means that you’ll always prefer that your preference satisfaction be maximized – but what I mean by this is that you’ll always be in favor of whatever maximizes the satisfaction of the preferences you actually hold, not whatever maximizes the satisfaction of whatever preferences you could hypothetically hold if they were forced upon you against your will. The fact that you have a meta-preference not to be satisfied in this latter way means that the creation and satisfaction of some new unwanted preference would actually represent a violation of your utility function, not a satisfaction of it. (To use another analogy, it’d be like if your car’s GPS “fulfilled” your wish to get you to your destination as quickly as possible by changing your indicated destination to a closer location; that wouldn’t be what you actually wanted at all.) Now, granted, if you did mistakenly take the pill and become obsessed with eyelash-counting, then it would be better for you to satisfy that preference than not to satisfy it. But the point is just that the goodness of satisfying that preference would be conditional on the preference actually coming into existence first, not an argument for bringing it into existence in the first place. 
At the risk of using too many analogies here, Frick provides two more good ones. First, he points out that creating new preferences to be satisfied is a lot like making promises: if you make a promise, then it’s a good thing to be able to keep it – but the mere fact that you think you’d be able to keep a promise if you made it does not, in itself, constitute a sufficient reason to go around making as many promises as possible, just so you could then keep them. Second, he likens it to owning an oxygen mask so that you can climb Mount Everest: it would be better to own an oxygen mask if you were planning on climbing the mountain, but merely owning one doesn’t obligate you to make the climb in the first place. These are fundamentally conditional propositions – and the same is true of creating new preferences, not only new individual preferences within one person, but new bundles of preferences (i.e. new people) in the world. The nature of preference satisfaction is conditional in just this way.)
I opened the last section by asking whether the goal of morality is to maximize aggregate utility or average utility. In most situations, this isn’t really a relevant question, since each potential outcome will have the same population, so the outcome with the highest average utility and the outcome with the highest aggregate utility will be one and the same. But in scenarios that do involve population differences, there can be a significant difference; for example, if you bring a new person into the world whose utility is below average but still positive, you can increase the aggregate utility level while decreasing the average. So which standard is the right one to use? Well, I’d be remiss if I didn’t bring up Parfit’s work here, since he’s undoubtedly the most famous thinker on the subject. What he showed, though, was that just trying to use one of these standards or the other isn’t really workable. For instance, let’s say you decided to go with average utility as your metric of choice. If this were your only standard for judging goodness and badness, then you’d have to conclude that a world containing ten million people who were all suffering extreme negative utility (e.g. having to undergo constant, unending torture) would be no worse than a world containing just ten people with the same average utility levels. (Parfit calls this “Hell One vs. Hell Two.”) That can’t be right, can it? But similarly, if you decided to stick with aggregate utility instead, you’d have to conclude that a world containing ten million people whose lives were all maximally ecstatic – i.e. people who had extremely high average levels of positive utility – would be worse than a world containing trillions of people whose lives were barely worth living – i.e. people whose utility levels were barely positive, but whose population levels were so much higher that their positive utility added up to a higher sum in aggregate terms. 
(Parfit calls this “the Repugnant Conclusion.”) That doesn’t seem right either. So how do we resolve this impasse?
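The wedge between the two standards is easy to see with a few invented numbers: adding a person whose utility is positive but below average raises the aggregate while lowering the average.

```python
population = [10.0, 10.0, 10.0]             # three people, utility 10 each
old_total = sum(population)                 # 30.0
old_average = old_total / len(population)   # 10.0

population.append(4.0)                      # newcomer: positive, but below average
new_total = sum(population)                 # 34.0
new_average = new_total / len(population)   # 8.5

assert new_total > old_total      # the aggregate standard calls this progress
assert new_average < old_average  # the average standard calls it a step backward
```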
This is another one of those notorious puzzles in moral philosophy that has always seemed to evade an easy answer. Again though, as with the Asymmetry, I think the framework I’ve been describing here offers us a plausible way of addressing it (and resolving both of Parfit’s thought experiments). By distinguishing between real people and hypothetical people, it provides a basis for saying that it would be immoral to add more people to the world in both cases. In the Hell One vs. Hell Two example, we can say that it would be immoral to add more people because they would be real people who would be entitled to moral treatment, and giving them such a negative-utility life would therefore count as a negative in the global utility calculus. And in the Repugnant Conclusion example, we can say that it would be immoral to introduce a bunch of new people into the world if doing so would lower the utility levels of those who already existed, because the as-yet-hypothetical people we were considering adding wouldn’t be real, so there wouldn’t be anything wrong with not bringing them into existence; no preferences would be violated.
In both of these thought experiments, the reason why the traditional average/aggregate dichotomy fails is that it frames the dilemma as a simple side-by-side comparison between the utility levels of two different universe-states. But this traditional approach overlooks the fact that in practice, it’s not actually possible to just magically change from one universe-state to another. In practice, you can only get from one population level to another by either creating new beings or killing existing ones – and both of those acts have built-in utility implications of their own. In other words, the relevant variable here isn’t just the static utility level of each universe-state and nothing else; it also matters how you go from one universe-state to the other. Morality is a pathfinding tool. If you’re only comparing after-the-fact utility levels, you’ll miss half the picture.
As an illustration of this point, consider that in the Repugnant Conclusion scenario, it may actually be both immoral to go from Universe A (the happy but sparsely-populated universe) to Universe B (the more populous but less happy universe), and also immoral to go from Universe B to Universe A. After all, in order to go from A to B, you’d have to reduce the utility levels of the already-existing population to introduce the new people, and our model wouldn’t condone that. But in order to go from B to A, you’d have to kill off most of the population to raise the utility of the survivors, and our model most definitely wouldn’t condone that either. If your only way of thinking about morality was in terms of aggregate utility levels, you might consider B to be unconditionally better than A – and so you might consider it perfectly moral to immiserate A’s population in order to bring about a universe more like B. Likewise, if your only way of thinking about morality was in terms of average utility levels, you might consider A unconditionally better than B – and so you might consider it perfectly moral to kill off most of B’s population in order to raise the survivors’ satisfaction levels to those of A. But under the system I’ve been describing here, the morality of the situation would depend crucially on which universe you were starting off in; the moral asymmetry between bringing people into existence and taking them out of existence would mean that the morality of going from one universe to another wouldn’t necessarily be reversible. So to try and make a moral judgment about which universe was better in an isolated side-by-side comparison, without any indication as to what the starting conditions were, would be like trying to determine which direction an object was moving in, and how quickly, without knowing anything except the object’s current position. 
The morally relevant question here, in short, wouldn’t be whether one static universe-state was better than another in a vacuum – it would be whether the act of going from one universe-state to another would be good or bad – and that answer might be totally different depending on which universe-state was the starting point. Sure, we might rightly be able to say that in some given universe, the average utility level would be higher if all the people who were less-satisfied-than-average with their lives had never been born; but in practice, we wouldn’t have any way of actually getting to that lower-population alternate universe with its higher utility levels – only to an alternate universe in which the population was lower because all those people had been killed and global utility had therefore been massively reduced. To use a physics analogy, the higher-average-utility universe would be outside of our moral light cone, so to speak. (Or to use a chess analogy, it would be like trying to arrive at some board configuration that would be impossible to ever legitimately arrive at according to the rules of the game.) So the answer to the question of whether it would be moral to try to create that higher-average-utility universe anyway (by killing all those people) would be a decisive no, regardless of whether the after-the-fact utility levels would seem more favorable in a vacuum. Again, just because it would be immoral to go from Universe A to Universe B doesn’t automatically mean that going from Universe B to Universe A would be moral. The moral asymmetry between creating and destroying life is critical here.
When it comes to this question of aggregate vs. average utility, then, there is in fact a way of reconciling the apparent disparities and resolving both the Hell One vs. Hell Two dilemma and the Repugnant Conclusion dilemma at the same time. But it requires cutting the Gordian knot of the average/aggregate dichotomy outright, and recognizing that it was never actually the right way to frame these dilemmas in the first place. The actual appropriate standard for whether it would be morally permissible to go from one particular universe-state to another, again, is simply whether doing so would maximize the utility of the sentient beings that already existed or were expected to exist. And so in this sense, the real dichotomy by which we should be judging these kinds of population-related decisions isn’t average utility vs. aggregate utility; it’s hypothetical preferences vs. real preferences.
Now, before I move on from this point, I should mention that Parfit does have an additional variation on his Repugnant Conclusion example that’s relevant to this discussion, which he calls the Mere Addition Paradox. The basic gist of it (as summarized by Wikipedia) is as follows:
Consider the four populations depicted in the following diagram: A, A+, B− and B. Each bar represents a distinct group of people, with the group’s size represented by the bar’s width and the happiness of each of the group’s members represented by the bar’s height. Unlike A and B, A+ and B− are complex populations, each comprising two distinct groups of people. It is also stipulated that the lives of the members of each group are good enough that it is better for them to be alive than for them to not exist.
How do these populations compare in value? Parfit makes the following three suggestions:
1. A+ seems no worse than A. This is because the people in A are no worse-off in A+, while the additional people who exist in A+ are better off in A+ compared to A, since it is stipulated that their lives are good enough that living them is better than not existing.
2. B− seems better than A+. This is because B− has greater total and average happiness than A+.
3. B seems equally as good as B−, as the only difference between B− and B is that the two groups in B− are merged to form one group in B.
Together, these three comparisons entail that B is better than A. However, Parfit also observes the following:
4. When we directly compare A (a population with high average happiness) and B (a population with lower average happiness, but more total happiness because of its larger population), it may seem that B can be worse than A.
Thus, there is a paradox. The following intuitively plausible claims are jointly incompatible: (1) that A+ is no worse than A, (2) that B− is better than A+, (3) that B− is equally as good as B, and (4) that B can be worse than A.
You might not actually share the intuition that B could be worse than A, but if you don’t, consider that the same chain of logic that took us from A to B could just as easily be applied to B to take us to yet another universe called C, with an even larger population and lower quality of life.
And then we could go from C to D, and D to E, and so on – making our way through the entire alphabet – until we finally reached a universe Z, with an extremely large population, but where life was barely worth living at all. At this point, it seems very hard to accept that this universe Z would be better than the original universe A (with its lower population but vastly higher quality of life) – hence the name of the idea we find ourselves having returned to: the Repugnant Conclusion.
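That ladder from A to Z is easy to simulate. With invented growth rates (every step doubles the population while degrading each individual life), the aggregate keeps climbing even as the average collapses:

```python
import string

totals, averages = [], []
pop, per_person = 1_000, 100.0
for universe in string.ascii_uppercase:  # universes A through Z
    totals.append(pop * per_person)
    averages.append(per_person)
    pop *= 2             # twice as many people at each step...
    per_person *= 0.6    # ...each with a worse life

# Aggregate utility rises at every step (since 2 * 0.6 > 1), even though
# by Z the average life is barely worth living at all.
assert all(later > earlier for earlier, later in zip(totals, totals[1:]))
assert averages[-1] < 0.001 * averages[0]
```

The specific multipliers here are arbitrary; any pair whose product exceeds 1 while the per-person factor stays below 1 produces the same divergence between the two standards.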
The important thing to note about this paradox, of course, is that it only really seems paradoxical because it switches its definition of “better” halfway through the thought experiment. If you were judging the scenario solely in terms of average utility, Universe A would be unequivocally better than all the other universes; and if you were judging it solely in terms of aggregate utility, Universe Z would be the best. By trying to appeal to both standards at the same time, the Mere Addition Paradox creates what might seem to be a puzzling contradiction. But once we figure out an actually appropriate standard and stick to it, it becomes clear that there’s nothing paradoxical about the scenario at all. More specifically, if we apply the system I’ve been describing here, then simply knowing how each step in the sequence would lead to the next one would be sufficient grounds to nip the whole progression in the bud. We could say decisively that if we were starting from A, the act of creating a bunch of new people and thereby going to A+ would be immoral, because doing so would ultimately result in A’s original inhabitants having their utility reduced when A+ inevitably progressed to B− and B. True, if we didn’t know that A+ would eventually progress to B, then that would be a different story; remember, morality is based on expected utility, so if we didn’t expect any utility reduction for A’s existing inhabitants, then there wouldn’t be any basis for considering it immoral to introduce the new people of A+. But the fact that the situation would eventually progress to B− and B, and would thereby cause an unexpected utility reduction for A’s original inhabitants, doesn’t show that there’s anything paradoxical going on here; all it shows is that sometimes our utility estimations are inaccurate, and actions that we expect to be utility-positive or utility-neutral for ourselves or others sometimes turn out to be utility-negative, or vice-versa. 
That’s not a problem with our utility calculus – it’s just regular old human fallibility. Like we discussed earlier, it’s the difference between an act being good and an act being moral (i.e. expected to be good).
So all right, we’ve clarified the importance of distinguishing between beings that exist and beings that don’t exist, and what the implications are for population ethics. But what about those cases that are right on the borderline? What about edge cases between beings that don’t exist and beings that do, like fetuses developing in the womb? Does this framework have anything useful to say about moral dilemmas pertaining to these edge cases, like abortion?
I think it does. To be sure, this is a topic with a lot of gray areas; but I think the approach I’ve been describing provides us with a way of navigating these gray areas in a way that’s more logical and systematic than just working off our intuitions alone.
As usual, it all comes back to preference satisfaction. Obviously, in the cases where a new person has already been born and has developed their own preferences (even if those preferences are still relatively rudimentary), the morality is clear-cut; there’s no question that they’re a real person whose preferences have to be recognized. Killing babies is unmistakably bad; I don’t think any ethical person would dispute that.
Where things get a little more controversial, of course, is when we start talking about the period between conception and birth – especially the earliest part of this period. If you’re someone who believes that the fertilized egg is endowed with a soul at the moment of conception, and that killing anything with a soul is inherently immoral, then you might think it’s just as bad to destroy a five-day-old embryo as it is to kill a five-year-old child (although if you believe the soul persists even after the body is destroyed, that raises the question of why destroying the body would be so bad; more on that topic in my previous post). But under the system I’ve been describing here, an action can only be wrong if it violates someone’s preferences – and based on everything science has shown us about embryology, there’s no reason to believe that a pre-sentient embryo is capable of having preferences. During that initial stage of its development (before it develops into a fetus), the embryo is just a clump of cells; it hasn’t formed a brain yet, so it lacks the ability to think or feel or experience anything. In other words, there’s no actual person there yet, much less one capable of holding preferences – so that means there’s no inherent harm in destroying the embryo, aside from whatever effects it might have on the parents and so on. Benefits and harms can only accrue to people who actually exist at some point; if the embryo never develops into such a person, then there’s no basis for including it in the moral calculus.
(There’s an argument often made that early-term abortion is immoral because the embryo is a potential person, and it’s wrong to destroy something that would have become a person without your intervention; but this argument doesn’t really hold up in any other context. Imagine, for instance, if you had a futuristic machine that would allow you to construct a new human being from scratch, from the ground up, in about six hours. If you started running this machine one day, but then changed your mind a moment later and pressed the cancel button before the machine had finished constructing the first sliver of a toenail, would this act be equivalent to outright murder? In fact, would there be anything wrong with it at all, aside from the trivial waste of having to throw away a toenail clipping? It doesn’t seem to me that merely preventing the existence of someone who otherwise would have existed is enough to constitute an immoral act; in order for it to be wrong, there has to actually be a sentient being there whose preferences you’re violating.)
Now, there is an important corollary to this point, which is that if the embryo actually were expected to develop into a person – i.e. if no abortion were performed – then the embryo would merit moral consideration, just like any other expected future person would. For the same reason that it would be immoral to, say, pollute the air for future generations – since it’d be violating their preference for good health – it would be immoral for a pregnant woman to smoke or take hard drugs, even during the earliest stages of her pregnancy, if she expected to actually have her child – since it’d be violating the future child’s preference for good health. It might seem odd to think that it could be morally worse to merely damage a pre-sentient embryo by smoking than it would be to destroy it outright. But remember, preventing new preferences (and new beings) from coming into existence isn’t the same thing as violating preferences that will exist; only the latter can be considered inherently immoral. So while the act of aborting a pre-sentient embryo can’t be considered immoral because it doesn’t violate any sentient beings’ preferences (and there can be no obligation to satisfy a preference that never exists), the act of smoking while pregnant does violate the future child’s preferences, so it can be considered immoral. If there’s one possible outcome in which a child is born, and another possible outcome in which no child is born, then (all else being equal) that fact alone isn’t enough to make one outcome worse than the other; but if there’s one possible outcome in which a child is born with a higher utility level, and another possible outcome in which a child is born with a lower utility level, then that is enough to make the latter outcome morally worse.
(Readers of Parfit might be tempted to bring up the Non-Identity Problem at this point: What if some particular child could only be born under circumstances that would cause it to suffer more harm than a child born under better circumstances? For instance, what if Child X was only conceived because its mother decided not to quit smoking before getting pregnant – whereas if she’d decided to quit first, she would have delayed her pregnancy for a few months and wound up conceiving a totally different child, Child Y? Sure, Child Y’s quality of life would have been better than Child X’s (because of its better health resulting from having a non-smoking mother), but how can we say that Child X is worse off because its mother smoked, when it never would have existed in the first place if not for that fact? The thing is, though, the utility calculus doesn’t make any such distinctions between the specific identities of the people whose preferences it quantifies; all it counts is the preferences themselves. As far as the calculus is concerned, the two alternatives in this scenario are either (1) a child is born with a preference for good health and a preference to continue existing, and both of those preferences are fully satisfied, or (2) a child is born with a preference for good health and a preference to continue existing, and only the latter preference is fully satisfied. (Remember, a hypothetical child’s initial preference to be brought into existence in the first place can’t be included in the calculus, because it wouldn’t be possible for such a preference to exist prior to the child itself existing.) According to the utility calculus, then, the first outcome is morally better, and there’s no need to even bring the preference holders’ specific identities into the equation at all. The Non-Identity Problem, under this model, never even comes up in the first place.)
At any rate, this might all seem fairly straightforward when it comes to pre-sentient embryos in the earliest stage of development, before they ever develop brains or the ability to hold preferences. But at what point, exactly, does the transition occur between an unthinking clump of cells with no inherent right to exist and a thinking, feeling person whose preferences have to be fully respected? When is that critical threshold of sentience crossed, and how exactly does the transition work in terms of the utilitarian calculus?
Well, when it comes to the biology of it, there really isn’t a critical threshold at which a person’s sentience instantaneously “switches on;” as Michael S. Gazzaniga explains, it’s much more of a gradual process:
As soon as sperm meets egg, the embryo begins its mission: divide and differentiate, divide and differentiate, divide and differentiate. The embryo starts out as the melding of these two cells and must eventually become the approximately 50 trillion cells that make up the human organism. There is no time to lose – after only a few hours, three distinct areas of the embryo are apparent. These areas become the endoderm, mesoderm, and ectoderm, the initial three layers of cells that will differentiate to become all the organs and components of the human body. The layer of the ectoderm gives rise to the nervous system.
As the embryo continues to grow in the coming weeks, the base of the portion of the embryo called the neural tube eventually gives rise to neurons and other cells of the central nervous system, while an adjacent portion of the embryo called the neural crest eventually becomes cells of the peripheral nervous system (the nerves outside the brain and spinal cord). The cavity of the neural tube gives rise to the ventricles of the brain and the central canal of the spinal cord, and in week 4 the neural tube develops three distinct bulges that correspond to the areas that will become the three major divisions of the brain: forebrain, midbrain, and hindbrain. The early signs of a brain have begun to form.
Even though the fetus is now developing areas that will become specific sections of the brain, not until the end of week 5 and into week 6 (usually around forty to forty-three days) does the first electrical brain activity begin to occur. This activity, however, is not coherent activity of the kind that underlies human consciousness, or even the coherent activity seen in a shrimp’s nervous system. Just as neural activity is present in clinically brain-dead patients, early neural activity consists of unorganized neuron firing of a primitive kind. Neuronal activity by itself does not represent integrated behavior.
During weeks 8 to 10, the cerebrum begins its development in earnest. Neurons proliferate and begin their migration throughout the brain. The anterior commissure, which is the first interhemispheric connection (a small one), also develops. Reflexes appear for the first time during this period.
The frontal and temporal poles of the brain are apparent during weeks 12 to 16, and the frontal pole (which becomes the neocortex) grows disproportionately fast when compared with the rest of the cortex. The surface of the cortex appears flat through the third month, but by the end of the fourth month indentations, or sulci, appear. (These develop into the familiar folds of the cerebrum.) The different lobes of the brain also become apparent, and neurons continue to proliferate and migrate throughout the cortex. By week 13 the fetus has begun to move. Around this time the corpus callosum, the massive collection of fibers (the axons of neurons) that allow for communication between the hemispheres, begins to develop, forming the infrastructure for the major part of the cross talk between the two sides of the brain. Yet the fetus is not a sentient, self-aware organism at this point; it is more like a sea slug, a writhing, reflex-bound hunk of sensory-motor processes that does not respond to anything in a directed, purposeful way. Laying down the infrastructure for a mature brain and possessing a mature brain are two very different states of being.
Synapses – the points where two neurons, the basic building blocks of the nervous system, come together to interact – form in large numbers during the seventeenth and following weeks, allowing for communication between individual neurons. Synaptic activity underlies all brain functions. Synaptic growth does not skyrocket until around postconception day 200 (week 28). Nonetheless, at around week 23 the fetus can survive outside the womb, with medical support; also around this time the fetus can respond to aversive stimuli. Major synaptic growth continues until the third or fourth postnatal month. Sulci continue to develop as the cortex starts folding to create a larger surface area and to accommodate the growing neurons and their supporting glial cells. During this period, neurons begin to myelinate (a process of insulation that speeds their electrical communication). By the thirty-second week, the fetal brain is in control of breathing and body temperature.
By the time a child is born, the brain largely resembles that of an adult but is far from finished with development. The cortex will continue to increase in complexity for years, and synapse formation will continue for a lifetime.
That is the quick and easy neurobiology of fetal brain development. The embryonic stage reveals that the fertilized egg is a clump of cells with no brain; the processes that begin to generate a nervous system do not begin until after the fourteenth day. No sustainable or complex nervous system is in place until approximately six months of gestation.
If you had to pick the exact point at which the fetus becomes a person entitled to protection, then – if only for legal purposes – the most reasonable choice would probably be somewhere around the end of the second trimester (which, conveniently enough, is pretty much in line with current US laws as of 2021). Legal necessities aside, though, it seems clear that the actual level of sentience for a developing fetus at any given moment can’t be reduced to a simple black-and-white binary – it exists along a continuum. So what does this mean for the fetus’s moral status? Well, it means that it shouldn’t be reduced to a simple black-and-white binary either. Just like its level of sentience, the fetus’s moral status should also be regarded as something that emerges gradually. As Parfit writes:
[Under the binary view,] there must be a sharp borderline. It is implausible to claim that this borderline is birth; nor can any line be plausibly drawn during pregnancy. We may thus be led to the view that I started to exist at the moment of conception. We may claim that this is the moment when my life began.
[But according to a more gradualist view,] we can […] deny that a fertilized ovum is a person or a human being. This is like the plausible denial that an acorn is an oak-tree. Given the right conditions, an acorn slowly becomes an oak-tree. This transition takes time, and is a matter of degree. There is no sharp borderline. We should claim the same about persons, and human beings. We can then plausibly take a different view about the morality of abortion. We can believe that there is nothing wrong in an early abortion, but that it would be seriously wrong to abort a child near the end of the pregnancy. Such a child, if unwanted, should be born and adopted. The cases in between we can treat as matters of degree. The fertilized ovum is not at first, but slowly becomes, a human being, and a person. In the same way, the destruction of this organism is not at first but slowly becomes seriously wrong.
In this sense, it’s actually a lot like what we were talking about before with the continuum of sentience across animal species, ranging from minimally-sentient animals like insects at the lower end all the way up to humans at the higher end – only instead of this continuum stretching across all the different species of the animal kingdom, here it occurs entirely within one person, stretching across the duration of their gestation period from conception to birth. Just as killing a (barely-sentient) mosquito would be less egregious than killing a (more sentient) mouse, which in turn would be less egregious than killing a (still more sentient) human, aborting an (entirely non-sentient) embryo would be less egregious than aborting a (somewhat sentient) fetus, which in turn would be less egregious than killing a (still more sentient) newborn baby. There’s no specific point at which the act goes from being totally acceptable to totally immoral – it’s not black-and-white like that – there’s a whole spectrum of shades of gray in between. Like Parfit says, it’s all a matter of degree.
For the most part, I think this is pretty intuitive. Most of us would probably agree that if there were, say, an emergency situation in which we could only save the life of either a pregnant woman or her fetus, the mother’s life would take priority over the fetus’s, because she’d be more sentient and her death would therefore be worse. We’d probably also agree that if someone performed a late-term abortion, that would be worse than if they’d performed an early-term abortion, for the same reason. Having said that, though, we can’t just take it for granted that sentience level must be the one and only determinative factor in these kinds of cases. After all, I mentioned an example earlier saying that it would be better to save an infant’s life than a chimp’s life, despite the chimp’s greater level of sentience, because the infant would derive more utility from being granted a longer and fuller life. So if we accept that premise, then clearly the sentience level of the individual(s) in question can’t be the be-all end-all; their expected future lifespan and quality of life must also play a role somehow. This also explains why, to return to another example I gave a moment ago, it would be better to let a turkey live out its full lifespan than to kill it prematurely; the turkey would derive more utility from being granted a longer lifespan, so the most moral thing to do would be to give it the longest lifespan possible.
But if the thing determining the morality of cases like these is expected future utility, rather than present level of sentience, then would that imply that saving the life of someone who was younger would always have to be better than saving the life of someone who was older, regardless of the immediate difference in sentience levels between them? And if so, then would that mean that saving the life of a pregnant woman would actually be worse than saving the life of a fetus, and that early-term abortion would actually be worse than late-term abortion (and for that matter, worse than infanticide)? What’s going on here? Which standard really determines what’s moral – expected future utility, or present level of sentience?
Actually, I think the only sensible explanation here is one that incorporates both. How does that work, exactly? Well, think of it this way: It’s true that in a vacuum, a person’s preference to (say) live for 65 more years will have more expected utility associated with it than an equally sentient person’s preference to live for five more years (assuming roughly comparable quality of life). So for that reason, it would be worse to kill a 15-year-old than to kill a 100-year-old, all else being equal. But crucially, the reason it would be worse wouldn’t necessarily be because of all the future utility you’d be preventing from coming into existence by killing the 15-year-old; remember, after all, preventing new preferences from coming into existence can’t be counted as a negative in the same way that violating existing preferences counts as a negative. The only preferences that can be counted in the utility calculus for your decision are those that are actually satisfied or violated by that decision. And that means that the only real reason why it’s worse to kill the 15-year-old must be the fact that their existing preference structure ascribes more value to the outcome of being given 65 more expected years of life than the 100-year-old’s preference structure ascribes to the outcome of being given five more expected years of life. In other words, the utility calculus itself isn’t directly counting all the possible future utility that may or may not come into existence for each individual; rather, the individuals are incorporating that expected future utility into their existing preferences, and those preferences are what the utility calculus is counting. 
It’s basically just like what we’ve been discussing about the morality of bringing a new person into existence: Such a decision can’t be justified just on the basis of all the hypothetical preferences a new person would be able to satisfy if they existed; it has to be justified on the basis of whether bringing them into existence would satisfy existing preferences. And likewise, the decision to allow a living person to continue to live can’t be justified just on the basis of all the preferences they’d be able to satisfy if they were allowed to continue to live; it has to be justified on the basis of whether allowing them to continue living would satisfy their existing preference to do so. The moral calculation in both of these cases is functionally the same; it’s just that in the latter case, the “new person” whose preferences are being created is “an existing person’s future self.” (And this makes sense, considering that the utilitarian calculus doesn’t recognize the particular identities of the beings whose preferences it’s quantifying. It’s only concerned with the preferences themselves – so for its purposes, there’s no difference between a preference belonging to someone who was just created and a preference belonging to someone who’s existed for 50 years.)
In short, then, the fact that the 15-year-old would derive more utility from an additional 65 years of life than a 100-year-old would derive from an additional five years of life isn’t the determinative factor here; what matters is that the 15-year-old prefers this outcome more than the 100-year-old prefers the alternative. It’s the strength of those held preferences, not the amount of expected future utility that causes them to be held as strongly as they are, that determines whose life should be prioritized. (The expected future utility does still matter, but only in this second-hand way; it isn’t directly counted in its own right.)
And granted, when we’re comparing people whose sentience levels are roughly the same, this distinction is usually a moot point. Usually, the outcome that produces the greatest expected future utility (in this case, the 15-year-old being spared) is also the one that’s most strongly preferred. In fact, even in cases where it wouldn’t seem to be the most strongly preferred outcome – like if the 15-year-old were overly pessimistic about their future and didn’t value it very highly at the object level – their Master Preference would typically supersede whatever mistaken object-level judgments they might make, so (given that their future prospects were actually better than they assumed) their extrapolated preference to continue living would still be stronger than that of the 100-year-old. Of course, that doesn’t mean it would be impossible for a 100-year-old’s preference to ever be greater than a 15-year-old’s; if the 100-year-old’s remaining five years of life were expected to be unbelievably amazing, for instance, while the 15-year-old’s remaining 65 years of life were expected to be barely worth living, then it really might be possible that sparing the 100-year-old’s life would be better (since it would produce greater utility, and the 100-year-old’s extrapolated preference to experience it would therefore be greater than the 15-year-old’s extrapolated preference to live out their own remaining lifespan). But the vast majority of the time, this won’t be the case – the younger person’s expected future utility will be greater than the older person’s – so the fact that the two individuals are equally sentient (and are accordingly capable of desiring future utility equally strongly) will mean that the younger person’s interests will win out.
That being said, though, the reason why it’s so important to draw this distinction between a person’s expected future utility on the one hand, and their preference to continue living so that they can experience that future utility on the other hand, is that when two individuals aren’t equally sentient, this disparity might mean that the more sentient individual’s preference to continue living can outweigh that of the less sentient individual, even if their expected future utility is lower, simply because they’re able to hold their preferences so much more strongly. For example, if you took a 15-year-old with a remaining life expectancy of 65 years, and compared them to a barely-sentient fetus with a life expectancy of 80 years, then (assuming comparable quality of life) the fetus would clearly get more future utility from being allowed to live than the 15-year-old would – but because the fetus was so much less sentient, it wouldn’t actually prefer that outcome as much as the 15-year-old would prefer the alternative, because it wouldn’t be capable of holding any preference very strongly (not even its Master Preference). It’d be like comparing an adult human’s preferences to those of a mosquito; no matter how much the mosquito might prefer some particular outcome, it could never prefer it as strongly as the human would be capable of preferring something – it just wouldn’t be sentient enough – so ultimately, its interests wouldn’t count for as much in the moral calculus.
Here’s another way of thinking about it: Imagine if you had two audio recordings – one of someone talking normally, and another of them yelling loudly. As far as the recordings themselves went, the second one would obviously be louder than the first one. But if you were playing the second recording on a tiny, weak speaker, while playing the first one on a giant, powerful speaker, then it might be possible for the first one to be louder, simply because the smaller speaker wouldn’t be able to reach as high a volume level as the larger one. In the same way, a preference for a particular outcome which, in a vacuum, might be associated with a higher level of expected future utility (the equivalent of the audio in the recording itself being louder) might turn out to carry less weight in the moral calculus if the individual holding it doesn’t actually prefer it that strongly due to their lower level of sentience (the equivalent of having a smaller, weaker speaker that can only reach a limited volume level).
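To make the speaker analogy a bit more concrete, here’s a minimal toy sketch of the idea in Python. To be clear, this is just an illustration of the structure of the argument, not an actual moral calculus the text proposes; the `moral_weight` function, the multiplicative form, and all the numbers are hypothetical assumptions chosen purely for the example:

```python
# Toy sketch of the "speaker" analogy: an individual's effective weight in
# the moral calculus is their raw preference intensity (roughly tracking
# the expected future utility at stake for them) scaled by their sentience
# level -- like a recording's volume being limited by the speaker playing it.
# The function, its multiplicative form, and every number below are
# hypothetical illustrations, not values from the essay.

def moral_weight(raw_preference: float, sentience: float) -> float:
    """Scale a raw preference intensity by a sentience level in [0, 1]."""
    return raw_preference * sentience

# A barely-sentient fetus with a large amount of expected future utility
# at stake ("loud recording, tiny speaker"):
fetus_weight = moral_weight(raw_preference=80.0, sentience=0.05)

# A fully sentient 15-year-old with somewhat less expected future utility
# at stake ("quieter recording, powerful speaker"):
teen_weight = moral_weight(raw_preference=65.0, sentience=1.0)

# Despite the fetus's greater expected lifespan, the teenager's preference
# carries more weight, because the fetus can't hold any preference strongly.
assert teen_weight > fetus_weight
```

Under these made-up numbers the teenager’s interest wins out, which mirrors the essay’s point: expected future utility matters, but only as filtered through how strongly the individual can actually prefer it.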
It’s for this reason, then, that our aforementioned intuitions about the different levels of moral consideration merited by different types of individuals – adults, children, fetuses, non-humans, etc. – can be justified. We can say with total consistency that (all else being equal) aborting a barely-sentient fetus would be less egregious than killing a full-grown human – or even a non-human individual like a chimp – while at the same time acknowledging that as that fetus grew and developed into a child, its increasing level of sentience would allow it to prefer continued survival more strongly than before, so that at some point its preference for continued survival would come to outweigh that of those older individuals with their shorter remaining life expectancies. When exactly these intersections would occur is debatable, of course, but my own personal intuition is that it would be better to spare a chimp than a barely-sentient fetus, better to spare a newborn than a chimp, and better to spare all of the above (except maybe the fetus, depending on just how barely sentient it was) than a 110-year-old on their deathbed.
Now, having said all that, it’s worth noting that up to this point I’ve only been considering things from the perspective of the actual individuals whose lives are at stake – and while that’s often the biggest factor in these kinds of decisions, it isn’t always. When it comes to issues like abortion, there are always other individuals whose interests are involved – most notably the would-be parents – and their preferences have to be included in the moral calculus as well. The mere fact that the fetus would prefer to live is not, in itself, sufficient grounds to justify keeping it alive in every case; that preference has to actually outweigh whatever other preferences might be opposed to it. If the parents, for instance, have a strong enough preference not to carry the pregnancy to term, and the fetus’s level of sentience is still low enough that its preference to continue existing isn’t that strong, then it’s entirely possible that the parents’ preferences might outweigh those of the fetus, making it morally justifiable for them to abort the pregnancy. Needless to say, of course, once the fetus grows enough to become more sentient and develops stronger preferences, at some point its preference to continue existing will reach such a high level that (in almost all cases) it will outweigh any preference its parents might have to abort it. Nevertheless, it might still be possible for such extreme circumstances to arise that its preference to continue living could still be outweighed by the preferences of others. 
In fact, this could even occur after its birth; for instance, if the parents were forced to choose whether to save the life of their newborn on the one hand, or the life of their eight-year-old on the other hand (with whom they’d built an extremely close relationship and to whom they’d grown extremely attached), it’s possible to imagine a scenario in which the trauma of the parents losing their eight-year-old would be so much greater than the trauma of losing their newborn that it would outweigh whatever edge the newborn’s greater future life expectancy might have given it (despite its lower level of sentience) over its eight-year-old sibling.
Obviously this scenario is completely hypothetical, and it should go without saying that people’s preferences can differ dramatically, so the outcome described above wouldn’t necessarily be the right one for everyone who found themselves in such a dire situation. Thankfully, most of us have no reason to expect that we’ll ever have to make such a terrible decision ourselves, so for the most part it’s a moot point anyway. Still, based on people’s responses when these kinds of hypothetical scenarios are presented to them, it does seem that popular intuitions mostly correspond with the outcomes described above (which, I admit, came as a slight surprise to me when I first learned it). As Adam Benforado writes (summarizing research from Geoffrey P. Goodwin and Justin F. Landy):
Imagine that there was a ten-year-old girl [lying on the sidewalk next to a 63-year-old man] and that both she and [he] were in such critical condition that only one could be saved. If you were one of the [emergency responders], would you just flip a coin? Or would you choose to help one victim over the other?
Despite what we say about giving equal respect to all victims, most of us would save the girl. A ten-year-old is at the peak of her perceived value, and participants in experiments involving tragic tradeoffs tend to choose saving her over both older individuals (like [the 63-year-old]) and babies. Privileging the young seems to be grounded in an understanding that old people have had a chance to live a fuller life and that young people have more years left to contribute to the world. But clearly there is a limit: infants and toddlers have not had as much invested in them as their preteen counterparts, and their social relationships are not as significant, so people don’t perceive their deaths to be as costly.
Again, it’s worth stressing that these valuations are highly subjective, and that just because people find something intuitive doesn’t mean that their intuitions are well-founded. Our history is unfortunately replete with examples of people doing tremendous harm because their misguided moral intuitions seemed obvious to them. So it might not in fact be the case that saving a ten-year-old would have to be better than saving a toddler. (I’m not too sure about the answer here myself.) But that’s why it’s so important to figure out exactly how the logic underlying our moral decision-making works, and whether our intuitions actually correspond with what’s morally justifiable. Operating under the mistaken assumption that, say, sentience levels are all that matter, or that expected future utility levels are all that matter, is a recipe for trouble. We have to make sure that we’re correctly accounting for both – because based on all the factors we’ve considered here, it seems clear that both are absolutely fundamental.
Of course, having established the importance of things like sentience and expected future utility, we’re now forced to confront yet another question: How should we think about situations where those factors are entirely absent, like when we consider the preferences of people who’ve recently died? Should those people’s wishes receive any moral consideration whatsoever? Based on everything we’ve been discussing so far, it might not seem that they should. After all, dead people are no longer sentient, and can’t receive any future utility. There’s no longer any person there capable of experiencing benefits or harms. So on what basis could their preferences be included in the utilitarian calculus at all? Well, if you believe in an eternal afterlife, you might not actually accept the premise of the question; you might simply say that dead people still do exist, just in a different form, so their preferences must be recognized just like everyone else’s. But assuming for the sake of argument that it’s possible for someone to completely cease to exist after they die, how might this affect their moral status? Peter Singer ponders the question:
Does skepticism about a life after death force one to conclude that what happens after you die cannot make a difference to how well your life has gone?
In thinking about this issue, I vacillate between two incompatible positions: that something can only matter to you if it has an impact on your awareness, that is, if you experience it in some way; and that what matters is that your preferences be satisfied, whether or not you know of it, and indeed whether or not you are alive at the time when they are satisfied. The former view, held by classical utilitarians like Jeremy Bentham, is more straightforward, and in some ways easier to defend, philosophically. But imagine the following situation. A year ago a colleague of yours in the university department in which you work was told that she had cancer, and could not expect to live more than a year or so. On hearing the news, she took leave without pay and spent the year writing a book that drew together ideas that she had been working on during the ten years you had known her. The task exhausted her, but now it is done. Close to death, she calls you to her home and presents you with a typescript. “This,” she tells you, “is what I want to be remembered by. Please find a publisher for it.” You congratulate your friend on finishing the work. She is weak and tired, but evidently satisfied just with having put it in your hands. You say your farewells. The next day you receive a phone call telling you that your colleague died in her sleep shortly after you left her house. You read her typescript. It is undoubtedly publishable, but not ground-breaking work. “What’s the point?” you think to yourself, “We don’t really need another book on these topics. She’s dead, and she’ll never know if her book appears anyway.” Instead of sending the typescript to a publisher, you drop it in a recycling bin.
Did you do something wrong? More specifically, did you wrong your colleague? Did you in some way make her life less good than it would have been if you had taken the book to a publisher, and it had appeared, gaining as much and as little attention as many other worthy but not ground-breaking academic works? If we answer that question affirmatively, then what we do after a person dies can make a difference to how well their life went.
I’m pretty much in the same boat as Singer here, just in the sense that I can understand both sides of the argument and see why either one might be defensible. On the one hand, my instinctive reaction to the scenario he describes is to feel appalled that anyone would be capable of betraying their colleague’s wishes in such a way. Obviously it would be wrong to just throw away the colleague’s typescript like that, right? But on the other hand, if we accept that morality obliges us to honor the wishes of the dead, even at our own expense, then that raises some tough questions. Where do we draw the line, exactly? Do the preferences of everyone who’s ever died still carry the same moral weight, even centuries later? Should we still give moral weight to the preferences of cavemen? Are we morally bound to fulfill the wishes of the ancient Egyptians, even if those wishes are excessively elaborate and demanding? To take the most extreme example, if there were some kind of apocalyptic event that killed everyone on Earth except for one person, would that person be obliged to spend her days carrying out the wishes of all those dead people, who would never be able to appreciate it, even if it meant she would have to forego her own preferences and sacrifice some significant measure of her own happiness in the process?
I can think of a few possible ways of approaching this issue. One way, of course, would be to just bite the bullet and say that dead people really don’t have any moral standing whatsoever, and that we have no inherent obligation to respect the preferences they held while they were alive about what should happen after their deaths. This wouldn’t necessarily mean that their wishes could be totally ignored, mind you, because there would still be other good reasons to respect them. For instance, maintaining a social norm of respecting dead people’s wishes can provide real value to those who are still living, because it helps reassure them that their own wishes won’t be totally ignored after they die, which can be a genuine comfort (especially for people who might otherwise feel particularly anxious about it). So accepting this premise, that respecting the wishes of the dead is just something we do for the benefit of the living, wouldn’t automatically mean that we’d have to just let people go around desecrating gravesites or whatever. But on the other hand, it would mean that our hypothetical apocalypse survivor wouldn’t have to feel obligated to respect the wishes of the dead in the same way that we do, because there’d be no one left for her actions to affect aside from herself. She wouldn’t have to worry about undermining social norms or making anyone feel anxious; whatever was best for her as an individual, that’s what would be morally best overall.
If these conclusions seem intuitive to you, you might find this Benthamian stance appealing (as I do myself much of the time). Still, it doesn’t seem to me that this approach provides the most satisfactory resolution for examples like Singer’s above, because in that scenario, the dying colleague never told anyone else about her typescript, so discarding it wouldn’t have the effect of undermining social norms or making anyone feel disturbed that their wishes might also be ignored after their death. From the Benthamian perspective, then, ignoring her wishes – or the wishes of any dead person – would be totally fine, as long as no one else was aware of them. But that can’t be right, can it?
Well, maybe. But there is an alternative approach that might be available to us here, which goes like this: Granted, in Singer’s example, it does seem unavoidably true that you wouldn’t be causing any post-mortem harm to your dead colleague by disregarding her wishes. But at the same time, it might be possible that by disregarding her wishes, you would be causing her life to have gone worse, as Singer puts it. See, when you originally made your promise to her to find a publisher for her typescript, you were responding to her preference that you make that promise to her. But the thing is, her preference in that moment wasn’t that you falsely promise her that you’d find a publisher – it was that you truly promise her that you’d find a publisher. So depending on whether you subsequently went on to actually deliver on your promise after she died, your act of making that promise to her could have constituted either a satisfaction of her preferences or a violation of them, in that moment, while she was still living. In other words, the moral goodness of your promise would be predicated on your actually keeping it later; so if you ultimately decided not to do so, you’d be retroactively changing the moral status of your earlier promise from good to bad. You’d be retroactively causing your colleague to have been wronged by you while she was still alive.
This idea of retroactive preference satisfaction is a weird one, so it’s probably worth unpacking a bit more here. To start with, let’s consider the basic premise that an expectation or statement about the future can be made true or false – right now, in the present moment – by whether the events it refers to actually end up happening in the future. Here’s Steven Luper:
Propositions are either true or false, and what makes a proposition true is an event or state of affairs that can be labelled its ‘truth-maker’. For example, the proposition I am typing is made true by my typing right now. In some cases, as when I presently assert I am typing, the truth-maker occurs at the same time as the assertion of the proposition. But the two are not always simultaneous. In some cases, as in I went kayaking last week, truth-makers antedate asserted propositions. In others, the assertions come before the truth-makers. For example, the sun will rise tomorrow is made true now by the sun’s rising tomorrow. If the sun does not rise tomorrow, then the sun will not rise tomorrow is true now. One proposition or the other concerning the sun’s rising is true now even though neither truth-maker has occurred yet.
Luper adds, “No mysterious sort of reverse causation is involved in a proposition’s being made true by states of affairs holding at times before or after the proposition is asserted.” But as he goes on to explain, the fact that such a thing is possible means that in certain cases, it may also be possible for “some desires [pertaining to future events to] be fulfilled retroactively.” He writes:
This happens, for example, when a desire is fulfilled by virtue of posthumous events. Suppose that I now want the sun to rise tomorrow. If the sun will indeed rise tomorrow, my desire is fulfilled now – I get what I want now. (Contrast the case in which I want to be watching the sun rise but it is midnight.) This is true regardless of whether I live to see it rise.
What this means for Singer’s scenario, then, is that when your colleague makes you promise her that you’ll find a publisher for her typescript, the preference she holds in that moment – that your promise to her be a true one – is either satisfied or violated in that moment; but whether it’s satisfied or violated depends on events that won’t happen until later on, after her death. If you go on to fulfill your promise and find a publisher, then her preference in that earlier moment will have been satisfied; but if you don’t, then it won’t have been. Either way though, in the moments immediately following her death, the mere fact that your satisfaction or violation of her preferences happened in the past shouldn’t cause you to just blow it off on the basis that “what’s done is done” – because in this case, what’s done isn’t actually yet done. Whatever decision you make now will determine whether her preferences were satisfied or violated while she was alive, and will thereby have a genuine effect on how well her life can be said to have gone for her. So to not care about whether you can cause her preferences to have been satisfied during her life, simply because “it’s all in the past,” would be like not caring whether you could literally reach back through time and cause or prevent some morally important event from happening in a more direct way. I mean, just imagine if you could (say) press a magical button that would send a signal back in time that would cause your most recently deceased loved one to have gotten in a terrible accident shortly before the end of their life, causing them to have spent the last few months of their life in severe pain. Would you consider it morally bad to press the button? Or would you consider it morally neutral, since they’re dead now and it can’t matter to them currently whether their past preferences were fulfilled or not? 
If you would in fact consider it immoral to press the button – as I think most of us would – then likewise, you should consider it immoral to break your promise to your colleague in Singer’s scenario, since that would cause her past preferences to have been violated. Granted, keeping your promise wouldn’t be an absolute imperative, any more than any other moral act would be; it would have to be weighed in the utilitarian calculus alongside everything else. So if ensuring that your promise to her was true would turn out to cause so much disutility that it would outweigh whatever utility she might have gotten from it while she was alive – like for instance, if getting her typescript published would require you to ruin your own life and/or the lives of others – then that would constitute a sufficient moral reason for you to break your promise. The point here is just that, in terms of the utilitarian calculus, any past preferences that you might be able to retroactively satisfy or violate should count for just as much as present or future preferences do. Remember our cardinal rule, that when you calculate the expected goodness of your action, you have to account for any and all preferences that are affected by that action? Well, I’ve been talking this whole time about how that mandate includes both preferences that exist and preferences that will exist; but now we may have to add one more category to the pile – namely, preferences that have existed in the past, which your current actions might be able to retroactively satisfy or violate, despite their no longer existing in the present. True, it’s rare to encounter situations where such preferences actually come into play – but those situations can and do exist (assuming you buy Luper’s explanation), so ultimately you do have to account for them.
Of course, that’s not quite the end of the story; there are a few complicating factors to consider here. For one thing, it might not seem immediately obvious how this approach could be generalized to apply to cases in which no one has explicitly promised the dying person that they’ll fulfill their wishes after their death. If there’s never the possibility of keeping such a promise and thereby satisfying the person’s living preference that the promise be true, how can there be any retroactive benefit in fulfilling their wishes after their death? But I don’t think this is actually that big of an issue, because even if a dying person never makes anyone promise them that their wishes will be respected after they die, that person will still have a desire and an expectation that this will be the case – and they’ll still have a preference that this expectation be a true one rather than a futile one that never gets fulfilled. So respecting their wishes after their death can still retroactively satisfy that preference they held while they were alive, and can thereby cause their life to have gone better for them. In that sense, it’s just like satisfying the preferences of a still-living person; you don’t have to have ever explicitly promised someone that you’d respect their wishes in order for your respecting their wishes to be good for them, nor for your ignoring their wishes to be bad for them. Your actions can be better or worse for someone even if you’ve never met them before in your life; simply the fact that they want their wishes to be respected is enough to make it good for them if those wishes are respected and bad for them if they’re ignored. And the same applies to retroactive preference fulfillment for people who’ve died.
Now, like I said before, there are limits to all this. Retroactively fulfillable preferences have to be weighed in the utility calculus alongside every other preference – so if someone wanted something done after their death which would be too much of an imposition on those who were still living, then fulfilling that desire wouldn’t necessarily be morally obligatory. If it would be unreasonable, for instance, for someone to expect their community to build them a giant golden monument while they were still alive, then it would be just as unreasonable for them to expect such a thing after their death (sorry, pharaohs). Likewise, in our earlier example of a sole post-apocalypse survivor, if it would be unreasonable for humanity to expect a single person to fulfill all of their preferences now, while everyone was still alive, then it would be just as unreasonable to expect a single person to shoulder the burden of retroactively fulfilling all of humanity’s post-mortem preferences after the apocalypse. As a general rule, if a dying person’s expectations just include basic things like “not desecrating their gravesite or violating their dead body,” then those expectations are simple enough that they can and should be respected; but if their expectations would demand significantly more than that from those still living – like if a father demanded that his children never be allowed to date anyone after his death, or if a wife demanded that her husband risk his life to scatter her ashes in the middle of a hurricane, or something like that – then the net expected utility of such expectations would be so obviously negative that the survivors wouldn’t have to feel morally obligated to fulfill them.
In short, a post-mortem request would only be justifiable if (all else being equal) the retroactive preference satisfaction that the dying person would derive from having their request fulfilled would outweigh the negative preference satisfaction that the survivors would have to incur in order to fulfill the request.
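Just to make that weighing rule concrete, here’s a minimal sketch of it in Python. To be clear, the function and all the utility numbers here are entirely made up for illustration – nothing in this essay (or in utilitarianism generally) assigns real numerical values to these preferences; the point is only the shape of the comparison:

```python
# Toy model of the weighing rule above: a post-mortem request is justifiable
# (all else being equal) only if the retroactive preference satisfaction the
# dying person derives from it outweighs the cost imposed on the survivors.
# The function name and all numbers are hypothetical, purely for illustration.

def request_is_justifiable(retroactive_benefit: float, cost_to_survivors: float) -> bool:
    """Return True if fulfilling the request yields net positive utility."""
    return retroactive_benefit - cost_to_survivors > 0

# "Don't desecrate my gravesite": real benefit to the deceased, trivial cost.
print(request_is_justifiable(retroactive_benefit=10.0, cost_to_survivors=1.0))   # True

# "Build me a giant golden monument": same benefit, enormous cost (sorry, pharaohs).
print(request_is_justifiable(retroactive_benefit=10.0, cost_to_survivors=500.0)) # False
```

The asymmetry between the two calls is the whole point: the dying person’s preference counts for just as much as anyone else’s, but no more than anyone else’s either.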
Having said all that, though, we still aren’t quite done yet; this idea that we have to account for dead people’s wishes raises the broader question of what the larger-scale implications are here. Are we obligated to account in our moral calculus for all the post-mortem wishes of everyone who’s ever lived? I’m actually inclined to think so, personally; I don’t think that desecrating ancient gravesites is any more acceptable than desecrating the gravesites of people who’ve died more recently. But as dramatic as it sounds to say that we have to acknowledge the post-mortem preferences of everyone who’s ever lived, this might not actually be as big a deal as you might think. (In fact, it may be largely a moot point.) After all, whatever most of our forebears’ object-level preferences might have been about what should happen after they died, they would also have had a meta-preference that those preferences would ultimately turn out to have been noble ones – i.e. that their wishes would turn out to be morally good for those still living, rather than misguided and harmful for them. So if it turned out that what they thought they wanted would actually be bad overall (like if their wish to have everyone follow their religion after they died ended up being harmful because their religion turned out to be wrong), then their true extrapolated preference would be that their misguided object-level preference be ignored, and that future generations just do whatever was most moral instead.
It seems safe to assume that the vast majority of our forebears would have held such a preference – that what they would have wanted more than anything after their death was simply for things to go as well as possible for the survivors – so although there might have been a few of them whose post-mortem wishes really were purely selfish, such exceptions would have been vastly outweighed by the meta-preferences of the benevolent majority who just wanted what was best for the world after they died. Ultimately then, what that means for us here in the present is that, for the most part, the best way to retroactively maximize our forebears’ preference satisfaction is to just do what we would have considered to be globally best anyway.
So OK, maybe you accept all of that as far as it goes; but for all this talk about being able to retroactively do things for people who will never know it, there’s still a major assumption lying at the heart of this whole line of reasoning which we haven’t addressed yet – namely, the idea that it’s possible to satisfy or violate someone’s preferences, and thereby increase or decrease their utility, even if they never realize it or experience any subjective effect from it at all. Sure, in a scenario like Singer’s, it might seem clear that the act of lying to the colleague would go against her preferences – but if she never actually found out that she was lied to, and therefore never directly experienced any negative effect from the lie, then how can we say that she was made worse off by it? Can it really be the case that, as Parfit phrases it, “an event [can] be bad for me [even] if it makes no difference to the experienced quality of my life”?
(Note that this isn’t the same as simply asking whether it’s possible for someone to be harmed or benefited by actions that they’re never aware of. Obviously, if you were to save the world from an incoming asteroid without anyone ever being aware of it, it’s clear that you still would be providing a great benefit. Likewise, if you were to secretly slip a slow-acting toxin into someone’s drink, causing them to die a year later, it’s clear that you would be harming them even though they weren’t aware of it. I don’t think any reasonable person would dispute the rightness or wrongness of cases like these, in which people experience better or worse results despite being oblivious to their causes. What we’re talking about here instead are cases in which people never even perceive any positive or negative effects from the acts in question. Can such acts really have any moral relevance at all?)
Well, we can easily imagine other scenarios (that don’t involve people dying) in which this would certainly seem to be the case. For instance, if a dentist secretly molested one of his patients while she was under anesthesia, we’d surely consider that to be a grossly immoral act, even if the patient never found out about it or experienced any negative effects from it. Or to take another example from Crash Course:
We often think that what makes something wrong is that it causes harm. But consider this creepy scenario: Imagine you’re changing in a store’s dressing room, and outside, there’s some creep who takes pictures of you, and you never know it. This creep then shares the pictures with his creepy friends. They don’t know who you are, and you experience no ill effects, no uncomfortableness of any kind, because you never know the pictures were taken.
So here’s a couple questions: Were you harmed? And did the creeps do wrong? It’s hard to see how you were harmed by the pictures, because harm seems to be the type of thing that has to be experienced in order for it to exist. But most people would agree that the creeps did do wrong, that a violation of privacy is still a violation, even if you don’t know that it happened.
The possibility that harm and wrongdoing are actually two different things might not have occurred to you before; but when you think about it, it seems pretty right. [A] falling coconut could harm me without anyone having done wrong. And likewise, wrongdoing doesn’t have to lead to anyone being harmed.
It seems that the real question here, then, is what exactly we mean when we talk about people’s preferences being satisfied, and what exactly it means to say that different outcomes will give them higher or lower levels of utility. When we talk about someone’s preferences being satisfied, are we only referring to the level of preference satisfaction that they actually experience? Or is it possible for an outcome to be better or worse for somebody even if they subjectively experience no difference whatsoever (relative to what they would have experienced in the alternative outcome)? And if it’s the latter, then how does that even work? What exactly is the mechanism by which a person’s preferences are either satisfied or violated?
This may actually be the most fundamental question of this whole discussion. After all, we can theorize all we want about what it means to fulfill preferences in abstract, conceptual terms – but unless we can actually point to a real concrete mechanism by which preferences are fulfilled, abstract theorizing is all it’ll amount to. So what could this mechanism be? Galef offers a possible lead:
A friend was telling me last week about a celebrity sex tape that he particularly enjoys. I’m not going to help publicize this tape, for reasons that should become clear from reading my post, but I can sketch out the rough details for you: The woman is a young singer, publicly Christian, and she’s having sex with a married man. The tape was stolen (or hacked, I’m not sure) and leaked to the public. It’s theoretically possible that she leaked it herself for publicity, I suppose, but it seems unlikely given the cheating and the Christianity – it definitely tarnished the public image she’d carefully constructed for herself, in addition to being humiliating simply by virtue of it being a sex tape.
So I asked my friend if he feels any guilt about watching this tape, knowing that the woman didn’t want other people to see it, and we ended up having a friendly debate about whether there was anything ethically problematic about his behavior. Of course, the answer to that depends on what ethical system you’re using. You could, for example, take a deontological approach and declare that it’s just a self-evident principle that we don’t have a right to watch someone else’s private tape. Or alternately, you could take a virtue ethics approach and declare that enjoying the tape, after it’s already been leaked, is exploiting someone else’s misfortune, which isn’t a virtuous thing to do.
But my friend and I are both utilitarians, at heart, and neither of those lines of argument resonated with us. We were concerned, instead, with what I think is a more interesting question: does watching the tape harm the woman? As my friend emphasized, she’ll never know that he watched it. (At least, that’s true as long as he downloads it from Bit Torrent, or some other file-sharing site where the number of views of the video aren’t recorded such that she could ever see how much traffic it’s gotten.)
I agreed, but was still reluctant to conclude that no [wrong] was done. Do I necessarily have to know about something in order for its outcome to matter to me? If you tell me about two possible states of the world, one in which everyone has seen my awful humiliating sex tape, and one in which no one has, I’m going to have a very strong preference for the latter, even if people behave identically towards me in both potential worlds. So maybe it makes more sense to define “[wronging] someone” to mean, “helping create a world which that person would not want to exist, given the option,” rather than “causing that person to experience suffering or [harm].” My friend’s decision to watch the tape [wronged] the woman according to the first, but not the second, definition.
I think there’s something to this idea. See, up to this point, I’ve been talking a lot about utility as something that individuals “get” from particular outcomes – almost as if utility were some kind of tangible substance, like a currency, that people could accumulate. Really, though (despite its convenience for illustrative purposes), this isn’t the best way to think about it. In reality, when someone has their preferences satisfied, that doesn’t mean that the universe somehow magically conjures up an invisible substance called utility and bestows it upon them. Rather, all it means is that the universe has entered a state of affairs which matches the state of affairs that their brain has assigned the most value to. That is to say, when someone expresses a preference for outcome X over outcome Y, what they’re saying is that they ascribe more value to the universe-state in which X is true than the universe-state in which Y is true – and that means that if the universe is actually brought into a state in which X is true, then their preference has been satisfied, whereas if the universe is brought into a state in which Y is true, then their preference hasn’t been satisfied, even if they mistakenly think it has. So in our earlier example of the dentist molesting his unconscious patient, for instance, it might be true that the patient would never find out the truth about the situation – but that wouldn’t change the fact that she had ascribed less value to the universe-state in which she was unknowingly molested than to the universe-state in which she wasn’t. By molesting the patient, then, the dentist would be bringing the universe into a state that had had less goodness ascribed to it – i.e. an objectively worse state. 
And so, given that the dentist would know full well that molesting the patient would lead to this outcome, we can say unambiguously that he would be morally obligated to refrain from doing so – because after all, avoiding worse outcomes in favor of better ones is the whole point of consequentialism. Molesting the patient might not be any worse for her in terms of her subjective experience, but it would be worse in terms of objective global utility, as determined by her preference not to be unknowingly molested – so doing it would be morally wrong. In short, when we see the term “utility,” maybe we shouldn’t read it as referring to a person’s subjective satisfaction level; maybe we should read it instead as referring to the degree to which their preferences about the state of the universe are actually satisfied, whether they realize it or not – because although the preferences themselves may be subjective, the question of whether they align with the current state of the universe isn’t.
(As a quick side note, you might be tempted here to imagine hypothetical situations in which a person would be happier falsely believing that their preferences were satisfied than they would be if their preferences actually were satisfied; but the Master Preference would preclude that sort of thing. If someone really were better off with the former outcome than the latter, then that’s the outcome that their true extrapolated preferences would in fact favor – so that first outcome (in which they were happier) wouldn’t be a false fulfillment of their preferences at all; it would be a true fulfillment of them.)
Here’s a useful way of thinking about it: Imagine reality as one big branching timeline – like a train track that continually splits into a near-infinitude of possible universe-states as we travel from the present into the future. (If you subscribe to some of the more popular interpretations of quantum mechanics, you may regard this as literally being the case.) Any time any of us does anything, we direct the universe onto one of these branching paths – so if we’re presented with a choice between doing X and doing Y, for instance, our choice will cause the trajectory of the universe’s timeline to either follow the branch in which X happens, or the branch in which Y happens. What this means is that whenever anyone expresses a preference for outcome X, what they’re really saying is that they would rather have the universe follow the branch in which outcome X occurs than the branch in which it doesn’t. By acting in such a way as to direct the universe onto that particular branch, then, we’re causing their preference to be satisfied – not just in an abstract theoretical sense, but in a literal physical sense – by bringing them onto their preferred path; and conversely, if we act in such a way as to direct the universe onto some other branch, we’re causing their preferences to be violated by bringing them onto their non-preferred path. This is true even if they never realize which path they’ve actually been brought onto; so if they say that they don’t want to be molested in their sleep, for instance, or that they don’t want to be lied to about what will happen to their typescript after they die, then we’re obligated to respect those wishes, even if they’d never subjectively know the difference themselves. Their preferences, after all, aren’t about their subjective experiences; they’re about the state of the universe and which path it follows. 
So by taking them down a path they don’t want to be taken down, we’d be wronging them (by violating their preferences) even if we weren’t directly harming them – and by bringing about a universe-state that had had less value ascribed to it, we’d be doing what was objectively less moral in global terms as well.
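If it helps, this “ascribed value” framing can be sketched as a toy model. Again, the state labels and numbers below are hypothetical stand-ins, not real quantifications of the dentist case; the sketch just shows the structural point that the valuation attaches to universe-states in advance, independently of what anyone later experiences:

```python
# Minimal sketch of the "ascribed value" picture: a person assigns values to
# possible universe-states ahead of time, and an act is evaluated by how much
# value was ascribed to the state it actually brings about - regardless of
# whether the person ever perceives any difference. All labels and numbers
# are hypothetical, chosen only for illustration.

patient_values = {
    "not_molested_while_unconscious": 100,
    "molested_but_never_finds_out": 0,
}

def ascribed_value(outcome: str) -> int:
    """Total value the patient ascribed in advance to the universe-state `outcome`."""
    return patient_values[outcome]

# Her subjective experience is identical in both states, but the states
# themselves differ in how much value has been ascribed to them:
print(ascribed_value("not_molested_while_unconscious"))  # 100
print(ascribed_value("molested_but_never_finds_out"))    # 0
```

Notice that nothing in this sketch ever consults the patient’s experience after the fact; the comparison is between states of the universe, which is exactly the shift in perspective being proposed.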
Ultimately, this is what morality is all about; when we make moral decisions, our goal is to determine which branch of the timeline has the highest expected level of value associated with it, and then to try and steer our universe onto that timeline. And really, there’s no other way it could be – because after all, this moral decision-making process is the only one we’d willingly agree to if we were all back behind the veil of ignorance again, in the original position. If we all traveled back in time to before our births, before any of us knew who we were going to turn out to be, and we had to come up with a set of universal moral commitments that we’d all have to follow after we were born, we wouldn’t just decide that our preferences could be ignored as long as we never knew about it; we’d make it so everyone was obligated to respect each other’s preferences even if the effects of their actions would be hidden. Think about it: If you didn’t know in advance whether you’d turn out to be the dentist who wanted to molest his unconscious patient, or the patient who really didn’t want to be unknowingly molested, wouldn’t you rather have everyone in the original position make a precommitment that whoever turned out to be the dentist wouldn’t molest whoever turned out to be the patient? Likewise, if you didn’t know in advance whether you’d turn out to be the dying scholar who really wanted to get her typescript published after her death, or her colleague who wanted to avoid having to do so, wouldn’t you rather have everyone in the original position make a precommitment that whoever turned out to be the dying scholar’s colleague would actually keep their promise to get her typescript published? 
From the original position, we could imagine every possible scenario that might emerge after we were all born, and what would be the most moral course of action in each of those scenarios – and in each of those scenarios, we would end up being precommitted to acting in a way that steered the universe onto whichever branch of the timeline had the highest level of value ascribed to it by its inhabitants, even if those inhabitants didn’t always experience the full effect of that higher value directly. At the end of the day, the question would be as simple as, “Which branch of the timeline would you rather have the universe follow if you didn’t know in advance what your role in that universe would be?”
And this framing wouldn’t just help us navigate our everyday moral dilemmas; it could even help us resolve the thorniest hypothetical scenarios we could imagine. Recall, for instance, the transplant problem from earlier (you know, the one in which the surgeon has to choose between letting five of her patients die, or killing one innocent bystander in order to harvest his organs and save the patients’ lives). I mentioned that for most people, their intuitive response to the problem is to say that the surgeon shouldn’t be allowed to kill the bystander – which, granted, is an intuition that doesn’t necessarily hold up under more dramatic formulations of the problem (like killing one bystander in order to save five billion patients rather than just five), but which does seem to highlight an important idea: that if such a practice were allowed to occur, its broader negative impacts on society could easily outweigh whatever immediate utility gains it might produce. Taking one life to save five might sound like a utility-positive act in theory, but if it had significant negative side effects like lowering the surgeon’s inhibitions against killing, or making everyone who heard about it afraid to ever visit the hospital again, or openly undermining valuable social norms like “don’t commit sudden unprovoked murder” and “treat people as ends in themselves rather than as mere means to other ends,” then the positive benefit of saving four extra lives might not be worth it.
Having said that, though, it’s possible to come up with variations on this thought experiment that remove such considerations and make the problem more difficult. Like for instance, what if the surgeon were about to die herself, so it wouldn’t matter if she lowered her inhibitions against killing, since she’d never be able to do such a thing again either way? Or what if the whole thing happened in secret, in such a way that the bystander’s death would be guaranteed to look like an accident or a natural death, and no one would ever know that he had been killed deliberately? Under these particular circumstances, there would no longer be any broader social repercussions like openly undermining moral norms or deterring people from visiting the hospital. So where would that leave us? Would there be any basis left at all for forbidding the surgeon from killing the bystander?
Well, again, it’s always possible that our intuitions may simply be wrong in this case; it may be that killing the bystander actually would be the right thing to do in a situation that met all the above conditions. But let’s assume for the sake of argument that it wouldn’t be. On what consequentialist grounds could we actually say that this was the case? This is where I think the most-preferred-universe framework described above could potentially come into play. See, even though it might be true that the surgeon would be able to kill the bystander without anyone ever knowing about it, it could also be true that that outcome would nevertheless be one that the broader society (on balance) would still be opposed to in principle, whether it happened in secret or not. If considering the situation from the original position (with no one knowing whether they’d turn out to be the surgeon, the bystander, one of the patients, or an uninvolved person) would lead the broader society to conclude that a universe in which the five patients were allowed to die would be preferable to a universe in which the surgeon saved them by killing the bystander, then by choosing the latter option, the surgeon would be taking society down its less-preferred path. She’d be violating people’s preference not to live in a world in which surgeons secretly killed random bystanders and harvested their organs, and would thereby be producing an outcome that had had less value ascribed to it – i.e. a less moral outcome. The people themselves might never actually find out that their preferences had been violated, but their preferences would have still been violated. In order to avoid this less moral outcome, then, it would be incumbent on the surgeon not to kill the bystander, but to let the patients die instead.
Now, needless to say, there are a lot of “ifs” in this line of reasoning – and again, we can easily tweak the parameters in such a way as to produce the opposite conclusion. I could easily imagine, for instance, that in an alternative scenario in which the surgeon, the bystander, and the five patients were the last seven people on Earth, many people’s stance on the dilemma might actually change, such that they’d agree that it would now be morally permissible for the surgeon to kill the bystander. Why might this be the case? Well, it could just be the simple fact that, without a broader society of billions of people whose preferences would have to be taken into account, all those concerns about upholding moral norms and so on would no longer be a determinative factor; the main consideration would just be the immediate preferences of the seven people directly involved in the incident themselves (most importantly, the desire of the patients and the bystander to continue living). Unlike the original version of the thought experiment, in which anyone judging the situation from the original position would expect to be born as one of the billions of people who weren’t directly involved in the incident, this alternative version would remove all those “uninvolved outsider” roles from the equation altogether, so that anyone judging from the original position would now have to expect that, in all likelihood, they’d end up becoming one of the five patients. 
That would mean that instead of giving the most weight to the society-wide preferences like “I prefer not to live in a world in which norms against murder are secretly violated” and “I prefer not to live in a world in which I’m unknowingly living under the perpetual threat of having my organs harvested,” their biggest moral consideration would now simply be the patients’ preference of “I prefer not to die.” In light of that fact, then, it would be no surprise if their most highly-valued outcome in the last-seven-people-on-Earth scenario was for the five patients to survive rather than the one bystander.
That being said, I might be totally misjudging what someone in the original position would actually consider best; this is just my own personal intuition on the subject, and other people might not share it. But then again, the average person on the street, here in the real world, probably wouldn’t be so inclined to judge the dilemma according to this kind of Rawlsian moral calculus in the first place; instead, they’d probably rely more on heuristics like “actively killing someone is worse than passively letting someone die” – or to put it more broadly, “a sin of commission is worse than a sin of omission.” And to be sure, in a lot of contexts, these kinds of heuristics are plenty useful; they’re basically just a shorthand way of accounting for all the secondary utility ramifications an act might have aside from its most immediate effects – things like lowering the actor’s inhibitions against future wrongdoing, undermining valuable moral norms, and so on – without having to actually go through the whole utility calculus one consideration at a time and factor those things in directly. That being said, though, it’s important to remember that, as useful as they might be in select situations, these heuristics are just moral shortcuts, not fundamental moral laws that we should expect to stand entirely on their own. If we regarded them as the be-all-end-all of morality, we’d open ourselves up to outcomes that, from a Rawlsian point of view, would be positively disastrous – like forbidding the surgeon from killing the bystander even if the lives of millions were at stake. Unfortunately, a lot of people do seem to consider such rules to be fundamental, and the result is that, here in the real world, millions of people often do die because of the widespread attitude that, because passively allowing their deaths isn’t as bad as actively killing them, it’s therefore somehow morally permissible. 
But under the system I’ve been describing here, this kind of outcome is a moral scandal of the highest order; so in these last few sections of this post, I want to focus in on it more closely, and lay out some of the broader implications for how I believe we actually should be living instead.
Again, just to be clear, I do think that the distinction between acts of commission (like killing someone) and acts of omission (like passively letting someone die) can often be a useful one. As Alexander writes:
[There’s a moral difference] between murdering your annoying neighbor vs. not donating money to save a child dying of parasitic worms in Uganda. […] Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.
Speaking from a utilitarian perspective myself, I don’t think there’s necessarily any inconsistency in this stance. Despite the fact that the immediate effect of directly murdering someone would be no worse than that of merely failing to save someone – namely, that someone’s life would end prematurely – the murder would be worse for all the second-order reasons I’ve just been describing, like severely disrupting our most valuable norms and so on. It would also represent a more extreme degree of compromised character on the murderer’s part, just in the sense that deliberately breaking their moral obligations would reflect a greater willingness to act wrongly than merely failing to live up to them would – which would carry its own whole set of negative-utility implications. So even if the immediate effects were the same, we’d still have plenty of reasons to consider the vast majority of killings to be more egregious than the vast majority of incidents in which one person merely failed to save another.
What’s crucial to notice here, though, is that once you take these secondary factors out of the equation, it’s no longer apparent that killing someone actually is categorically worse than letting them die – which suggests that our reasons for considering most killings to be worse than most incidents of letting someone die are actually entirely just these secondary factors. James Rachels illustrates this point:
Many people think that […] killing someone is morally worse than letting someone die. But is it? Is killing, in itself, worse than letting die? To investigate this issue, two cases may be considered that are exactly alike except that one involves killing whereas the other involves letting someone die. Then, it can be asked whether this difference makes any difference to the moral assessments. It is important that the cases be exactly alike, except for this one difference, since otherwise one cannot be confident that it is this difference and not some other that accounts for any variation in the assessments of the two cases. So, let us consider this pair of cases:
In the first, Smith stands to gain a large inheritance if anything should happen to his six-year-old cousin. One evening while the child is taking his bath, Smith sneaks into the bathroom and drowns the child, and then arranges things so that it will look like an accident.
In the second, Jones also stands to gain if anything should happen to his six-year-old cousin. Like Smith, Jones sneaks in planning to drown the child in his bath. However, just as he enters the bathroom Jones sees the child slip and hit his head, and fall face down in the water. Jones is delighted; he stands by, ready to push the child’s head back under if it is necessary, but it is not necessary. With only a little thrashing about, the child drowns all by himself, “accidentally,” as Jones watches and does nothing.
Now Smith killed the child, whereas Jones “merely” let the child die. That is the only difference between them. Did either man behave better, from a moral point of view? If the difference between killing and letting die were in itself a morally important matter, one should say that Jones’s behavior was less reprehensible than Smith’s. But does one really want to say that? I think not. In the first place, both men acted from the same motive, personal gain, and both had exactly the same end in view when they acted. It may be inferred from Smith’s conduct that he is a bad man, although that judgment may be withdrawn or modified if certain further facts are learned about him – for example, that he is mentally deranged. But would not the very same thing be inferred about Jones from his conduct? And would not the same further considerations also be relevant to any modification of this judgment? Moreover, suppose Jones pleaded, in his own defense, “After all, I didn’t do anything except just stand there and watch the child drown. I didn’t kill him; I only let him die.” Again, if letting die were in itself less bad than killing, this defense should have at least some weight. But it does not. Such a “defense” can only be regarded as a grotesque perversion of moral reasoning. Morally speaking, it is no defense at all.
What this thought experiment demonstrates, then, is that the distinction between commission and omission doesn’t actually make any moral difference in itself; it’s merely coincidental (typically) with the factors that do make the moral difference. As Graham Oddie writes:
A typical killing […] has horrible features which a typical letting-die lacks – malicious intent, unnecessary suffering, acting without the person’s informed consent, perhaps violation of certain rights, and so forth. [And] it is those other horrible features which make killing typically worse than letting-die. […] It is precisely in those situations in which the badness of killing humans is controversial – in the case of the terminally or congenitally ill, say – that these other horrible features may well be absent from a killing and present in a letting-die. [That is to say, passively letting a terminally ill patient die a painful, prolonged death might be worse than granting them a quicker, easier death via euthanasia.] The point of [Rachels’s thought experiment] is to force us to abstract from the typical concomitants of killing or letting-die and focus on the possible value-contribution of killing and letting-die in themselves – [which, when you compare them to each other, turns out to be a non-factor].
And this point becomes even clearer when you examine cases in which the distinction between an act of commission and an act of omission can’t even be clearly drawn in the first place. Consider, for instance, a parent who stops feeding their baby, thereby causing it to starve to death. In the most literal sense, they’re merely “allowing the baby to die” – but how is that morally different from killing it? Or consider this example, from commenter unic0de000:
Is an air traffic controller who suddenly stops moving, speaking or responding at a critical moment during their shift, resulting in a collision, engaging in “inaction”? Or would it be more inactiony of them to continue performing their job as they’d done for the previous hour?
Unic0de000 concludes, “I for one don’t think ‘[moral] inaction’ is a really coherent notion in the first place.” And although this is a counterintuitive conclusion, it fits perfectly with everything else I’ve been talking about here. Under the framework I’ve been describing, there really are no acts of omission once you get past the heuristic level and look at the foundational reality. Any time you make a moral choice of any kind – even if it’s to just sit there and “not do anything” – you’re actually still doing something, because you’re still bringing the universe onto one particular timeline-branch instead of another branch – and that in itself is an act, not an “absence of an act.” In other words, “not doing anything” simply isn’t possible here; you’re unavoidably going to be directing the universe onto one timeline-branch or another, regardless of what you do (or don’t do) at the object level. So again, although the typical act of killing someone is certainly worse than the typical act of letting someone die, the reason for this isn’t that there’s some categorical difference between killing and letting die – because such an absolute difference doesn’t exist. It all comes down to those secondary factors. (And if you still don’t believe it, Oddie actually goes so far as to provide a simple mathematical proof to this effect, designed to demonstrate that once you’ve removed all the secondary factors, there’s no moral difference at all between killing someone and letting them die – or at most, that the difference is so infinitesimal as to be negligible.)
A lot of popular approaches toward morality seem to rely on the tacit assumption that, although taking some particular action may be good or bad, not taking any action is, in a sense, morally neutral. Sure, it might be commendable for you to (say) donate a bunch of money to charity, but it isn’t immoral for you not to do so, because you aren’t actively making the world worse (even if you aren’t making it better either). Under the system I’ve been describing here, though, that’s not really how it works. The value of a moral choice isn’t measured in relation to some neutral baseline; it’s measured in relation to the value of alternative universe-branches, and it can only really be called right or wrong based on how much better or worse it is than those alternatives. So if a particular course of action (or inaction) steers the universe down a better timeline-branch than the alternative choices would have, then it’s a morally better choice; and if it steers the universe down a worse timeline-branch, then it’s a morally worse choice. The only way an act can be morally neutral is if its expected outcome is exactly as good as that of every other possible alternative (no better, no worse) – and needless to say, none of the moral choices we’ve been discussing here meet that description.
I think a lot of the popular motivation for wanting to weigh acts of commission and omission differently just comes from the fact that people aren’t intuitively comfortable with the idea that they’re acting immorally by not being more altruistic, or that not being altruistic makes them bad people. And to be fair, a simple black-and-white dichotomy of “moral” vs. “immoral” really isn’t the best way of thinking about the question, so their intuitions are at least somewhat defensible. We can’t just put everything into two well-defined, fixed categories of “absolutely good” and “absolutely bad;” goodness and badness exist on a continuum, and the morality of actions is a matter of degree. Someone can still be a pretty good person overall even if they don’t always behave perfectly optimally, and someone can still be a pretty terrible person overall even if they occasionally do good things. That being said, though, “pretty good overall” isn’t the same thing as “morally faultless,” and it’s important to recognize this. Even if we use a scalar measure of goodness, there’s simply no getting around the fact that, if you consider someone who donates a bunch of money to charity to be acting morally better than someone who does nothing, then that necessarily means that the person who does nothing is acting morally worse. The logic goes both ways; so you can’t say that one choice is more moral without necessarily saying that the alternative choice is less moral.
In response to this point, some philosophers will argue that although it’s true that some acts are better than others, that doesn’t necessarily mean that they’re always morally obligatory. There are some acts that are morally obligatory, to be sure – like refraining from murder, or pulling a drowning child out of a bathtub – but according to this argument, others are supererogatory – meaning that although it would be commendable for you to do them, you aren’t necessarily acting immorally by not doing them. By doing them, you’re going above and beyond the call of duty, so to speak. This category would include things like donating money to help feed the poor, and providing medical assistance to the sick, and so on.
Under the system I’ve been describing, though, this distinction between obligatory acts and supererogatory ones essentially dissolves. According to this system, when we’re in the original position and we precommit ourselves to the acausal social contract, we’re placing ourselves under an obligation to always do what we expect will maximize global utility after we’re born. So that means that, here in the world today, we’re always obligated to do what’s morally best – and any dereliction of that obligation is a moral failing. In other words, in the same way that it would be wrong for us to not rescue a drowning child who was right in front of us, it would likewise be wrong for us to not donate money to rescue a dying child on the other side of the world. Singer has actually made this exact analogy famous by posing it in the form of a thought experiment, writing:
To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.
At this point the students raise various practical difficulties. Can we be sure that our donation will really get to the people who need it? Doesn’t most aid get swallowed up in administrative costs, or waste, or downright corruption? Isn’t the real problem the growing world population, and is there any point in saving lives until the problem has been solved? These questions can all be answered: but I also point out that even if a substantial proportion of our donations were wasted, the cost to us of making the donation is so small, compared to the benefits that it provides when it, or some of it, does get through to those who need our help, that we would still be saving lives at a small cost to ourselves – even if aid organizations were much less efficient than they actually are.
Giving money to [an organization like] the Bengal Relief Fund is regarded as an act of charity in our society. The bodies which collect money are known as “charities.” These organizations see themselves in this way – if you send them a check, you will be thanked for your “generosity.” Because giving money is regarded as an act of charity, it is not thought that there is anything wrong with not giving. The charitable man may be praised, but the man who is not charitable is not condemned. People do not feel in any way ashamed or guilty about spending money on new clothes or a new car instead of giving it to famine relief. (Indeed, the alternative does not occur to them.) This way of looking at the matter cannot be justified. When we buy new clothes not to keep ourselves warm but to look “well-dressed” we are not providing for any important need. We would not be sacrificing anything significant if we were to continue to wear our old clothes, and give the money to famine relief. By doing so, we would be preventing another person from starving. It follows from what I have said earlier that we ought to give money away, rather than spend it on clothes which we do not need to keep us warm. To do so is not charitable, or generous. Nor is it the kind of act which philosophers and theologians have called “supererogatory” – an act which it would be good to do, but not wrong not to do. On the contrary, we ought to give the money away, and it is wrong not to do so.
Needless to say, this conclusion flies in the face of the conventional wisdom shared by many of us living comfortable lives here in the first world. But as Singer points out, the fact that it seems supererogatory to us to donate some significant percentage of our wealth to help the needy is largely just the product of our current culture, in which not donating is the norm:
Given a society in which a wealthy man who gives 5 percent of his income to famine relief is regarded as most generous, it is not surprising that a proposal that we all ought to give away half our incomes will be thought to be absurdly unrealistic. In a society which held that no man should have more than enough while others have less than they need, such a proposal might seem narrow-minded. What it is possible for a man to do and what he is likely to do are both, I think, very greatly influenced by what people around him are doing and expecting him to do.
I think his point here is spot on. If we imagined a world in which everyone had their basic needs met, because everyone felt just as much moral obligation toward each other as we presently feel toward members of our own families, then it’s not hard to imagine how anyone who decided to just worry about their own affairs and not concern themselves with the well-being of the less fortunate might be regarded with as much opprobrium as we currently regard those who don’t care for their own children. Conversely, if we imagined a world in which it was standard practice for people to wholly neglect their own children, so that those children were forced to either find their own food or starve to death, then it’s not hard to imagine how the inhabitants of that world might regard the act of sacrificing half their paycheck in order to provide their children with three meals a day as being supererogatory in the extreme. In fact, we can find similar examples of this type of mindset in our own history, where practices that we’d now regard as being absolutely obligatory – like not owning slaves, not beating your children, etc. – were considered at the time to be merely supererogatory (at best). The only reason why these practices persisted for as long as they did is that they were widely accepted as being “normal” or “standard;” but in a culture where such practices weren’t considered normal or standard – like our culture today – anyone who tried to reintroduce them as standard practice would be regarded as downright evil.
We look back on these practices from our modern perspective and consider the people who engaged in them to be moral monsters. But the thing is, I suspect that future generations will look back on some of the moral norms that we hold today with just as much horror. There are features of our modern way of life – defining features of it, actually – which we often take completely for granted, but which really only persist because we’ve already accepted them as normal, and would be unthinkable (in fact, would seem straight-up psychopathic) if the “default” mode of behavior were different. This includes the way we idolize some people for spending millions of dollars on themselves while others starve; and it also includes the way we lovingly welcome some animals into our homes while brutalizing and slaughtering others by the billions. We mostly treat these issues with a kind of casual nonchalance; but the truth is, they are absolute moral emergencies, and our descendants will in all likelihood consider us monsters for not caring more about them. (As the Jiddu Krishnamurti quotation goes, “It’s no measure of health to be well-adjusted to a profoundly sick society.”)
So what would it look like for us to actually respond to these problems with the seriousness they deserve? How much are we really obligated to sacrifice for the less fortunate? This can be a difficult question to wrestle with, because it’s hard to arrive at any answer other than “a lot.” As Harris writes:
It is one thing to think it “wrong” that people are starving elsewhere in the world; it is another to find this as intolerable as one would if these people were one’s friends. There may, in fact, be no ethical justification for all of us fortunate people to carry on with our business while other people starve (see P. Unger, Living High & Letting Die: Our Illusion of Innocence [Oxford: Oxford Univ. Press, 1996]). It may be that a clear view of the matter – that is, a clear view of the dynamics of our own happiness – would oblige us to work tirelessly to alleviate the hunger of every last stranger as though it were our own. On this account, how could one go to the movies and remain ethical? One couldn’t. One would simply be taking a vacation from one’s ethics.
For this reason, it’s easy to feel like Singer’s philosophy simply asks too much of us. In fact, this is the most popular argument against it – also known as “the demandingness objection.” For a lot of us, our instinctive response to Singer’s perspective will be to just reject it, on the grounds that if it were true, then that would imply that we weren’t good people unless we cut back on the material parts of our lifestyle and donated to the poor instead – but we know we’re good people, even though we don’t do those things, so therefore there must be something wrong with Singer’s message. However, just because a moral system contradicts our intuitive feelings about whether we’re behaving entirely morally doesn’t automatically mean that it’s wrong; an alternative explanation here might just be that, in fact, maybe we really should be going out of our way to help the less fortunate, and maybe we really are morally worse when we fail to do so (even if we’re still good people overall), and maybe utilitarianism is serving an extremely valuable function by alerting us to this fact. As Alexander writes:
People sometimes complain that a flaw of utilitarianism is that it implies heavy moral obligations to help all kinds of people whether or not any of their problems are our fault; the world is divided between those who consider that a bug and those who find it a very helpful feature.
As tempting as it might be to avert our eyes from the plight of the less fortunate and sweep our moral obligations under the rug, we have to ask ourselves, what would we think of the person who stood by the shallow pond with their hands on their hips and asked, “OK, but surely I’m not actually obligated to rescue this drowning child and ruin my nice $100 shoes, am I?”
Of course, to be fair, it’s reasonable to have concerns about just how far these obligations truly go. It’s one thing to accept that we should be willing to make a one-time donation of $100 if given the chance; but what about after we give that first $100 – are we obligated to give $100 more? Are we obligated to keep giving until we can’t anymore – until we’re just as poor as the people we’re trying to help? Cowen pushes the limits of the question:
Common sense morality suggests that we should work hard, take care of our families, and live virtuous but self-centered lives, while giving to charity at the margins and helping out others on a periodic basis. Utilitarian philosophy, on the other hand, appears to suggest an extreme degree of self-sacrifice. Why should a mother tend to her baby when she could sell it and send the proceeds to save a greater number of babies in Haiti? Shouldn’t anyone with the training of a doctor be obliged to move to sub-Saharan Africa to save the maximum number of lives? What percentage of your income do you give to charity? Given the existence of extreme poverty, shouldn’t it be at least fifty percent? If you belong to the upper middle class, how about giving away eighty percent of your income? You don’t really need cable or satellite TV, or perhaps you should eat beans with freshly ground cumin instead of meat. The bank might let you borrow to give away even more than you are earning. How terrible is personal bankruptcy anyway, if you have saved seven lives in the meantime? Is eating that ice cream cone so important? Common sense morality implies it’s OK to enjoy that chocolate but utilitarianism suggests maybe not.
These concerns are certainly understandable. After all, a world in which no one could ever enjoy any personal indulgences and parents routinely sold their children to the highest bidder doesn’t exactly sound ideal. But this brings us back to the point from earlier about what consequentialism actually means: If a particular outcome would be less desirable than the alternative, then by definition, consequentialism wouldn’t prescribe it. To be sure, if sacrificing some of your comfort for others would bring them greater utility than it would cost you, then naturally it would be good for you to do so – but if it got to such an extreme point that the sacrifices you were making were detracting from your life (or undermining valuable social norms) more than they were helping others, then clearly you’d no longer be obligated to immiserate yourself in this way. As Alexander puts it, “Assigning nonzero value to other people doesn’t mean assigning zero value to yourself.” (Or as the popular saying goes, “You aren’t required to set yourself on fire just to keep someone else warm.”) Utilitarianism isn’t the same thing as complete self-abnegation; you still have to account for your own needs too.
Still, a lot of Cowen’s examples do seem like they’d still be utility-positive overall, even despite their demandingness. So how should we think about trying to strike the right balance here? Well, at the high end it’s not particularly difficult, so we can start there. If one person is a billionaire and has all their needs met, and another person is living on less than $1 per day and has to pick through landfills just to survive, then obviously it would be a major improvement in global utility for the first person to donate some of their resources to help the second person escape from poverty. You might question how a mere transfer of money could create an increase in utility; after all, wouldn’t $1000 provide the same amount of utility to whoever owned it, regardless of whether they were a billionaire or a poor person? But that’s not really how it works; money and utility aren’t the same thing, and they don’t map perfectly onto each other. According to the economic principle of diminishing marginal utility, the more you have of something – whether that be pairs of shoes or slices of pizza or dollar bills – the less utility each additional unit of it brings you. So if you gave an extremely poor person $10,000 (or took their last $10,000 away from them), it would make a massive difference in their quality of life – that is, it would represent a massive shift in their utility level – whereas if you gave a billionaire that same amount (or took it away from them), they’d scarcely notice any change in their quality of life at all – that is, their utility level would barely budge. The billionaire donating that $10,000, then, would be a clear utility-positive act. Like I said, this one is easy.
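The arithmetic behind diminishing marginal utility can be made concrete with a toy model. Logarithmic utility is just one illustrative (and assumed) functional form, and the dollar figures are hypothetical, but it captures the basic shape of the idea: the same $10,000 barely registers for a billionaire while transforming the life of someone with very little.

```python
import math

def utility(wealth):
    """Toy utility function: log wealth exhibits diminishing
    marginal utility (each extra dollar matters less as wealth grows).
    This functional form is an illustrative assumption, not a claim
    about how utility actually works."""
    return math.log(wealth)

GIFT = 10_000

# Utility change from gaining $10,000 at two very different wealth levels:
poor_gain = utility(10_000 + GIFT) - utility(10_000)
billionaire_gain = utility(1_000_000_000 + GIFT) - utility(1_000_000_000)

print(f"poor person's utility gain:  {poor_gain:.5f}")    # ~0.69 (wealth doubled)
print(f"billionaire's utility gain:  {billionaire_gain:.8f}")  # ~0.00001 (barely budges)
```

On this (assumed) model, the poor person’s gain is tens of thousands of times larger than the billionaire’s, which is just the “easy case” from the paragraph above expressed numerically.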
Where it gets trickier is when you start looking at examples where the donor’s quality of life is closer to the recipient’s. See, in addition to the whole diminishing marginal utility thing, there’s also an important factor at work here called loss aversion – a psychological phenomenon that causes people to ascribe up to twice as much negative utility to material losses as they ascribe positive utility to equivalent material gains. What this means is that, if two people start off at roughly the same level of utility, then (all else being equal) a monetary transfer from one to the other might very well hurt the donor twice as much as it helps the recipient – making the act a clear net negative overall. The upshot is that, when it comes to the question of how much we’re obligated to give to the less fortunate, the answer might very well be “a lot,” but it might not necessarily be “so much that you’d ultimately make yourself just as poor as they are” – because at some point before you reached that level, your loss aversion would start causing you so much disutility that it would outweigh whatever good you were doing. The extra harm you’d be bringing on yourself, by continuing to sacrifice your own well-being, would be greater than the benefit you’d be bringing to those you were trying to help – so at that point, the most utility-positive decision you could make would be to stop yourself short and preserve your own remaining utility.
(Of course, I’m assuming here that the utility you’d get from keeping a little bit of money for yourself would outweigh whatever emotional satisfaction you’d feel from spending your last dollars helping others. But the power of that satisfaction shouldn’t be underestimated; it might very well be that for many people, the gratification of sacrificing everything they have to help the less fortunate would be enough of a reward in itself to make it a positive-utility act overall. But more on that momentarily.)
As an illustration of how loss aversion and diminishing marginal utility can intersect and balance each other out, imagine a world in which wealth is denominated not in dollars, but in “wealth units.” Anyone who has ten of these wealth units (10WU) is rich enough to be able to afford whatever they want – whereas anyone who only has one of them (1WU) is barely scraping by. (And those without any WU at all are living wholly hand-to-mouth.) Under these conditions, someone with 12WU might not experience any perceptible utility reduction at all from donating 1WU to a poorer person, whereas the utility gained by the recipient would be immense – so such a donation would obviously be justified. On the other hand, someone with 5WU would be less insulated by the effects of diminishing marginal utility, so they’d experience considerably more negative utility from donating 1WU, such that it wouldn’t necessarily be an overall utility-positive action unless the recipient had started off with less than (say) 2WU themselves. And if the donor were starting off with only 2WU, then the pain they’d experience from losing 1WU might be so great (due to loss aversion) that no recipient, not even the poorest, would derive enough benefit from receiving 1WU for it to outweigh the donor’s pain – so they’d be justified in not making any donation at all. In short, we could imagine a situation that looked like the following, and it would be perfectly compatible with what we know about diminishing marginal utility and loss aversion:
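For readers who find it easier to see this kind of interplay in concrete numbers, here’s a minimal sketch of the “wealth units” world. The logarithmic utility curve and the 2× loss-aversion multiplier are my own stand-in assumptions chosen purely for illustration – the real shapes of these curves are an empirical question – but they’re enough to reproduce the pattern described above:

```python
import math

def utility(wu):
    # Diminishing marginal utility: each additional wealth unit adds
    # less utility than the one before. A logarithmic curve is used here
    # purely as an illustrative assumption, not an empirical claim.
    return math.log(1 + wu)

LOSS_AVERSION = 2.0  # assumption: losses hurt ~2x as much as equal gains help

def net_effect_of_donation(donor_wu, recipient_wu, amount=1):
    # Donor's pain is the utility they lose, amplified by loss aversion;
    # recipient's gain is the plain utility increase from receiving the amount.
    donor_pain = (utility(donor_wu) - utility(donor_wu - amount)) * LOSS_AVERSION
    recipient_gain = utility(recipient_wu + amount) - utility(recipient_wu)
    return recipient_gain - donor_pain

# A 12WU donor giving 1WU to a 1WU recipient: clearly utility-positive.
print(net_effect_of_donation(12, 1))
# A 2WU donor giving 1WU even to a 0WU recipient: loss aversion flips the sign.
print(net_effect_of_donation(2, 0))
```

Under these made-up parameters, the rich donor’s gift comes out net positive while the poor donor’s comes out net negative – exactly the pattern in the hypothetical above.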
Obviously, this is just a hypothetical scenario contrived purely for explanatory purposes; the utility levels in these charts are probably significantly different from what they’d be here in our own world. The point I’m trying to illustrate here is just that it’s theoretically possible to have a world in which optimal morality doesn’t necessarily require total self-impoverishment. Exactly how far that principle truly extends in the real world is debatable, of course – and my suspicion (as I was saying before) is that even if it didn’t demand total self-impoverishment, optimal morality would still demand much more of us than we’d probably be comfortable with. (It may even be the case that the threshold at which giving is no longer obligatory is only just barely above total self-impoverishment.) That being said, though, my personal intuitions and speculations aren’t particularly important here. In the end, what really matters isn’t whether I – or anyone – would approve of a particular moral framework from our current biased perspective, but whether we’d approve of it from the Rawlsian original position. That’s the question we must always come back to: If you were told that you were about to be reincarnated (without any of your current memories or characteristics) as a completely random person somewhere in the world – and you had no way of knowing in advance what your life situation would be, what personality traits you would have, or anything like that – then what kind of moral system would you want everyone to follow (bearing in mind that the vast majority of people in the world are significantly worse-off than the average American)? Would you want those who were in the upper part of the wealth distribution to be able to live their lives without having to worry about the rest of the world’s problems at all? Would you want them to be obligated to sacrifice every last spare cent to help those who were less fortunate? 
Or would you prefer something in between those two extremes – and if so, where exactly would you consider the sweet spot to be?
I can’t offer an answer here that defines the exact optimal measure of obligation for every possible situation. But what does seem clear to me is that, on a societal level, we should be doing a lot more for the less fortunate than we currently are. The way our popular culture glorifies gratuitous levels of personal wealth accumulation and conspicuous consumption is simply indefensible; and accordingly, we should be working to change the social norms surrounding these practices so that the popular culture starts regarding them not only as tacky and obnoxious, but as flat-out immoral. Peter Singer delivers this point forcefully:
Andrew Carnegie, the richest man of [his] era, was blunt in his moral judgments. “The man who dies rich,” he is often quoted as saying, “dies disgraced.”
We can adapt that judgment to the man or woman who wears a $30,000 watch or buys similar luxury goods, like a $12,000 handbag. Essentially, such a person is saying: “I am either extraordinarily ignorant, or just plain selfish. If I were not ignorant, I would know that children are dying from diarrhea or malaria, because they lack safe drinking water or mosquito nets, and obviously what I have spent on this watch or handbag would have been enough to help several of them survive; but I care so little about them that I would rather spend my money on something that I wear for ostentation alone.”
Of course, we all have our little indulgences. I am not arguing that every luxury is wrong. But to mock someone for having a sensible watch at a modest price puts pressure on others to join the quest for ever-greater extravagance. That pressure should be turned in the opposite direction, and we should celebrate those […] with modest tastes and higher priorities than conspicuous consumption.
I couldn’t agree more with this sentiment. We ought to be celebrating the people who live materially modest lives so as to better help others, not those who flaunt their disposable income by blowing it on meaningless toys and status symbols. And mind you, that doesn’t necessarily mean that we should demand that nobody ever be allowed to keep any of their hard-earned wealth for themselves, or that we should mandate absolute universal redistribution of resources to such an extent that everyone’s level of wealth would be made exactly equal. After all, people still need some personal incentive to work hard and contribute to the world – and as nice as it would be if pure altruism were sufficient motivation for everyone to do that, history has shown that it typically just isn’t enough. What usually leads people to do the most good for the world (perhaps ironically but not at all surprisingly) is being allowed to keep at least some of the fruits of their own labor; so if permitting people some degree of self-indulgence is what produces the highest-utility results for everyone overall, then that’s the norm we should adopt. Either way, though, simply ensuring that everyone at least has their most basic needs met (food, shelter, medical care, etc.) is a standard that anyone in the original position would undoubtedly judge to be the bare minimum baseline for what we ought to consider morally obligatory as a society – so that’s a goal that we should unambiguously be striving for.
Regardless, for all this talk about how much an isolated individual might be obligated to sacrifice for what’s right, if we were actually able to implement a utilitarian system of ethics on a truly society-wide scale, then each individual almost certainly wouldn’t need to sacrifice that much at all. Naturally, if you were only choosing how to make the most moral use of your resources as a lone individual, then you might decide to forgo a movie or concert in order to donate the ticket money to charity instead. But if all of society chose to act collectively on the same charitable issue, it would almost certainly still have enough left over to not have to forgo movies or concerts at all. As Alexander writes:
[Q:] Might it not end up that art and music and nature just aren’t very efficient at raising utility, and would have to be thrown out so we could redistribute those resources to feeding the hungry or something?
If you were a perfect utilitarian, then yes, [as an individual], you would stop funding symphonies in order to have more money to feed the hungry. But […] utilitarianism has nothing specifically against symphonies – in fact, symphonies probably make a lot of people happy and make the world a better place. People just bring that up as a hot-button issue in order to sound scary. There are a thousand things you might want to consider devoting to feeding the hungry before you start worrying about symphonies. The money spent on plasma TVs, alcohol, and stealth bombers would all be up there.
I think if we ever got a world utilitarian enough that we genuinely had to worry about losing symphonies, we would have a world utilitarian enough that we wouldn’t. By which I mean that if every government and private individual in the world who might fund a symphony was suddenly a perfect utilitarian dedicated to solving the world hunger issue among other things, their efforts in other spheres would be able to solve the world hunger issue long before any symphonies had to be touched.
And just to drive this point home, Beth Barnes (echoing a separate post of Alexander’s) describes the ways in which the world might look different if just the richest 10% of us donated 10% of our income to worthy causes (starting at the 2:46 mark):
(Naturally, there’s plenty of room to debate exactly which of these causes would do the most good and should therefore take the most priority. As I discussed a bit in my last post, I’m personally inclined to think that putting money into certain areas of scientific research – particularly those pertaining to things like AI, nanotechnology, brain-machine interfacing, and radical life extension – could potentially be the thing that produces more positive utility than anything else in the long term, despite how speculative these areas are. But that’s a topic for a whole other post.)
Of course, the unfortunate reality is that not everyone in the world is a perfect utilitarian – so problems like mass hunger and disease remain unsolved. That means that if you want to do the most moral thing by trying to help with these problems as much as possible, you’ll have to shoulder a lot more of the burden yourself, and the amount of material wealth you’ll have to give up really will be significant (though maybe not totally impoverishing). Having to face such a morally demanding reality can be dispiriting; as Jia Tolentino writes, it can often feel like “the choice of this era is to be destroyed or to morally compromise ourselves in order to be functional – to be wrecked, or to be functional for reasons that contribute to the wreck.” For a lot of people, it can feel like the expectation is simply more than they can live up to. And if we’re talking about behaving perfectly morally at all times, well, that’s an expectation that none of us can live up to.
So what does it mean if we don’t behave perfectly morally? Does that make us bad people? If we don’t meet our moral obligations to the fullest possible extent, do we forfeit the right to call ourselves good? I think this is actually the question that people are most concerned with when it comes to morality – not necessarily “Am I doing as much good as I can?” but simply “Am I a good person?” And it’s for this reason that a lot of people reject utilitarianism out of hand – because they assume that if they’re not doing everything that utilitarianism says is good, then that means that, by its standards, they must be bad people. They reflexively reject this judgment, naturally – and as a result, they end up seeking reassurance from some alternative moral system that tells them they aren’t doing anything wrong at all if they ignore the less fortunate.
But is their understanding of utilitarianism actually true? Is anyone who falls short of perfection, according to utilitarian logic, necessarily a bad person? Of course not. As I’ve been saying this whole time, goodness and badness are a continuum, not two distinct all-or-nothing categories. Failing to meet 100% of your moral obligations doesn’t automatically make you a bad person; it simply makes you an imperfect person – and no one is morally perfect. We’re all somewhere in the gray area between “perfectly good” and “perfectly bad” – and while some of us do better than others, that doesn’t mean that anyone who does something bad is automatically evil; it just means that they’ve moved somewhat down the continuum of moral goodness. Here’s Alexander again:
[Q]: It seems impossible to ever be a good person. Not only do I have to avoid harming others, but I also have to do everything in my power to help others. Doesn’t that mean I’m immoral unless I donate 100% of my money (maybe minus living expenses) to charity?
In utilitarianism, calling people “moral” or “immoral” borders on a category error. Utilitarianism is only formally able to say that certain actions are more moral than other actions. If you want to expand that and say that people who do more moral actions are more moral people, that seems reasonable, but it’s not a formal implication of utilitarian theory.
Utilitarianism can tell you that you would be acting morally if you donated [practically] 100% of your money to charity, but you already knew that. I mean, Jesus said the same thing two thousand years ago (Matthew 19:21 – “If you want to be perfect, go and sell all your possessions and give the money to the poor”).
Most people don’t want to be perfect, and so they don’t sell all their possessions and give the money to the poor. You’ll have to live with the knowledge of being imperfect, but Jeremy Bentham’s not going to climb through your window at night and kill you in your sleep or anything. And since no one else is perfect, you’ll have a lot of company.
The important thing here is just to note that while doing something immoral doesn’t automatically make you a bad person, you shouldn’t use that as an excuse to neglect your moral obligations; any immoral action you take is still in fact bad, and does still in fact make you less good than you could otherwise be. Ignoring the poor and turning away from those who need help isn’t something that you can do and still call yourself morally perfect – it’s still a moral failing – and that’s why you should try your best to avoid it, even if you don’t succeed 100% of the time. In other words, you should be able to recognize that being morally imperfect doesn’t make you bad, while also recognizing which of your actions are the ones making you morally imperfect, and acknowledging them as bad. That’s how you can be as good as you can be – which, in the end, is what we should all be striving for.
That last line, by the way, might sound like just a trite platitude, but as Zephyr Teachout and Toby Buckle discuss near the end of this podcast conversation (starting at around the 1:00:25 mark), it actually gets at an important point. It seems like nowadays there’s a kind of tacit consensus among academics, political commentators, and other intellectuals that the thing lying at the heart of all of human behavior is a kind of narrow self-interest – that the forces driving people can be boiled down to things like incentives and self-maximization and so on. (Economists call it the Homo economicus model of human behavior.) And to be sure, this “everyone for themselves” way of seeing things can often be useful as a descriptive tool (I’ve been using it a lot here myself); but it has become so popular in recent years that it seems to have almost imperceptibly turned into a prescriptive worldview – an unstated presumption that people should act only according to narrow calculations of what’s best for themselves alone – as if all of life were one big Prisoner’s Dilemma. The implicit assumption seems to be that if you act unselfishly, that makes you irrational. And I think that this way of viewing the world can lead to subtly (and sometimes not-so-subtly) toxic results, because it has the effect of crowding out certain deep ideals like compassion and selflessness and courage and integrity, and making it so that instead of being revered as honorable values, they now come across as sounding old-fashioned, almost quaint – like a relic of our grandparents’ time. It creates a kind of invisible underlying social norm which says that, although such virtues might still be admirable, they aren’t expected of people anymore. 
Everyone is expected to see the world through a lens of self-interest rather than virtue – to be focusing first and foremost on their own personal fulfillment, their own accomplishments, their own self-betterment, and so on, even at the expense of other important considerations. And I think that this can be a genuinely harmful norm – not only because it causes people to neglect others’ needs, but because so much of the time it frankly doesn’t even work on its own terms. After all, when you spend all your time obsessing over whether you’re successfully achieving happiness and fulfillment for yourself, there’s no better way of ensuring that you’ll end up feeling anxious and unfulfilled. The best way to actually feel happy and fulfilled, in most cases, is to become so engrossed in something you find meaningful and valuable that you forget about yourself altogether – and one of the best ways to do that (as mentioned earlier) is to help others, and to focus on their needs rather than just your own.
(As a case in point, think about people who are part of a military unit or some similar organization, and how clear and strong their sense of purpose is when their entire focus is on serving that group and being part of something bigger than themselves – and then compare that to how acutely they can feel a sudden loss of that sense of purpose after they leave the group and are no longer part of it. Veterans often say that they’ve never felt such a well-defined sense of identity and necessity as they did when they were serving; and leaving active duty causes many of them to feel utterly lost and without passion. And the same is true of people who have left other organizations like religious groups or cults, where all their energy was poured into serving the cause, and it gave them a stronger sense of identity and purpose than anything they’ve experienced before or since. It might not necessarily be a good thing that they were ever part of that specific kind of organization, of course – I’m obviously not recommending going out and joining a cult here – but the point is just that being able to serve others and feel like you’re part of something greater than yourself clearly seems to fulfill a very real and fundamental human need. To quote a line from the old movie I Take This Woman, it’s very hard to feel useful and unhappy at the same time.)
I think that a large part (though certainly not all) of the dissatisfaction that has become so widespread and oppressive these days is the result of a popular culture that causes us to lose sight of that ideal. We spend so much time and energy trying to optimize our own lives that we forget about everything else – and then, because nobody can ever really attain a truly perfectly optimal life, we feel like failures when we’re unable to achieve that impossible level of perfection. Maybe we try to act selflessly where we can, but even when we do that, we still tend to frame it in self-oriented terms; we fixate on the question of “Am I a good person?” rather than “How can I help?” – and that inevitably leads to the kind of self-judgment that focuses only on the failures and makes us feel even worse about our inability to be perfect. It seems to me, then, that when we find ourselves becoming overly preoccupied with such thoughts, a better frame of mind to try and get into is one that’s more oriented toward others – which asks not “How good am I?” but simply “I see that others are suffering and need help; how can I be there for them?” In other words, instead of thinking of helping others as something we have to do in order to improve our own status – which makes it feel like a chore – we should try to get ourselves into the frame of mind where we want to help the less fortunate, simply because we care about them innately, in the same way that a loving parent cares about their children and wants them to be happy for their own sake.
Just think about the last time you saw someone being unfairly bullied or abused or neglected, for instance, and felt an overpowering urge to do whatever you could to help them – not because you thought it would reflect better on you as a person, but simply because you had an uncontrolled gut reaction, and your heart went out to them, and you wanted them to be in less pain and distress. Imagine your reaction if you saw, say, a child crying because no one came to their birthday party, or a terrified fawn trapped under a fallen tree limb. Would you see that situation as something that you’d grudgingly feel obligated to address just in order to preserve your status as a good person? Or would you actually want to help? I think for most people, it would be the latter. (To take another example, think about all the people who see the aftermath of a natural disaster on TV and feel compelled to donate to the relief effort.) That feeling of actively wanting to help – of seeing the chance to help the less fortunate not just as an obligation, but as an opportunity to make things better for someone who’s suffering – is one that’s worth cultivating – not only because of how much it can help those in need, but also because of how much more meaningful and gratifying it can make our own lives when we lean into it. As William MacAskill writes:
Imagine saving a single person’s life: you pass a burning building, kick the door down, rush through the smoke and flames, and drag a young child to safety. If you did that, it would stay with you for the rest of your life. [In fact, I’m betting you’d consider it one of the top two or three greatest and most defining moments of your life.] If you saved several people’s lives – running into a burning building one week, rescuing someone from drowning the next week, and diving in front of a bullet the week after – you’d think your life was really special. You’d be in the news. You’d be a hero.
But we can do far more than that.
According to the most rigorous estimates, the cost to save a life in the developing world [defined as extending someone’s life expectancy by 30 healthy years] is about $3,400 (or $100 for one [healthy year of life]). This is a small enough amount that most of us in affluent countries could donate that amount every year while maintaining about the same quality of life. Rather than just saving one life, we could save a life every working year of our lives. Donating to charity is not nearly as glamorous as kicking down the door of a burning building, but the benefits are just as great. Through the simple act of donating to the most effective charities, we have the power to save dozens of lives. That’s pretty amazing.
Now, admittedly, just typing your credit card information into a charity website and clicking “donate” isn’t exactly a super-fulfilling experience in itself – at least not compared to rescuing someone from a burning building – so for a lot of people, it may be that making such donations just isn’t intrinsically rewarding enough for them to feel compelled to keep doing it. In such cases, the best way for them to do the most good may be to find some alternative way of helping others which, while maybe less impactful in absolute terms, feels more gratifying (because it’s more personal or more hands-on or what have you) and is therefore easier to keep up. But then again, knowing that you’re helping others not because it gratifies you personally, but simply because it’s the right thing to do, can bring an almost defiant kind of satisfaction all its own, if you’re the kind of person who’s able to take pride in that kind of thing. So if you are such a person, then the ideal thing to do would be to devote your resources not just to whichever cause gives you the most “warm fuzzy feelings” of immediate personal satisfaction, or to whichever one makes you feel like a particularly good person, but to whichever one actually does the most good in the world. Figuring out the absolute best way to do the most good can, of course, be a bit tricky; it might very well turn out, for instance, that the best way to spend a few thousand dollars helping a particular cause isn’t actually to donate that money to the cause directly, but instead to spend it hiring a lobbyist to bring the issue to the attention of Congress (since Congress controls trillions of dollars), or hiring a famous YouTuber to bring the issue to the attention of the voting public, or something like that. There are always a lot of possibilities, and the answer won’t always be immediately clear.
But one thing that is clear is that the more money you have to spend, the more it’s worth taking the time to figure out which of those possibilities will have the highest expected utility value, and then to act accordingly.
(And even if you don’t have much to give, the fact that you can’t save the entire world all by yourself shouldn’t stop you from wanting to help, or from feeling good about helping – because this whole human endeavor is a group effort, and the job of each of us is just to play our own partial role in that effort, not to feel disappointed that we don’t get to be the sole hero of the story who fixes everything single-handedly. John Green’s video here provides one of the best insights I’ve ever heard on this subject (despite ostensibly being about something else entirely) and should in my opinion be required viewing for everyone.)
Like I said, the most important takeaway here is that we should try to focus not so much on how our moral achievements reflect on us as individuals, but instead on how those actions are benefiting others. Having said that, though, I understand that our tendency to wonder about our own status isn’t always so easy to overcome. Just in terms of our own psychological well-being, it’s important for us to feel like we can answer the “Am I a good person?” question positively, and that this answer actually be true – or at least attainable. (I guess I should technically be saying “moral person” instead of “good person” here, given how I’ve been distinguishing between goodness and morality, but you get what I mean.) How can we do so, then, if we’re always falling short of perfection? I think Alexander offers a good answer in his post here. I recommend reading the whole thing, but his basic idea in a nutshell is that we can consider a “good person” simply to be anyone whose behavior is above average. More specifically, we can say that a good person is anyone whose choices lead to a better universe than what the average person’s choices would have led to if they’d been put in those same circumstances (and had the same resources to work with and so on). So if you had the chance to heroically sacrifice yourself to save two other people, for instance, then although you would be morally obligated to do so, you wouldn’t necessarily be a bad person (with regard to that situation) if you failed to meet your obligation, assuming that most other people would have similarly failed. 
By that same token, if you made the tough choice to give up something of significant personal value in order to help another person, you might be considered a better person than a billionaire philanthropist who wouldn’t have been willing to do the same, even if that billionaire was actually helping more people and doing more good in absolute terms (without ever having to make any real sacrifice in their quality of life). And likewise for all the other moral dilemmas you might encounter: The only way you’d be considered a bad person overall would be if you lived your whole life in a way that was altogether less moral than how the average person would have lived it if they’d been in your shoes – but if you actually exceeded that standard, then you’d be considered a good person even if you weren’t accomplishing major world-changing moral feats in absolute terms.
This standard makes perfect sense for a moral system that regards goodness and badness as existing along a continuum; it seems entirely reasonable and appropriate to regard anything above the midpoint of that continuum as being good overall, and anything below it as bad overall, while still recognizing that it’s possible to have degrees of goodness and badness ranging from mild to extreme at either end of the scale. (Although I should clarify an important point here: When I talk about someone being a good person or a bad person morally, that’s not the same as saying that their existence itself is good or bad. Even if someone happened to be a bad person morally, it’s still entirely possible that their impact on the world might be positive overall (imagine someone who was cruel and selfish but came up with an important invention, for instance); and conversely, even if someone had a net negative impact on the world, that wouldn’t automatically mean that they were a bad person in terms of their moral behavior (imagine someone who involuntarily spread a deadly disease everywhere they went, for instance, despite only ever wanting the best for others). Neither of these factors in itself determines someone’s “innate value as a person” – because this framework simply doesn’t conceptualize things in that way. It only conceptualizes things in terms of whether someone’s behavior is good or bad, and whether their effect on the world is good or bad – which I consider to be a big point in its favor.)
Aside from the philosophical appeal, though, this “above average” standard also has more practical benefits. For one thing, it’s a lot more attainable than a standard of absolute perfection, so it’s more likely that people will actually be willing to try and meet it – as opposed to just throwing up their hands and giving up because they know they’ll never be able to even come close. In the end, then, adopting it can actually end up producing more global utility than a higher standard might produce. Alexander recounts his own experience with this counterintuitive effect:
When I was younger, I determined that I had an ethical obligation to donate more money to charity, and that I was a bad person for giving as little as I did. But I also knew that if I donated more, I would be a bad person for not donating even more than that. Given that there was no solution to my infinite moral obligation, I just donated the same small amount.
Then I met a committed group of people who had all agreed to donate 10%. They all agreed that if you donated that amount you were doing good work and should feel proud of yourself. And if you donated less than that, then they would question your choice and encourage you to donate more. I immediately pledged to donate 10%, which was much more than I had been doing until then.
[If you consider moral peace of mind to be like a product that you can sell, then] selling the “you can feel good about the amount you’re donating to charity” product for 10% produces higher profits for the charity industry than selling it for 100%, at least if many people are like me.
What’s more, even though a simple “above average” standard is relatively easy to meet, the fact that more people will actually be willing to try to meet it means that it can have a self-reinforcing ratchet effect: the more morally people act, the higher the average rises, and the more everyone’s behavior has to keep improving just to stay above it. As Alexander writes:
This [“above average” standard] is a very low bar. I think you might beat the average person on animal rights activism just by not stomping on anthills. The yoke here is really mild.
But if you believe in something like universalizability or the categorical imperative, “act in such a way that you are morally better than average” is a really interesting maxim! If everyone is just trying to be in the moral upper half of the population, the population average morality goes up. And up. And up. There’s no equilibrium other than universal sainthood.
This sounds silly, but I think it might have been going on over the past few hundred years in areas like racism and sexism. The anti-racism crusaders of yesteryear were, by our own standards, horrendously racist. But they were the good guys, fighting people even more racist than they were, and they won. Iterate that process over ten or so generations, and you reach the point where you’ve got to run your Halloween costume past your Chief Diversity Officer.
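As a brief aside, the dynamic Alexander describes can be sketched as a toy simulation. To be clear, the scoring scale, update rule, and parameters below are illustrative assumptions of my own, not anything from his post:

```python
import random

# Toy model of the "above average" ratchet: each person has a "morality
# score" between 0 and 1, and anyone below the current population
# average nudges their behavior to land just above it.
def simulate(population_size=1000, generations=50, seed=0):
    rng = random.Random(seed)
    # Start everyone at a random score between 0 and 1.
    scores = [rng.uniform(0, 1) for _ in range(population_size)]
    averages = []
    for _ in range(generations):
        avg = sum(scores) / len(scores)
        averages.append(avg)
        # Those below average jump slightly past it; the 1.0 ceiling
        # ("universal sainthood") is the only fixed point of the process.
        scores = [min(1.0, avg + rng.uniform(0.0, 0.1)) if s < avg else s
                  for s in scores]
    return averages

avgs = simulate()
print(f"average morality: start {avgs[0]:.2f}, end {avgs[-1]:.2f}")
```

Running this, the population average climbs monotonically toward the 1.0 ceiling, because anyone below the average keeps leapfrogging it – a crude picture of why the process has no equilibrium short of universal sainthood.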
I think he hits the nail on the head here; it seems to me that what he’s describing is the fundamental process by which moral progress happens within societies. The more that people try to distinguish themselves as good by acting more morally than average, the more they pull up the standard of what actually constitutes above-average moral behavior. And the happy result is that, in simply trying to be good people ourselves, we all end up making each other better – quite literally a virtuous cycle. Of course, when it’s implemented in the wrong context, this kind of self-reinforcing process can easily have the opposite effect and become a vicious cycle; in subcultures with flawed definitions of morality (like those dominated by religious fundamentalism), it can promote ever-more extreme forms of that flawed morality, which can obviously lead to terrible results. But luckily, such subcultures seem to be a shrinking minority. As for humanity as a whole, it does seem like the arc of history has continually bent toward ever-more moral behavior – not only in terms of people treating each other more humanely, but in terms of people expanding their definitions of whom they consider to be entitled to moral treatment in the first place. And it’s this last point that I want to turn to now.
Throughout our history, the most noticeable way in which we’ve progressed morally has been by expanding our in-groups to include more and more outsiders. As Stefan Klein and Stephen Cave write:
The possibility of moral progress […] is of course a vexed idea: […] what might seem like progress to one will seem like decadent descent to another. But we believe that it is possible to give it a definite content in a way that helps to make sense of both past and future ethical evolution.
That content is based on a simple and ancient idea: that morality means giving common concerns or the wellbeing of others as much weight as one’s own self-interest. Moral behaviour in this sense can be found in any society, because it is the glue that sticks individuals together and so makes society possible. Indeed, the basis of this morality – altruism – is innate to humans, as many recent studies have shown. Without ever having been told to do so, even toddlers are willing to help and to share with others.
The tricky question is who exactly counts as the ‘other’ whose interests we should set above our own? Every society has had its own answers, as does each one of us: we expect you would go to much greater lengths to do good for your child than for your neighbour, and it would be easier to lie to your boss than to your spouse. And some beings, whether animal, vegetable or microbial, are outside the realm of consideration altogether. In moral terms, some always matter more than others.
This understanding offers us a fairly straightforward idea of moral progress: it means including ever more people (or beings) in the group of those whose interests are to be respected. This too is an ancient insight: Hierocles, a Stoic philosopher of the second century, describes us as being surrounded by a series of concentric circles. The innermost circle of concern surrounds our own self; the next comprises the immediate family; then follow more remote family; then, in turn, neighbours, fellow city-dwellers, countrymen and, finally, the human race as a whole. Hierocles described moral progress as ‘drawing the circles somehow toward the centre’, or moving members of outer circles to the inner ones.
We can see the moral progress of recent centuries in these terms: we have witnessed an extension of the circle of respect and concern to various groups such as women, Jews, non-whites or homosexuals. And in these terms, we have come a long way. But it is equally clear that there is room for improvement.
This idea, also commonly associated with Singer’s book The Expanding Circle, might not necessarily be an absolute one, since it might not always be optimal for everyone to care exactly as much about those farthest away as they do about those who are right in front of them. In an imperfect world like ours, it’s not too hard to imagine how it might be better for people to have more love and affection for their own children than they do for complete strangers (if only for reasons of efficiency; it’s a lot easier to receive care and affection from someone right next to you than from someone a thousand miles away). That being said, though, I tend to think of this more in terms of it being good that people are giving their children extra love, not in terms of it being good that they’re giving strangers less love. If we could someday augment our brains so that everyone literally cared as much about everyone else as they did about themselves, then I could easily imagine the idea of family-exclusive love becoming obsolete, as it was subsumed by this more universal love. (I guess it would have to, actually, since it’s not possible to care about someone more than 100%.)
Caveats aside, I think the analogy of the expanding moral circle is a valuable one in general; and it seems to me that one of the best ways of imagining how we might become more moral in the future is by imagining how we (or our successors) might expand the circle even further than we already have. So what are some potential ways in which we could further expand our circle? Klein and Cave suggest a few possibilities – some of which I’ve already mentioned, but all of which merit further discussion:
1. Rights for future generations. Currently, only people alive now can claim rights. But just as we have extended our circle of moral concern among the living, so it can be extended in time. The problem is clear: we often make decisions that will have impacts on people far into the future – such as producing nuclear waste that will remain toxic for millions of years – yet those future people are not here to stand up for themselves. Neither defining nor granting these rights will be easy. But there are precedents on which we can draw, such as the ways we protect the rights of small children or animals, who also cannot speak for themselves.
So our successors will have to be imaginative in creating a framework robust enough to defend the unborn in the face of the interests of those alive today, with which they often conflict. For we should be under no illusions: to take the rights of future generations seriously would involve massive restrictions on our freedom of action. Currently, we despoil the earth and seas with impunity to enjoy a luxurious lifestyle (by historical standards). In 100 years, this will be seen as wickedness comparable to colonial powers despoiling their colonies. Though our successors will be appalled by our consumerism, we will not find it easy to adjust to more modest ways.
2. Rights for other conscious beings. This too is a plausible extension of the circle of moral concern, and one that is already underway. It is no longer in doubt that non‑human animals feel pain and indeed many more complex emotions too. They therefore clearly have interests, such as to run free, or pursue social interactions appropriate for their species. To take account of the interests only of humans and not of other animals is therefore increasingly regarded as speciesism – an unjustified discrimination akin to racism or sexism.
The situation for other species is apocalyptic: humans have never raised so many animals for slaughter – five times as many today as in 1950 – while our impact on the environment is causing other species to become extinct a thousand times faster than normal. There will be many difficult decisions ahead as we try to balance human interests against those of other creatures, or the interests of individual animals against the species or the ecosystem. But our descendants will not excuse us for failing to make these decisions just because they are difficult. More likely, they will curse us for killing all the rhinos, and find our consumption of factory‑farmed sausages as morally repulsive as we now find cannibalism.
As soon as our computers become conscious – and they will – then this extension of concern will apply to them, too.
3. Opening the floodgates. The widening of the circle of moral concern means that employers in the US, for example, can no longer refuse someone a job because he is black or white, Jewish or atheist, disabled or dyslexic – but they can if the applicant is not American. The same applies in most other nations: states can withhold rights and services, and employers can (or even must) withhold jobs just because of the passport someone carries. In 100 years, people might be impressed at today’s levels of welfare and prosperity in the industrialised world, but appalled that access to them depends on the lottery of whether you were born in London or Lagos.
In a world rapidly growing together, this is bound to change. Of course, there will be a great deal of resistance, as there is already to immigration in many wealthy countries. People do not give up their privileges lightly. And it will have its price: strong welfare systems depend on a sense of moral community that could easily be threatened by more migration.
This is closely related to the point that everyone should have the same rights to healthcare, welfare, etc, not just regardless of where they come from, but also regardless of where they are. In other words, we should be doing everything we can to alleviate suffering everywhere. This poses further challenges: influencing conditions in other countries is not so easy as within one’s own borders. Effective action will require nations to give up more of their sovereignty to supranational unions. This too will face fierce resistance. But eventually our descendants will regard themselves as global citizens – and will be appalled that we let 19,000 children every day die from preventable, poverty-related causes.
4. Healing criminals. At the moment, we lock up extraordinarily large numbers of people. In the US alone, two million humans are in prison, not only ruining their lives, but also making their dependents and communities suffer. But in 100 years, no one will believe we have an absolutely free will and therefore that anyone chooses to be a criminal. Indeed, there is evidence that we lock up those who are least responsible for their decisions – those with the least capacity for self-control, those who suffer from addictions, or who are mentally ill. In the UK, for example, more than 70 per cent of those in prison have two or more mental-health disorders; in the US, more than three times as many people with serious mental illnesses are in prisons as in hospitals.
We will not find it easy to decide whom to treat, how radically and when; nor to extend understanding and sympathy to those who have committed the worst of crimes. But our great-grandchildren will be appalled at how we locked up millions of people when we should instead have been helping them.
There are many more changes we could imagine. We have barely touched on the question of inequality, for example. Or perhaps our descendants will be appalled at the idea that the development of life-saving medicines is largely left to private industry. Or that flesh-and-blood humans rather than machines should fight wars, or that liberal democracies should export arms. Or perhaps they will look back on the loneliness of life and death for many in the industrialised world with righteous horror.
For many who live through them, these changes will be extremely uncomfortable. But, of course, they won’t be troubling for those who grow up with them, any more than it is troubling for us to see a black President of the US. What is at first experienced as a concession – spending time recycling rubbish, for example – can quickly seem normal, even necessary. Asking ourselves what we might be condemned for in 100 years is a way of smoothing that transition; of projecting ourselves into the shoes of our great-grandchildren, for whom these new conventions will already be unremarkable.
We can also turn our question around and ask, what will our great-grandchildren admire us for? When we look back, we admire those who courageously challenged the norms of their day, such people as Gandhi or the Suffragettes, Martin Luther King or Nelson Mandela; people who widened the circle of moral concern. We have the chance to do that, too. And if we manage, then perhaps our great-grandchildren will forgive us our sins.
Like I said, each of these points deserves an entire post of its own (and I hope to write at least one for each of them at some point down the line). For the record, I’m not quite as convinced as Klein and Cave are that extending our moral circles would necessarily require such severe sacrifice on everybody’s part – I think there are ways of accomplishing these goals that would only have slight negative effects on society’s wealthiest people, and would leave everyone else considerably better off – but that’s a whole other discussion.
In the meantime, I think their list is valuable because it forces us to think not only about whom we consider to be part of our circle of moral consideration, but also about how and why we make that judgment. It seems to me that one of the most important takeaways from their list is how it illustrates the fundamental immorality of allowing people (and other sentient beings) to suffer worse outcomes due to factors that are entirely beyond their control. And that includes not only things like place of birth and mental health status, as Klein and Cave mention, but even things like upbringing and innate personality traits.
This last point is a challenging one, because it contradicts some of the principles that make up the very foundation of our modern moral culture. After all, we’ve built our whole society around ideas like meritocracy and “just deserts” and a general attitude that people should be rewarded in life based on things like their level of talent and intelligence and how hard they’re willing to work. But the thing is, traits like talent and intelligence and work ethic can only ever differ from person to person as a result of differences in their genes, or their upbringing, or various other outside influences – and none of those factors are things that people directly control themselves or can claim any personal credit for. If someone (say) happens to be born with a propensity to work hard and apply themselves, then that trait of industriousness is just a result of lucky genetics, not something they earned through their own merit. And likewise, if they got that trait from some other source outside of their own genes, like their parents or peers or mentors, then they can’t claim personal credit for that result either; the fact that they were fortunate enough to have encountered those positive influences, while others might not have been so fortunate, was just a lucky break for them. We put so much emphasis on distinguishing between, on the one hand, people who “don’t deserve” positive outcomes because they merely lucked into them, and on the other hand, people who “do deserve” positive outcomes because of their personal brilliance and diligence and grit and so on; but ultimately, wasn’t the fact that the latter group ended up with those characteristics also the result of pure arbitrary luck? How, then, can we say that anyone truly “deserves” anything?
I’ve been referring to Rawls a lot in this post. Well, as it happens, this idea is yet another major point of emphasis for him. As Thomas Nagel notes, “One point Rawls makes repeatedly is that the natural and social contingencies that influence welfare – talent, early environment, class background – are not themselves deserved. So differences in benefit that derive from them are morally arbitrary.” Sandel elaborates even further:
Rawls presents this argument by comparing several rival theories of justice, beginning with feudal aristocracy. These days, no one defends the justice of feudal aristocracies or caste systems. These systems are unfair, Rawls observes, because they distribute income, wealth, opportunity, and power according to the accident of birth. If you are born into nobility, you have rights and powers denied those born into serfdom. But the circumstances of your birth are no doing of yours. So it’s unjust to make your life prospects depend on this arbitrary fact.
Market societies remedy this arbitrariness, at least to some degree. They open careers to those with the requisite talents and provide equality before the law. Citizens are assured equal basic liberties, and the distribution of income and wealth is determined by the free market. This system – a free market with formal equality of opportunity – corresponds to the libertarian theory of justice. It represents an improvement over feudal and caste societies, since it rejects fixed hierarchies of birth. Legally, it allows everyone to strive and to compete. In practice, however, opportunities may be far from equal.
Those who have supportive families and a good education have obvious advantages over those who do not. Allowing everyone to enter the race is a good thing. But if the runners start from different starting points, the race is hardly fair. That is why, Rawls argues, the distribution of income and wealth that results from a free market with formal equality of opportunity cannot be considered just. The most obvious injustice of the libertarian system “is that it permits distributive shares to be improperly influenced by these factors so arbitrary from a moral point of view.”
One way of remedying this unfairness is to correct for social and economic disadvantage. A fair meritocracy attempts to do so by going beyond merely formal equality of opportunity. It removes obstacles to achievement by providing equal educational opportunities, so that those from poor families can compete on an equal basis with those from more privileged backgrounds. It institutes Head Start programs, childhood nutrition and health care programs, education and job training programs – whatever is needed to bring everyone, regardless of class or family background, to the same starting point. According to the meritocratic conception, the distribution of income and wealth that results from a free market is just, but only if everyone has the same opportunity to develop his or her talents. Only if everyone begins at the same starting line can it be said that the winners of the race deserve their rewards.
Rawls believes that the meritocratic conception corrects for certain morally arbitrary advantages, but still falls short of justice. For, even if you manage to bring everyone up to the same starting point, it is more or less predictable who will win the race – the fastest runners. But being a fast runner is not wholly my own doing. It is morally contingent in the same way that coming from an affluent family is contingent. “Even if it works to perfection in eliminating the influence of social contingencies,” Rawls writes, the meritocratic system “still permits the distribution of wealth and income to be determined by the natural distribution of abilities and talents.”
If Rawls is right, even a free market operating in a society with equal educational opportunities does not produce a just distribution of income and wealth. The reason: “Distributive shares are decided by the outcome of the natural lottery; and this outcome is arbitrary from a moral perspective. There is no more reason to permit the distribution of income and wealth to be settled by the distribution of natural assets than by historical and social fortune.”
Rawls concludes that the meritocratic conception of justice is flawed for the same reason (though to a lesser degree) as the libertarian conception; both base distributive shares on factors that are morally arbitrary. “Once we are troubled by the influence of either social contingencies or natural chance on the determination of the distributive shares, we are bound, on reflection, to be bothered by the influence of the other. From a moral standpoint the two seem equally arbitrary.”
Once we notice the moral arbitrariness that taints both the libertarian and meritocratic theories of justice, Rawls argues, we can’t be satisfied short of a more egalitarian conception.
[Of course, there’s a] challenging objection to Rawls’s theory of justice: What about effort? Rawls rejects the meritocratic theory of justice on the grounds that people’s natural talents are not their own doing. But what about the hard work people devote to cultivating their talents? Bill Gates worked long and hard to develop Microsoft. Michael Jordan put in endless hours honing his basketball skills. Notwithstanding their talents and gifts, don’t they deserve the rewards their efforts bring?
Rawls replies that even effort may be the product of a favorable upbringing. “Even the willingness to make an effort, to try, and so to be deserving in the ordinary sense is itself dependent upon happy family and social circumstances.” Like other factors in our success, effort is influenced by contingencies for which we can claim no credit. “It seems clear that the effort a person is willing to make is influenced by his natural abilities and skills and the alternatives open to him. The better endowed are more likely, other things equal, to strive conscientiously…”
When my students encounter Rawls’s argument about effort, many strenuously object. They argue that their achievements, including their admission to Harvard, reflect their own hard work, not morally arbitrary factors beyond their control. Many view with suspicion any theory of justice that suggests we don’t morally deserve the rewards our efforts bring.
After we debate Rawls’s claim about effort, I conduct an unscientific survey. I point out that psychologists say that birth order has an influence on effort and striving – such as the effort the students associate with getting into Harvard. The first-born reportedly have a stronger work ethic, make more money, and achieve more conventional success than their younger siblings. These studies are controversial, and I don’t know if their findings are true. But just for the fun of it, I ask my students how many are first in birth order. About 75 to 80 percent raise their hands. The result has been the same every time I have taken the poll.
No one claims that being first in birth order is one’s own doing. If something as morally arbitrary as birth order can influence our tendency to work hard and strive conscientiously, then Rawls may have a point. Even effort can’t be the basis of moral desert.
The claim that people deserve the rewards that come from effort and hard work is questionable for a further reason: although proponents of meritocracy often invoke the virtues of effort, they don’t really believe that effort alone should be the basis of income and wealth. Consider two construction workers. One is strong and brawny, and can build four walls in a day without breaking a sweat. The other is weak and scrawny, and can’t carry more than two bricks at a time. Although he works very hard, it takes him a week to do what his muscular co-worker achieves, more or less effortlessly, in a day. No defender of meritocracy would say the weak but hardworking worker deserves to be paid more, in virtue of his superior effort, than the strong one.
Or consider Michael Jordan. It’s true, he practiced hard. But some lesser basketball players practice even harder. No one would say they deserve a bigger contract than Jordan’s as a reward for all the hours they put in. So, despite the talk about effort, it’s really contribution, or achievement, that the meritocrat believes is worthy of reward. Whether or not our work ethic is our own doing, our contribution depends, at least in part, on natural talents for which we can claim no credit.
If Rawls’s argument about the moral arbitrariness of talents is right, it leads to a surprising conclusion: Distributive justice is not a matter of rewarding moral desert.
He recognizes that this conclusion is at odds with our ordinary way of thinking about justice: “There is a tendency for common sense to suppose that income and wealth, and the good things in life generally, should be distributed according to moral desert. Justice is happiness according to virtue… Now justice as fairness rejects this conception.” Rawls undermines the meritocratic view by calling into question its basic premise, namely, that once we remove social and economic barriers to success, people can be said to deserve the rewards their talents bring:
We do not deserve our place in the distribution of native endowments, any more than we deserve our initial starting point in society. That we deserve the superior character that enables us to make the effort to cultivate our abilities is also problematic; for such character depends in good part upon fortunate family and social circumstances in early life for which we can claim no credit. The notion of desert does not apply here.
If distributive justice is not about rewarding moral desert, does this mean that people who work hard and play by the rules have no claim whatsoever on the rewards they get for their efforts? No, not exactly. Here Rawls makes an important but subtle distinction – between moral desert and what he calls “entitlements to legitimate expectations.” The difference is this: Unlike a desert claim, an entitlement can arise only once certain rules of the game are in place. It can’t tell us how to set up the rules in the first place.
The conflict between moral desert and entitlements underlies many of our most heated debates about justice: Some say that increasing tax rates on the wealthy deprives them of something they morally deserve; or that considering racial and ethnic diversity as a factor in college admissions deprives applicants with high SAT scores of an advantage they morally deserve. Others say no – people don’t morally deserve these advantages; we first have to decide what the rules of the game (the tax rates, the admissions criteria) should be. Only then can we say who is entitled to what.
Consider the difference between a game of chance and a game of skill. Suppose I play the state lottery. If my number comes up, I am entitled to my winnings. But I can’t say that I deserved to win, because a lottery is a game of chance. My winning or losing has nothing to do with my virtue or skill in playing the game.
Now imagine the Boston Red Sox winning the World Series. Having done so, they are entitled to the trophy. Whether or not they deserved to win would be a further question. The answer would depend on how they played the game. Did they win by a fluke (a bad call by the umpire at a decisive moment, for example) or because they actually played better than their opponents, displaying the excellences and virtues (good pitching, timely hitting, sparkling defense, etc.) that define baseball at its best?
With a game of skill, unlike a game of chance, there can be a difference between who is entitled to the winnings and who deserved to win. This is because games of skill reward the exercise and display of certain virtues.
Rawls argues that distributive justice is not about rewarding virtue or moral desert. Instead, it’s about meeting the legitimate expectations that arise once the rules of the game are in place. Once the principles of justice set the terms of social cooperation, people are entitled to the benefits they earn under the rules. But if the tax system requires them to hand over some portion of their income to help the disadvantaged, they can’t complain that this deprives them of something they morally deserve.
A just scheme, then, answers to what men are entitled to; it satisfies their legitimate expectations as founded upon social institutions. But what they are entitled to is not proportional to nor dependent upon their intrinsic worth. The principles of justice that regulate the basic structure of society… do not mention moral desert, and there is no tendency for distributive shares to correspond to it.
Rawls rejects moral desert as the basis for distributive justice on two grounds. First, as we’ve already seen, my having the talents that enable me to compete more successfully than others is not entirely my own doing. But a second contingency is equally decisive: the qualities that a society happens to value at any given time are also morally arbitrary. Even if I had sole, unproblematic claim to my talents, it would still be the case that the rewards these talents reap will depend on the contingencies of supply and demand. In medieval Tuscany, fresco painters were highly valued; in twenty-first-century California, computer programmers are, and so on. Whether my skills yield a lot or a little depends on what the society happens to want. What counts as contributing depends on the qualities a given society happens to prize.
Consider these wage differentials:
• The average schoolteacher in the United States makes about $43,000 per year. David Letterman, the late-night talk show host, earns $31 million a year.
• John Roberts, chief justice of the U.S. Supreme Court, is paid $217,400 a year. Judge Judy, who has a reality television show, makes $25 million a year.
Are these pay differentials fair? The answer, for Rawls, would depend on whether they arose within a system of taxation and redistribution that worked to the benefit of the least well off. If so, Letterman and Judge Judy would be entitled to their earnings. But it can’t be said that Judge Judy deserves to make one hundred times more than Chief Justice Roberts, or that Letterman deserves to make seven hundred times as much as a schoolteacher. The fact that they happen to live in a society that lavishes huge sums on television stars is their good luck, not something they deserve.
The successful often overlook this contingent aspect of their success. Many of us are fortunate to possess, at least in some measure, the qualities our society happens to prize. In a capitalist society, it helps to have entrepreneurial drive. In a bureaucratic society, it helps to get on easily and smoothly with superiors. In a mass democratic society, it helps to look good on television, and to speak in short, superficial sound bites. In a litigious society, it helps to go to law school, and to have the logical and reasoning skills that will allow you to score well on the LSATs.
That our society values these things is not our doing. Suppose that we, with our talents, inhabited not a technologically advanced, highly litigious society like ours, but a hunting society, or a warrior society, or a society that conferred its highest rewards and prestige on those who displayed physical strength, or religious piety. What would become of our talents then? Clearly, they wouldn’t get us very far. And no doubt some of us would develop others. But would we be less worthy or less virtuous than we are now?
Rawls’s answer is no. We might receive less, and properly so. But while we would be entitled to less, we would be no less worthy, no less deserving than others. The same is true of those in our society who lack prestigious positions, and who possess fewer of the talents that our society happens to reward.
So, while we are entitled to the benefits that the rules of the game promise for the exercise of our talents, it is a mistake and a conceit to suppose that we deserve in the first place a society that values the qualities we have in abundance.
Woody Allen makes a similar point in his movie Stardust Memories. Allen, playing a character akin to himself, a celebrity comedian named Sandy, meets up with Jerry, a friend from his old neighborhood who is chagrined at being a taxi driver.
SANDY: So what are you doing? What are you up to?
JERRY: You know what I do? I drive a cab.
SANDY: Well, you look good. You – There’s nothing wrong with that.
JERRY: Yeah. But look at me compared to you…
SANDY: What do you want me to say? I was the kid in the neighborhood who told the jokes, right?
SANDY: So, so – we, you know, we live in a – in a society that puts a big value on jokes, you know? If you think of it this way – (clearing his throat) if I had been an Apache Indian, those guys didn’t need comedians at all, right? So I’d be out of work.
JERRY: So? Oh, come on, that doesn’t help me feel any better.
The taxi driver was not moved by the comedian’s riff on the moral arbitrariness of fame and fortune. Viewing his meager lot as a matter of bad luck didn’t lessen the sting. Perhaps that’s because, in a meritocratic society, most people think that worldly success reflects what we deserve; the idea is not easy to dislodge.
It’s true that this conception of meritocracy is deeply entrenched in the popular consciousness – and considering its usefulness as an everyday heuristic, this is perfectly understandable. Despite that everyday usefulness, though, there’s no getting around the fact that, once you consider all the factors discussed above, the idea of moral desert just isn’t tenable as a foundational basis for morality. And for a lot of moral philosophies, this poses a serious theoretical problem. After all, if the whole basis of your morality fundamentally comes down to achieving justice, and your definition of justice comes down to giving people what they deserve, then how can you deal with a reality in which no one can really be said to “deserve” anything (at least not in any kind of ultimate sense)?
The system I’ve been describing here, on the other hand, provides an easy answer to this puzzle: Once again, we just cut the Gordian knot outright. We don’t try to define morality in terms of justice or desert at all, because as Rawls says, “the notion of desert does not apply here.” Instead, we define morality simply in terms of maximizing expected global utility – i.e. bringing our universe onto its best possible timeline. That doesn’t mean ignoring the idea of justice altogether, mind you; I don’t think any reasonable person would consider it a good idea to let serial killers go entirely unpunished, or to let brilliant innovators whose ideas improve countless lives go entirely unrewarded. All it means is that we recognize our practice of doling out rewards and punishments to be an instrumental means of achieving the greatest possible global good, not an end in itself. In other words, just like moral rules and moral rights, the concepts of justice and desert and merit are heuristics, not the foundational cornerstones grounding all of morality. So if someone does something we might consider praiseworthy (like coming up with a useful new invention, or treating an illness), then by all means we can and should praise and reward them, so as to encourage such behavior and incentivize others to act similarly. Paying doctors more than the average person not only makes the doctors themselves better off, it also makes society as a whole better off, by ensuring that enough people will actually be willing to put in the work to become doctors – whereas they might not be willing to do so if the salary were no higher than anyone else’s. Likewise, if someone does something we might consider particularly blameworthy (like going on a killing spree), then we can and should punish them, so as to discourage such behavior and stop them from acting similarly in the future. 
Making murder legal would likely result in a lot more murders; so keeping it illegal is obviously the thing that makes society as a whole better off. All in all, then, a world that operates according to principles of utility maximization will tend to produce the same results as a world that operates based on ideas of justice and desert. The point here is just that those latter considerations aren’t the goal in themselves – merely a means of getting there. We should reward and punish people not because we consider them to innately deserve it more, but simply because doing so creates the best possible world. Alexander puts it this way:
The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of “bad person” and make them “deserve” bad treatment. Consequentialists don’t on a primary level want anyone to be treated badly, full stop; thus is it written: “Saddam Hussein doesn’t deserve so much as a stubbed toe.” But if consequentialists don’t believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future.
Of course, a lot of people believe the opposite – that hurting wrongdoers is good in and of itself, because getting justice is all that matters in the end. But that belief has been responsible for many – if not most – of the world’s worst problems. It causes people to take pleasure in others’ pain, and to feel morally vindicated when they see people suffer who they feel deserve it. And this vindictiveness doesn’t just corrode the humanity of the people indulging in it; it can also lead to deeply damaging outcomes for the rest of the world too. (Again, if you need examples here, all you have to do is think about every misguided war or feud or execution or act of violent mob justice you’ve ever heard of.)
There’s an old saying that goes, “Fiat iustitia et pereat mundus” – which roughly translates to “Let justice be done, even if the world perishes.” The appeal of this sentiment, I think, lies entirely in the fact that it sounds cool, and nothing more. As a moral proposition, it’s terrifying. We shouldn’t have to argue whether keeping the world from being destroyed is an important moral priority; it should go without saying that it’s our most important moral priority. Instead of the traditional version of this saying, then, I prefer Ludwig von Mises’s inversion: “Fiat iustitia ne pereat mundus” – which is to say, “Let justice be done, so that the world won’t perish.” Again, there’s nothing wrong with incentivizing good behavior and disincentivizing bad behavior; as a general heuristic, it makes perfect sense. But that’s only because it serves a more important purpose, which is to make life as good as possible for the inhabitants of our universe. That’s all that matters in the ultimate sense; by definition, it’s all that can matter. So when all’s said and done, everything we do ought to be aligned toward that purpose.
One final note here before I wrap up: When I say everything we do ought to be aligned with the system I’ve been describing, I mean everything, including even those decisions that we wouldn’t usually regard as having anything to do with morality. You might intuitively think, for instance, that certain choices you make which apply solely to yourself, and to no one else, might be outside the scope of moral consideration. But as I mentioned before, your precommitment to maximize global utility also includes your own utility function – so even if you’re the only one who stands to gain or lose from a particular choice, you’re still obligated to maximize global utility by steering the universe onto the timeline-branch that gives you the most utility. That might seem like a trivial point, but what it means is that this whole framework can not only function as a moral philosophy, but also as a universal decision-making guide for any dilemma you might encounter – and that’s not only important for everyday life, but could have potential implications for deeper theoretical areas as well. There are certain problems in decision theory, for instance, like Newcomb’s Paradox and Parfit’s Hitchhiker, which have become famous for the challenges they pose to standard theories of self-interest – specifically the way they seem to punish traditional “rational” decision-making. But under the framework I’ve been laying out here, it becomes possible to successfully navigate these dilemmas, because by having already precommitted yourself to maximizing your overall utility – even if it means acting in a way that would seem to be less than optimal at the object level – you enable yourself to successfully avoid the lower-utility branches of the timeline.
(If you’re not really into decision theory, by the way, I apologize for what will probably seem like a weird digression right here at the end – but if you are at least a little bit curious about this kind of thing, I hope you’ll understand why I left this part in.)
Consider Parfit’s Hitchhiker. If you haven’t heard of this thought experiment before, it goes like this: Imagine that you’re stranded out in the middle of a desert somewhere, about to die of thirst, when someone pulls up to you in their car and makes you an offer: “I’ll drive you into town,” they say, “but only if you promise to give me $100 from an ATM when we get there.” Let’s say for the sake of simplicity that this driver won’t actually derive any positive or negative utility from the situation regardless of what occurs (maybe they’re a robot or something), so all that’s at stake is your own utility (which you always want to maximize). But let’s also stipulate that the driver somehow has the ability to perfectly read your facial expressions and tone of voice (or maybe you’re just unable to lie convincingly), so the driver will know whether or not you’re making a false promise. What do you do here? Obviously, you’d like to tell the driver, “Yes, I’ll pay you the $100” – but you also know that once you actually get into town, the outcome that will give you the most utility will be to break your promise and not pay the driver after all. Knowing this, then, there will be no way for you to honestly agree to the driver’s terms – so even if you answer yes, the driver will know you’re lying, and will drive away, leaving you to die.
Is this just a hopeless situation for you? It might seem that way from a traditional “rational self-interest” perspective. But once you look at it instead from the perspective of the system I’ve been describing here, it’s not actually that challenging. After all, if you’re thinking about the situation in terms of possible timeline-branches, it quickly becomes apparent that the potential timeline-branch in which you successfully hitch a ride without paying for it later doesn’t actually exist; there’s no way for you to leave open the possibility of later stiffing the driver, while also somehow having that fact escape the driver’s notice. (Maybe if you’d woken up to find yourself already inside the car, with the driver having already decided to give you a ride, then you could potentially get away with not paying once you got into town – but that’s not the situation you find yourself in.) The only way to successfully get a ride is to precommit yourself to paying, and to actually mean it. That’s the timeline-branch that actually gives you the greatest utility. And since you’ve already made an implicit precommitment to maximize global utility, way back in the original position before you were even born, then that’s the course of action you should follow.
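To make it concrete why the "ride now, stiff the driver later" branch doesn't exist, here's a toy model in Python. Everything in it – the function name, the utility numbers – is my own illustrative assumption, not part of the original thought experiment; the point is just that the driver's decision is a function of your actual policy, so the only real inputs are "genuinely committed to paying" or not.

```python
# Toy Parfit's Hitchhiker: the driver perfectly reads whether your
# promise to pay is genuine, and only then offers the ride.
# Utility numbers here are arbitrary illustrative assumptions.

DEATH = -1_000_000   # utility of being left to die in the desert
PAYMENT = -100       # utility cost of paying the $100 in town

def outcome(genuinely_committed_to_paying):
    if genuinely_committed_to_paying:
        return PAYMENT  # you get the ride, then pay as promised
    return DEATH        # the false promise is detected; no ride

# There is no third input that yields "free ride, no payment": your
# policy determines the driver's decision before the ride ever happens.
assert outcome(True) > outcome(False)
```

The precommitted policy dominates simply because the higher-payoff branch it "gives up" was never on the table.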
The same thing applies to Newcomb’s Paradox. In this thought experiment, the setup goes like this: Imagine that you’re at a carnival and you discover a mysterious-looking circus tent with a thousand people lined up to get in. You get in line as well, and when it’s finally your turn, you enter the tent to see a table with two boxes sitting on top of it – an opaque one, and a transparent one containing $1000. You’re offered the choice either to take both boxes, or to only take the opaque box. But here’s the twist: Seated at the table is an individual known as the Predictor (who might be a mad scientist or an alien superintelligence or some other such thing depending on which version of this thought experiment you like best); and this Predictor informs you that just before you entered the tent, your brain was scanned using an advanced brain-scanning device, which ascertained with utmost certainty whether you would choose to take both boxes or just take the opaque box alone. If it determined that you would take the opaque box alone, then the Predictor placed $1,000,000 inside that box before you walked into the tent; but if it determined that you would take both boxes, then the Predictor left the opaque box empty. The contents of the boxes are now set and cannot be changed; so in theory, this game seems like it should be easy to exploit. However, before you entered the tent, you watched the Predictor make the same offer to each of the other thousand people in front of you in line – and in each case, the Predictor was correct about which choice they ultimately made. That is to say, everyone who chose to take both boxes walked away with $1000, and everyone who chose to only take the opaque box walked away with $1,000,000. The Predictor’s ability to tell the future, in short, is functionally perfect.
So what do you choose? On the one hand, you know that the contents of the boxes are already fixed and can’t be changed – so either the opaque box contains nothing, in which case you’d be better off taking both boxes and at least getting $1000, or the opaque box contains $1,000,000, in which case you’d still be better off taking both boxes, since it’d get you an extra $1000 on top of the $1,000,000 that’s already in the opaque box. It might seem, then, like two-boxing would be your best bet. But on the other hand, given the Predictor’s perfect record of predicting the future, you also have every reason to believe that if you take both boxes, the opaque one will be empty. So what’s the right answer? Well, once again, your decision in this scenario comes down to your ability to distinguish between which timeline-branches actually exist and which ones only seem to exist. Sure, it might be easy in theory to imagine a universe in which both boxes could be full and you could take them both – but in reality, there are only two paths that the universe might actually follow here: Either the conditions of the universe when you entered the tent (i.e. the configuration of atoms in your brain and so on) were such that they would cause you to take both boxes, in which case they would also have necessarily caused the Predictor to have left the opaque one empty, or they were such that they would cause you to only take the opaque box, in which case they would also have caused the Predictor to have filled it. In other words, the space of possibilities here doesn’t include all four combinations of choice and box contents; it includes only two branches – two-boxing and finding the opaque box empty, or one-boxing and finding it full. And more accurately still, since the configuration of your brain already determines which choice you’ll make, only one of those two branches actually exists at all.
Given that reality, then, the actual best choice here is clearly to take the opaque box alone. It’s true that in doing so, you really are leaving money on the table; but the fact of the matter is that there’s no universe in which you can get the $1,000,000 without leaving the $1000 on the table – just like there was no universe in which you could have gotten a ride from the driver in the desert without paying for it later, and just like there was no universe in which Odysseus could have survived hearing the song of the Sirens while remaining unbound. (I mean, I’m sure you could imagine other creative solutions in his case, but you get what I’m saying.) It’s like buying a $5 ticket for a $500 raffle with no other entrants: There’s one possible universe in which you buy the ticket and win the prize, for a net gain of $495; and there’s a second possible universe in which you don’t buy the ticket and don’t win anything, but at least get to keep your $5; but there is no universe in which you somehow get to keep your $5 because you didn’t buy a ticket, and yet you still somehow win the raffle and get the $500 prize money as well. The “have your cake and eat it too” option doesn’t exist here; so the only correct choice if you want to maximize your utility is to have a self-restricting precommitment in place that you won’t try to have your cake and eat it too. You have to be willing to accept the small up-front utility reduction of paying $5 for the raffle ticket (or forgoing the $1000 in the transparent box), because there’s no other way of arriving at the much larger utility boost of winning the $500 prize (or getting the $1,000,000 in the opaque box). You might be tempted to argue that Newcomb’s Paradox is different from this raffle analogy, because your decision of whether to one-box or two-box doesn’t come until after the boxes have already been filled, so it can’t actually have any causal effect on their contents; the opaque box is already either filled or it isn’t. 
But that’s a misunderstanding of what decision you’re actually making here. The choice that decides the outcome of this scenario isn’t whether to take one or both boxes when they’re presented to you; that’s only a result of the actual choice, which is whether to go into the tent with the kind of brain that would be willing to subsequently forgo the $1000 or not. And that’s a choice that does have a causal effect on the boxes’ contents.
In other words, the key variable here isn’t the object-level decision of whether you want to one-box or two-box (which, although it might feel that way from the inside, isn’t really a free choice at all); the key variable is which decision-making algorithm you want to have implemented in your brain, which itself will be what determines whether you one-box or two-box. The right algorithm, of course, is the one that precommits you to always following the timeline-branch that maximizes utility. But it’s always possible to break that precommitment and do the wrong thing, by switching to an algorithm that would ultimately opt for taking both boxes – and naturally, that will lead you onto the timeline-branch where you only walk away with $1000.
If you’re still having trouble imagining how it might be possible for the Predictor to perfectly foresee your ultimate choice – i.e. if you’re asking yourself, “Couldn’t I just plan to one-box when I enter the tent but then change my mind and take both boxes after they’re actually presented to me?” – think about it this way: Imagine a version of this scenario in which, instead of being a human, you’re actually a highly-advanced AI, and the Predictor is a programmer who can simply examine your source code to see in advance exactly what your choice will be. It might feel to you from the inside like you’re free to make whichever decision strikes your fancy; but in reality, you can only choose what your programming dictates you will choose – and even if you start off planning to one-box but then change your mind and two-box, that change of mind will itself have only occurred because it was part of your code from the start. What that means, then, is that the question that you should really be focused on here isn’t which choice you’ll make in the tent – because that’s entirely determined by your code. The question you should be asking is which code you want to have making your decisions in the first place. In other words, the key question is: If you could modify your own programming, which algorithm would be best for you to implement in your own robot brain, and to thereby commit all of your future decisions to? Obviously, it would be the one that would precommit you to following whichever timeline-branch produced the greatest utility.
This seems logical enough for a purely mechanical AI, right? Well, the same logic applies to us flesh-and-blood humans too – because after all, at the end of the day, our brains are nothing but biological machines themselves, and the decisions we make are nothing but the product of our neural programming. (See my previous post for more on this.) Whatever choice you might make inside the Predictor’s tent is purely the result of whatever decision-making algorithm is running in your brain at that time; so even though it might feel like you’re freely choosing it in the moment, both your choice and the contents of the boxes are results of what your brain has been pre-programmed to make you do in that situation. Funny enough, that means that even if both boxes were completely transparent and you could actually see that one of them contained $1,000,000, the fact that it did contain that money would necessarily mean that you’d find yourself unable (or more accurately, unwilling) to take both boxes; the fact that the Predictor had perfectly read your programming would mean that there could be no other possible outcome. Seeing the full $1,000,000 box wouldn’t prompt you to also take the $1000 box, because the only way you could have gotten to that point would be if you were the kind of person whose brain was programmed to disregard that temptation. Seeing both boxes full would simply alert you to the fact that your brain was already in the process of deciding to one-box (even if you weren’t consciously aware of it yet). Again, there’s no timeline-branch in which it could be true both that the $1,000,000 box was full and that you would walk away with the $1000, any more than such a thing would be possible if you were a pre-programmed AI. Getting the $1,000,000 is the best outcome that it’s possible for you to achieve; so that’s the outcome that you should be precommitted to bringing about.
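The determinism of this setup is easy to see in code. Here's a toy sketch in Python – all the names, payoffs, and structure are my own illustrative assumptions – in which the Predictor simply runs your decision algorithm before filling the boxes, so the contents are a function of the algorithm you walk in with, not of any in-the-moment choice.

```python
# Toy deterministic Newcomb game: the Predictor runs your decision
# algorithm ahead of time, so the opaque box's contents depend on
# which algorithm you have. All names and numbers are illustrative.

def play(algorithm):
    predicted = algorithm()  # the "brain scan": a perfect prediction
    opaque = 1_000_000 if predicted == "one-box" else 0
    choice = algorithm()     # your actual choice, made by the same algorithm
    return opaque if choice == "one-box" else opaque + 1_000

one_boxer = lambda: "one-box"   # an algorithm precommitted to one-boxing
two_boxer = lambda: "two-box"   # the "rational" two-boxing algorithm

assert play(one_boxer) == 1_000_000  # the box was filled because of your code
assert play(two_boxer) == 1_000      # the box was left empty for the same reason
```

Notice that no algorithm you could pass to `play` ever returns $1,001,000 – that timeline-branch doesn't exist in the game.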
Of course, where this thought experiment gets a bit trickier is when you imagine a variation in which the Predictor isn’t actually perfectly accurate. To go back to the version of the thought experiment in which you were an AI, for instance, imagine if there were (say) a 5% chance that the programmer reading your code would smudge their glasses or get distracted or something, and would consequently misread your code in such a way that it would cause them to inaccurately predict your eventual choice. In that case, would it still be best to only take the opaque box? Well, again, you have to think of it in terms of potential timeline-branches. In the previous version of the problem, we could say with 100% certainty that the timeline-branch in which you walked away with $1,001,000 didn’t and couldn’t exist. But now, with this new fudge factor to account for, there’s a 5% chance that it actually could exist. Of course, that still leaves a 95% chance of it not existing – and it also leaves a 5% chance that you might one-box, only to find that the opaque box has been left empty because the programmer failed to see that that’s what you’d do. So this version of the problem isn’t quite as simple as just comparing the $1,000,000 payout to the $1000 payout and picking the higher one; instead, you have to calculate the average expected value of each possible choice, accounting for the fact that the best- and worst-case timeline-branches only have a 5% chance of actually existing. In other words, your possible outcomes look like this:
• One-boxing: a 95% chance of getting $1,000,000 (the programmer reads your code correctly and fills the opaque box), and a 5% chance of getting $0 (the programmer misreads you and leaves it empty).
• Two-boxing: a 95% chance of getting just the $1000 (the programmer reads you correctly and leaves the opaque box empty), and a 5% chance of getting $1,001,000 (the programmer misreads you and fills it anyway).
What this means, then, is that once you crunch the numbers, your average expected payout from one-boxing will be $950,000, whereas your average expected payout from two-boxing will be $51,000. Clearly, with an error rate of only 5%, your best decision-making algorithm will still be to precommit to one-boxing, for all the same reasons discussed above.
But what about if the error rate is higher – like 20%, or 30%, or 40%? Actually, in all of these cases, the expected payout from one-boxing is still higher than the expected payout from two-boxing; in fact, it remains higher even if the Predictor’s accuracy drops all the way down to 50.1%. Believe it or not, it’s not until the Predictor’s accuracy dips below 50.05% (at which point the expected payouts from one-boxing and two-boxing both equal $500,500) that it becomes worthwhile to take both boxes. Anything above that, and the expected benefit of potentially getting an extra $1000 is outweighed by the likelihood that that particular timeline-branch won’t even be available to you at all.
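The arithmetic behind these numbers can be sketched in a few lines of Python (a minimal sketch of the calculation described above; the function names are my own, and exact fractions are used to avoid floating-point noise):

```python
from fractions import Fraction

# Expected payouts in the imperfect-Predictor variant, where p is the
# probability that the Predictor reads your algorithm correctly.

def ev_one_box(p):
    # Read correctly (prob p): opaque box filled -> $1,000,000.
    # Misread (prob 1 - p): opaque box left empty -> $0.
    return p * 1_000_000

def ev_two_box(p):
    # Read correctly (prob p): opaque box empty -> just the $1000.
    # Misread (prob 1 - p): opaque box also filled -> $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

assert ev_one_box(Fraction(95, 100)) == 950_000
assert ev_two_box(Fraction(95, 100)) == 51_000

# Break-even accuracy: both choices pay exactly $500,500 at p = 50.05%.
p = Fraction(5005, 10000)
assert ev_one_box(p) == ev_two_box(p) == 500_500
```

Above that break-even accuracy, one-boxing always wins; below it, two-boxing does.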
I could keep going here; there are about a dozen other variations of this thought experiment and others we could explore in this vein. (If you’re really interested, Yudkowsky and Nate Soares discuss several of them in this paper, which approaches these ideas from a somewhat different angle from the one I’ve described here but ends up reaching many of the same conclusions.) But at this point, you probably get the idea. In each of these examples, the solution will lie in your choice of which decision-making algorithm to have implemented in your brain, not necessarily which object-level decision seems to offer the highest reward in the moment it’s presented to you. Coming out ahead in these scenarios is entirely a matter of being precommitted to following the highest-utility possible branch of the timeline. And that’s not just true for isolated scenarios like the ones described above; it also applies to your entire life as a whole. To bring it all full circle here, then, we can go all the way back to the original position, back before you were even born, and say that there would have been no better way for you to have maximized your expected utility – not just for one or two hypothetical future situations, but for your whole life – than to have made the implicit precommitment (for acausal trade reasons) to follow whichever branch of the universe would best maximize the utility of its inhabitants, regardless of who you eventually became after you were born, or what specific situations you might have eventually encountered in your life. Fortunately, it just so happens that that’s the pledge you did precommit yourself to following (thanks to your Master Preference), whether you consciously realize it now or not – and it’s the one to which everyone else has precommitted themselves as well. What that means, then, is that when you’re in a situation where your own utility is the only thing at stake, you’re obligated to do whatever will produce the best results for you. 
And what it also means, as I’ve been saying all along, is that when you’re in a situation where you’re not the only one involved, and the preferences of others also have to be accounted for, you’re obligated to do what will satisfy those preferences to the greatest possible extent. In other words, you’re obligated to act in exactly the way that you would have wanted someone in your position to act if you were still in the original position, describing how you’d want future events to play out, and you didn’t know whether you’d actually become that person or not. That’s the principle that should always guide your actions: Imagine what someone in the original position would want you to do, despite not knowing whether they’d become you or not – then follow that course of action. (An alternative way of conceptualizing this is to imagine that you aren’t just an individual, but that you’re everyone – some kind of all-encompassing superorganism comprising all sentient life – and then do whatever maximizes the utility of that whole.) That, in short, is the objectively right way to act. And so if I had to sum up this whole framework in a nutshell, that’s what I’d say the bottom line is. The decisions we make do have objectively right answers. There is an objective morality, which we’re all obligated to follow. And what it says is that we should all act in such a way as to bring the universe onto the timeline-branch that provides the greatest degree of preference satisfaction for its inhabitants. What that means in practice, simply enough, is that the moral wisdom you’ve already heard is true: Love yourself, and love others as you love yourself. You may not always succeed, but all that matters is that you do the best you can. That, to quote Hillel the Elder, is the whole of the law – and everything else is commentary. ∎