Objective Morality

So OK, maybe you accept all of that as far as it goes; but for all this talk about being able to retroactively do things for people who will never know it, there’s still a major assumption lying at the heart of this whole line of reasoning which we haven’t addressed yet – namely, the idea that it’s possible to satisfy or violate someone’s preferences, and thereby increase or decrease their utility, even if they never realize it or experience any subjective effect from it at all. Sure, in a scenario like Singer’s, it might seem clear that the act of lying to the colleague would go against her preferences – but if she never actually found out that she was lied to, and therefore never directly experienced any negative effect from the lie, then how can we say that she was made worse off by it? Can it really be the case that, as Parfit phrases it, “an event [can] be bad for me [even] if it makes no difference to the experienced quality of my life”?

(Note that this isn’t the same as simply asking whether it’s possible for someone to be harmed or benefited by actions that they’re never aware of. Obviously, if you were to save the world from an incoming asteroid without anyone ever being aware of it, it’s clear that you still would be providing a great benefit. Likewise, if you were to secretly slip a slow-acting toxin into someone’s drink, causing them to die a year later, it’s clear that you would be harming them even though they weren’t aware of it. I don’t think any reasonable person would dispute the rightness or wrongness of cases like these, in which people experience better or worse results despite being oblivious to their causes. What we’re talking about here instead are cases in which people never even perceive any positive or negative effects from the acts in question. Can such acts really have any moral relevance at all?)

Well, we can easily imagine other scenarios (that don’t involve people dying) in which this would certainly seem to be the case. For instance, if a dentist secretly molested one of his patients while she was under anesthesia, we’d surely consider that to be a grossly immoral act, even if the patient never found out about it or experienced any negative effects from it. Or to take another example from Crash Course:

We often think that what makes something wrong is that it causes harm. But consider this creepy scenario: Imagine you’re changing in a store’s dressing room, and outside, there’s some creep who takes pictures of you, and you never know it. This creep then shares the pictures with his creepy friends. They don’t know who you are, and you experience no ill effects, no uncomfortableness of any kind, because you never know the pictures were taken.

So here’s a couple questions: Were you harmed? And did the creeps do wrong? It’s hard to see how you were harmed by the pictures, because harm seems to be the type of thing that has to be experienced in order for it to exist. But most people would agree that the creeps did do wrong, that a violation of privacy is still a violation, even if you don’t know that it happened.

The possibility that harm and wrongdoing are actually two different things might not have occurred to you before; but when you think about it, it seems pretty right. [A] falling coconut could harm me without anyone having done wrong. And likewise, wrongdoing doesn’t have to lead to anyone being harmed.

It seems that the real question here, then, is what exactly we mean when we talk about people’s preferences being satisfied, and what exactly it means to say that different outcomes will give them higher or lower levels of utility. When we talk about someone’s preferences being satisfied, are we only referring to the level of preference satisfaction that they actually experience? Or is it possible for an outcome to be better or worse for somebody even if they subjectively experience no difference whatsoever (relative to what they would have experienced in the alternative outcome)? And if it’s the latter, then how does that even work? What exactly is the mechanism by which a person’s preferences are either satisfied or violated?

This may actually be the most fundamental question of this whole discussion. After all, we can theorize all we want about what it means to fulfill preferences in abstract, conceptual terms – but unless we can actually point to a real concrete mechanism by which preferences are fulfilled, abstract theorizing is all it’ll amount to. So what could this mechanism be? Galef offers a possible lead:

A friend was telling me last week about a celebrity sex tape that he particularly enjoys. I’m not going to help publicize this tape, for reasons that should become clear from reading my post, but I can sketch out the rough details for you: The woman is a young singer, publicly Christian, and she’s having sex with a married man. The tape was stolen (or hacked, I’m not sure) and leaked to the public. It’s theoretically possible that she leaked it herself for publicity, I suppose, but it seems unlikely given the cheating and the Christianity – it definitely tarnished the public image she’d carefully constructed for herself, in addition to being humiliating simply by virtue of it being a sex tape.

So I asked my friend if he feels any guilt about watching this tape, knowing that the woman didn’t want other people to see it, and we ended up having a friendly debate about whether there was anything ethically problematic about his behavior. Of course, the answer to that depends on what ethical system you’re using. You could, for example, take a deontological approach and declare that it’s just a self-evident principle that we don’t have a right to watch someone else’s private tape. Or alternately, you could take a virtue ethics approach and declare that enjoying the tape, after it’s already been leaked, is exploiting someone else’s misfortune, which isn’t a virtuous thing to do.

But my friend and I are both utilitarians, at heart, and neither of those lines of argument resonated with us. We were concerned, instead, with what I think is a more interesting question: does watching the tape harm the woman? As my friend emphasized, she’ll never know that he watched it. (At least, that’s true as long as he downloads it from Bit Torrent, or some other file-sharing site where the number of views of the video aren’t recorded such that she could ever see how much traffic it’s gotten.)

I agreed, but was still reluctant to conclude that no [wrong] was done. Do I necessarily have to know about something in order for its outcome to matter to me? If you tell me about two possible states of the world, one in which everyone has seen my awful humiliating sex tape, and one in which no one has, I’m going to have a very strong preference for the latter, even if people behave identically towards me in both potential worlds. So maybe it makes more sense to define “[wronging] someone” to mean, “helping create a world which that person would not want to exist, given the option,” rather than “causing that person to experience suffering or [harm].” My friend’s decision to watch the tape [wronged] the woman according to the first, but not the second, definition.

I think there’s something to this idea. See, up to this point, I’ve been talking a lot about utility as something that individuals “get” from particular outcomes – almost as if utility were some kind of tangible substance, like a currency, that people could accumulate. Really, though (despite its convenience for illustrative purposes), this isn’t the best way to think about it. In reality, when someone has their preferences satisfied, that doesn’t mean that the universe somehow magically conjures up an invisible substance called utility and bestows it upon them. Rather, all it means is that the universe has entered a state of affairs which matches the state of affairs that their brain has assigned the most value to. That is to say, when someone expresses a preference for outcome X over outcome Y, what they’re saying is that they ascribe more value to the universe-state in which X is true than the universe-state in which Y is true – and that means that if the universe is actually brought into a state in which X is true, then their preference has been satisfied, whereas if the universe is brought into a state in which Y is true, then their preference hasn’t been satisfied, even if they mistakenly think it has. So in our earlier example of the dentist molesting his unconscious patient, for instance, it might be true that the patient would never find out the truth about the situation – but that wouldn’t change the fact that she had ascribed less value to the universe-state in which she was unknowingly molested than to the universe-state in which she wasn’t. By molesting the patient, then, the dentist would be bringing the universe into a state that had had less goodness ascribed to it – i.e. an objectively worse state. And so, given that the dentist would know full well that molesting the patient would lead to this outcome, we can say unambiguously that he would be morally obligated to refrain from doing so – because after all, avoiding worse outcomes in favor of better ones is the whole point of consequentialism. Molesting the patient might not be any worse for her in terms of her subjective experience, but it would be worse in terms of objective global utility, as determined by her preference not to be unknowingly molested – so doing it would be morally wrong. In short, when we see the term “utility,” maybe we shouldn’t read it as referring to a person’s subjective satisfaction level; maybe we should read it instead as referring to the degree to which their preferences about the state of the universe are actually satisfied, whether they realize it or not – because although the preferences themselves may be subjective, the question of whether they align with the current state of the universe isn’t.
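To make that reading of “utility” a bit more concrete, here’s a minimal sketch in code – purely illustrative, with the people, universe-states, and numbers all made up for the example: each person assigns a value to each possible universe-state, and we can tally those valuations either by which state the universe actually ends up in (the ascribed reading) or by what each person subjectively notices (the experienced reading).

```python
# Illustrative only: made-up people, universe-states, and numbers.
ascribed_value = {
    # value each person ascribes to each possible universe-state
    "patient": {"not_molested": 1.0, "molested_unknowingly": 0.0},
    "dentist": {"not_molested": 0.4, "molested_unknowingly": 0.6},
}
experienced_value = {
    # what each person would subjectively register in each state
    # (identical for the patient, since she never finds out)
    "patient": {"not_molested": 1.0, "molested_unknowingly": 1.0},
    "dentist": {"not_molested": 0.4, "molested_unknowingly": 0.6},
}

def total(valuations, state):
    """Sum everyone's valuation of a given universe-state."""
    return sum(person[state] for person in valuations.values())

for state in ("not_molested", "molested_unknowingly"):
    print(state,
          "ascribed:", total(ascribed_value, state),
          "experienced:", total(experienced_value, state))
```

On the experienced tally, the violation doesn’t register at all (it even comes out slightly ahead, since the patient notices nothing and the dentist gets what he wants); on the ascribed tally, the state in which she isn’t molested is clearly the better one – which is exactly the sense in which the act makes the universe objectively worse.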

(As a quick side note, you might be tempted here to imagine hypothetical situations in which a person would be happier falsely believing that their preferences were satisfied than they would be if their preferences actually were satisfied; but the Master Preference would preclude that sort of thing. If someone really were better off with the former outcome than the latter, then that’s the outcome that their true extrapolated preferences would in fact favor – so that first outcome (in which they were happier) wouldn’t be a false fulfillment of their preferences at all; it would be a true fulfillment of them.)

Here’s a useful way of thinking about it: Imagine reality as one big branching timeline – like a train track that continually splits into a near-infinitude of possible universe-states as we travel from the present into the future. (If you subscribe to some of the more popular interpretations of quantum mechanics, you may regard this as literally being the case.) Any time any of us does anything, we direct the universe onto one of these branching paths – so if we’re presented with a choice between doing X and doing Y, for instance, our choice will cause the trajectory of the universe’s timeline to either follow the branch in which X happens, or the branch in which Y happens. What this means is that whenever anyone expresses a preference for outcome X, what they’re really saying is that they would rather have the universe follow the branch in which outcome X occurs than the branch in which it doesn’t. By acting in such a way as to direct the universe onto that particular branch, then, we’re causing their preference to be satisfied – not just in an abstract theoretical sense, but in a literal physical sense – by bringing them onto their preferred path; and conversely, if we act in such a way as to direct the universe onto some other branch, we’re causing their preferences to be violated by bringing them onto their non-preferred path. This is true even if they never realize which path they’ve actually been brought onto; so if they say that they don’t want to be molested in their sleep, for instance, or that they don’t want to be lied to about what will happen to their typescript after they die, then we’re obligated to respect those wishes, even if they’d never subjectively know the difference themselves. Their preferences, after all, aren’t about their subjective experiences; they’re about the state of the universe and which path it follows. So by taking them down a path they don’t want to be taken down, we’d be wronging them (by violating their preferences) even if we weren’t directly harming them – and by bringing about a universe-state that had had less value ascribed to it, we’d be doing what was objectively less moral in global terms as well.

Ultimately, this is what morality is all about; when we make moral decisions, our goal is to determine which branch of the timeline has the highest expected level of value associated with it, and then to try and steer our universe onto that timeline. And really, there’s no other way it could be – because after all, this moral decision-making process is the only one we’d willingly agree to if we were all back behind the veil of ignorance again, in the original position. If we all traveled back in time to before our births, before any of us knew who we were going to turn out to be, and we had to come up with a set of universal moral commitments that we’d all have to follow after we were born, we wouldn’t just decide that our preferences could be ignored as long as we never knew about it; we’d make it so everyone was obligated to respect each other’s preferences even if the effects of their actions would be hidden. Think about it: If you didn’t know in advance whether you’d turn out to be the dentist who wanted to molest his unconscious patient, or the patient who really didn’t want to be unknowingly molested, wouldn’t you rather have everyone in the original position make a precommitment that whoever turned out to be the dentist wouldn’t molest whoever turned out to be the patient? Likewise, if you didn’t know in advance whether you’d turn out to be the dying scholar who really wanted to get her typescript published after her death, or her colleague who wanted to avoid having to do so, wouldn’t you rather have everyone in the original position make a precommitment that whoever turned out to be the dying scholar’s colleague would actually keep their promise to get her typescript published? From the original position, we could imagine every possible scenario that might emerge after we were all born, and what would be the most moral course of action in each of those scenarios – and in each of those scenarios, we would end up being precommitted to acting in a way that steered the universe onto whichever branch of the timeline had the highest level of value ascribed to it by its inhabitants, even if those inhabitants didn’t always experience the full effect of that higher value directly. At the end of the day, the question would be as simple as, “Which branch of the timeline would you rather have the universe follow if you didn’t know in advance what your role in that universe would be?”
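(If it helps, the decision rule I’m describing can be sketched in a few lines of code – with the caveat that the branch names and numbers below are invented placeholders, and counting each role equally is just a rough stand-in for the original-position reasoning rather than a rigorous formalization of it.)

```python
# Rough sketch of the decision rule: among the branches available to you,
# steer the universe onto the one with the highest total value ascribed
# to it by everyone affected, counting each role equally -- roughly the
# rule you'd precommit to from behind the veil of ignorance, where you
# don't know which role you'll end up occupying.
# Branch names and numbers are invented assumptions.

branches = {
    # branch -> value each person ascribes to living in that branch
    "promise_kept":   {"dying_scholar": 1.0, "colleague": 0.3},
    "promise_broken": {"dying_scholar": 0.0, "colleague": 0.7},
}

def branch_value(valuations):
    """Total value everyone ascribes to a branch, weighted equally."""
    return sum(valuations.values())

best_branch = max(branches, key=lambda b: branch_value(branches[b]))
print(best_branch)  # -> promise_kept (1.3 vs. 0.7), even though the
                    # scholar never experiences the difference after
                    # her death
```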

And this framing wouldn’t just help us navigate our everyday moral dilemmas; it could even help us resolve the thorniest hypothetical scenarios we could imagine. Recall, for instance, the transplant problem from earlier (you know, the one in which the surgeon has to choose between letting five of her patients die, or killing one innocent bystander in order to harvest his organs and save the patients’ lives). I mentioned that for most people, their intuitive response to the problem is to say that the surgeon shouldn’t be allowed to kill the bystander – which, granted, is an intuition that doesn’t necessarily hold up under more dramatic formulations of the problem (like killing one bystander in order to save five billion patients rather than just five), but which does seem to highlight an important idea: that if such a practice were allowed to occur, its broader negative impacts on society could easily outweigh whatever immediate utility gains it might produce. Taking one life to save five might sound like a utility-positive act in theory, but if it had significant negative side effects like lowering the surgeon’s inhibitions against killing, or making everyone who heard about it afraid to ever visit the hospital again, or openly undermining valuable social norms like “Don’t commit sudden unprovoked murder” and “Treat people as ends in themselves rather than as mere means to other ends,” then the positive benefit of saving four extra lives might not be worth it.

Having said that, though, it’s possible to come up with variations on this thought experiment that remove such considerations and make the problem more difficult. Like for instance, what if the surgeon were about to die herself, so it wouldn’t matter if she lowered her inhibitions against killing, since she’d never be able to do such a thing again either way? Or what if the whole thing happened in secret, in such a way that the bystander’s death would be guaranteed to look like an accident or a natural death, and no one would ever know that he had been killed deliberately? Under these particular circumstances, there would no longer be any broader social repercussions like openly undermining moral norms or deterring people from visiting the hospital. So where would that leave us? Would there be any basis left at all for forbidding the surgeon from killing the bystander?

Well, again, it’s always possible that our intuitions may simply be wrong in this case; it may be that killing the bystander actually would be the right thing to do in a situation that met all the above conditions. But let’s assume for the sake of argument that it wouldn’t be. On what consequentialist grounds could we actually say that this was the case? This is where I think the most-preferred-universe framework described above could potentially come into play. See, even though it might be true that the surgeon would be able to kill the bystander without anyone ever knowing about it, it could also be true that that outcome would nevertheless be one that the broader society (on balance) would still be opposed to in principle, whether it happened in secret or not. If considering the situation from the original position (with no one knowing whether they’d turn out to be the surgeon, the bystander, one of the patients, or an uninvolved person) would lead the broader society to conclude that a universe in which the five patients were allowed to die would be preferable to a universe in which the surgeon saved them by killing the bystander, then by choosing the latter option, the surgeon would be taking society down its less-preferred path. She’d be violating people’s preference not to live in a world in which surgeons secretly killed random bystanders and harvested their organs, and would thereby be producing an outcome that had had less value ascribed to it – i.e. a less moral outcome. The people themselves might never actually find out that their preferences had been violated, but their preferences would have still been violated. In order to avoid this less moral outcome, then, it would be incumbent on the surgeon not to kill the bystander, but to let the patients die instead.

Now, needless to say, there are a lot of “ifs” in this line of reasoning – and again, we can easily tweak the parameters in such a way as to produce the opposite conclusion. I could easily imagine, for instance, that in an alternative scenario in which the surgeon, the bystander, and the five patients were the last seven people on Earth, many people’s stance on the dilemma might actually change, such that they’d agree that it would now be morally permissible for the surgeon to kill the bystander. Why might this be the case? Well, it could just be the simple fact that, without a broader society of billions of people whose preferences would have to be taken into account, all those concerns about upholding moral norms and so on would no longer be a determinative factor; the main consideration would just be the immediate preferences of the seven people directly involved in the incident themselves (most importantly, the desire of the patients and the bystander to continue living). Unlike the original version of the thought experiment, in which anyone judging the situation from the original position would expect to be born as one of the billions of people who weren’t directly involved in the incident, this alternative version would remove all those “uninvolved outsider” roles from the equation altogether, so that anyone judging from the original position would now have to expect that, in all likelihood, they’d end up becoming one of the five patients. That would mean that instead of giving the most weight to the society-wide preferences like “I prefer not to live in a world in which norms against murder are secretly violated” and “I prefer not to live in a world in which I’m unknowingly living under the perpetual threat of having my organs harvested,” their biggest moral consideration would now simply be the patients’ preference of “I prefer not to die.” In light of that fact, then, it would be no surprise if their most highly-valued outcome in the last-seven-people-on-Earth scenario was for the five patients to survive rather than the one bystander.
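Here’s a crude back-of-the-envelope version of that flip – and to be clear, the weighting scheme and every number in it are assumptions I’m inventing purely for illustration: each life saved counts for +1, each life lost for -1, and each uninvolved person contributes a small penalty whenever a branch violates their preference not to live in a world where surgeons secretly harvest bystanders’ organs.

```python
# Crude illustrative aggregation (all weights are invented assumptions).

def aggregate_value(lives_saved, lives_lost, norm_violated_for=0, norm_cost=0.01):
    """+1 per life saved, -1 per life lost, minus a small penalty for each
    person whose 'I don't want to live in that kind of world' preference
    the branch violates."""
    return lives_saved - lives_lost - norm_violated_for * norm_cost

OUTSIDERS = 8_000_000_000  # rough stand-in for everyone not directly involved

# Original version of the thought experiment:
kill_bystander = aggregate_value(5, 1, norm_violated_for=OUTSIDERS)
let_them_die   = aggregate_value(0, 5)
print(kill_bystander, let_them_die)      # about -80,000,000 vs. -5 -> don't kill

# Last-seven-people-on-Earth variant: no outsiders left to count.
kill_bystander_7 = aggregate_value(5, 1)
let_them_die_7   = aggregate_value(0, 5)
print(kill_bystander_7, let_them_die_7)  # 4 vs. -5 -> the verdict flips
```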

That being said, I might be totally misjudging what someone in the original position would actually consider best; this is just my own personal intuition on the subject, and other people might not share it. But then again, the average person on the street, here in the real world, probably wouldn’t be so inclined to judge the dilemma according to this kind of Rawlsian moral calculus in the first place; instead, they’d probably rely more on heuristics like “Actively killing someone is worse than passively letting someone die” – or to put it more broadly, “A sin of commission is worse than a sin of omission.” And to be sure, in a lot of contexts, these kinds of heuristics are plenty useful; they’re basically just a shorthand way of accounting for all the secondary utility ramifications an act might have aside from its most immediate effects – things like lowering the actor’s inhibitions against future wrongdoing, undermining valuable moral norms, and so on – without having to actually go through the whole utility calculus one consideration at a time and factor those things in directly. Still, it’s important to remember that, as useful as they might be in select situations, these heuristics are just moral shortcuts, not fundamental moral laws that we should expect to stand entirely on their own. If we regarded them as the be-all and end-all of morality, we’d open ourselves up to outcomes that, from a Rawlsian point of view, would be positively disastrous – like forbidding the surgeon from killing the bystander even if the lives of millions were at stake. Unfortunately, a lot of people do seem to consider such rules to be fundamental, and the result is that, here in the real world, millions of people often do die because of the widespread attitude that, since passively allowing their deaths isn’t as bad as actively killing them, it must therefore be somehow morally permissible. But under the system I’ve been describing here, this kind of outcome is a moral scandal of the highest order; so in these last few sections of this post, I want to focus in on it more closely, and lay out some of the broader implications for how I believe we actually should be living instead.

Continued on next page →