Objective Morality

Of course, in saying all this, I don’t want to give the impression that good and bad are just two discrete black-and-white categories, because (as I hope I’ve made clear by now) that’s not really how utilitarianism quantifies things. Utilitarianism doesn’t frame morality in binary terms, with actions either being flat-out right or flat-out wrong; it’s more of a scalar system, with goodness and badness existing in varying degrees along a spectrum. That is, actions and their outcomes aren’t just sorted into either the “good” bucket or the “bad” bucket and left at that; they’re ranked from better to worse according to how much utility they produce, with some options being ranked moderately better than others, some options being ranked significantly better, and most of them falling into the gray area between “pretty good but not optimal” and “pretty bad but not the absolute worst.” (That’s how we can say things like “Killing a mouse would be more egregious than killing a mosquito, and killing a human would be even more egregious than that, and killing an angel would be more egregious still” – as opposed to just saying “Killing is wrong” and not providing any more nuance than that.)

To use an analogy, it’s a bit like how we talk about size. At the extremes, of course, we can say categorically that the universe is big, and that quarks are small. But in most other contexts, it’s not really possible to say that things are big or small in any kind of absolute sense – only that they’re big or small relative to other specific things. For instance, Shaquille O’Neal is commonly referred to as big, because he’s being compared to other humans; but if you stood him next to Godzilla, then all of a sudden it would seem silly to call him big – and if you put both of them next to the planet Jupiter, it would be ridiculous to call either of them big. Bigness and smallness, in other words, aren’t just two well-defined categories; size is a continuum. And the same is true for moral goodness and badness. As Alexander writes:

If we ask utilitarianism “Are drugs good or bad?” it returns: CATEGORY ERROR. Good for it.

Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That’s because it’s a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.

When people say “Utilitarianism says slavery is bad” or “Utilitarianism says murder is wrong” – well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is “In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so” and possibly “and the same would be true of any broadly similar situation”.

But why in blue blazes can’t we just go ahead and say “slavery is bad”? What could possibly go wrong?

Ask an anarchist. Taxation of X% means you’re forced to work for X% of the year without getting paid. Therefore, since slavery is “being forced to work without pay” taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.

([Granted], reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)

Naturally, some advocates of other moral systems like deontology, which do operate by labeling actions as unequivocally good or bad, might consider this kind of scalar approach to be a flaw in utilitarianism. But the fact that utilitarianism conceptualizes goodness and badness in this scalar way isn’t a bug, it’s a feature – and in fact, it’s one of the features that makes utilitarianism more functional than deontology, as Cowen points out:

When people say, “Oh, I’m a deontologist. Kant is my lodestar in ethics,” I don’t know how they ever make decisions at the margin based on that. It seems to me quite incoherent that deontology just says there’s a bunch of things you can’t do, or maybe some things you’re obliged to do, but when it’s about more or less – “Well, how much money should we spend on the police force?” – try to get a Kantian to have a coherent framework for answering that question, other than saying, “Crime is wrong,” or “You’re obliged to come to the help of victims of crime.” It can’t be done.

So again, the advantage of conceptualizing goodness in a scalar way is that it actually allows you to capture the subtle gradations between better and worse courses of action, in a way that just labeling things as flatly good or flatly bad doesn’t. Having said all this, though, the fact that this scalar approach does account for such subtleties also means that instead of just determining whether something is good or bad according to some well-defined rule (which is as simple as asking whether it follows the rule or doesn’t), you often have to determine how good or bad it is, in terms of the amount of utility it produces, relative to all the possible alternatives – and that’s not always easy. You can’t always have perfect knowledge of what the consequences of your actions will be, so it’s not always possible to quantify the exact amount of utility they’ll produce.

What this means for our system, then, is that when you’re trying to figure out which action will do the most good, the metric you have to use isn’t actually utility, but expected utility. That is, because it’s impossible to know with 100% certainty what the outcome of any particular action will be, you can’t just assign a utility value to a particular action based on how much utility it would produce if it led to some specific outcome; you have to think about all the possible different outcomes it might lead to, and how likely they are, and weight your utility estimation accordingly (that is, based on the probability-weighted average of the utilities of all those hypothetical outcomes). You won’t always be able to get this estimation exactly right – sooner or later, you’re bound to miscalculate by assuming that a particular outcome is more likely or less likely than it really is – but that’s where deontology-style heuristics actually can come in handy, as discussed earlier. If you’re not always able to maximize the utility of your actions by evaluating every situation on a case-by-case basis (due to flawed expectations or an imperfect understanding of your circumstances or whatever), then the real utilitarian thing to do can often be to just commit yourself to following a general rule of thumb that you know will have a low failure rate and will produce a high level of utility over the long run. Even if it’s not perfect on a case-by-case basis either, it can at least help you reach a higher level of utility on average than what you might otherwise reach on your own. Kelsey Piper illustrates this point with an analogy:

I used to drive myself to school; it was a 13 minute drive if I sped, and a 15 minute drive if I followed the law. If I got caught I got a $120 ticket, my insurance rates went up, my mother yelled at me, and I would be late to school. 

A very naive utilitarian might say ‘ah, the utilitarian thing to do is to speed every time except when you’ll get caught, whereas those deontologists will tell you to never speed’. Except I don’t know which times I’ll get caught, and as long as there’s a substantial risk of it, the actual utilitarian thing to do is to compare the risk of getting caught to the benefits and come up with a rule to follow consistently. If ‘speed whenever I feel like I can get away with it’ turns out to do worse than ‘don’t speed’, DON’T SPEED.

No one makes this mistake with speeding, because people are way better at normal reasoning than at ethical reasoning, even when the actual dilemmas are exactly the same type. But they make this mistake with ethical problems all the time, and end up deciding the ethical equivalent of ‘speed if you have a good feeling you won’t get caught’, which shows up as ‘doxx people if you have a good feeling they totally deserve it’ or as ‘silence people with violence if they’re saying hateful things’.

And we can look back over those decisions and say ‘wow, often when people decide they’re doing a morally good thing in these circumstances, they aren’t, people are bad at evaluating this, the rule that would have served them well was ‘never violently silence people’ or ‘never doxx people’ and since I’m not any cleverer than they are, that rule will serve me as well.’

If following the rule every time results in better outcomes than following the rule sometimes, then the utilitarian thing to do is to always follow the rule.
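To make the arithmetic behind that kind of rule comparison concrete, here’s a minimal sketch of the expected-utility calculation Piper is gesturing at. The probability of getting caught and the utility values assigned to each outcome are purely made-up numbers chosen for illustration (they aren’t drawn from her example); the structure of the comparison is the point – you score each rule by the probability-weighted average of its possible outcomes, then commit to whichever rule scores higher.

```python
# A minimal sketch of comparing rules by expected utility, using made-up
# numbers loosely inspired by Piper's speeding example. The probabilities
# and utility values below are hypothetical, chosen only for illustration.

def expected_utility(outcomes):
    """Probability-weighted average utility over all possible outcomes."""
    return sum(prob * utility for prob, utility in outcomes)

# Rule 1: "don't speed" -- one certain outcome: a slightly longer drive.
dont_speed = [(1.0, -2.0)]  # 2 extra minutes of driving, every time

# Rule 2: "speed whenever I feel like I can get away with it" -- saves a
# couple of minutes most of the time, but occasionally ends in a ticket,
# higher insurance, an angry mother, and being late anyway.
speed_when_it_feels_safe = [
    (0.95,  2.0),    # get away with it: save ~2 minutes
    (0.05, -150.0),  # get caught: ticket + insurance + lateness + yelling
]

for name, outcomes in [("don't speed", dont_speed),
                       ("speed when it feels safe", speed_when_it_feels_safe)]:
    print(f"{name}: expected utility = {expected_utility(outcomes):.2f}")

# With these (made-up) numbers, "don't speed" wins (-2.0 vs. -5.6), so the
# utilitarian move is to commit to that rule, exactly as Piper argues.
```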

In other words, although utilitarianism is fundamentally built on the idea of evaluating individual situations on their own merits, the fact that we always have to deal with some measure of uncertainty means that there’s also plenty of room for rule-following heuristics within this framework, as a kind of meta-technique for maximizing expected utility. The fact that we have to deal with uncertainty in such ways doesn’t mean that utility itself is just a fuzzy approximation of goodness, mind you – only that our understanding of it often is. There’s still a specific course of action, for every situation, that would objectively produce the maximal possible utility for that situation; it’s just that we imperfect mortals can’t always know in advance exactly what that optimal course of action will be.

And this distinction highlights the key point of all this uncertainty talk: When we talk about an act being moral within this system, that isn’t necessarily the same thing as saying that its consequences must automatically be good; in order to qualify as moral, they need only be reasonably expected to be good. Up until now, I’ve more or less been using the terms “moral” and “good” interchangeably, but a lot of the time there can be a real difference between them. It’s perfectly possible for an action to be completely moral, yet unexpectedly produce an outcome that’s bad, or to be totally immoral, yet unexpectedly produce an outcome that’s good. Consider, for instance, a criminal who hits his victim in the head intending to harm her (obviously an immoral act due to its negative expected utility) but whose violent act inadvertently ends up curing her crippling neurological condition instead (a good outcome due to the actual positive utility produced). Or imagine someone saving the life of an innocent child (clearly a moral act) only for the child to grow up and become a genocidal dictator (clearly a bad outcome). The fact that an act can still be morally justified, even if its effects later turn out to be bad, shows that consequentialist morality, despite the name, isn’t actually defined by the ultimate consequences of actions – that is, an action won’t somehow retroactively become moral if there’s a good outcome and immoral if there’s a bad outcome – it’s based simply on what the consequences could reasonably be expected to be at the time the action was taken. The easiest way of conceptualizing it might be to just say that the term “goodness” refers to the utility level of ultimate outcomes, while the term “morality” refers to the expected utility level of the choices that lead to those outcomes. Whether you conceptualize it in this way or not, though, it’s evident that goodness and morality can’t necessarily be considered equivalent in every case.
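If it helps to see that distinction stated a bit more formally: morality, on this account, attaches to the expected utility of a choice given what the chooser could reasonably foresee at the time, while goodness attaches to the utility of whatever outcome actually materializes. Here’s a toy sketch of the saving-the-child example in those terms (the specific probabilities and utility values are invented purely for illustration).

```python
# A toy restatement of the distinction drawn above: "morality" is judged by
# the expected utility of a choice at the time it's made, while "goodness"
# is judged by the utility of the outcome that actually occurs. All numbers
# are invented for illustration.

def expected_utility(possible_outcomes):
    """Probability-weighted average utility, using only what was
    foreseeable at decision time."""
    return sum(p * u for p, u in possible_outcomes)

# Saving a drowning child: overwhelmingly likely to be a very good outcome,
# with a tiny foreseeable chance of something like the "future dictator" case.
save_the_child = [(0.999, 100.0), (0.001, -1000.0)]

moral = expected_utility(save_the_child) > 0  # judged prospectively
realized_utility = -1000.0                    # suppose the unlucky outcome occurred
good_outcome = realized_utility > 0           # judged retrospectively

print(f"moral act?    {moral}")         # True: expected utility was positive
print(f"good outcome? {good_outcome}")  # False: the actual result was bad
```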

And if that still isn’t clear, consider one more point: It’s entirely possible for certain outcomes to be good or bad (relative to the alternatives) even if those outcomes don’t involve anyone acting morally or immorally at all. (The philosophical term for this is “axiology.”) To revisit an earlier example, if the wind blows a rock off a cliff and that rock crushes someone’s foot, that would obviously be a bad outcome, since it would decrease that person’s utility; but it wouldn’t be right to call it immoral, or to accuse the rock of acting immorally (How can an inanimate object act immorally?) – it would just be a bad thing that happened. To actually say that something is moral or immoral, it has to involve some kind of intent. Again, you can’t just be referring to outcomes; you have to be referring to the choices, made by sentient actors, which lead to those outcomes. And because such choices always involve some degree of uncertainty, they can’t be judged as praiseworthy or blameworthy solely on the basis of the actual results they produce; they have to be judged according to what results they were expected to produce at the time they were made. In other words, motives and intentions may not be relevant to judging how much goodness or badness a person’s actions produced, but they are relevant to judging how morally or immorally the person was acting when they chose to take those actions. As Edmund Burke put it, “Guilt resides in the intention.”

Steven Pinker expands on this point:

We blame people for an evil act or bad decision only when they intended the consequences and could have chosen otherwise. We don’t convict a hunter who shoots a friend he has mistaken for a deer, or the chauffeur who drove John F. Kennedy into the line of fire, because they could not foresee and did not intend the outcome of their actions. We show mercy to the victim of torture who betrays a comrade, to a delirious patient who lashes out at a nurse, or to a madman who strikes someone he believes to be a ferocious animal. We don’t put a small child on trial if he causes a death, nor do we try an animal or an inanimate object, because we believe them to be constitutionally incapable of making an informed choice.

He continues:

Consider the process of deciding whether to punish someone who has caused a harm. Our sense of justice tells us that the perpetrator’s culpability depends not just on the harm done but on his or her mental state – the mens rea, or guilty mind, that is necessary for an act to be considered a crime in most legal systems. Suppose a woman kills her husband by putting rat poison in his tea. Our decision as to whether to send her to the electric chair very much depends on whether the container she spooned it out of was mislabeled DOMINO SUGAR or correctly labeled D-CON: KILLS RATS – that is, whether she knew she was poisoning him and wanted him dead, or it was all a tragic accident. A brute emotional reflex to the actus reus, the bad act (“She killed her husband! Shame!”), could trigger an urge for retribution regardless of her intention. [But there has to be a] crucial role played by the perpetrator’s mental state in our assignment of blame.

Of course, none of this is to say that just because someone thinks they’re doing the right thing, that’s enough to automatically make them morally blameless in every instance. After all, someone might commit an action with the honest expectation that it will produce a utility-positive outcome, but if they formed that positive expectation in a way that was utterly negligent or insensitive to the facts, then they could still be blamed for acting immorally on that basis – because although they might not have expected their object-level actions to be harmful, they surely would have known that forming one’s expectations in such an irresponsible way does tend to lead to harmful results. So the fact that they formed their expectations in such a manner anyway means that they did still commit an immoral act, even if the expectations they ultimately formed via that process were positive. (If we imagine a military commander, for instance, launching a disastrous military campaign that ends in massive utility reductions for everyone involved, that commander might very well have gone into the campaign with the honest expectation that it would produce utility-positive results – so at the object level, he might seem blameless – but if he’d only formed his positive expectations by surrounding himself with yes-men and stubbornly ignoring good intelligence – actions which he knew perfectly well to have a negative expected utility – then he would still be responsible for the negative consequences that resulted.) Again, even if you think you’re doing the right thing at the object level, it’s still possible that you might be acting immorally on the meta level if you fail to do your due diligence in making sure that your expectations are actually well-founded in the first place.

With all that being said, though, the important point here is that even in such cases, the morality or immorality of your actions is still defined by whether you’re maximizing expected utility, not whether you’re maximizing actual realized utility. Motives do matter; you just have to make sure that you’re maximizing expected utility at every level of your decision-making, not merely at the object level. As always, the most important frame for judgment is the big-picture one.
