Objective Morality (cont.)

So all right, now that we’ve established this definition of what morality is in the descriptive sense, let’s turn to the elephant in the room: Why, exactly, should any of us do what’s moral, as opposed to just selfishly doing what’s best for ourselves? After all, even if we can objectively quantify goodness and say that some actions are better than others, that still doesn’t explain why we therefore ought to do those actions. How could it be possible, even in theory, to derive normative statements (i.e. statements about what should be done) solely from descriptive statements (i.e. statements about the way things are)? This is one of the most famous problems in moral philosophy, known as the “is-ought problem.” Alexander summarizes:

David Hume noted that it is impossible to prove “should” statements from “is” statements. One can make however many statements about physical facts of the world: fire is hot, hot things burn you, burning people makes their skin come off – and one can combine them into other statements of physical fact, such as “If fire is hot, and hot things burn you, then fire will burn you”, and yet from these statements alone you can never prove “therefore, you shouldn’t set people on fire” unless you’ve already got a should statement like “You shouldn’t burn people”.

It is possible to prove should statements from other should statements. For example, “fire is hot”, “hot things burn you”, “burning causes pain”, and “you should not cause pain” can be used to prove “you should not set people on fire”, but this requires a pre-existing should statement. Therefore, this method can be used to prove some moral facts if you already have other moral facts, but it cannot justify morality to begin with.

Of course, not everyone agrees that this is really a problem for morality. Some people don’t make any distinction at all between what’s good and what ought to be done – so that once you’ve successfully defined one in objective terms, the other naturally follows. According to this view, the statement “We should do X” means nothing more than “It would be more desirable for us to do X than to not do X;” so by objectively establishing that doing X would be good – the “is” problem – we’ve totally solved the “ought” problem as well. The question “Why should we do what’s good?” is just a tautology, essentially the equivalent of asking “Why would it be good for us to do what’s good?” It’s like asking “Why does a triangle have three sides?” – the question answers itself.

And if that’s how you feel about the subject, then great – the is-ought problem is a moot point, and we’re basically done here. But personally, I don’t find this approach entirely satisfying. It still seems to me that the question “Why should we do what’s good instead of what’s bad?” is a coherent one, and that the tautological definition of “should” doesn’t quite address what it’s really asking.

So what would be a better definition of “should”? Another popular approach – one that might be endorsed by Hume himself – is to say that the word “should” can only really be used in a contingent, instrumental kind of way. That is, you can only properly say that someone should do X if you also specify the goal or aim that they would better achieve if they did X – like, “You should run quickly if your goal is to win a footrace,” or “You should drink a beverage if your goal is to quench your thirst,” etc. You can’t just say that someone should do something and leave it at that; you have to have an “if” clause attached to your “should” clause (even if it’s an unspoken one) in order to have it make sense. Or in philosophical terms, you have to frame it as a “hypothetical imperative” rather than a “categorical imperative” – a conditional statement rather than an unconditional one. What this means for morality, then, is that you can’t just say that people should behave morally, period, regardless of their goals. The most you can say is that if they desire to do the most good, or if they desire to maximize the utility of sentient beings, then they should behave morally. Commenter LappenX sums it up:

Saying “You shouldn’t kill people” is different from “If you want world peace, then you shouldn’t kill people.” The latter is just another way of saying “If less people go around killing others, then we’re closer to world peace”, which is a statement that can be true or false, and can be tested and verified. I have no idea how to interpret the former.

Of course, the problem with this instrumental view (as LappenX points out) is that it doesn’t actually help us with morality very much. After all, if the strongest moral statement you can make is that someone ought to behave morally if they desire to maximize global utility, then what happens if they don’t desire to maximize global utility, but instead just want to selfishly maximize their own utility? If that’s all they want, then this instrumental definition of “should” wouldn’t tell them to act morally at all – it would tell them that they should act selfishly.

And in fact, this isn’t even just a problem of some people potentially having selfish motives; according to the Master Preference mentioned earlier, everyone’s most fundamental preference is that their own utility be maximized. That’s what it means to prefer an outcome in the first place – that it would give you more utility if it happened than if it didn’t. So in that sense, there’s a strong case to be made that self-interest is at the core of everything we do. And that means that on top of all these other possible definitions of “should,” we now have to add one more: Maybe, when we ask why we should do what’s good, what we’re really asking is “How would doing what’s good maximize my utility?” (or more bluntly, “What’s in it for me?”).

Luckily, there’s almost always an immediate response we can give to this question; some of the biggest reasons to do good simply boil down to practical considerations like avoiding personal guilt and reprisal from others. In fact, these kinds of considerations were probably the biggest reasons why moral behavior originally came to exist in the first place, as Pinker explains:

The universality of reason is a momentous realization, because it defines a place for morality. If I appeal to you to do something that affects me – to get off my foot, or not to stab me for the fun of it, or to save my child from drowning – then I can’t do it in a way that privileges my interests over yours if I want you to take me seriously (say, by retaining my right to stand on your foot, or to stab you, or to let your children drown). I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.

You and I ought to reach this moral understanding not just so we can have a logically consistent conversation but because mutual unselfishness is the only way we can simultaneously pursue our interests. You and I are both better off if we share our surpluses, rescue each other’s children when they get into trouble, and refrain from knifing each other than we would be if we hoarded our surpluses while they rotted, let each other’s children drown, and feuded incessantly. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one where we both are unselfish.

Morality, then, is not a set of arbitrary regulations dictated by a vengeful deity and written down in a book; nor is it the custom of a particular culture or tribe. It is a consequence of the interchangeability of perspectives and the opportunity the world provides for positive-sum games. This foundation of morality may be seen in the many versions of the Golden Rule that have been discovered by the world’s major religions, and also in Spinoza’s Viewpoint of Eternity, Kant’s Categorical Imperative, Hobbes and Rousseau’s Social Contract, and Locke and Jefferson’s self-evident truth that all people are created equal.

In other words, even if we were entirely self-interested, it wouldn’t take much reasoning to realize that others were just as self-interested as we were, and that therefore the best way to serve that self-interest would actually be to create cooperative social norms rather than just letting everyone behave selfishly. Not only would this kind of cooperative approach be the best way to ensure widespread safety and success; it would also be the only defensible position we could hold in purely logical terms. Pinker continues:

The assumptions of self-interest and sociality combine with reason to lay out a morality in which nonviolence is a goal. Violence is a Prisoner’s Dilemma in which either side can profit by preying on the other, but both are better off if neither one tries, since mutual predation leaves each side bruised and bloodied if not dead. In the game theorist’s definition of the dilemma, the two sides are not allowed to talk, and even if they were, they would have no grounds for trusting each other. But in real life people can confer, and they can bind their promises with emotional, social, or legal guarantors. And as soon as one side tries to prevail on the other not to injure him, he has no choice but to commit himself not to injure the other side either. As soon as he says, “It’s bad for you to hurt me,” he’s committed to “It’s bad for me to hurt you,” since logic cannot tell the difference between “me” and “you.” (After all, their meaning switches with every turn in the conversation.) As the philosopher William Godwin put it, “What magic is there in the pronoun ‘my’ that should justify us in overturning the decisions of impartial truth?” Nor can reason distinguish between Mike and Dave, or Lisa and Amy, or any other set of individuals, because as far as logic is concerned, they’re just a bunch of x’s and y’s. So as soon as you try to persuade someone to avoid harming you by appealing to reasons why he shouldn’t, you’re sucked into a commitment to the avoidance of harm as a general goal. And to the extent that you pride yourself on the quality of your reason, work to apply it far and wide, and use it to persuade others, you will be forced to deploy that reason in pursuit of universal interests, including an avoidance of violence.
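Pinker’s point about mutual predation can be sketched as a toy payoff matrix (the numbers and the `best_reply` helper below are illustrative assumptions, not anything from the text): whatever the other side does, defecting is each player’s best reply, yet mutual defection leaves both sides worse off than mutual restraint.

```python
# Toy Prisoner's Dilemma payoffs (illustrative numbers, chosen for this sketch).
# Each entry maps (my_move, your_move) -> (my_payoff, your_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint: both do fairly well
    ("cooperate", "defect"):    (0, 5),  # I play the sucker while you profit
    ("defect",    "cooperate"): (5, 0),  # I prey on you
    ("defect",    "defect"):    (1, 1),  # mutual predation: both worse off
}

def best_reply(your_move):
    """My utility-maximizing move, holding your move fixed."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, your_move)][0])

# Defecting dominates: it pays more for me no matter what you do...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet if we both follow that logic, we each get 1 instead of 3.
print(PAYOFFS[("defect", "defect")], PAYOFFS[("cooperate", "cooperate")])
```

This is exactly why the ability to confer and bind promises matters in Pinker’s account: it lets both players escape the dominant-strategy outcome that neither of them prefers.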

Alexander makes a similar point:

If entities are alike, it’s irrational to single one out and treat it differently. For example, if there are sixty identical monkeys in a tree, it is irrational to believe all these monkeys have the right to humane treatment except Monkey #11. Call this the Principle of Consistency.

You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case.

You want to satisfy your own preferences.

So by Principle of Consistency, it’s rational to want to satisfy the preferences of all humans.

Therefore, morality.

Of course, logical consistency can only take us so far; a person who’s only interested in benefiting themselves might not care all that much about how logically consistent they’re being, as long as they come out ahead. But there are also good reasons to do what’s moral aside from the purely logical ones. I think it’s fair to say that most of us feel just as compelled to do what’s right by the emotional dictates of our own conscience as we do by the practical considerations of whether others will retaliate against us for mistreating them, or by the rational considerations of what’s most logically consistent. Aside from flat-out sociopaths, we’re all born with an innate sense of empathy that makes us feel good when we help others and bad when we harm them. That means that in a lot of cases, the kinds of actions we think of as selfless can actually be more satisfying than the ones we think of as self-serving, while the things that might seem to benefit us most, if they come at others’ expense, can fill us with so much guilt that the burden outweighs whatever immediate benefit we might be getting. In that sense, the thing that gives us the most utility often isn’t serving ourselves alone, but trying to maximize others’ utility. And the academic research on the subject bears out this conclusion, as Tenzin Gyatso (AKA the Dalai Lama) and Arthur C. Brooks point out:

In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. […] Americans who prioritize doing good for others are almost twice as likely to say they are very happy about their lives. In Germany, people who seek to serve society are five times likelier to say they are very happy than those who do not view service as important. Selflessness and joy are intertwined.

Aside from all the social benefits that come with being a moral person, then, there’s a very real sense in which morality is its own reward. Even if you’re starting from a place of pure self-interest, you’ll almost always find that the thing that produces the best outcomes for you is to try and produce the best outcomes for others as well.

Now, having said all that, you might notice my use of the word “almost” there. Unfortunately, acting morally doesn’t always maximize a person’s utility – there are plenty of cases where it’s perfectly possible to derive more utility from acting immorally than from doing the right thing – so we can’t just point to things like personal guilt and reprisal from others and declare that they completely solve the is-ought problem. It’s possible to imagine all kinds of scenarios in which someone might be able to benefit themselves at others’ expense, knowing full well that they’ll be able to get away with it and will never have to deal with any kind of punishment or retribution, or even any kind of personal guilt (maybe they’re just not a very empathetic person and don’t care about others very much). In those kinds of cases, the “what’s in it for me?” definition of “should” doesn’t give us any basis for saying that they should act morally instead. So on what basis can we say that they should? After all, it seems intuitively clear that some kind of invisible moral law (or something) is being broken when they act immorally; it really does feel like there must be some reason why they’re obligated to do what’s good, and that they’d be breaking this obligation by acting selfishly instead. So are there any plausible definitions of “should” left to turn to – ones that can actually capture what we’re trying to get at here? Is it possible to ground morality in a way that’s not only objective, but binding as well?

Well actually, I think there might be – and funny enough, it turns out that the definition of “should” that best captures this idea of obligation is in fact the original historical definition of the word.

Continued on next page →