Objective Morality (cont.)


See, when the word “should” first came into use, it was as the past tense of the word “shall” (“from Middle English scholde, from Old English sceolde, first and third person preterite form of sculan (‘owe’, ‘be obliged’), the ancestor of English shall”). That is to say, if at some point in the past you had made a pledge like “I shall do X,” then here in the present we could say that doing X was something you “shalled.” So if you signed a contract promising to repay a loan, for instance, and you said something like, “I vow that I shall repay this loan,” then you “shalled” (i.e. should) repay the loan. Saying that you shalled/should repay it, under this original definition, was just another way of saying that you’d made a prior commitment to do so, and that you must now meet that commitment or else you’ll be breaking your pledge. In other words, you’ve placed yourself under a contractual obligation to do it; and so doing it is now something that you owe to your creditor. (Similarly, the word “ought” comes from “owed” – so if you said that you ought to do something for someone else, it meant that your doing that thing was owed to that person.) Being contractually bound to do something, of course, doesn’t mean that you must do it in the same literal sense that you must obey the laws of physics – there’s no invisible force physically compelling you to do what you’ve pledged to do – but it does mean that you’ve committed yourself to doing it in principle. In other words, it’s essentially the same kind of “invisible moral law” that we were grasping for a moment ago; it may not be binding in the absolute sense that you literally cannot violate it, but it is binding in the sense that you should not (“shalled not”) violate it. If you were to go against it, you’d be breaking an invisible obligation that you were pledged to keep. And it seems to me that that’s the only way that an obligation can be binding; after all, if obligations were binding in the same way that the laws of physics were binding – where we literally had no choice but to abide by them – then our world would look very different, to say the least.

At any rate, this historical definition of “should” has a couple of features that are extremely relevant to our discussion about morality. First, notice that under this definition, there’s no incompatibility between “is” and “ought.” The statement “I have made a commitment to do this thing and have thereby created and placed an obligation upon myself in principle to do it” is a fundamentally descriptive statement, not a normative one. All it’s saying is that I “shalled” do some action (and therefore have a commitment to do that action), which is an objective fact about the world in the same way that statements like “Person A promised to water Person B’s houseplants this week” and “Person C owes Person D five hundred dollars” are objective facts about the world.

By that same token, this definition of “should” neatly avoids the whole hypothetical imperative problem. When you say “I shalled do X,” you aren’t making a conditional statement like “If I desire to keep my pledge, then I should do X;” all you’re saying is “I have pledged to do X and therefore have a commitment to do X,” which is unconditionally true without any need for an “if” qualifier.

That being said, though, you might notice that this whole commitment-making mechanism does seem to have a hint of the same flavor as the hypothetical imperative mechanism, just in the sense that it seems more or less amoral. That is, it seems like it could apply just as easily to immoral pledges as to moral ones; you could have a hit man agree to a contract to murder somebody, for instance, and committing that murder would therefore be something he “shalled” do. After all, he did make a pledge to murder the target, and he’s contractually bound to keep his pledge. So under the historical definition of “should,” he should commit murder.

How, then, does this definition of “should” help us with morality at all? It seems like the only way it could serve as an adequate ground for morality would be if we’d all signed a literal social contract, at the moment we first came into existence, pledging to always do what was moral, and to never enter into any subsequent immoral contracts – and we definitely didn’t do that (at least not explicitly). Is it possible that we all entered into such a contract implicitly? Maybe – but how would that even work? And for that matter, even if we were given the opportunity to enter into such a social contract, why would we think that everyone – including even the most self-interested among us – would agree to it? Why would we all want to precommit to restricting our future actions by agreeing to such a contract, when we could just as easily reject the contract and thereby retain the option of always being able to act immorally when it suited us? Why would it ever be in anyone’s self-interest to precommit to restricting their own actions in such a way?

Well, this last question is easy enough to answer, so let’s start with it first. As you’ll know if you’ve ever entered into any kind of contractual agreement yourself (even something as simple as signing a contract for a loan that you couldn’t have gotten if you hadn’t agreed to repay it), there are actually plenty of situations in which making a self-restricting precommitment can be advantageous. The most famous example of this is probably the legend of Odysseus binding himself (literally!) to the mast of his ship so he could hear the song of the Sirens. If he had insisted on remaining free to walk around the deck, he would have immediately been lured to his death; but because he denied himself this freedom of movement (or more specifically, because he allowed his crew to deny him this freedom by tying him up), he was able to hear the beautiful song that no one had ever survived hearing before.

Another memorable example comes from the game of chicken – you know, that game in which two drivers careen toward each other at full speed, and whoever swerves away first is the loser. Imagine being caught up in a high-stakes version of this game, in which you felt like you had to win no matter what. (Let’s say the other driver was holding your loved ones hostage or something.) How could you win without getting killed? Well, some game theorists have suggested that your best bet might actually be to make a visible and irrevocable precommitment to not swerving, like removing your steering wheel and tossing it out the window in full view of the other driver. By making it clear that you’ve literally removed your ability to swerve, you show the other driver that they can no longer win the game, and so their only choice (if they want to survive) is to swerve themselves. You’ve restricted your freedom to act, but in doing so you’ve made yourself better off than you might have been if you’d kept all your options available. Granted, this strategy does depend on the other driver being rational – which, in a vacuum, is not guaranteed – but if you could be sure that the other driver was rational, then the self-restricting precommitment of discarding your steering wheel would be the best move you could make.

And this leads me to one more example in this vein – a variation on a game devised by Douglas Hofstadter, based on the classic Prisoner’s Dilemma. Imagine that you and a thousand other randomly-selected people are led into separate rooms and cut off from all communication with one another. An experimenter presents each of you with two buttons: a green one labeled “COOPERATE,” and a red one labeled “DEFECT.” You’re told that if you press the green button, you’ll be given $10 for every person who presses the green button (yourself included). So if 387 people press it, you’ll get $3870; if you’re the only one who presses it, you’ll get just $10; etc. But if you press the red button, you’ll get $1000 plus however much each of the green-button-pressers receives – so if 529 people press the green button, you’ll get $6290; if nobody presses the green button, you’ll get $1000; etc.

Which button do you press? From a purely self-interested perspective, the answer might at first seem obvious. No matter what happens, you’re guaranteed to get more money if you press the red button – so you should press the red button, right? But of course, all the other participants have that exact same incentive themselves – they’ll all want to press the red button too – and if they do, the result will be that nobody will press the green button, and everybody will go home with just a measly $1000. This hardly seems like the optimal solution, considering that if everybody pressed the green button instead, they could all go home with ten times that much. But how could you get everyone to cooperate and go along with that plan? The obvious solution, naturally, would be to have everyone get together and jointly precommit to pressing the green button. But all communication between participants has been totally cut off – and even if it weren’t, there’d be no way for each participant to know whether the others were truly precommitting or were just pretending to – so it doesn’t seem like there’s any plausible way to win here. There’s just no way of knowing what the other participants intend to do, much less influencing their decisions.
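
To see both halves of that reasoning in numbers, here’s a minimal sketch of the payoffs as just described (the player count and dollar amounts come from the setup above; the code itself is purely my own illustration):

```python
# A minimal sketch of the button game's payoffs, under the reading that green
# pressers get $10 per green press (their own included) and red pressers get
# $1,000 plus $10 per green press.

def payoff(my_choice: str, green_presses_by_others: int) -> int:
    """Dollars I receive, given my choice and how many *other* people pressed green."""
    if my_choice == "green":
        return 10 * (green_presses_by_others + 1)  # my own press counts too
    return 1000 + 10 * green_presses_by_others     # red: flat bonus plus the green pot

# Red is better for me no matter what everyone else does ($990 better here)...
for others in (0, 386, 1000):
    assert payoff("red", others) > payoff("green", others)

# ...yet the symmetric outcomes tell the opposite story:
print(payoff("red", 0))       # everyone defects:    $1,000 each
print(payoff("green", 1000))  # everyone cooperates: $10,010 each
```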

But what if it were somehow possible for all the participants to come to a mutual understanding without ever directly communicating with each other? Let’s imagine, for instance, that each participant was given access to an ultra-advanced supercomputer capable of calculating the answer to any question in the universe. Would it then be possible, even in theory, to figure out some way of coming to a mutually beneficial agreement without any of the participants ever directly communicating? Well, yes, actually – it would be trivially easy to do so, thanks to a process called acausal trade. Alexander explains:

Acausal trade (wiki article) works like this: let’s say you’re playing the Prisoner’s Dilemma against an opponent in a different room whom you can’t talk to. But you do have a supercomputer with a perfect simulation of their brain – and you know they have a supercomputer with a perfect simulation of yours.

You simulate them and learn they’re planning to defect, so you figure you might as well defect too. But they’re going to simulate you doing this, and they know you know they’ll defect, so now you both know it’s going to end up defect-defect. This is stupid. Can you do better?

Perhaps you would like to make a deal with them to play cooperate-cooperate. You simulate them and learn they would accept such a deal and stick to it. Now the only problem is that you can’t talk to them to make this deal in real life. They’re going through the same process and coming to the same conclusion. You know this. They know you know this. You know they know you know this. And so on.

So you can think to yourself: “I’d like to make a deal”. And because they have their model of your brain, they know you’re thinking this. You can dictate the terms of the deal in their head, and they can include “If you agree to this, think that you agree.” Then you can simulate their brain, figure out whether they agree or not, and if they agree, you can play cooperate. They can try the same strategy. Finally, the two of you can play cooperate-cooperate. This doesn’t take any “trust” in the other person at all – you can simulate their brain and you already know they’re going to go through with it.

(maybe an easier way to think about this – both you and your opponent have perfect copies of both of your brains, so you can both hold parallel negotiations and be confident they’ll come to the same conclusion on each side.)

It’s called acausal trade because there was no communication – no information left your room, you never influenced your opponent. All you did was be the kind of person you were – which let your opponent bargain with his model of your brain.
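
To make the mutual-simulation idea a little more concrete, here’s a deliberately crude toy sketch of my own (not anything from Alexander’s description): the perfect brain simulation is replaced by a bounded recursive simulation that optimistically assumes cooperation at the bottom of the regress.

```python
# Toy stand-in for cooperation via mutual simulation. Assumptions: bounded
# recursion replaces the "perfect simulation", and the base case assumes
# good faith once the regress bottoms out.

def simulating_agent(opponent, depth=3):
    """Cooperate iff a (bounded) simulation of the opponent playing against me
    also cooperates; otherwise defect."""
    if depth == 0:
        return "cooperate"
    return "cooperate" if opponent(simulating_agent, depth - 1) == "cooperate" else "defect"

def always_defect(opponent, depth=3):
    return "defect"

# Two mutual simulators settle on cooperate-cooperate without any communication...
print(simulating_agent(simulating_agent))  # -> cooperate

# ...but a simulator facing an unconditional defector isn't exploited:
print(simulating_agent(always_defect))     # -> defect
```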

So in our button-pressing scenario, all you’d need to reach a mutually favorable outcome for everybody would be an ability to accurately model other participants’ behavior – which brain-simulating supercomputers would certainly allow you to do. That being said, though, brain-simulating supercomputers aren’t the only way in which individuals might be able to coordinate without directly communicating. Imagine, for instance, that you and the rest of the participants were all advanced robots instead of humans, and that you knew you all shared the same programming. In that case, you wouldn’t need to simulate any of the other participants’ brains at all; all you’d need to do would be to commit to pressing the green button yourself, and you’d know that all the other participants, because they were running the exact same decision-making algorithms you were, would reach the same conclusion and press the green button as well.

Or imagine that you weren’t all robots, but were simply all identical copies of the same person. In that case, it’d be the same deal; you wouldn’t have any way of simulating the other participants’ decision-making process, but you wouldn’t have to, because you’d know that it was identical to your own – so whatever decision you ultimately made, that’d be the decision they would all ultimately make as well. If you decided to press the green button, you’d know that all the identical copies of you had also decided to press the green button. If you only pretended that you were going to press the green button, but then actually decided to press the red button, you’d know that all the identical copies of you had also attempted the same fake-out and had ultimately ended up pressing the red button as well. There wouldn’t be any way of outsmarting the game; the only way to win would be to actually cooperate.
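
Here’s a minimal sketch of why the identical-copies setup changes the calculation (the player count and payoffs are taken from the game above; the code is just my illustration): since every copy runs this same procedure, “be the sole defector” simply isn’t a reachable outcome, so the procedure only has to compare the two symmetric ones.

```python
# When every participant runs this exact procedure, the only reachable outcomes
# are all-green and all-red, so the choice reduces to comparing those two.

NUM_PLAYERS = 1001  # you plus a thousand others

def shared_decision() -> str:
    payoff_if_everyone_green = 10 * NUM_PLAYERS  # $10 per green press, everyone presses
    payoff_if_everyone_red = 1000                # flat $1,000, no green pot to share in
    return "green" if payoff_if_everyone_green > payoff_if_everyone_red else "red"

print(shared_decision())  # every identical copy computes the same thing: "green"
```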

So all right, that’s all well and good – but what does any of this have to do with morality or the social contract? Well, as you might have guessed, the whole button-pressing scenario is meant to represent the broader dilemma of whether or not to live a moral life in general. “Pressing the green button” – i.e. cooperating with your fellow sentient beings and acting morally toward them – clearly produces the best outcomes on a global basis. But on an individual basis, it seems more profitable to “press the red button” – i.e. to enjoy all the benefits produced by the cooperators, but then to also benefit a little more on top of that by acting immorally when it suits you. Yet if everyone did this, it would lead to sub-optimal outcomes across the board; so the best outcome – even in purely self-interested terms – is for everyone to jointly precommit to acting morally (i.e. to make a social contract).

But how does this button-pressing analogy actually reflect the reality we live in? After all, it’s not like any of us actually have perfect brain-simulating supercomputers in real life, and we aren’t all identical robots or clones; each of us has our own unique motives and biases and personality quirks and so on. So it doesn’t seem like there would be any way for us to actually commit to morality in such a way that we could know that everyone else was doing the same. We’re more like the helpless people in the first version of the analogy who have no way of coordinating their decision-making.

Well, let’s consider one more thought experiment. (And I know you’re probably wondering where all these thought experiments are going, but hang in there – it’ll all make sense in a minute.) Imagine going back to a time before you even existed – or maybe not that far back, but at least all the way back to the very first moment of your existence as a sentient being (like when you were still in the womb), before you ever had any complex thoughts or any interactions with other sentient beings or anything like that. In this pre-“you” state, which John Rawls calls the “original position,” you have no knowledge of the person you’ll eventually become, or even the kind of person you are in the moment. You don’t know which gender you are, which race you are, or even which species you are. You don’t know whether you’ve got genes for good health or bad health, for high intelligence or low intelligence, or for an attractive appearance or an unattractive appearance. You don’t know whether you’re going to grow up in an environment of wealth and comfort, or one of poverty and squalor. You don’t even know what your personality traits will end up being – whether you’ll be someone who’s callous or kind, shy or outgoing, lazy or hardworking, etc. Nor do you know what kind of preferences you’ll end up having; all you can know (if only on an implicit level) is that whatever your preferences and characteristics turn out to be, your ultimate meta-preference will be for your utility to be maximized. (As per our earlier discussion of the Master Preference, this is what it means to be able to have preferences in the first place.) In short, as Rawls puts it, you’re essentially positioned behind a “veil of ignorance” regarding your own identity; you’re just a blank slate whose only characteristic is the fact that you’re sentient (and therefore have the capacity to experience utility and disutility).

Viewing things from this position, then, we can notice a couple of things. The first insight – and the one for which this thought experiment is usually cited – is that stepping behind this veil of ignorance is a great way of testing whether our preferred policies and social norms are genuinely fair, or whether our thinking is being unduly biased by whatever position we might personally happen to occupy in the world. (You might recall Alexander doing this earlier with his slavery example.) Thinking About Stuff provides a great two-minute explanation of this point.

For this insight alone, this thought experiment (which, I should mention, was originally introduced by Harsanyi, even though it’s now most closely associated with Rawls) is incredibly valuable. But there’s another part of it that’s just as valuable to our current discussion – namely, the fact that when you go back in time to that pre-“you” original position, the featureless blank-slate being that you become is exactly the same featureless blank-slate being that everyone else becomes when they go back in time to the original position. All the personal identifiers that make you a unique individual – all those motives and biases and personality quirks that we were talking about before – are wiped away behind the veil of ignorance. That’s the whole point of the thought experiment; in the original position, everyone is functionally indistinguishable from one another. And what this means for our purposes is that in the original position, because everyone is an identical copy, it becomes possible to make acausal trades.

So imagine that you’ve gone back in time to before your birth, all the way back to the original position, and you’re evaluating your situation. (I realize that fetuses aren’t capable of complex thoughts, but bear with me here.) You don’t know what kind of being you are; you don’t know what kind of world you’re about to be born into; and you don’t know if there will be any other sentient beings out there aside from yourself. But what you can say for sure is that if you end up being born into a world like the one we’re living in now, and if there are other sentient beings in this world with whom you might interact – who themselves start off their existences from the same original position you’re starting from – then that would make the situation you’re presently facing analogous to the situation faced by the identical clones in the button-pressing game above. That is to say, if you precommit right now (conditional on the world being as described above) to living morally and trying to maximize global utility after you’re born, rather than just trying to maximize your own utility, then you can know that every other being in the original position will make that same precommitment, since they’re all identical to you and what’s rational for you will be rational for them as well. Likewise, if you decide not to pledge to maximize global utility, and instead just resolve to maximize your own utility after you’re born, then you’ll know that every other being in the original position will also make that same resolution. Given this fact, plus the fact that you don’t have any awareness of what your station in life will actually be after you’re born, it’s clear that the decision that would give you the greatest expected benefit would be to precommit to maximizing global utility, thereby ensuring that everyone else precommits to that choice as well – the equivalent of pressing the green button in the identical-clones version of the button-pressing game. Admittedly, it might be better for you in theory if you could somehow be the sole defector from this social contract; but because everyone in the original position is identical and makes the same choice as you, being the sole defector simply isn’t one of the available options. Either everyone presses the green button, or everyone presses the red button. Either everyone precommits to living morally, or no one does. Considering that you’d never want to live in a world where no one was obligated to treat you morally, then, your decision essentially makes itself; simply by preferring a universe in which everyone has pledged to treat you morally, you are endorsing a universal social contract and thereby buying into it yourself. You’re choosing the universe in which there’s an all-encompassing social contract – which applies to everyone, including yourself – over the one in which there is no such social contract. In other words, you – and everyone else in the original position – are jointly ratifying an implicit pledge to always do what’s moral after you’re born.
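
One way to see why the precommitment maximizes your expected benefit from behind the veil: if you’re equally likely to turn out to be any of the beings in the world, then your expected personal utility under a universal policy is just the average utility that policy produces per being. Here’s a toy numerical sketch of that point, with utility numbers invented purely for illustration:

```python
# Toy veil-of-ignorance calculation (all utility numbers are made up).
# Whichever policy you pick in the original position, every other being in the
# original position picks it too, so only the symmetric outcomes are on the table.

utilities_if_all_precommit = [8, 9, 7, 10, 6]  # everyone bound by the social contract
utilities_if_no_contract   = [14, 2, 3, 1, 2]  # a lucky few do better, most do much worse

def expected_utility_behind_veil(per_being_utilities):
    # Equal odds of being born as anyone, so expected utility = average utility.
    return sum(per_being_utilities) / len(per_being_utilities)

print(expected_utility_behind_veil(utilities_if_all_precommit))  # 8.0
print(expected_utility_behind_veil(utilities_if_no_contract))    # 4.4
```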

But wait a minute, you might be thinking – we just kind of glossed over the fact that fetuses aren’t actually capable of complex thought and therefore can’t consider any of this; but that’s not just a minor detail. I mean, it’s one thing for us fully-developed humans to imagine ourselves in the original position and conclude that the rationally self-interested thing to do in that situation would be to precommit to a universal social contract; but it’s another thing to actually be in that original position, as a barely-sentient fetus who can’t even understand basic concepts. Such a simple-minded being wouldn’t be capable of anticipating and weighing all their different options like we’ve just done, so how would they be capable of making the kind of pre-birth precommitment to morality that we’re talking about here? Even if doing so was in their best interest, wouldn’t it just be a moot point if they weren’t capable of knowing that and agreeing to it in the first place?

But that’s the thing; they wouldn’t have to be capable of understanding it. Remember the Master Preference – the one preference that necessarily applies to every sentient being, regardless of their mental capacity, simply by virtue of the fact that they’re capable of having preferences at all. What it says is that no matter what, you’ll always prefer outcomes that maximize your utility, even if you aren’t explicitly aware beforehand that they will do so. Your true preference if you’re about to step in front of a speeding bus (i.e. the preference extrapolated from your Master Preference) will be to avoid doing so, even if you aren’t aware on an object level that the bus exists at all. Your true extrapolated preference if your beverage is poisoned will be to avoid drinking it, even if you aren’t aware on an object level that it’s poisoned. Your true extrapolated preference if your friend has the chance to buy a winning lottery ticket on your behalf will be that they buy it on your behalf, even if you aren’t aware on an object level that there’s even a lottery going on in the first place. And so on. And this Master Preference is one that you can hold without ever having to realize or understand that you hold it; it’s simply an emergent by-product of the concept of preference itself. As we discussed earlier, even barely-sentient animals have it. Even barely-sentient brain-damage patients have it. And yes, even barely-sentient fetuses have it – meaning that in theory, it’s entirely possible for them to endorse a particular outcome (such as becoming part of a social contract) without ever explicitly realizing it.

To illustrate this point with an analogy, let’s go back to the identical-clones version of our button-pressing game – only this time, instead of identical human participants, let’s imagine that the participants are all, say, identical dogs. Instead of being rewarded with dollars, they’re rewarded with dog toys or treats or whatever; and instead of having to manually press a button to indicate whether they’re a cooperator or a defector, they’re hooked up to a supercomputer that can perfectly simulate how their brains would react to each of the two possible scenarios (“all cooperate” or “all defect”), then measure whether the “all cooperate” scenario would give them more utility than the “all defect” scenario, or vice-versa. If the “all cooperate” scenario would give them more utility (which it would), then that would mean that their extrapolated preference was to cooperate – so the computer would mark them down as cooperators. But if the “all defect” scenario would give them more utility (which it wouldn’t), then that would mean that their extrapolated preference was to defect – so the computer would mark them down as defectors. Obviously, the dogs would have no real understanding of what was happening or how the game worked – but they wouldn’t have to understand; simply the fact that they’d prefer the results of the “all cooperate” choice would register in the computer as an endorsement of that choice over the “all defect” alternative – and so all the dogs would end up jointly committing to the “all cooperate” choice without even realizing that they were doing so.
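
Here’s a minimal sketch of that registration mechanism; the “simulation” and the utility numbers are stand-ins of my own, not anything the thought experiment specifies:

```python
# Toy sketch of the computer extrapolating each dog's preference. The hard-coded
# utilities stand in for the brain simulation's output.

def simulated_utility(dog: str, scenario: str) -> float:
    # All-cooperate showers every dog with treats; all-defect leaves each dog
    # with only the small defection bonus.
    return 100.0 if scenario == "all cooperate" else 10.0

def register(dog: str) -> str:
    prefers_cooperating = simulated_utility(dog, "all cooperate") > simulated_utility(dog, "all defect")
    return "cooperator" if prefers_cooperating else "defector"

print([register(dog) for dog in ("rex", "fido", "lassie")])  # every dog registers as a cooperator
```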

Likewise, while it’s true that fetuses aren’t capable of understanding that it would be in their best interest to precommit to a universal social contract, they are capable of meta-preferring outcomes that are in their best interest, even if they don’t know what any of that actually entails. As far as they’re concerned, having some indirect mechanism opt them into the social contract (in accordance with their Master Preference) works just as well as if they directly opted themselves into the social contract via their own conscious decision-making. So what does that indirect mechanism consist of, exactly? After all, it’s not like every fetus is hooked up to a brain-simulating supercomputer that can extrapolate their meta-preferences and then register the corresponding precommitments on their behalf. Well, one answer we could give might be that when they enter into the social contract, it’s not because they’ve explicitly entered themselves into it, or because some supercomputer is acting on their behalf, but because the people already existing out in the world are playing that role (like someone signing a literal paper contract on behalf of a friend who wished to sign it themselves but couldn’t), entering the pre-born individuals into the social contract simply by virtue of their own preference (as already-born individuals) that everyone act morally toward them. That is to say, any time someone thinks (even in just a vague, subconscious way) that others ought to do what’s good, or that they have an obligation to behave morally – which, of course, is something that people think all the time, since they desire to be treated morally themselves – that person is essentially doing the equivalent of pressing the green button on behalf of everyone in the original position who would want someone to do that for them. They’re opting them into the social contract merely by thinking it (since after all, thoughts are all that’s necessary to reify a social contract; there’s no physical action necessary like pressing a button). On this account, it’s possible that the first time a conscious being became intelligent enough to conceive of the idea that individuals ought to act morally (even if they didn’t think of it in those exact words but only conceptualized it abstractly), that thought alone would have been enough to opt every sentient being into the social contract from that point forward. It would have been the equivalent of declaring, “Any sentient being in the original position who agrees with this statement, that they would prefer (or meta-prefer) to be precommitted to acting morally after they’re born, is hereby precommitted to acting morally after they’re born.” And of course, since every sentient being in the original position would have a Master Preference agreeing with that statement, then they would all be precommitted to acting morally after their birth. It would be like when Odysseus’s crew bound him to the mast so he could enjoy the greater pleasure of hearing the Sirens’ song – only with everyone being figuratively bound to the mast at once, so they could all enjoy the benefits.

That’s one way of thinking about how people could be opted into the social contract without ever being conscious of it: It might be that we’re all simply born into it. Another idea, though, which I find more interesting, is that maybe it’s not necessary to have some outside agent (like a supercomputer or another person) opt you into the social contract at all; maybe, merely by the fact that your Master Preference is to cooperate, you’ve already implicitly opted yourself into it. That is, maybe the very logic of meta-preference itself necessitates you being precommitted to the social contract, simply by virtue of your implicitly endorsing that outcome over the alternative. Here’s how we might think of it: When you’re in the original position, you have an extrapolated preference which says, “If I’m currently in the original position, and if I’m about to be born into a world full of other beings who also started their lives from the original position, then (for acausal trade reasons) my preference is to act to maximize global utility from this point forward, even if it later comes at the expense of my own object-level utility.” True, you aren’t yet able to consciously understand that you have this preference; but that doesn’t really matter, because your awareness of it has no bearing on whether it actually applies to you or not. Given the forward-facing nature of its terms (“from this point forward”), it remains in effect over you for the rest of your life. And that in itself is enough to make it a kind of precommitment, in the same way that a preference to, say, never visit a particular restaurant under any circumstances (even if you later think it’d be a good idea) would constitute an implicit precommitment to never visit that restaurant. In other words, the preference is the precommitment, simply because it includes terms that apply unconditionally to your extended-future behavior. And although that doesn’t mean that you’ll be physically compelled to maintain your precommitment throughout the rest of your life (again, you don’t have to do anything in the literal sense – you can always go back on any precommitment you might have), it does mean that if you do later violate your precommitment (by trying to maximize your own object-level utility rather than global utility), you won’t just be going against the preferences of those others whose utility you’re reducing; you’ll also, in a sense, be going against your own preferences – maybe not at the object level, but at the meta level. You’ll be “getting your actions wrong,” in the same sense that someone might get a math problem wrong by saying that 2+2=5 or something – because in this framework, morality really is like a math problem that has a right answer and a wrong answer; and the right answer is that everyone be implicitly committed to maximizing global utility from the moment they first begin existing, and that no one ever go back on that commitment.

Of course, I should include an important qualification here, because this answer isn’t quite an absolute in every case. After all, there are plenty of individuals, like animals and severely brain-damaged people, who simply don’t have the mental capacity to understand and uphold such moral obligations – so it hardly seems fair to expect those individuals to act just as morally as those who can fully grasp the concept of morality. Where, then, do they fit into this system? Can we really consider them to be acting wrongly if they harm others without realizing that what they’re doing is bad? Are they just left out of the social contract on the basis that they can’t hold up their end of the bargain? Well, let’s think back to the original position again. When you’re behind the veil of ignorance, you don’t know whether you’re going to ultimately turn out to be a cognitively healthy human, or a human with cognitive disabilities, or an animal of some other species with a lower mental capacity. Considering that fact, it wouldn’t exactly make sense for your Master Preference to precommit you to an absolutist social contract that would exclude you if you ended up being born as one of those individuals with a lower mental capacity; by definition, the extrapolated preference that would maximize your expected utility would have to be one that accounted for your different possible levels of cognitive ability. What this means, then, is that when you enter the social contract, you aren’t just making an implicit precommitment to always do what’s right, full stop; what you’re really doing is making an implicit precommitment to always do what’s right to the greatest extent that your cognitive ability allows for. And that means that if you happen to be born as, say, a crocodile or a boa constrictor, then you wouldn’t actually be breaking any kind of moral law or social contract if you regularly killed your prey slowly and painfully, despite the fact that your actions were a net negative in utilitarian terms. Your inability to comprehend the morality of what you were doing would exempt you from the kind of moral judgment we apply to most humans. That’s the reason why we don’t hold animals morally accountable for their actions; and it’s also the reason why we consider mentally ill criminals to be less culpable than mentally healthy ones, and why we’re more forgiving toward children than toward adults, and so on. Moral obligation is something that can only really exist in proportion to a being’s ability to genuinely understand and consider the utility of other beings – because after all, as we discussed earlier, the morality of an action is based on its expected utility, not its actual utility; and if a particular being is only able to consider its own utility, then we can’t accuse it of acting immorally for failing to account for others’. We have to grade these individuals on a curve, so to speak. Even as we do so, though, the important thing to remember is that even though they have a lower capacity to judge the morality of their actions, that doesn’t entitle them to any less moral treatment from us – because after all, they are still part of the social contract, and we more cognitively developed individuals are capable of understanding that and treating them accordingly.

(Incidentally, this point also has an interesting implication for our earlier discussion regarding the possibility of a “utility monster” like a deity or a super-advanced alien. Assuming such a being had a greater mental capacity than we do – and a greater ability to understand morality – it would accordingly also have an even greater obligation to act morally than we do. That is, even though its most strongly-held preferences would count for more than ours would in the utilitarian calculus, this being would also have a greater obligation than we do to satisfy the preferences of sentient beings other than itself. So although we might still worry that its interests would take priority over our own in incidental ways (like if it needed something important and we were in the way), we might not need to worry so much that it would be morally entitled to, say, torture us for fun (at least not if its nature was otherwise comparable to ours) – because in the same way that it’s immoral for us to torture animals for fun, it would be even more immoral for a more advanced being to torture us for fun.)
