Now that we’ve got the basic skeleton of our moral system in place, then, let’s flesh it out a bit with some potential implications and applications. In particular, now that we’ve straightened out exactly what we mean by terms like “goodness” and “morality” and “expected utility,” let’s turn to the question of what exactly we mean when we say we want to maximize these things. This might seem like a question with a fairly obvious answer; maximizing utility just means creating the highest level of preference satisfaction across the whole universe of sentient beings, right? But defining “the highest level,” as it turns out, isn’t actually as straightforward as you might think. When we say we want to maximize the utility of sentient beings, do we mean that we want to produce the highest sum total of utility, in aggregate terms? (And if so, does that mean we should be trying to have as many children as possible, even if their average quality of life isn’t all that high, just so we can have a greater raw amount of utility in the world?) Or do we mean that we want to produce the highest average level of preference satisfaction for our universe’s inhabitants? (And if so, does that mean we should kill all but the most ecstatic people on the grounds that they’re bringing down the average utility level?)
Well, let’s take that last question first, since it’s the easier one to answer: No, we obviously shouldn’t kill everyone whose satisfaction level is less than perfectly optimal. Killing people who want to live represents a significant reduction in utility, not an increase. True, in a vacuum, a universe containing (say) a billion extremely satisfied people plus a billion moderately satisfied people would have a lower average utility level than a universe containing just the billion extremely satisfied people alone. But for the purposes of our question here, taking those two universes in a vacuum wouldn’t be the right comparison; the actual choice we’d be making would be between a universe containing a billion extremely satisfied people plus a billion moderately satisfied people, or a universe containing those same two billion people except that half of them get killed. Killing those people wouldn’t remove their utility functions from the equation; it would just reduce their utility levels to zero, and would thereby reduce their universe’s overall utility level as well, both in aggregate terms and in average terms. Granted, the fact that they no longer existed would remove their utility functions from any subsequent utility calculations in future moral dilemmas; but there’s no basis for leaving their preferences out of the utility calculation determining whether to kill them or not, because their preferences would be very much affected by that decision. You can’t just wait an arbitrary amount of time after you commit your actions before you start weighing people’s utility levels; you have to count any and all preferences that your actions affect – whether those effects happen sometime in the future or right away. And that means you can’t kill people off just because you think the world’s average satisfaction level would be higher without them.
Maybe, if you could find some people whose utility levels were so low that they were actually negative, and there was little hope that they could ever turn positive again – like if they were suffering from an agonizing terminal illness and were begging to die – then killing them could in fact be a good thing overall. But killing people who want to live (even if their satisfaction levels are less than optimal) obviously counts as a reduction in utility – so it can’t be considered moral, for the simple reason that you can’t just judge the morality of an action based on its later aftermath; you have to account for its immediate effects as well.
Now, having said that, it’s important not to overcorrect and start thinking that the immediate effects are all that matter either; the long-term effects still count just as much. This is another issue that sometimes comes up in these ethical discussions – whether we should consider our obligations to future generations to be just as strong as our obligations to each other here in the present, or whether our moral obligations diminish over time. But for the same reason that you can’t just wait an arbitrary amount of time after you commit your actions before you start weighing people’s utility levels, you also can’t just wait an arbitrary amount of time after you commit your actions and then stop weighing people’s utility levels. Even if you don’t expect the effects of your actions to happen until far into the future – in fact, even if the people who will be affected by your actions haven’t been born yet – that doesn’t mean you can simply disregard them or pay them less attention. Again, you have to count any and all preferences that your actions will affect, no matter when those effects happen. So that means your utility estimations have to not only account for whether your actions might cause some immediate harm or benefit, but also whether they might lead to potential harms or benefits in the distant future. Even some act that might not seem to have any immediate effect at all could still be extremely good or extremely bad depending on what long-term effects it might have on future generations.
Of course, a complicating factor here is the fact that morality is based on expected utility, and you can hardly ever be as certain about the future effects of your actions as you are about their immediate effects. When the effects of your actions are instantaneous, you can usually make a pretty decent estimate of how good or bad they’ll be, who will be most affected by them, and so on. But when the effects extend far into the future, it’s harder to make those kinds of estimations accurately – and the further into the future you look, the more uncertainty there is. If you’re planning on dumping a bunch of toxic chemicals into a nearby river, for instance, and those chemicals will instantly render the local water supply undrinkable, then it’s pretty obvious how negative the outcome of that action will be. But if the chemicals might not spoil the water supply for another hundred years, then it becomes harder to say for certain that the effects will still be just as bad. Maybe water purification technology will have advanced enough in that intervening hundred years that the chemicals will no longer pose a danger; or maybe we’ll all be living on Mars in a hundred years and no one will want to drink the water from Earth anyway; or maybe we’ll all have been destroyed by a meteor or something in the meantime and it’ll all be a moot point. Whatever negative utility you might expect an action (like spoiling the water supply) to produce in a vacuum, then, you have to discount that estimation in proportion to the odds that such a negative outcome might not actually happen. And the further into the future a particular outcome is projected to be, the more likely it is that some unforeseen event will stop it from happening; so all else being equal, the average amount of negative utility you might expect from a harmful action will typically be less if it’s further in the future (and likewise for the amount of positive utility you might expect from a beneficial action). 
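To put this discounting idea in slightly more concrete terms, here’s a toy sketch in Python. Everything in it is invented for illustration (the utility figure, the probabilities, and the function itself, like all the made-up numbers in this chapter); it just shows how an outcome’s utility gets weighted by the odds that it actually comes to pass:

```python
# Toy sketch of probability-discounted expected utility (all numbers made up).
def expected_utility(outcomes):
    """outcomes: list of (utility, probability) pairs for possible results."""
    return sum(u * p for u, p in outcomes)

# Dumping chemicals that spoil the water supply immediately: near-certain harm.
immediate = expected_utility([(-1000, 0.875)])  # -875.0

# The same harm deferred a century: purification tech, migration to Mars, or
# a meteor might each prevent it, so the odds of the harm occurring are lower.
deferred = expected_utility([(-1000, 0.25)])    # -250.0

print(immediate, deferred)
```

The deferred harm counts for less here solely because it’s less likely to happen, not because the future people it would affect matter less.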
That being said, of course, in some cases it’s entirely possible that something might happen in the future that would actually make the outcome more extreme than if the action were taken right now – like in the river-polluting example, maybe some new microbe might emerge in the water supply that would exacerbate the chemicals’ negative effects or something – so in that case, you’d have to adjust your estimations of the future effects to be more drastic than your estimations of the present effects. But for the most part, pushing an action’s effects further into the future tends to bring its expected outcome closer to neutral, simply because of the ever-increasing chance that the world might end or something in the meantime. (At least, that’s my personal intuition; I could certainly be wrong.) Either way, the point here is just that when we discount the expected utility of a far-future outcome, that’s solely because of the increased amount of uncertainty associated with it, not because individuals living in the future are in their own category of moral status, or because future preferences somehow constitute their own category of moral consideration. After all, every moral decision is fundamentally a question of satisfying future preferences; whether those preferences will be satisfied in the immediate future (i.e. right after you make your decision) or in the distant future is just a matter of degree, not of kind. As Derek Parfit sums up: “Later events may be less predictable; and a predictable event should count for less if it is less likely to happen. But it should not count for less merely because, if it happens, it will happen later.”
In short, we should try to take actions that maximize the utility of future individuals in the same way that we should try to take actions that maximize the utility of present-day individuals, given that they actually come into existence and are affected by our actions; and if we aren’t sure whether they will come into existence and be affected by our actions, then we should discount our utility estimates in proportion to that uncertainty. But now this raises another question – or more specifically, it brings us back around to the question I raised at the start of this section: If maximizing utility really is our goal, then does that mean that in addition to satisfying the preferences of future beings that we already expect will exist, we’re also obligated to bring new future beings into existence, so as to maximize their utility as well? Are we obligated to have as many children as possible, on the basis that they’d surely prefer existence to nonexistence, and that therefore the only way to truly maximize future utility would be to maximize the number of future people? Is this what we’re actually agreeing to when we enter the social contract? Or are we agreeing to some other definition of utility maximization?
Well, this is where things get a little trickier. See, when we talk about maximizing utility within the framework I’ve been outlining here, we aren’t just talking about maximizing abstract self-contained units of happiness; we’re talking about satisfying the preferences of sentient beings. As Parfit writes:
[The] principle [that the goal of morality is to maximize the aggregate quantity of utility in the universe] takes the wrong form, since it treats people as the mere containers or producers of value. Here is a passage in which this feature is especially clear. In his book The Technology of Happiness, [James] MacKaye writes:
Just as a boiler is required to utilize the potential energy of coal in the production of steam, so sentient beings are required to convert the potentiality of happiness, resident in a given land area, into actual happiness. And just as the Engineer will choose boilers with the maximum efficiency at converting coal into steam, Justice will choose sentient beings who have the maximum efficiency at converting resources into happiness.
This Steam Production Model could be claimed to be a grotesque distortion of this part of morality. We could appeal [instead] to
The Person-affecting Restriction: This part of morality, the part concerned with human well-being, should be explained entirely in terms of what would be good or bad for those people whom our acts affect.
This is the view advanced by [Jan] Narveson. On his view, it is not good that people exist because their lives contain happiness. Rather, happiness is good because it is good for people.
Given this premise, we can definitively say that if some being’s preferences exist or will exist in the future, then we’re obligated to include them in our moral calculus; that much is clear enough. But what’s not so clear is whether an as-yet-nonexistent being’s hypothetical preference that they be brought into existence could also be included in those considerations – because it’s hard to see how such a preference could even exist in reality at all. Sure, once a particular being came into existence, then they’d have a preference to continue existing; but it wouldn’t really make sense to talk about a nonexistent being having a preference to begin existing in the first place – because after all, such a being wouldn’t exist, and therefore couldn’t hold any such preference. The only way a preference can exist is if there’s a sentient being there to hold it; if there’s no sentient being, there can’t be any preference, much less an obligation to satisfy it. The rule of thumb that I keep coming back to here is that the morality of an action is determined by any and all preferences that it affects (and by “affects,” I mean “satisfies or violates”) – but if you go by that rule, then a hypothetical person’s preferences can’t be affected by your decision to bring them into existence, because they don’t have any preferences until after the fact. Obviously, if you do decide to bring them and their preferences into existence, then those preferences will be affected by whatever situation you bring them into – so in that context, you will have to consider how they’ll be affected – but you can’t put the cart before the horse. Again, preferences have to actually be satisfied or violated by your action in order for them to count toward your decision to take that action. 
What this suggests, then (as Johann Frick puts it), is that “our reasons to confer well-being on people are conditional on their existence.” And what this means is that when we talk about our obligation to maximize global utility, we’re not just talking about an obligation to maximize the raw quantity of utility in our universe; what we’re really talking about is an obligation to maximize the utility of the beings that exist or will exist in our universe. It’s a subtle difference, but a crucial one – because while the former commitment would require us to create and satisfy as many new people and preferences as we could, the latter requires no such thing. As Narveson phrases it, the goal is simply “making people happy, not making happy people.” Or in William Shaw’s words, “Utilitarianism values the happiness of people, not the production of units of happiness. Accordingly, one has no positive obligation to have children. However, if you have decided to have a child, then you have an obligation to give birth to the happiest child you can.”
This might not seem quite right, given everything else we’ve been considering here. After all, even if there can be no such thing as a nonexistent person having a preference to start existing, wouldn’t bringing a new person into existence still be a utility-positive act (assuming their life was worth living) simply because of all the other preferences that they’d develop and then satisfy after they were born? And yes, it’s true that creating and then satisfying new preferences would in fact produce a greater quantity of utility in aggregate terms. But again, that’s not really the question here; the real question is whether it’s morally obligatory to give a nonexistent person the opportunity to create and satisfy all those hypothetical preferences in the first place. That is, when you tacitly agree to enter into the social contract, does that contract only include real people, or does it include hypothetical people as well? Is it even possible for a hypothetical person to enter into a social contract at all, or is a social contract conditional on the existence of its participants?
As another way of answering this, let’s imagine going back to the original position again. Behind the veil of ignorance, you have no idea who you’ll turn out to be after you’re born, or what your traits or preferences will be, or anything like that. Your only givens are that you exist, you’re sentient, and you accordingly have a Master Preference that your utility be maximized. We also know that whatever social contract you agree to in this position (as an extension of your Master Preference), every other being that comes into existence will also be precommitted to the same social contract, since every other being also starts its existence from the original position, and since all three of the above conditions (existence, sentience, Master Preference) must necessarily be met before a sentient being can do or think or prefer anything else. Given this situation, then, what kind of social contract would your Master Preference commit you to? Well, first and foremost, it would commit you to a social contract that accommodated the preferences of other sentient beings existing at the same time as you, since that would ensure that those other beings would likewise be committed to accommodating your preferences. Additionally, it would commit you to a social contract that accommodated the preferences of sentient beings that would exist in your future, since that would ensure that you’d be coming into a world where those who had lived before you had likewise been committed to accommodating your preferences (since you’d exist in their future). But would your Master Preference also compel you to enter into a social contract that included hypothetical beings along with these real ones? Would it be in your best interest to have yourself and everyone else committed to continually bringing new beings into existence, even at your own expense, as long as these new beings’ utility gains outweighed your utility losses? 
Even behind the veil of ignorance, where you’d have no idea who you might end up becoming, it’s hard to see how this could be the case – because the one thing you would know about yourself in the original position was that you already existed. In terms of your own self-interest, you wouldn’t have any reason whatsoever to want everyone to be committed to bringing new hypothetical beings into existence – because you’d know with 100% certainty that you wouldn’t be one of those hypothetical beings, but would be one of the already-existing ones who might be made worse off by bringing those new beings into existence. In other words, there’s no way that being committed to this kind of social contract could be in your best interest as an individual – so consequently, your Master Preference would not commit you to it. Granted, you would still be obligated to bring new beings into existence if doing so would increase the utility of those who already existed or would exist in the future, since you would be committed to improving the utility of that latter group. But for that same reason, if bringing new beings into existence decreased those existing beings’ utility, you’d actually be obligated not to bring them into existence, since your obligations would be toward the real beings, not the hypothetical ones. (It would be immoral, for instance, for you to create a utility monster that decreased everyone else’s utility, even if that utility monster enjoyed much greater positive utility itself.)
So all right, hopefully the premise here should be clear enough; we don’t have any obligations toward hypothetical people, so we aren’t morally required to bring them into existence, even if their lives would be utility-positive overall. Having accepted this premise, though, we’re now forced to grapple with another question: Does its logic apply in the other direction as well? That is, if we don’t have any obligations toward hypothetical people, does that mean there would be nothing wrong with bringing someone into existence whose life would be utility-negative (i.e. so utterly miserable that death would be preferable)? If we felt no need to recognize the preferences of people who didn’t exist, then would that mean we’d be free to ignore this person’s expected preference not to live? I think it’s fair to say that most of us find this idea a lot less intuitive; as Katja Grace writes, the popular consensus around this question seems to be that “you are not required to create a happy person, but you are definitely not allowed to create a miserable one.” In fact, this question has become such a sticking point in moral philosophy that it has come to be known simply as “the Asymmetry,” and has earned a reputation as one of the field’s more stubborn puzzles. That being said, though, I do think that the distinction we’ve made between hypothetical people and real people can help us navigate this issue in a coherent way. See, if you aren’t bringing anyone new into existence, then no one is actually affected by your actions; the only beings whose preferences come into play here are hypothetical ones, to whom you have no moral obligations. But if you are bringing someone new into existence, then that new person is very much going to be affected by your actions, so you are obligated to account for their preferences. (It’s Shaw’s point again: “One has no positive obligation to have children. However, if you have decided to have a child, then you have an obligation to give birth to the happiest child you can.”) The moment you decide to bring that new person into existence, you convert their status from “person not expected to exist” to “person expected to exist” – and accordingly, you also convert their moral status from “entitled to no moral consideration” to “entitled to full moral consideration.” Now you might be thinking, “But wait a minute, wouldn’t this go both ways? If I decided to bring a new person into existence whose life was expected to be happy, then wouldn’t the same thing apply to them? Wouldn’t my decision to bring them into existence also turn them from a hypothetical person into a real expected future person whose preferences had to be recognized?” And you’d be right – it absolutely would. But the crucial point here is that making that decision to bring them into existence – not even actually bringing them into existence, mind you, just deciding to do so – is itself an action that could only have been morally justifiable if it didn’t reduce the utility of the real people who already existed or were expected to exist in the future. In other words, the only moral way to introduce a new term into the utility calculus (representing a new expected person) is if doing so wouldn’t reduce the utility levels that are already part of the calculus; it’s only once you’re past this threshold that you can consider whether taking the next step of actually bringing the new person into existence would subsequently allow for enough of their preferences to be satisfied that the act as a whole wouldn’t be utility-negative.
I realize this is kind of a weird point, so let me clarify what I mean here. Let’s say you’ve got a married couple trying to decide whether to have a child. You might regard their decision-making process as happening in two stages: In the first stage, they’ll have to consider the morality of bringing a new set of preferences (i.e. those belonging to a new child) into existence – and at this point, the child will be purely hypothetical, so morality will obligate them to only consider how their choice will affect the real people who already exist or will exist (including themselves). So if they expect that having a child will greatly enrich their own lives and won’t detract from others’ lives too much (or will greatly enrich others’ lives and won’t detract from their own lives too much), then they’ll be morally justified in bringing that new set of expected preferences into existence. The moment they pass this threshold, though, they’ll be introducing the child’s utility function into the equation – and at that point, the question of satisfying its expected preferences will now have to be considered as part of the utilitarian calculus. Once this happens, if they expect the child’s life to be utility-positive (or even utility-neutral) overall, then they’ll still be morally justified in going ahead and having it (although if they change their minds and don’t have it after all, then no preferences will have been violated – so no harm done). But if they realize that the child’s life will actually be utility-negative, so much so that its negative utility will outweigh whatever positive utility its life will bring to others – like for instance, if the child is expected to be born with an incurable and agonizing medical condition worse than death – then they won’t be morally justified in having it. Their only moral option will be to abandon their intention to have the child, and to thereby remove its utility function from the moral calculus again, effectively converting it back from a real expected future person into a purely hypothetical person (i.e. reducing its probability of existing from some positive value down to zero).
Of course, in practice, they won’t have to literally go through the whole decision-making process step-by-step in this way; in most cases, they’ll simply be able to recognize in advance what the final result would be, and then form their intention to have a child (or not) based on that anticipated result. (So if they knew they could only ever have a miserable child, for instance, they’d recognize in advance that the negative utility of the calculation’s second stage would compel them to abandon whatever intention they might have formed to have the child based on the first stage, so they’d never form that intention in the first place.) From the inside, then, this decision-making process wouldn’t feel like it was split into two stages; it would just feel like one self-contained act of simultaneously imagining and weighing all the outcomes at once. But in terms of the morality underlying their choice, this is how the “order of operations” of the utility calculus would be working, whether they were consciously aware of it or not. The key point here is just that, as far as the utility calculus is concerned, the act of bringing a new set of preferences (representing a new person) into the world, and the act of subsequently causing those preferences to be satisfied, are two separate acts, each with its own set of moral variables; and while the utility function of the would-be person would count as an input in the latter decision, it wouldn’t count as an input in the former, because at that point the person wouldn’t yet fit into the category of “someone who exists or is expected to exist” – and only individuals that fit into that category are entitled to moral consideration. 
(Or to use a better framing, maybe I should say that the person’s expected preferences wouldn’t yet fit into the category of “preferences that exist or are expected to exist,” and only preferences that fit into that category can legitimately receive moral consideration – because after all, the preferences are what are actually being weighed in the utility calculus, not the specific individuals holding them.) In the case of having children, of course, we don’t typically notice that creating and satisfying new preferences are two separate acts, because the two are always so tightly coupled that they seem to just be one single act; whenever we introduce a new set of preferences (representing a new child) into the world, it’s always within the context of some specific situation that automatically causes most of those preferences to be immediately satisfied or violated. But just as violating a preference (e.g. by killing someone) isn’t the same as removing it from the utility calculus – despite one immediately preceding the other – adding a preference to the calculus isn’t the same as satisfying it. They’re two separate things, and they have to be evaluated accordingly.
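To make this “order of operations” a little more concrete, here’s a minimal sketch of the two-stage calculus. The function names and the utility numbers are my own hypothetical inventions, not anything from the formal literature; the point is just the sequencing:

```python
# Hypothetical sketch of the two-stage "order of operations" described above.
# All function names and numbers are invented for illustration.

def stage_one_permits(effect_on_real_people):
    """Stage 1: introducing a new set of preferences is only justified if it
    doesn't reduce the utility of those who exist or are expected to exist."""
    return effect_on_real_people >= 0

def stage_two_permits(effect_on_real_people, new_persons_expected_utility):
    """Stage 2: once stage 1 is passed, the new person's expected utility
    joins the calculus, and the act as a whole must not be utility-negative."""
    return effect_on_real_people + new_persons_expected_utility >= 0

def may_have_child(effect_on_real_people, new_persons_expected_utility):
    # Both thresholds must be cleared, in order; the would-be child's own
    # utility is an input only to the second stage, never the first.
    return (stage_one_permits(effect_on_real_people)
            and stage_two_permits(effect_on_real_people,
                                  new_persons_expected_utility))

# A child who would enrich others' lives (+50) and live a happy life (+500):
print(may_have_child(50, 500))   # True
# A child who would enrich others (+50) but live a life of agony (-800):
print(may_have_child(50, -800))  # False
```

Notice that the child’s expected utility never appears in `stage_one_permits` at all – which is exactly the asymmetry being described: the hypothetical child’s happiness can’t obligate its creation, but once creation is on the table, its misery can forbid it.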
Like I said, this is a pretty subtle distinction to make; but its implications are wide-ranging. The most obvious implication, of course, is that you shouldn’t necessarily feel obligated to have kids if you don’t want to. (For the record, I do think that all else being equal, having children is something that increases global utility on average; history has shown that the more people there are in the world, the more successful and prosperous it is, since additional people don’t just take resources from others – they also produce and innovate and make life better for others. But that’s certainly not always the biggest factor; so in situations where having children would be more of a burden than a blessing, this framework would say it’s perfectly fine not to have them.) On a larger scale, this approach also implies that even if all the humans on the planet voluntarily decided to stop reproducing all at once, because for some reason it would reduce their utility if they reproduced, this would be perfectly OK in moral terms – because in such a situation, no future generations would actually exist, so no one’s preferences would be violated. (Granted, it would surely take some kind of extreme circumstance to force humanity into this position – like some new worldwide plague that caused childbearing couples to suffer unbearable pain for the rest of their lives or something – but still.)
Aside from our own reproductive choices, though, we can also apply this approach to other areas, such as our interactions with other species. One of the more common arguments in favor of breeding animals for food, for instance – probably the strongest moral argument for it – is that by bringing these new animals into existence, we’re giving them the chance to live utility-positive lives (assuming their living conditions are humane), which they otherwise would never have gotten. So even though we’re killing them after only a few months of life, that’s still better for them than if they’d never been born at all, right? But aside from the obvious intuitive objections to this line of thinking (Would it still seem as persuasive if we replaced the word “animals” with “people of a minority race” or something like that?), the argument also fails to hold up under the preference-satisfaction framework described above. The fact that a hypothetical turkey would enjoy its life if it were brought into existence isn’t a valid reason for doing so – because until you’ve actually justified the act of bringing the turkey’s preferences into existence in the first place, its utility function isn’t part of the utility calculus. You could still try to justify that choice for reasons other than the turkey’s utility – like the utility boost you’d get from eating the turkey’s meat later – but if you did, then as soon as you ostensibly justified bringing the turkey into existence, its expected preferences suddenly would count for something. In particular, the turkey’s positive expected quality of life – which hadn’t previously counted as a valid reason for bringing it into existence – would now count as a valid reason for keeping it alive after it was hatched. And assuming its desire to stay alive outweighed your desire to eat it, that would mean that the moral thing to do would be to keep it alive – so you’d no longer be able to enjoy the utility boost of eating its meat after all (unless you waited until the end of its natural lifespan to do so).
To illustrate this point a little more clearly, here’s what the whole scenario might look like in numerical terms (albeit with the disclaimer that these numbers, like all the others I’ve been making up here, are just for illustrative purposes and aren’t intended to reflect the actual real-life utility scales):
- Outcome A: You don’t bring any new turkeys into existence. The net change in global utility is zero.
- Outcome B: You bring a new turkey into existence, spend all the necessary time and money to raise it to full size, then release it into the wild on Thanksgiving Day to live out its natural lifespan. You lose 20 utility from the experience (since you do all the work of raising the turkey but don’t get to eat it), while the turkey derives 1000 utility from being allowed to live past Thanksgiving (not counting whatever utility it had already derived from its life up to that point), for a total global utility increase of +980.
- Outcome C: You bring the turkey into existence, spend all the necessary time and money to raise it to full size, then kill and eat it. You gain 10 utility from the experience (since your +30 gain from eating the turkey outweighs the -20 loss from having to raise it), but the turkey misses out on the 1000 utility it would have derived from being allowed to live past Thanksgiving, so global utility rises by a mere +10.
As you can see here, if you decide to bring the turkey into existence, you’ll be morally obligated to keep it alive, since killing it (Outcome C) would cause a 970-point reduction from what the global utility level would otherwise be (in Outcome B). Outcome C, in other words, is off the table no matter what. That means that when you initially decide whether to bring the turkey into existence, your only real choices are between Outcome A – which would leave your baseline utility level unchanged – and Outcome B – which would reduce your utility by 20. And although Outcome B would create plenty of utility for the hypothetical turkey, you have no obligations toward as-yet-hypothetical beings – only toward real ones. So when you make your decision, you’re morally obligated to maximize your own utility and select Outcome A over Outcome B. In short, the best outcome here is never to bring the new turkey into existence in the first place. You wouldn’t be doing yourself any favors by reducing your utility without any offsetting benefit, and you wouldn’t be doing the turkey any favors by bringing it into existence – but you would be doing it tremendous harm by killing it prematurely. Whether we’re talking about humans or animals, then, the bottom line here is worth reiterating: You are obligated to satisfy the preferences of sentient beings, but you aren’t obligated to create new preferences to be satisfied. Whenever the latter can only come at the expense of the former, your obligation is to satisfy the real preferences, not the hypothetical ones.
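For readers who like to see the bookkeeping spelled out, the outcome comparison above can be sketched as a short calculation. This is a minimal sketch using the made-up numbers from the three outcomes; the variable names and scoring scheme are my own illustrative inventions, not a real model of utility:

```python
# Illustrative utility accounting for the turkey scenario, using the
# purely made-up numbers from the text (not a real-world utility scale).

RAISE_COST = -20     # your utility change from raising the turkey
EAT_GAIN = 30        # your utility gain from eating it
TURKEY_LIFE = 1000   # the turkey's utility from living past Thanksgiving

def global_utility(outcome):
    """Net change in global utility relative to the zero baseline."""
    if outcome == "A":   # never bring the turkey into existence
        return 0
    if outcome == "B":   # raise it, then release it to live out its life
        return RAISE_COST + TURKEY_LIFE
    if outcome == "C":   # raise it, then kill and eat it
        return RAISE_COST + EAT_GAIN  # the turkey's 1000 never accrues
    raise ValueError(outcome)

# Once the turkey exists, its preferences count, so Outcome C is ruled
# out: it destroys 970 points of utility relative to Outcome B.
assert global_utility("B") - global_utility("C") == 970

# The live choice is therefore A vs. B, judged only by the utility of
# already-existing beings (here, just you): A leaves you at 0, B at -20.
your_utility = {"A": 0, "B": RAISE_COST}
best = max(your_utility, key=your_utility.get)
print(best)  # prints "A": never bring the turkey into existence
```

Note how the asymmetry in the argument shows up in the code: the turkey’s 1000 utility appears when ruling out Outcome C (a real turkey’s preferences count), but is deliberately excluded from the final A-versus-B comparison (a hypothetical turkey’s preferences don’t).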
(And just to drive this idea home, consider one final point: If we really wanted to test the limits of this principle, we could do away with the complication of introducing new beings into the world entirely, and just look at individual preferences alone – and the same logic would still apply. In fact, we could even narrow our scope down so far as to just consider the introduction of new preferences within one single person, and the moral reasoning would scale perfectly. Imagine, for instance, if you as an individual could choose to give yourself some new preference (say, by taking a magical pill or something). Would you take the pill? Well, the answer would obviously depend on what the new preference was. If the pill would give you a new preference for, say, eating healthier, then you might rightly consider taking it to be in your self-interest, since it would help you with your already-existing preferences. (This would be analogous to bringing a new person into existence because their existence would be a net positive for all the people who already existed.) On the other hand, though, if you were offered a pill that would give you a new preference for, say, obsessively counting and re-counting your own eyelashes in the mirror all day, then it would most definitely not be in your self-interest to take it – because regardless of how much utility you might get from satisfying this new preference, doing so would undermine your ability to satisfy your already-existing preferences. (This would be analogous to bringing a new utility monster into existence, which would detract from the rest of the world’s utility despite enjoying immense positive utility itself.) 
It would be silly to claim that the utility you’d get from satisfying the eyelash-counting preference – even if it was a substantial amount of utility – would be a valid argument in favor of creating that new preference in yourself, if creating it would involve even the slightest reduction in the satisfaction of your existing preferences. That would be like if a government, in the name of trying to achieve maximal national security, went out and created as many new enemies as possible so it could spend even more money and resources protecting itself; it would be a fundamental misunderstanding of what “maximal national security” meant. True, I’ve talked a lot about how your Master Preference means that you’ll always prefer that your preference satisfaction be maximized – but what I mean by this is that you’ll always be in favor of whatever maximizes the satisfaction of the preferences you actually hold, not whatever maximizes the satisfaction of whatever preferences you could hypothetically hold if they were forced upon you against your will. The fact that you have a meta-preference not to be satisfied in this latter way means that the creation and satisfaction of some new unwanted preference would actually represent a violation of your utility function, not a satisfaction of it. (To use another analogy, it’d be like if your car’s GPS “fulfilled” your wish to get you to your destination as quickly as possible by changing your indicated destination to a closer location; that wouldn’t be what you actually wanted at all.) Now, granted, if you did mistakenly take the pill and become obsessed with eyelash-counting, then it would be better for you to satisfy that preference than not to satisfy it. But the point is just that the goodness of satisfying that preference would be conditional on the preference actually coming into existence first; it wouldn’t be an argument for bringing that preference into existence in the first place.
At the risk of using too many analogies here, Frick provides two more good ones: First, he points out that the act of creating new preferences to be satisfied is a lot like the act of making promises. If you make a promise, then it’s a good thing to be able to keep it – but the mere fact that you think you’d be able to keep a promise if you made it does not, in itself, constitute a sufficient reason to go around making as many promises as possible just so you could then keep them. Second, Frick likens this to the prospect of owning an oxygen mask so that you can climb Mount Everest: True, it would be better to own an oxygen mask if you were planning on climbing Mount Everest, but merely owning an oxygen mask does not in itself obligate you to climb Mount Everest in the first place. Ultimately, these are conditional propositions. And the same is true of creating new preferences – not only new individual preferences within one person, but new bundles of preferences – i.e. new people – in the world. The nature of preference satisfaction is fundamentally conditional.)