Objective Morality


Again, just to be clear, I do think that the distinction between acts of commission (like killing someone) and acts of omission (like passively letting someone die) can often be a useful one. As Alexander writes:

[There’s a moral difference] between murdering your annoying neighbor vs. not donating money to save a child dying of parasitic worms in Uganda. […] Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.

Speaking from a utilitarian perspective myself, I don’t think there’s necessarily any inconsistency in this stance. Despite the fact that the immediate effect of directly murdering someone would be no worse than that of merely failing to save someone – namely, that someone’s life would end prematurely – the murder would be worse for all the second-order reasons I’ve just been describing, like severely disrupting our most valuable norms and so on. It would also represent a more extreme degree of compromised character on the murderer’s part, just in the sense that deliberately breaking their moral obligations would reflect a greater willingness to act wrongly than merely failing to live up to them would – which would carry its own whole set of negative-utility implications. So even if the immediate effects were the same, we’d still have plenty of reasons to consider the vast majority of killings to be more egregious than the vast majority of incidents in which one person merely failed to save another.

What’s crucial to notice here, though, is that once you take these secondary factors out of the equation, it’s no longer apparent that killing someone actually is categorically worse than letting them die – which suggests that our reasons for considering most killings to be worse than most incidents of letting someone die are actually entirely just these secondary factors. James Rachels illustrates this point:

Many people think that […] killing someone is morally worse than letting someone die. But is it? Is killing, in itself, worse than letting die? To investigate this issue, two cases may be considered that are exactly alike except that one involves killing whereas the other involves letting someone die. Then, it can be asked whether this difference makes any difference to the moral assessments. It is important that the cases be exactly alike, except for this one difference, since otherwise one cannot be confident that it is this difference and not some other that accounts for any variation in the assessments of the two cases. So, let us consider this pair of cases:

In the first, Smith stands to gain a large inheritance if anything should happen to his six-year-old cousin. One evening while the child is taking his bath, Smith sneaks into the bathroom and drowns the child, and then arranges things so that it will look like an accident.

In the second, Jones also stands to gain if anything should happen to his six-year-old cousin. Like Smith, Jones sneaks in planning to drown the child in his bath. However, just as he enters the bathroom Jones sees the child slip and hit his head, and fall face down in the water. Jones is delighted; he stands by, ready to push the child’s head back under if it is necessary, but it is not necessary. With only a little thrashing about, the child drowns all by himself, “accidentally,” as Jones watches and does nothing.

Now Smith killed the child, whereas Jones “merely” let the child die. That is the only difference between them. Did either man behave better, from a moral point of view? If the difference between killing and letting die were in itself a morally important matter, one should say that Jones’s behavior was less reprehensible than Smith’s. But does one really want to say that? I think not. In the first place, both men acted from the same motive, personal gain, and both had exactly the same end in view when they acted. It may be inferred from Smith’s conduct that he is a bad man, although that judgment may be withdrawn or modified if certain further facts are learned about him – for example, that he is mentally deranged. But would not the very same thing be inferred about Jones from his conduct? And would not the same further considerations also be relevant to any modification of this judgment? Moreover, suppose Jones pleaded, in his own defense, “After all, I didn’t do anything except just stand there and watch the child drown. I didn’t kill him; I only let him die.” Again, if letting die were in itself less bad than killing, this defense should have at least some weight. But it does not. Such a “defense” can only be regarded as a grotesque perversion of moral reasoning. Morally speaking, it is no defense at all.

What this thought experiment demonstrates, then, is that the distinction between commission and omission doesn’t actually make any moral difference in itself; it merely coincides (typically) with the factors that do make the moral difference. As Graham Oddie writes:

A typical killing […] has horrible features which a typical letting-die lacks – malicious intent, unnecessary suffering, acting without the person’s informed consent, perhaps violation of certain rights, and so forth. [And] it is those other horrible features which make killing typically worse than letting-die. […] It is precisely in those situations in which the badness of killing humans is controversial – in the case of the terminally or congenitally ill, say – that these other horrible features may well be absent from a killing and present in a letting-die. [That is to say, passively letting a terminally ill patient die a painful, prolonged death might be worse than granting them a quicker, easier death via euthanasia.] The point of [Rachels’s thought experiment] is to force us to abstract from the typical concomitants of killing or letting-die and focus on the possible value-contribution of killing and letting-die in themselves – [which, when you compare them to each other, turns out to be a non-factor].

And this point becomes even clearer when you examine cases in which the distinction between an act of commission and an act of omission can’t even be clearly drawn in the first place. Consider, for instance, a parent who stops feeding their baby, thereby causing it to starve to death. In the most literal sense, they’re merely “allowing the baby to die” – but how is that morally different from killing it? Or consider this example, from commenter unic0de000:

Is an air traffic controller who suddenly stops moving, speaking or responding at a critical moment during their shift, resulting in a collision, engaging in “inaction”? Or would it be more inactiony of them to continue performing their job as they’d done for the previous hour?

Unic0de000 concludes, “I for one don’t think ‘[moral] inaction’ is a really coherent notion in the first place.” And although this is a counterintuitive conclusion, it fits perfectly with everything else I’ve been talking about here. Under the framework I’ve been describing, there really are no acts of omission once you get past the heuristic level and look at the foundational reality. Any time you make a moral choice of any kind – even if it’s to just sit there and “not do anything” – you’re actually still doing something, because you’re still bringing the universe onto one particular timeline-branch instead of another branch – and that in itself is an act, not an “absence of an act.” In other words, “not doing anything” simply isn’t possible here; you’re unavoidably going to be directing the universe onto one timeline-branch or another, regardless of what you do (or don’t do) at the object level. So again, although the typical act of killing someone is certainly worse than the typical act of letting someone die, the reason for this isn’t that there’s some categorical difference between killing and letting die – because such an absolute difference doesn’t exist. It all comes down to those secondary factors. (And if you still don’t believe it, Oddie actually goes so far as to provide a simple mathematical proof to this effect, designed to demonstrate that once you’ve removed all the secondary factors, there’s no moral difference at all between killing someone and letting them die – or at most, that the difference is so infinitesimal as to be negligible.)

A lot of popular approaches toward morality seem to rely on the tacit assumption that, although taking some particular action may be good or bad, not taking any action is, in a sense, morally neutral. Sure, it might be commendable for you to (say) donate a bunch of money to charity, but it isn’t immoral for you not to do so, because you aren’t actively making the world worse (even if you aren’t making it better either). Under the system I’ve been describing here, though, that’s not really how it works. The value of a moral choice isn’t measured in relation to some neutral baseline; it’s measured in relation to the value of alternative universe-branches, and it can only really be called right or wrong based on how much better or worse it is than those alternatives. So if a particular course of action (or inaction) steers the universe down a better timeline-branch than the alternative choices would have, then it’s a morally better choice; and if it steers the universe down a worse timeline-branch, then it’s a morally worse choice. The only way an act can be morally neutral is if its expected outcome is exactly as good as that of every other possible alternative (no better, no worse) – and needless to say, none of the moral choices we’ve been discussing here meet that description.
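
To make that concrete, here’s a minimal sketch in Python (with branch-utility numbers that are entirely made up for illustration) of what it means to score choices against their alternatives rather than against a neutral baseline: “do nothing” gets scored like any other choice, and the only way a choice comes out neutral is by tying the best available alternative.

```python
# Purely illustrative branch utilities: every choice, including
# "do nothing," steers the universe onto some timeline-branch.
branch_utility = {
    "donate to charity": 120.0,
    "do nothing": 100.0,
    "act destructively": 60.0,
}

best = max(branch_utility.values())

for choice, u in branch_utility.items():
    # A choice's moral standing is measured against the best available
    # alternative, not against a "neutral" baseline of inaction.
    shortfall = best - u
    if shortfall == 0:
        print(f"{choice}: morally optimal")
    else:
        print(f"{choice}: worse than the best alternative by {shortfall:g}")
```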

I think a lot of the popular motivation for wanting to weigh acts of commission and omission differently just comes from the fact that people aren’t intuitively comfortable with the idea that they’re acting immorally by not being more altruistic, or that not being altruistic makes them bad people. And to be fair, a simple black-and-white dichotomy of “moral” vs. “immoral” really isn’t the best way of thinking about the question, so their intuitions are at least somewhat defensible. We can’t just put everything into two well-defined, fixed categories of “absolutely good” and “absolutely bad;” goodness and badness exist on a continuum, and the morality of actions is a matter of degree. Someone can still be a pretty good person overall even if they don’t always behave perfectly optimally, and someone can still be a pretty terrible person overall even if they occasionally do good things. That being said, though, “pretty good overall” isn’t the same thing as “morally faultless,” and it’s important to recognize this. Even if we use a scalar measure of goodness, there’s simply no getting around the fact that, if you consider someone who donates a bunch of money to charity to be acting morally better than someone who does nothing, then that necessarily means that the person who does nothing is acting morally worse. The logic goes both ways: you can’t say that one choice is more moral without necessarily saying that the alternative choice is less moral.

In response to this point, some philosophers will argue that although it’s true that some acts are better than others, that doesn’t necessarily mean that they’re always morally obligatory. There are some acts that are morally obligatory, to be sure – like refraining from murder, or pulling a drowning child out of a bathtub – but according to this argument, others are supererogatory – meaning that although it would be commendable for you to do them, you aren’t necessarily acting immorally by not doing them. By doing them, you’re going above and beyond the call of duty, so to speak. This category would include things like donating money to help feed the poor, and providing medical assistance to the sick, and so on.

Under the system I’ve been describing, though, this distinction between obligatory acts and supererogatory ones essentially dissolves. According to this system, when we’re in the original position and we precommit ourselves to the acausal social contract, we’re placing ourselves under an obligation to always do what we expect will maximize global utility after we’re born. So that means that, here in the world today, we’re always obligated to do what’s morally best – and any dereliction of that obligation is a moral failing. In other words, in the same way that it would be wrong for us to not rescue a drowning child who was right in front of us, it would likewise be wrong for us to not donate money to rescue a dying child on the other side of the world. Singer has actually made this exact analogy famous by posing it in the form of a thought experiment, writing:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.

At this point the students raise various practical difficulties. Can we be sure that our donation will really get to the people who need it? Doesn’t most aid get swallowed up in administrative costs, or waste, or downright corruption? Isn’t the real problem the growing world population, and is there any point in saving lives until the problem has been solved? These questions can all be answered: but I also point out that even if a substantial proportion of our donations were wasted, the cost to us of making the donation is so small, compared to the benefits that it provides when it, or some of it, does get through to those who need our help, that we would still be saving lives at a small cost to ourselves – even if aid organizations were much less efficient than they actually are.

He continues:

Giving money to [an organization like] the Bengal Relief Fund is regarded as an act of charity in our society. The bodies which collect money are known as “charities.” These organizations see themselves in this way – if you send them a check, you will be thanked for your “generosity.” Because giving money is regarded as an act of charity, it is not thought that there is anything wrong with not giving. The charitable man may be praised, but the man who is not charitable is not condemned. People do not feel in any way ashamed or guilty about spending money on new clothes or a new car instead of giving it to famine relief. (Indeed, the alternative does not occur to them.) This way of looking at the matter cannot be justified. When we buy new clothes not to keep ourselves warm but to look “well-dressed” we are not providing for any important need. We would not be sacrificing anything significant if we were to continue to wear our old clothes, and give the money to famine relief. By doing so, we would be preventing another person from starving. It follows from what I have said earlier that we ought to give money away, rather than spend it on clothes which we do not need to keep us warm. To do so is not charitable, or generous. Nor is it the kind of act which philosophers and theologians have called “supererogatory” – an act which it would be good to do, but not wrong not to do. On the contrary, we ought to give the money away, and it is wrong not to do so.

Needless to say, this conclusion flies in the face of the conventional wisdom shared by many of us living comfortable lives here in the first world. But as Singer points out, the fact that it seems supererogatory to us to donate some significant percentage of our wealth to help the needy is largely just the product of our current culture, in which not donating is the norm:

Given a society in which a wealthy man who gives 5 percent of his income to famine relief is regarded as most generous, it is not surprising that a proposal that we all ought to give away half our incomes will be thought to be absurdly unrealistic. In a society which held that no man should have more than enough while others have less than they need, such a proposal might seem narrow-minded. What it is possible for a man to do and what he is likely to do are both, I think, very greatly influenced by what people around him are doing and expecting him to do.

I think his point here is spot on. If we imagined a world in which everyone had their basic needs met, because everyone felt just as much moral obligation toward each other as we presently feel toward members of our own families, then it’s not hard to imagine how anyone who decided to just worry about their own affairs and not concern themselves with the well-being of the less fortunate might be regarded with as much opprobrium as we currently regard those who don’t care for their own children. Conversely, if we imagined a world in which it was standard practice for people to wholly neglect their own children, so that those children were forced to either find their own food or starve to death, then it’s not hard to imagine how the inhabitants of that world might regard the act of sacrificing half their paycheck in order to provide their children with three meals a day as being supererogatory in the extreme. In fact, we can find similar examples of this type of mindset in our own history, where practices that we’d now regard as being absolutely obligatory – like not owning slaves, not beating your children, etc. – were considered at the time to be merely supererogatory (at best). The only reason why these practices persisted for as long as they did is that they were widely accepted as being “normal” or “standard;” but in a culture where such practices weren’t considered normal or standard – like our culture today – anyone who tried to reintroduce them as standard practice would be regarded as downright evil.

We look back on these practices from our modern perspective and consider the people who engaged in them to be moral monsters. But the thing is, I suspect that future generations will look back on some of the moral norms that we hold today with just as much horror. There are features of our modern way of life – defining features of it, actually – which we often take completely for granted, but which really only persist because we’ve already accepted them as normal, and would be unthinkable (in fact, would seem straight-up psychopathic) if the “default” mode of behavior were different. This includes the way we idolize some people for spending millions of dollars on themselves while others starve; and it also includes the way we lovingly welcome some animals into our homes while brutalizing and slaughtering others by the billions. We mostly treat these issues with a kind of casual nonchalance; but the truth is, they are absolute moral emergencies, and our descendants will in all likelihood consider us monsters for not caring more about them. (As the Jiddu Krishnamurti quotation goes, “It’s no measure of health to be well-adjusted to a profoundly sick society.”)

So what would it look like for us to actually respond to these problems with the seriousness they deserve? How much are we really obligated to sacrifice for the less fortunate? This can be a difficult question to wrestle with, because it’s hard to arrive at any answer other than “a lot.” As Harris writes:

It is one thing to think it “wrong” that people are starving elsewhere in the world; it is another to find this as intolerable as one would if these people were one’s friends. There may, in fact, be no ethical justification for all of us fortunate people to carry on with our business while other people starve (see P. Unger, Living High & Letting Die: Our Illusion of Innocence [Oxford: Oxford Univ. Press, 1996]). It may be that a clear view of the matter – that is, a clear view of the dynamics of our own happiness – would oblige us to work tirelessly to alleviate the hunger of every last stranger as though it were our own. On this account, how could one go to the movies and remain ethical? One couldn’t. One would simply be taking a vacation from one’s ethics.

For this reason, it’s easy to feel like Singer’s philosophy simply asks too much of us. In fact, this is the most popular argument against it – also known as “the demandingness objection.” For a lot of us, our instinctive response to Singer’s perspective will be to just reject it, on the grounds that if it were true, then that would imply that we weren’t good people unless we cut back on the material parts of our lifestyle and donated to the poor instead – but we know we’re good people, even though we don’t do those things, so therefore there must be something wrong with Singer’s message. However, just because a moral system contradicts our intuitive feelings about whether we’re behaving entirely morally doesn’t automatically mean that it’s wrong; an alternative explanation here might just be that, in fact, maybe we really should be going out of our way to help the less fortunate, and maybe we really are morally worse when we fail to do so (even if we’re still good people overall), and maybe utilitarianism is serving an extremely valuable function by alerting us to this fact. As Alexander writes:

People sometimes complain that a flaw of utilitarianism is that it implies heavy moral obligations to help all kinds of people whether or not any of their problems are our fault; the world is divided between those who consider that a bug and those who find it a very helpful feature.

As tempting as it might be to avert our eyes from the plight of the less fortunate and sweep our moral obligations under the rug, we have to ask ourselves, what would we think of the person who stood by the shallow pond with their hands on their hips and asked, “OK, but surely I’m not actually obligated to rescue this drowning child and ruin my nice $100 shoes, am I?”

Of course, to be fair, it’s reasonable to have concerns about just how far these obligations truly go. It’s one thing to accept that we should be willing to make a one-time donation of $100 if given the chance; but what about after we give that first $100 – are we obligated to give $100 more? Are we obligated to keep giving until we can’t anymore – until we’re just as poor as the people we’re trying to help? Cowen pushes the limits of the question:

Common sense morality suggests that we should work hard, take care of our families, and live virtuous but self-centered lives, while giving to charity at the margins and helping out others on a periodic basis. Utilitarian philosophy, on the other hand, appears to suggest an extreme degree of self-sacrifice. Why should a mother tend to her baby when she could sell it and send the proceeds to save a greater number of babies in Haiti? Shouldn’t anyone with the training of a doctor be obliged to move to sub-Saharan Africa to save the maximum number of lives? What percentage of your income do you give to charity? Given the existence of extreme poverty, shouldn’t it be at least fifty percent? If you belong to the upper middle class, how about giving away eighty percent of your income? You don’t really need cable or satellite TV, or perhaps you should eat beans with freshly ground cumin instead of meat. The bank might let you borrow to give away even more than you are earning. How terrible is personal bankruptcy anyway, if you have saved seven lives in the meantime? Is eating that ice cream cone so important? Common sense morality implies it’s OK to enjoy that chocolate but utilitarianism suggests maybe not.

These concerns are certainly understandable. After all, a world in which no one could ever enjoy any personal indulgences and parents routinely sold their children to the highest bidder doesn’t exactly sound ideal. But this brings us back to the point from earlier about what consequentialism actually means: If a particular outcome would be less desirable than the alternative, then by definition, consequentialism wouldn’t prescribe it. To be sure, if sacrificing some of your comfort for others would bring them greater utility than it would cost you, then naturally it would be good for you to do so – but if it got to such an extreme point that the sacrifices you were making were detracting from your life (or undermining valuable social norms) more than they were helping others, then clearly you’d no longer be obligated to immiserate yourself in this way. As Alexander puts it, “Assigning nonzero value to other people doesn’t mean assigning zero value to yourself.” (Or as the popular saying goes, “You aren’t required to set yourself on fire just to keep someone else warm.”) Utilitarianism isn’t the same thing as complete self-abnegation; you still have to account for your own needs too.

Still, a lot of Cowen’s examples do seem like they’d be utility-positive overall, even despite their demandingness. So how should we think about trying to strike the right balance here? Well, at the high end it’s not particularly difficult, so we can start there. If one person is a billionaire and has all their needs met, and another person is living on less than $1 per day and has to pick through landfills just to survive, then obviously it would be a major improvement in global utility for the first person to donate some of their resources to help the second person escape from poverty. You might question how a mere transfer of money could create an increase in utility; after all, wouldn’t $1,000 provide the same amount of utility to whoever owned it, regardless of whether they were a billionaire or a poor person? But that’s not really how it works; money and utility aren’t the same thing, and they don’t map perfectly onto each other. According to the economic principle of diminishing marginal utility, the more you have of something – whether that be pairs of shoes or slices of pizza or dollar bills – the less utility each additional unit of it brings you. So if you gave an extremely poor person $10,000 (or took their last $10,000 away from them), it would make a massive difference in their quality of life – that is, it would represent a massive shift in their utility level – whereas if you gave a billionaire that same amount (or took it away from them), they’d scarcely notice any change in their quality of life at all – that is, their utility level would barely budge. The billionaire donating that $10,000, then, would be a clear utility-positive act. Like I said, this one is easy.
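
For a rough sense of the arithmetic, here’s a minimal sketch in Python. It uses logarithmic utility, a standard textbook stand-in for diminishing marginal utility (an assumption on my part, not something from the sources above); under it, the same $10,000 doubles the poor person’s wealth and visibly moves their utility, while barely registering for the billionaire.

```python
import math

def utility(wealth):
    # Toy model: logarithmic utility. Any concave function of wealth
    # exhibits diminishing marginal utility; log is the standard choice.
    return math.log(wealth)

def utility_change(wealth, delta):
    # Utility gained (or lost) by moving from `wealth` to `wealth + delta`.
    return utility(wealth + delta) - utility(wealth)

poor_person = 10_000          # total wealth, in dollars
billionaire = 1_000_000_000

print(utility_change(poor_person, 10_000))    # about +0.69 (their wealth doubles)
print(utility_change(billionaire, -10_000))   # about -0.00001 (imperceptible)
```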

Where it gets trickier is when you start looking at examples where the donor’s quality of life is closer to the recipient’s. See, in addition to the whole diminishing marginal utility thing, there’s also an important factor at work here called loss aversion – a psychological phenomenon that causes people to ascribe up to twice as much negative utility to material losses as they ascribe positive utility to equivalent material gains. What this means is that, if two people start off at roughly the same level of utility, then (all else being equal) a monetary transfer from one to the other might very well hurt the donor twice as much as it helps the recipient – making the act a clear net negative overall. So when it comes to the question of how much we’re obligated to give to the less fortunate, the answer might very well be “a lot,” but it might not necessarily be “so much that you’d ultimately make yourself just as poor as they are” – because at some point before you reached that level, your loss aversion would start causing you so much disutility that it would outweigh whatever good you were doing. The extra harm you’d be bringing on yourself, by continuing to sacrifice your own well-being, would be greater than the benefit you’d be bringing to those you were trying to help – so at that point, the most utility-positive decision you could make would be to stop yourself short and preserve your own remaining utility.

(Of course, I’m assuming here that the utility you’d get from keeping a little bit of money for yourself would outweigh whatever emotional satisfaction you’d feel from spending your last dollars helping others. But the power of that satisfaction shouldn’t be underestimated; it might very well be that for many people, the gratification of sacrificing everything they have to help the less fortunate would be enough of a reward in itself to make it a positive-utility act overall. But more on that momentarily.)

As an illustration of how loss aversion and diminishing marginal utility can intersect and balance each other out, imagine a world in which wealth is denominated not in dollars, but in “wealth units.” Anyone who has ten of these wealth units (10WU) is rich enough to be able to afford whatever they want – whereas anyone who only has one of them (1WU) is barely scraping by. (And those without any WU at all are living wholly hand-to-mouth.) Under these conditions, someone with 12WU might not experience any perceptible utility reduction at all from donating 1WU to a poorer person, whereas the utility gained by the recipient would be immense – so such a donation would obviously be justified. On the other hand, someone with 5WU would be less insulated by the effects of diminishing marginal utility, so they’d experience considerably more negative utility from donating 1WU, such that it wouldn’t necessarily be an overall utility-positive action unless the recipient had started off with less than (say) 2WU themselves. And if the donor were starting off with only 2WU, then the pain they’d experience from losing 1WU might be so great (due to loss aversion) that no recipient, not even the poorest, would derive enough benefit from receiving 1WU for it to outweigh the donor’s pain – so they’d be justified in not making any donation at all. In short, we could imagine a situation like the following, and it would be perfectly compatible with what we know about diminishing marginal utility and loss aversion.
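
Here’s a minimal numerical sketch of that world in Python. The concave utility curve and the 2x loss-aversion multiplier are both assumptions chosen purely for illustration (not empirical values), but together they reproduce the whole pattern just described:

```python
import math

LOSS_AVERSION = 2.0  # assumption: losses hurt about twice as much as equal gains help

def utility(wu):
    # Toy concave utility curve over wealth units (WU): each additional
    # unit matters less than the last (diminishing marginal utility).
    return math.log(1 + wu)

def net_effect(donor_wu, recipient_wu, amount=1):
    # The donor's utility loss, amplified by loss aversion,
    # weighed against the recipient's plain utility gain.
    donor_pain = LOSS_AVERSION * (utility(donor_wu) - utility(donor_wu - amount))
    recipient_gain = utility(recipient_wu + amount) - utility(recipient_wu)
    return recipient_gain - donor_pain

# The cases described above (all numbers illustrative):
print(net_effect(12, 0))  # about +0.53: rich donor, destitute recipient; clearly good
print(net_effect(5, 1))   # about +0.04: 5WU donor, recipient below 2WU; barely good
print(net_effect(5, 2))   # about -0.08: 5WU donor, recipient at 2WU; net negative
print(net_effect(2, 0))   # about -0.12: 2WU donor; negative even for the poorest recipient
```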

Obviously, this is just a hypothetical scenario contrived purely for explanatory purposes; the utility levels in this illustration are probably significantly different from what they’d be here in our own world. The point I’m trying to illustrate here is just that it’s theoretically possible to have a world in which optimal morality doesn’t necessarily require total self-impoverishment. Exactly how far that principle truly extends in the real world is debatable, of course – and my suspicion (as I was saying before) is that even if it didn’t demand total self-impoverishment, optimal morality would still demand much more of us than we’d probably be comfortable with. (It may even be the case that the threshold at which giving is no longer obligatory is only just barely above total self-impoverishment.) That being said, though, my personal intuitions and speculations aren’t particularly important here. In the end, what really matters isn’t whether I – or anyone – would approve of a particular moral framework from our current biased perspective, but whether we’d approve of it from the Rawlsian original position. That’s the question we must always come back to: If you were told that you were about to be reincarnated (without any of your current memories or characteristics) as a completely random person somewhere in the world – and you had no way of knowing in advance what your life situation would be, what personality traits you would have, or anything like that – then what kind of moral system would you want everyone to follow (bearing in mind that the vast majority of people in the world are significantly worse-off than the average American)? Would you want those who were in the upper part of the wealth distribution to be able to live their lives without having to worry about the rest of the world’s problems at all? Would you want them to be obligated to sacrifice every last spare cent to help those who were less fortunate? Or would you prefer something in between those two extremes – and if so, where exactly would you consider the sweet spot to be?

I can’t offer an answer here that defines the exact optimal measure of obligation for every possible situation. But what does seem clear to me is that, on a societal level, we should be doing a lot more for the less fortunate than we currently are. The way our popular culture glorifies gratuitous levels of personal wealth accumulation and conspicuous consumption is simply indefensible; and accordingly, we should be working to change the social norms surrounding these practices so that the popular culture starts regarding them not only as tacky and obnoxious, but as flat-out immoral. Erin Janus delivers this point forcefully:

Singer concludes:

Andrew Carnegie, the richest man of [his] era, was blunt in his moral judgments. “The man who dies rich,” he is often quoted as saying, “dies disgraced.”

We can adapt that judgment to the man or woman who wears a $30,000 watch or buys similar luxury goods, like a $12,000 handbag. Essentially, such a person is saying: “I am either extraordinarily ignorant, or just plain selfish. If I were not ignorant, I would know that children are dying from diarrhea or malaria, because they lack safe drinking water or mosquito nets, and obviously what I have spent on this watch or handbag would have been enough to help several of them survive; but I care so little about them that I would rather spend my money on something that I wear for ostentation alone.”

Of course, we all have our little indulgences. I am not arguing that every luxury is wrong. But to mock someone for having a sensible watch at a modest price puts pressure on others to join the quest for ever-greater extravagance. That pressure should be turned in the opposite direction, and we should celebrate those […] with modest tastes and higher priorities than conspicuous consumption.

I couldn’t agree more with this sentiment. We ought to be celebrating the people who live materially modest lives so as to better help others, not those who flaunt their disposable income by blowing it on meaningless toys and status symbols. And mind you, that doesn’t necessarily mean that we should demand that nobody ever be allowed to keep any of their hard-earned wealth for themselves, or that we should mandate absolute universal redistribution of resources to such an extent that everyone’s level of wealth would be made exactly equal. After all, people still need some personal incentive to work hard and contribute to the world – and as nice as it would be if pure altruism were sufficient motivation for everyone to do that, history has shown that it typically just isn’t enough. What usually leads people to do the most good for the world (perhaps ironically but not at all surprisingly) is being allowed to keep at least some of the fruits of their own labor; so if permitting people some degree of self-indulgence is what produces the highest-utility results for everyone overall, then that’s the norm we should adopt. Either way, though, simply ensuring that everyone at least has their most basic needs met (food, shelter, medical care, etc.) is a standard that anyone in the original position would undoubtedly judge to be the bare minimum baseline for what we ought to consider morally obligatory as a society – so that’s a goal that we should unambiguously be striving for.

Regardless, for all this talk about how much an isolated individual might be obligated to sacrifice for what’s right, if we were actually able to implement a utilitarian system of ethics on a truly society-wide scale, then each individual almost certainly wouldn’t need to sacrifice that much at all. Naturally, if you were only choosing how to make the most moral use of your resources as a lone individual, then you might decide to forgo a movie or concert in order to donate the ticket money to charity instead. But if all of society chose to act collectively on the same charitable issue, it would almost certainly still have enough left over to not have to forgo movies or concerts at all. As Alexander writes:

[Q:] Might it not end up that art and music and nature just aren’t very efficient at raising utility, and would have to be thrown out so we could redistribute those resources to feeding the hungry or something?

If you were a perfect utilitarian, then yes, [as an individual], you would stop funding symphonies in order to have more money to feed the hungry. But […] utilitarianism has nothing specifically against symphonies – in fact, symphonies probably make a lot of people happy and make the world a better place. People just bring that up as a hot-button issue in order to sound scary. There are a thousand things you might want to consider devoting to feeding the hungry before you start worrying about symphonies. The money spent on plasma TVs, alcohol, and stealth bombers would all be up there.

I think if we ever got a world utilitarian enough that we genuinely had to worry about losing symphonies, we would have a world utilitarian enough that we wouldn’t. By which I mean that if every government and private individual in the world who might fund a symphony was suddenly a perfect utilitarian dedicated to solving the world hunger issue among other things, their efforts in other spheres would be able to solve the world hunger issue long before any symphonies had to be touched.

And just to drive this point home, Beth Barnes (echoing a separate post of Alexander’s) describes the ways in which the world might look different if just the richest 10% of us donated 10% of our income to worthy causes (starting at the 2:46 mark):

(Naturally, there’s plenty of room to debate exactly which of these causes would do the most good and should therefore take the most priority. As I discussed a bit in my last post, I’m personally inclined to think that putting money into certain areas of scientific research – particularly those pertaining to things like AI, nanotechnology, brain-machine interfacing, and radical life extension – could potentially be the thing that produces more positive utility than anything else in the long term, despite how speculative these areas are. But that’s a topic for a whole other post.)
