Objective Morality


Speaking of acting responsibly toward those who are less powerful, though, let’s quickly finish up that point about applying these ideas to non-human species before we move on to anything else. For simplicity, I’ve mostly been referring to humans up to this point – but really, when I talk about fulfilling individuals’ preferences and maximizing their utility and so on, I’m talking about sentient beings in general, which can include both human and non-human species.

A lot of moral theories don’t leave any room for non-human species at all. As Tyler Cowen writes:

Some frameworks, such as contractarianism, imply that animal welfare issues stand outside the mainstream discourse of ethics, because we have not and cannot conclude agreements, even hypothetical ones, with (non-domesticable) animals. There is no bargain with a starling, even though starlings seem to be much smarter than we had thought.

Likewise, many religious conceptions of morality don’t include any consideration for animals, on the basis that animals lack souls and are therefore entitled to no more moral consideration than plants or rocks.

But under the system I’ve been laying out here – in which the goodness or badness of an action is defined simply by how much value is ascribed to it by subjective agents – animals do have to count for something, because just like humans, they too are capable of preferring some outcomes to others. Granted, those preferences might be purely implicit; a cat might never explicitly articulate that it prefers one brand of cat food to another, or that it prefers playing with a ball of yarn to being mauled by a larger animal. But as we’ve already established, preferences don’t have to be articulated in order to exist; a human infant, for instance, doesn’t have to be capable of speech (or even particularly complex thought) in order to prefer being fed to going hungry. Merely the fact that a being (whether human or non-human) is capable of preferring some outcomes to others means that its interests must necessarily count for something in the moral calculus.

I think most of us understand this intuitively; it’s hard to find someone who thinks there’s nothing wrong with torturing kittens, for instance. Still, a lot of people will try to selectively deny their intuitions on this point (usually as a means of defending their carnivorous diets) by arguing that animals aren’t actually capable of holding mental states at all – that they’re basically just automata whose behavior is purely robotic. Sam Harris provides a good response to this argument:

What is it like to be a chimpanzee? If we knew more about the details of chimpanzee experience, even our most conservative use of them in research might begin to seem unconscionably cruel. Were it possible to trade places with one of these creatures, we might no longer think it ethical to so much as separate a pair of chimpanzee siblings, let alone perform invasive procedures on their bodies for curiosity’s sake. It is important to reiterate that there are surely facts of the matter to be found here, whether or not we ever devise methods sufficient to find them. Do pigs led to slaughter feel something akin to terror? Do they feel a terror that no decent man or woman would ever knowingly impose upon another sentient creature? We have, at present, no idea at all. What we do know (or should) is that an answer to this question could have profound implications, given our current practices.

All of this is to say that our sense of compassion and ethical responsibility tracks our sense of a creature’s likely phenomenology. Compassion, after all, is a response to suffering – and thus a creature’s capacity to suffer is paramount. Whether or not a fly is “conscious” is not precisely the point. The question of ethical moment is, What could it possibly be conscious of?

Much ink has been spilled over the question of whether or not animals have conscious mental states at all. It is legitimate to ask how and to what degree a given animal’s experience differs from our own (Does a chimpanzee attribute states of mind to others? Does a dog recognize himself in a mirror?), but is there really a question about whether any nonhuman animals have conscious experience? I would like to suggest that there is not. It is not that there is sufficient experimental evidence to overcome our doubts on this score; it is just that such doubts are unreasonable. Indeed, no experiment could prove that other human beings have conscious experience, were we to assume otherwise as our working hypothesis.

The question of scientific parsimony visits us here. A common misconstrual of parsimony regularly inspires deflationary accounts of animal minds. That we can explain the behavior of a dog without resort to notions of consciousness or mental states does not mean that it is easier or more elegant to do so. It isn’t. In fact, it places a greater burden upon us to explain why a dog brain (cortex and all) is not sufficient for consciousness, while human brains are. Skepticism about chimpanzee consciousness seems an even greater liability in this respect. To be biased on the side of withholding attributions of consciousness to other mammals is not in the least parsimonious in the scientific sense. It actually entails a gratuitous proliferation of theory – in much the same way that solipsism would, if it were ever seriously entertained. How do I know that other human beings are conscious like myself? Philosophers call this the problem of “other minds,” and it is generally acknowledged to be one of reason’s many cul de sacs, for it has long been observed that this problem, once taken seriously, admits of no satisfactory exit. But need we take it seriously?

Solipsism appears, at first glance, to be as parsimonious a stance as there is, until I attempt to explain why all other people seem to have minds, why their behavior and physical structure are more or less identical to my own, and yet I am uniquely conscious – at which time it reveals itself to be the least parsimonious theory of all. There is no argument for the existence of other human minds apart from the fact that to assume otherwise (that is, to take solipsism as a serious hypothesis) is to impose upon oneself the very heavy burden of explaining the (apparently conscious) behavior of zombies. The devil is in the details for the solipsist; his solitude requires a very muscular and inelegant bit of theorizing to be made sense of. Whatever might be said in defense of such a view, it is not in the least “parsimonious.”

The same criticism applies to any view that would make the human brain a unique island of mental life. If we withhold conscious emotional states from chimpanzees in the name of “parsimony,” we must then explain not only how such states are uniquely realized in our own case but also why so much of what chimps do as an apparent expression of emotionality is not what it seems. The neuroscientist is suddenly faced with the task of finding the difference between human and chimpanzee brains that accounts for the respective existence and nonexistence of emotional states; and the ethologist is left to explain why a creature, as apparently angry as a chimp in a rage, will lash out at one of his rivals without feeling anything at all. If ever there was an example of a philosophical dogma creating empirical problems where none exist, surely this is one.

What really drives home Harris’s argument is the fact that (as Richard Dawkins points out) although it’s easy for us humans to imagine that we’re morally distinct from other animals because there’s such a wide gap between our relative levels of intelligence, there’s no reason why this gap had to exist in the first place. It’s merely an accident of evolutionary history that all the intermediate species between humans and chimps happen to have gone extinct; it’s perfectly possible to imagine a world in which they’d all survived to this day. What if our modern-day world included humans and Neanderthals and australopiths and chimpanzees all living alongside one another? How would we think differently about the moral status of animals if there were no clear taxonomic cutoff between them and our own species? Here’s Dawkins:

[There is a popular] unspoken assumption of human moral uniqueness. [But] it is harder than most people realise to justify the unique and exclusive status that Homo sapiens enjoys in our unconscious assumptions. Why does ‘pro life’ always mean ‘pro human life?’ Why are so many people outraged at the idea of killing an 8-celled human conceptus, while cheerfully masticating a steak which cost the life of an adult, sentient and probably terrified cow? What precisely is the moral difference between our ancestors’ attitude to slaves and our attitude to nonhuman animals?  Probably there are good answers to these questions. But shouldn’t the questions themselves at least be put?

One way to dramatize the non-triviality of such questions is to invoke the fact of evolution. We are connected to all other species continuously and gradually via the dead common ancestors that we share with them. But for the historical accident of extinction, we would be linked to chimpanzees via an unbroken chain of happily interbreeding intermediates. What would – should – be the moral and political response of our society, if relict populations of all the evolutionary intermediates were now discovered in Africa? What should be our moral and political response to future scientists who use the completed human and chimpanzee genomes to engineer a continuous chain of living, breathing and mating intermediates  – each capable of breeding with its nearer neighbours in the chain, thereby linking humans to chimpanzees via a living cline of fertile interbreeding?

I can think of formidable objections to such experimental breaches of the wall of separation around Homo sapiens. But at the same time I can imagine benefits to our moral and political attitudes that might outweigh the objections. We know that such a living daisy chain is in principle possible because all the intermediates have lived – in the chain leading back from ourselves to the common ancestor with chimpanzees, and then the chain leading forward from the common ancestor to chimpanzees. It is therefore a dangerous but not too surprising idea that one day the chain will be reconstructed – a candidate for the ‘factual’ box of dangerous ideas. And – moving across to the ‘ought’ box – mightn’t a good moral case be made that it should be done? Whatever its undoubted moral drawbacks, it would at least jolt humanity finally out of the absolutist and essentialist mindset that has so long afflicted us.

Of course, just because we acknowledge that animals can in fact hold legitimate moral interests doesn’t mean that all animal interests necessarily count the same. The preferences of a mosquito don’t automatically carry as much weight as those of a human simply because the mosquito is sentient; we still have to account for how strongly those preferences are held, just as we did when we were weighing human interests against other human interests. Given that mosquitos lack the mental capacity to hold their interests particularly strongly (at least not as strongly as humans do), their preferences will always count for relatively little in the utilitarian calculus. By that same token, though, if you move a bit up the sentience scale to a species with a somewhat greater mental capacity, like a turtle or a lizard, its preferences will accordingly carry more weight. And if you move even further up the scale to a species with a greater mental capacity still, like a chimpanzee, its preferences will count for even more than the preferences of those other species. Once you get to humans, which are capable of holding the strongest preferences of any species, those preferences will count the most of all. Ultimately, then, the general trend here will be that the moral weight carried by species’ preferences will scale with their degree of sentience. (Or to phrase it slightly differently, the value of an animal’s overall well-being will be proportional to its degree of sentience.) That’s not necessarily because some species are just innately superior to others in some metaphysical way, mind you; after all, as Dawkins’s point reminds us, the whole concept of distinct species is a fairly fuzzy one to begin with, and there’s a lot of overlap and gray area between species. Rather, the reason why some species’ preferences count for more than others’ is simply because their greater mental capacity allows them to ascribe more goodness to those preferences, and they accordingly derive greater utility from having their preferences satisfied. In other words, it’s not even really necessary to consider a particular being’s species at all when weighing preferences against each other; all that matters in the utilitarian calculus is the amount of utility attached to each of those preferences.

And to be clear, this doesn’t mean that every human preference automatically outweighs every preference held by a non-human; it’s perfectly possible for a chicken, for instance, to derive more utility from continuing to live than a human would derive from eating that chicken for dinner. That being said, though, if there were a situation in which the chicken’s preference to continue living was being weighed against the same preference in a human – like if the human was starving or something and had no other source of food – then the human’s preference to continue living would outweigh the chicken’s, and it would be morally permissible for the human to eat the chicken. The human’s survival would produce a greater level of utility than the chicken’s survival, so that would be the better outcome. As always, the more subjective value is ascribed to an outcome, the more morally good it is – regardless of which species are involved.
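
To make the shape of that comparison concrete, here’s a minimal sketch in Python. The utility figures are invented purely for illustration (nothing here pretends to measure real utilities), and the helper function is just a hypothetical stand-in for the weighing process:

```python
# A toy illustration of the weighing described above. The utility figures are
# invented; the point is only that the comparison looks at how much utility each
# being attaches to an outcome, not at which species is involved.

def better_outcome(utilities_a: dict[str, float], utilities_b: dict[str, float]) -> str:
    """Return the label of whichever outcome carries more total utility."""
    return "A" if sum(utilities_a.values()) >= sum(utilities_b.values()) else "B"

# Well-fed human vs. chicken: the chicken's stake in staying alive is the larger one.
print(better_outcome(
    {"chicken keeps living": 50.0},           # outcome A
    {"human enjoys a tasty dinner": 10.0},    # outcome B
))  # -> "A"

# Starving human vs. chicken: survival against survival, and the human's (assumed)
# greater capacity to value its own life tips the scale the other way.
print(better_outcome(
    {"human keeps living": 200.0},            # outcome A
    {"chicken keeps living": 50.0},           # outcome B
))  # -> "A"
```

The particular numbers are arbitrary; what matters is that the species labels do no work in the comparison – only the utilities attached to each preference do.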

Now you might be thinking, wait a minute, if the goodness of an action is defined by how much utility it gives living beings, and if the amount of utility a particular being can derive from a particular action depends on its degree of sentience or mental capacity, does that mean that it would be morally better to let a human infant die than to let a chimpanzee die? After all, chimps are by all accounts more intelligent and self-aware than human infants, so presumably they’d derive greater utility from staying alive, right? The thing is, though, despite the fact that human infants can’t hold their immediate preferences as strongly as chimps can, the things being desired in this scenario – namely, the human’s expected remaining life and the chimp’s expected remaining life – aren’t equivalent. Assuming an average life course for both, 80-odd years of human experiences will be better than 30-odd years of chimp experiences. So accordingly, it’s possible that a preference for the former could be valued more highly than a preference for the latter, even if the individual desiring the former wasn’t as sentient as the individual desiring the latter. It’s basically the chicken-vs.-human scenario all over again – only with the infant’s “experience a human lifetime” preference standing in for the chicken’s “experience a chicken lifetime” preference, and the chimp’s “experience a chimp lifetime” preference standing in for the hungry-but-not-starving human’s “enjoy a tasty chicken dinner” preference. Despite the human infant’s lower level of sentience, the more moral thing to do would be to spare its life instead of the chimp’s, because the utility value of its desire would outweigh the utility value of the chimp’s desire. And just to make this point clear, if we imagined a slightly different scenario in which this was no longer the case – like if, say, the world was going to end in a week (so the infant would never get the opportunity to grow up and experience human childhood and human adulthood and everything else that would make its continued existence preferable to the chimp’s continued existence) – then I actually think that sparing the chimp rather than the human infant would in fact be the better choice, because the chimp’s valuation of its remaining one week of life would now outweigh the infant’s. But more on this topic later. For now, let me just address one more thing regarding non-human species.
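
Before that, though, here’s a rough sketch of how the arithmetic in this infant-versus-chimp comparison might look. Everything in it is an assumption made up for illustration – the “richness” values, the lifespans, even the idea that utility scales linearly with time – but it shows why the same comparison can flip when the time horizon shrinks:

```python
# A toy model of the infant-vs-chimp comparison. "Richness" stands in for how much
# utility a being derives from a year of experience at that stage of life; every
# number here is invented purely to show how the arithmetic works.

def infant_life_utility(years_remaining: float) -> float:
    """Human infant: a low-richness infancy (assumed ~2 years), much richer experience after."""
    infancy = min(years_remaining, 2.0)
    later_life = max(years_remaining - 2.0, 0.0)
    return infancy * 0.5 + later_life * 2.0      # assumed richness values

def chimp_life_utility(years_remaining: float) -> float:
    """Chimp: steadier, moderately rich experience throughout (assumed value)."""
    return years_remaining * 1.0

# Normal case: roughly 80 years ahead for the infant, roughly 30 for the chimp.
print(infant_life_utility(80), chimp_life_utility(30))      # 157.0 vs 30.0 -> spare the infant

# End-of-the-world variant: both have about one week (~0.02 years) left, so the
# chimp's richer present experience now carries more weight.
print(infant_life_utility(0.02), chimp_life_utility(0.02))  # 0.01 vs 0.02 -> spare the chimp
```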

Up to this point, all the examples of sentient beings I’ve been using have been either humans or animals – i.e. the only kinds of sentient beings that we actually know to exist at the moment. But when I talk about sentient beings and their preferences, everything I’m saying could just as easily apply to any kind of being capable of valuing things as good or bad – and that could include anything from aliens to AIs to deities (assuming such things can exist). Because of this, it’s important for us not only to think about how to weigh the preferences of species less mentally developed than us humans, but also about how to weigh the preferences of those possible beings that might very well be more mentally developed and more sentient than us. The way I’ve been describing things so far, we humans are at the top of the sentience scale, so our interests will always count for more than those of other species (all else being equal). But what if some hypothetical being existed that was even more sentient and more capable of ascribing value to its preferences than we are? Would that being’s preferences have to take priority over everyone else’s, even if it caused some degree of suffering for the rest of us? Alexander phrases the question this way: “Wouldn’t utilitarianism mean if there was some monster or alien or something whose feelings and preferences were a gazillion times stronger than our own, that monster would have so much moral value that its mild inconveniences would be more morally important than the entire fate of humanity?”

Well… maybe so, actually. Alexander continues:

Imagine two ant philosophers talking to each other about the same question. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”

But I think humans are such a being! I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn’t just human chauvinism either – I think I could support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants (presumably) do.

I can’t imagine a creature as far beyond us as we are beyond ants, but if such a creature existed I think it’s possible that if I could imagine it, I would agree that its preferences were vastly more important than those of humans.

You might think that this idea of a “utility monster” (as Robert Nozick famously called it) is sufficient grounds for rejecting utilitarianism – but if you do think that, then by the same logic, you’d also have to reject the idea that the preferences of human beings must carry more weight than the preferences of ants. It might be uncomfortable to think about, but there isn’t technically any logical reason why humans have to be the be-all end-all of morality. Honestly, I think that most of our instinctive reluctance to accept the idea just comes from the fact that, although we might easily be able to imagine a being whose level of sentience was two or three times our own, the idea of a being trillions of times more mentally developed than we are is simply beyond our ability to imagine.

That being said, of course, it might have occurred to you that the idea of such a being does in fact already exist – in the idea of God – and that what’s more, billions of people do in fact accept the idea that the most moral thing to do is to satisfy this being’s preferences, even if it means lesser beings have to suffer. Does that mean, then, that divine command theory – the religious idea that the moral good is whatever God says it is – is actually valid? Well, in my last post on religion, I mentioned that if a god did exist (and that’s a big “if”), then it would in fact make sense to try and satisfy its preferences. Still though, I should stress that that’s not necessarily the same thing as saying that those preferences would represent any kind of absolute goodness in the sense that divine command theory claims. (That is, a god’s preferences wouldn’t just automatically be 100% good no matter what they were.) Under a utilitarian system, a god’s preferences would be just like any other kind of preferences – they’d still have to be weighed against the interests of every other being in the universe – and if there was enough negative utility on the other side of the scale to outweigh the positive utility that the god would gain from having its preferences satisfied, then the most morally good thing to do in that situation would be to deny the god’s preferences. What’s more, like every other preference in the universe, the preferences of a god would still be subject to the Master Preference – so for instance, if there were a sadistic god whose preference was to torture humans for fun, but it turned out that this preference wasn’t what actually maximized that god’s utility, it would once again be morally good to disregard the god’s explicit preference and instead do what produced the highest level of utility overall. At any rate, the very idea of such a cruel god whose sadism has to be indulged might not even be morally coherent in the first place; there’s good reason to believe that the higher a level of sentience a particular being attains, the more of an obligation it has to treat other sentient beings humanely. But there’s still a lot of ground to cover before I can properly explain the reasons for this, so for now I’ll just leave it at that and move on.

Before I do, though, I should note that all this talk about hypothetical beings and their interests raises a potentially interesting implication for the idea of moral truth in general. I mentioned at the start of this post that goodness and badness aren’t just inherent properties of the universe that exist “out there” somewhere; the only way something can be good or bad is if it’s judged to be so by a sentient being. In other words, goodness and badness aren’t “mind-independent,” as philosophers call it – they’re exclusively the product of sentient minds. That being said, though, the fact that we can make objective statements about what would be morally true if certain hypothetical beings existed suggests that it’s possible for there to be moral statements that are true regardless of whether the beings they describe actually exist. We can make conditional statements like “If sentient beings with properties A, B, and C existed, and if they held preferences X, Y, and Z, then the best moral course of action would be such-and-such” – and those conditional statements really can be true, in the same sense that mathematical statements are true, even if no sentient beings actually exist at all. So in that sense, it’s possible for moral truths to exist which are objective and mind-independent, even if goodness and badness themselves aren’t. In other words, it’s possible for objective definitions of good and bad, for every imaginable circumstance, to be fundamentally built into the logic of existence itself.
