This is essentially what I see as the basis for a kind of utilitarianism. If you’ve studied moral philosophy before, you’ll already be familiar with utilitarianism – but if not, its core idea is that the morality of an action is defined by how much that action benefits sentient beings like us and how little it harms them. (For this reason, it’s considered a branch of consequentialism, which simply says that the morality of an action is defined by its consequences.) This isn’t the only way of trying to define morality, of course; there’s also deontology – utilitarianism’s biggest rival – which says that the consequences of an action aren’t actually what determine its morality at all, but that morality instead consists solely of following certain unconditional rules at all times, like “Don’t lie” and “Don’t steal” and so on, regardless of the consequences. Under the deontological model, if an axe murderer shows up at your front door and asks you where your best friend is, you should tell him exactly where she is – even if you know he’s going to go kill her – because lying is immoral, period, in every context. But according to utilitarianism, morality is more than just unconditional rule-following; the conditions actually matter. You have to weigh the harms against the benefits – and whichever choice produces the least amount of harm and the greatest amount of benefit (AKA utility, AKA the kind of subjectively-ascribed goodness I’ve been talking about), that’s the moral choice.
So in the above example, for instance, if you imagined using a numerical scale to rate the amount of utility that would accrue to everyone involved, you might conclude something like: Telling the axe murderer where your friend is would give a +20 utility boost to the murderer (since it would help him in his mission to kill her), a +10 boost to yourself (since you’d get the satisfaction of being honest), and a +500 boost to the broader society (since it would help reinforce a social norm of everyone being honest all the time), but it would also result in a reduction of -10,000,000 utility for your friend (since she would be killed and lose everything), a -10,000 reduction for all her loved ones (since they’d be devastated by her death), a -300 reduction for yourself (since you’d have to live with the guilt of knowing you abetted your friend’s murder), and a -500 reduction for the broader society (since it would help reinforce a social norm of everyone readily abetting axe murderers upon request) – so on net, telling the axe murderer where your friend is would be a significant negative in terms of total utility (a loss of -10,010,270 overall) relative to the alternative choice of lying to him. These numerical utility ratings, of course, would be based entirely on each person’s subjective valuation of the possible outcomes; the utility boost that the murderer would get from killing your friend might be higher or lower than +20 depending on how much he wanted to kill her, the utility reduction that your friend would get from dying might be higher or lower than -10,000,000 depending on how much she wanted to live, and so on. 
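If it helps to see the bookkeeping laid out explicitly, the tally above can be expressed as a simple sum. This is just a minimal Python sketch using the made-up illustrative numbers from the example; the party labels and the dictionary structure are my own scaffolding, not anything from the original.

```python
# Hypothetical utility changes for each affected party if you truthfully
# tell the axe murderer where your friend is (numbers are the illustrative
# ones from the example, not real measurements).
utility_if_truthful = {
    "murderer (helped in his mission)": +20,
    "you (satisfaction of honesty)": +10,
    "society (honesty norm reinforced)": +500,
    "your friend (killed, loses everything)": -10_000_000,
    "her loved ones (devastated)": -10_000,
    "you (guilt of abetting murder)": -300,
    "society (murder-abetting norm reinforced)": -500,
}

# Sum every party's subjective gain or loss to get the net utility change.
net = sum(utility_if_truthful.values())
print(net)  # -10010270: a large net loss, so lying comes out ahead
```

Nothing about the method depends on these particular numbers, of course; the point is only that once each party’s subjective valuation is assigned, the comparison between choices reduces to arithmetic.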
But knowing the subjective valuations of everyone involved would give us all the information we’d need to objectively say whether the act of lying to the murderer would ultimately be a good thing (positive utility) or a bad thing (negative utility) – because again, goodness and badness are nothing but subjective valuations themselves, and it is in fact possible to make objective statements about such things.
(Obviously, here in the real world we can’t know the utility values of every situation with the kind of exact numerical precision used in the example above; I just made up the numbers there for illustrative purposes. But we can at least approximate. And who knows, maybe at some point in the near future, we’ll develop such advanced brain scanning and computing technology that we actually will be able to determine exactly how much utility people ascribe to things. Either way though, the fact that we might only ever have an imperfect understanding of the objective moral truth of any given situation doesn’t change the fact that there is an objective truth there to be found, whether our estimations of it are perfect or not.)
Needless to say, I think utilitarianism/consequentialism makes a lot more sense than deontology as a foundation for morality – not only because it just seems more intuitively plausible (Would a deontologist refuse to tell a lie even if the consequence was that the entire universe would be destroyed?), but also because the fundamental basis for deontology just doesn’t seem like something you can actually pin down once you probe into its internal logic. For instance, in the example of the axe murderer, you might refuse to tell a lie because you consider “Tell the truth” to be an inviolable moral duty; but who’s to say that you couldn’t just as easily consider something like “Protect the innocent” or “Don’t abet murder” to be an inviolable moral duty (which would lead you to take the opposite action and lie to the murderer)? Which rules are the ones you should consider absolute, and how should you make that determination? The guideline given by Immanuel Kant, history’s most famous deontological philosopher, is that you should only consider something to be moral if it’s universalizable – meaning that (according to its most popular interpretation) you should only do something if you’d be willing to live in a world where everyone did it. Hence, telling lies would be considered immoral, since you wouldn’t want to live in a world where everyone lied whenever they thought they could get away with it. But by that same token, refusing to protect an innocent person from being murdered would also have to be considered immoral, since you wouldn’t want to live in a world where people readily abetted murderers upon request. The moral duty of telling the truth and the moral duty of protecting innocent life would fundamentally conflict with each other in this scenario – and since (for the purposes of this thought experiment) you wouldn’t be able to uphold both at the same time, how would you then figure out what to do?
One approach, of course, would be to try to avoid the contradiction by formulating rules that were more narrowly tailored to particular circumstances – i.e. instead of having broad rules like “Don’t lie” and “Protect the innocent,” you could have much more specific rules like “Don’t lie to firefighters when they ask you where the occupants of a burning building are” or “Do go ahead and lie to axe murderers when they ask you where their would-be victims are hiding.” But even then, there are always so many nuances of circumstance in every decision, and accordingly so many potential moral tradeoffs to account for, that no two decisions are exactly alike; so if you wanted rules that were specific enough to never conflict with any other rules, you’d have to get extremely specific, to the point where you’d practically be creating a new rule for every individual situation. And at that point, you’d be defeating the whole purpose of deontology, because you’d no longer be using broadly generalizable rules of moral behavior at all; you’d essentially just be doing a more convoluted version of what utilitarians do, weighing different moral considerations against each other according to the circumstances of each situation.
At any rate, it seems to me that the whole idea of trying to have a rule-based system of morality that’s separate from all consequentialist considerations is futile from the start, regardless of how you resolve such conflicts between rules. After all, the only way of determining whether a rule can be considered universalizable in the first place is to ask whether you’d rationally want to live in a hypothetical world where the rule had been made standard for everyone to follow – and how can you answer that question without basing it on your subjective judgment of what the consequences of that hypothetical scenario would be? How can you determine whether it would be desirable to live in a world where everybody lied, or everybody stole, or everybody killed, without considering what the consequences of those actions would be if they were universalized?
This was the argument that Jeremy Bentham, the founder of utilitarianism, made against deontology. According to him (and his successor John Stuart Mill), despite deontology’s claim to be a rival theory to consequentialism, once you drilled down far enough, it turned out that it was actually based on consequentialist considerations itself without realizing it. In fact, not only was deontology subsumed by consequentialism; so was every other serious ethical theory on the market. As Michael Sandel puts it:
Bentham’s argument for the principle that we should maximize utility takes the form of a bold assertion: There are no possible grounds for rejecting it. Every moral argument, he claims, must implicitly draw on the idea of maximizing [utility]. People may say they believe in certain absolute, categorical duties or rights. But they would have no basis for defending these duties or rights unless they believed that respecting them would maximize human [utility], at least in the long run.
In other words, even if you had a system of morality that tried to define the goodness of an act by something other than its consequences – like whether the person doing it had good motives, or whether their actions adhered to a certain set of rules, etc. – once you actually dug a little deeper and asked what the basis was for those criteria (What is it that makes good motives good? What is it that makes adherence to certain rules good? Etc.), you’d ultimately have to either arrive at some consequentialist/utilitarian justification for what you were calling good, or else find yourself stuck in a tautology. That isn’t to say, of course, that it wouldn’t even be theoretically possible to have a normative system that had nothing to do with utility – you could, for instance, have a system that said something like “What constitutes goodness is always wearing green on Sundays” – but at that point it seems fair to say that what you’d then have would no longer be a real system of morality at all, but something else entirely.
(Speaking of which, I should mention for the sake of completeness that the brand of deontology promoted by Kant himself was actually somewhat different from the more popular formulation of deontology I’ve been addressing here, and did in fact make more of an effort to define goodness in a non-consequentialist way. I don’t think he was ultimately successful in this effort; as G. W. F. Hegel points out, the standard for goodness Kant uses seems more like a test of non-contradiction than a true test of morality. Still, a lot of Kant’s ideas are genuinely valuable even within a consequentialist framework, and I’ll be bringing a few of them back into the discussion later on.)
I could keep going on about the whole consequentialism vs. deontology debate, but others have already covered it exhaustively and it’s not really my main focus here, so I don’t want to spend too much more time on it. I’ll just add that if you’re still on the fence about it (or even if you aren’t), I’d highly recommend Scott Alexander’s post on the subject here; I’ll be quoting him quite a bit in this discussion. (See also his brief response here to T.M. Scanlon’s “incommensurability” counterargument.)