If you’ve ever spent any time browsing through right-wing discussion threads online, you’ve probably come across the slogan “Taxation is theft.” The argument, simply enough, is that because taxes are mandatory, and because the government doesn’t receive direct consent from its citizens to take their money or property, its taking of that property can’t be regarded as anything other than an outright violation of its citizens’ rights – no more legitimate than a mugger robbing them in the streets. Sure, the government might attempt to justify itself by claiming that it’s acting in accordance with some kind of “social contract” that we’ve all implicitly bought into; but none of us have ever actually signed any such contract in physical reality – so how can this be considered valid grounds for government coercion? Isn’t the government really just a kind of “stationary bandit” (to use Martin C. McGuire and Mancur Olson’s phrase), taking from its citizens for no other reason than because it’s too powerful to be opposed?
I don’t think anyone would deny that this can be true in at least some cases. If we imagine some Third World country being taken over by merciless warlords, for instance, who use their power solely to extract resources and enrich themselves at the population’s expense, it seems clear enough that this wouldn’t be a legitimate government so much as just a band of violent thugs calling themselves a government. (And it might also be fair to say that back in the old days, before the invention of democracy and rule of law and so on, this was essentially how every large-scale government worked.) But does this description apply equally to all governments, including modern democracies? Is all government action inherently coercive and unjust? Or can there be such a thing as legitimate government?
Often, the go-to response here will just be to shrug and concede that maybe government is inherently coercive, but it’s still worth it anyway because of all the benefits it brings. But some theorists have sought to go further than that, arguing that government power isn’t actually coercive at all; they’ve put forward a number of explanations (some more satisfying than others) for why government might be just and legitimate even if it doesn’t necessarily have the express consent of its citizens. Mike Huben, for instance, argues that even if citizens don’t give their explicit verbal or written consent to live under a particular government, they might still be giving their tacit consent simply by choosing to live in that particular government’s jurisdiction rather than moving somewhere else:
There are several explicit means by which people make the social contract with government. The commonest is when your parents choose your residency and/or citizenship after your birth. In that case, your parents or guardians are contracting for you, exercising their power of custody. No further explicit action is required on your part to continue the agreement, and you may end it at any time by departing and renouncing your citizenship.
Immigrants, residents, and visitors contract through the oath of citizenship (swearing to uphold the laws and constitution), residency permits, and visas. Citizens reaffirm it in whole or part when they take political office, join the armed forces, etc. This contract has a fairly common form: once entered into, it is implicitly continued until explicitly revoked. Many other contracts have this form: some leases, most utility services (such as phone and electricity), etc.
Some libertarians make a big deal about needing to actually sign a contract. Take them to a restaurant and see if they think it ethical to walk out without paying because they didn’t sign anything. Even if it is a restaurant with a minimum charge and they haven’t ordered anything. The restaurant gets to set the price and the method of contract so that even your presence creates a debt. What is a libertarian going to do about that? Create a regulation?
[…]
Why should we be coerced to accept the social contract? Why can’t we be left alone?
You are not coerced to accept US government services any more than you are coerced to rent or purchase a place to live. If pretty much all territory is owned by governments, and pretty much all houses and apartments are owned, well, did you want them to grow on trees? There ain’t no such thing as a free lunch.
Michael Huemer (himself a staunch anti-statist) gives a few more examples of popular justifications for government power, all of which rely on the idea of implicit consent rather than requiring an explicit social contract:
Explicit consent is consent that one indicates by stating, either verbally or in writing, that one consents. By contrast, implicit consent is consent that one indicates through one’s conduct, without actually stating one’s agreement. If citizens have not embraced a social contract explicitly, perhaps they have embraced it implicitly.
How can one indicate agreement without stating agreement? In some situations, one expresses agreement to a proposal simply by refraining from opposing it. I call this ‘passive consent’. Suppose you are in a board meeting, where the chairman says, ‘Next week’s meeting will be moved to Tuesday at ten o’clock. Any objections?’ He pauses, and no one says anything. ‘Good, it’s agreed’, the chairman concludes. In this situation, it is plausible that their failure to express dissent when invited to do so indicates that the board members consent to the change.
In other cases, one commits oneself to accepting certain demands by soliciting or voluntarily accepting benefits to which those demands are known to be attached. I call this ‘consent through acceptance of benefits’. For example, suppose you enter a restaurant and order a nice, tasty veggie wrap. After you eat the wrap, the waitress brings the check. ‘What’s this?’ you say. ‘I never said I was going to pay for any of this. If you wanted payment, you should have said so at the start. I’m sorry, but I don’t owe you anything.’ In this case, the restaurant could plausibly argue that, by ordering the food, you implicitly indicated agreement with the usual demand connected with the provision of that food: namely, payment of the price mentioned on the menu. Because it is well known in this society (and presumably known to you) that restaurants are generally only willing to provide food in order to get paid, it was your responsibility, if you wanted free food, to state this up front. Otherwise, the default assumption is that you agree to participate in the normal practice. For that reason, you would be obligated to pay for your meal, notwithstanding your protestations to the contrary.
A third form of implicit consent is what I call ‘consent through presence’, whereby one indicates agreement to a proposal merely by remaining in some location. While having a party at my house, I announce, loudly and clearly to everyone present, that anyone who wants to stay at my party must agree to help clean up afterwards. After hearing my announcement, you carry on partying. In so doing, you imply that you agree to help clean up at the end.
Finally, sometimes one implicitly consents to the rules governing a practice by voluntarily participating in the practice. I call this ‘consent through participation’. Suppose that, during one of my philosophy classes, I tell the students that I am going to run a voluntary class lottery. ‘Those who want to participate’, I explain, ‘will put their names into this hat. I will draw one name out at random. Each of the other participants will then pay $1 to the person whose name is drawn.’ Suppose that you put your name into my hat. When the winner’s name is drawn, you discover, alas, that the winner was not you. I come to collect $1 from you to give to the winning student. ‘I don’t owe you anything’, you insist. ‘I never said that I agreed to pay a dollar. All I did was drop my name into your hat. Maybe I was dropping it in just because I like putting my name into hats.’ In this situation, it seems that you are obligated to hand over the dollar. Your voluntary participation in the process, when it was well known how the scheme was supposed to work, implied that you agreed to accept the possible financial burden associated with my lottery scheme.
Each of these four kinds of implicit consent – passive consent, consent through acceptance of benefits, consent through presence, and consent through participation – might be used as a model for citizens’ implicit acceptance of the social contract. To begin with, perhaps citizens typically consent to the social contract merely by refraining from objecting to it (passive consent). Just as few if any of us have ever explicitly stated that we accept the social contract, few have ever stated that we do not accept it. (The exceptions are anarchists who have explicitly stated their rejection of government.)
Consent through acceptance of benefits would also confer a nearly universal authority. Nearly everyone has accepted at least some benefits from their government. There are certain public goods – such as national security and crime prevention – that the state provides automatically to everyone within its territory. These goods are not relevant to consent, because these are benefits given whether citizens want them or not. Pacifists, for instance, are given the ‘good’ of military defense, against their will. However, there are other goods that citizens have a choice about accepting. For example, nearly everyone uses roads that were built by a government. The government does not force people to use these roads; thus, this is a case of voluntary acceptance of a governmental benefit. Similarly, if one calls the police to ask for assistance or protection, if one takes another person to court, if one voluntarily sends one’s children to public schools, or if one takes advantage of government social welfare programs, then one is voluntarily accepting governmental benefits. It can then be argued that one implicitly accepts the conditions known to be attached to the having of a government – that one should help pay the monetary costs of government and obey the laws of the government.
Consider next the case of consent through presence. This, in my experience, is the most popular theory of how citizens give their consent to the state, perhaps because it is the only account that can be applied to everyone within the state’s territory. The government does not require anyone (other than prisoners) to remain in the country, and it is well known that those who live within a given country are expected to obey the laws and pay taxes. Therefore, by voluntarily remaining, perhaps we implicitly accept the obligation to obey the laws and pay taxes.
Lastly, some citizens might give implicit consent through participation in the political system. If one votes in elections, it might be inferred that one accepts the political system in which one is participating. This, in turn, might obligate one to abide by the outcome of the political process, including the laws made in accordance with the rules of the system, even when these are different from the laws that one desired.
If any of these four suggestions hold up, they would account for both political obligation and political legitimacy, at least with respect to some citizens.
Naturally, there are plenty of counterarguments to these “implicit consent” arguments. The most common is to point out that we wouldn’t automatically accept them as valid if they were applied to a tyrannical government like the aforementioned government-by-violent-warlords, so they can’t really be considered legitimate justifications for government in and of themselves. If they were – i.e. if establishing the legitimacy of government were as simple as saying “Taxation isn’t coercive because you’re free to move to another country” – then we could just as easily apply that line of reasoning to other areas, like “Domestic violence isn’t abuse because you’re free to leave your marriage,” or “School bullying isn’t violence because you’re free to change schools” – which would be pretty problematic, to say the least. As commenter BastiatFan writes:
[According to the implicit consent] argument:
In order to be taxed, you must first engage in some kind of voluntary transaction such as buying goods, owning a business, choosing to accept a gift (estate tax), etc. If you take this action knowing that you will be taxed, you have given consent to pay that tax.
This seems to rest on the assumption that so long as one has an option to avoid some threatened response from another, one consents to the response if they take the action. This is an old argument, going back at least as far as Aristotle, who argued that since slaves did not commit suicide, they consented to be ruled by their masters.
The flaw with this argument seems obvious to me, but apparently it is an attractive one to many, as it constantly recurs.
[…]
It’s quite simple: no one accepts this argument when it is applied anywhere else in life. Nowhere can someone argue this as a defense and have it accepted.
A mugger steps out of the shadows and presses a knife to his victim’s belly. “Make a sound and I’ll gut you like a fish,” he says. When the victim shrieks, have they consented to having their insides become their outsides?
The [implicit consent argument] says: “If you take this action knowing that you will be taxed, you have given consent to pay that tax.”
Is it also true that “if you take this action knowing that you will be stabbed, you have given consent to endure the knife”?
I hope not!
And so the problem is that the underlying principle is simply not one that anyone accepts. We don’t need to delve deeply into meta-ethics. It doesn’t matter which ethical framework they assume.
I don’t believe they actually accept the principle that this argument is based on.
The whole “You’re consenting by not moving away” argument can be especially hard to swallow because for most people, packing up their entire life and moving to a whole other country just isn’t that realistic an option, even if they strongly object to their government. As Scott Alexander writes:
The United States allows its citizens to leave the country by buying a relatively cheap passport and go anywhere that will take them in, with the exception of a few arch-enemies like Cuba – and those exceptions are laughably easy to evade. It allows them to hold dual citizenship with various foreign powers. It even allows them to renounce their American citizenship entirely and become sole citizens of any foreign power that will accept them.
Few Americans take advantage of this opportunity in any but the most limited ways. When they do move abroad, it’s usually for business or family reasons, rather than a rational decision to move to a different country with policies more to their liking. There are constant threats by dissatisfied Americans to move to Canada, and one in a thousand even carry through with them, but the general situation seems to be that America has a very large neighbor that speaks the same language, and has an equally developed economy, and has policies that many Americans prefer to their own country’s, and isn’t too hard to move to, and almost no one takes advantage of this opportunity. Nor do I see many people, even among the rich, moving to Singapore or Dubai.
Heck, the US has fifty states. Moving from one to another is as easy as getting in a car, driving there, and renting a room, and although the federal government limits exactly how different their policies can be you better believe that there are very important differences in areas like taxes, business climate, education, crime, gun control, and many more. Yet aside from the fascinating but small-scale Free State Project there’s little politically-motivated interstate movement, nor do states seem to have been motivated to converge on their policies or be less ideologically driven.
What if we held an exit rights party, and nobody came?
Even aside from the international problems of gaining citizenship, dealing with a language barrier, and adapting to a new culture, people are just rooted – property, friends, family, jobs. The end result is that the only people who can leave their countries behind are very poor refugees with nothing to lose, and very rich jet-setters.
[…]
So although the idea of being able to choose your country like a savvy consumer appeals to me, just saying “exit rights!” isn’t going to make it happen, and I haven’t heard any more elaborate plans.
Besides, even if it were easier for people to relocate solely for political reasons, that still wouldn’t mean much if the thing they were objecting to wasn’t just having to live under the authority of one particular government, but having to live under the authority of government in general. As Jason Brennan puts it:
You have no reasonable way of opting out of government control. Governments control all the habitable land, so we have no reasonable way to escape government rule. You can’t even move to Antarctica—the governments of the world forbid you to live there. At most, a small minority of us—those who have the financial means and legal permission to emigrate—can choose which government will rule us.
Even that—choosing which government will rule you—does not signify real consent. Imagine a group of men said to a woman, “You must marry one of us, or die, but we will let you choose whom you marry.” When she picks a husband, she does not consent to being married. She has no real choice.
So in light of all these considerations, is the implicit consent argument just totally worthless? Well, not necessarily. I’ve been giving all the counterarguments here, but I should say I’m actually fairly sympathetic to some variations of the implicit consent argument when it comes to government power. As Huemer’s examples illustrate, implicit consent is a real thing that can exist in certain situations – and it seems entirely plausible to me that this might include large-scale social contracts. (In fact, if you’ve read my post on metaethics, you’ll know that I consider a similar kind of mechanism to be what basically grounds all of morality.) Implicit consent might not fully legitimize all government action 100% of the time, of course, but that doesn’t mean there might not be at least some partial validity to the idea.
Having said that, though, it’s clear that not everyone will find the idea quite so compelling. So are there any other arguments left that we might turn to as valid justifications for government coercion? Huemer considers one other possibility (before ultimately rejecting it as well) – that although government might not always have its citizens’ consent, maybe it doesn’t necessarily need their consent for its actions to be legitimate, if the citizens themselves are acting in such an unreasonable way that coercing them is justifiable:
The legitimacy of a political system is a matter of the permissibility of imposing that system on all the members of a given society. It is, in part, a matter of the permissibility of intentionally, coercively harming those who disobey the rules produced by the system. The hypothetical social contract theory, on the present interpretation, offers the following candidate justification for this sort of coercion: one may coercively impose an arrangement on individuals, provided that the individuals would be unreasonable to reject the arrangement.
This principle stands in stark conflict with common sense morality. Imagine that an employer approaches a prospective employee with an entirely fair, reasonable, and attractive job offer, including generous pay, reasonable hours, pleasant working conditions, and so on. If the worker were fully informed, rational, and reasonable, he would accept the employment offer. Nevertheless, the employer is not ethically entitled to coerce the employee into working for him in the event that the employee, however unreasonably, declines the offer. The reasonableness of the offer, together with hypothetical consent, would bear very little ethical weight, at most slightly mitigating the wrongness of imposing forced labor.
Similar judgments apply to other exercises of coercion that would normally require consent: it is not permissible for a physician to coercively impose a medical procedure on a patient, even if the patient was unreasonable to refuse the treatment; nor for a vendor to extort money from a customer, even if the customer was unreasonable to refuse to buy the vendor’s product; nor for a boxer to compel another boxer to fight, even if the latter was unreasonable to reject the offer of a match.
Similar remarks apply to the issue of political obligation. The unreasonableness of rejecting an arrangement does not suffice to generate an obligation to comply with the arrangement. The worker in the above example is entitled to refuse the offer of employment, unreasonable though this refusal may be.
Contrasting intuitions may be drawn from another analogy. A shipwreck has stranded a number of people on a hitherto uninhabited island. The island has a limited supply of wild game, which may be hunted for food but must be conserved against extinction. Assume that the only reasonable plan is for the shipwrecked passengers to carefully limit the number of animals harvested each week. Despite these facts, one passenger refuses to accept any such limit. It seems plausible to hold that the other passengers may coercively restrain the unreasonable passenger from excessive hunting for the benefit of all on the island. Furthermore, the reasonableness of limiting the rate of hunting and the unreasonableness of rejecting such limits seems to play a crucial role in the justification for such coercion.
What is the difference between the island case and the employment contract case? The most important difference is that the employment contract case involves the seizure of a resource, the employee’s labor, to which the victim of coercion has a moral right; whereas the island case involves the protection of a resource, the wild game, over which it is plausible to ascribe a collective right, held only partly by the coercee but mostly by the coercers. The unreasonable passenger in the latter case lacks any moral right to decide unilaterally on the use or distribution of the wild game, in the way that an individual has a moral right to decide on the use of his own labor.
If we accept this account of the cases, the hypothetical social contract is more like the rejected employment contract, for the social contract concerns, perhaps among other things, the coercive redistribution of resources that individuals have rights over. Among other things, the state lays claim to a portion of all persons’ earnings, whatever the source. […] Nor is the state’s coercion undertaken solely or even chiefly in the service of protecting collective resources. Often, the state deploys coercion in the service of paternalistic, moralistic, or charitable ends or for the sake of providing indirect economic benefits for small segments of society at the expense of others. No private individual or organization would be considered entitled to use coercion for these sorts of purposes, however reasonable his plans.
Here as elsewhere, our attitudes toward government differ from our attitudes toward other agents. The unreasonableness of rejection clearly does not license a private individual to force the terms of some contract upon another individual. Yet the unreasonableness of rejecting the social contract is thought to license the state to force the terms of that contract on its citizens. What the hypothetical contract theory gives, then, is another example of the particularly lenient moral attitudes applied to government rather than a justification of those attitudes. One must begin by ascribing some special moral status to the state to believe the state morally entitled to force an arrangement on individuals merely because they would be unreasonable to reject the arrangement.
Huemer rejects the idea of government coercion because he considers it to be more like the employment contract case than the island case. And certainly, we can see for ourselves that this diagnosis is often the correct one – history is full of examples of governments unjustly forcing their citizens to give up what’s rightfully theirs, unjustly penalizing them for acts that don’t harm anyone else, and so on. Having said that, though, we again have to return to the question of whether this really describes all governments all of the time. Is there really no context in which government coercion might be justifiable as a way of preventing someone from infringing on the rights or property of others (as in the island example)? I don’t think this is the case; I think that there are plenty of ways in which government action can be legitimate in this sense. Some of the more obvious examples, naturally, include things like coercing murderers to stop murdering people, and coercing robbers to stop robbing people, and so on. But then again, even adamant anti-statists like Huemer would agree that violent criminals shouldn’t just be allowed to freely lay waste to everyone else’s lives. The real sticking point for them is just the question of whether it can therefore be considered justifiable for the government to pay for this law enforcement by coercing the rest of us as well, via compulsory taxation. And this brings us back to our original “Taxation is theft” debate; is there any basis for saying that taxation, coercive as it is, can nevertheless be ultimately just and legitimate? This is the question I want to delve into a bit more deeply here – because in my view, the situation may actually be a lot more similar to Huemer’s island case than he and his fellow anti-statists think.
II.
Anti-statists’ main argument is that people have a fundamental right to their own property, and that the government therefore can’t claim any of that property for itself without unjustly coercing them. It’s a pretty straightforward premise as far as it goes. But it does raise some even more basic questions that are worth considering: What exactly does it mean to say that someone is entitled to the property that they own? And for that matter, what does it even mean to say that someone owns property in the first place?
Let’s stick with the island analogy for a moment: Imagine that you and a bunch of other people are shipwrecked on an uncharted island (or a whole other planet, if you really want to take the thought experiment to the extreme) and have to start an entirely new civilization from scratch. How would you handle the question of property rights and who owns what? If someone just marched over to the eastern side of the island, for instance, and declared that the whole eastern half of the island belonged exclusively to them and that no one else could use it without their permission, would you be automatically obligated to respect that claim as a legitimate one? Would it be unjust and coercive to ignore their assertion and say that they had to share at least some of the island with everyone else?
These questions matter because this scenario isn’t actually an entirely hypothetical one. When we think about how the concept of land ownership originally came about here in the real world, we can see that it would have essentially been the same kind of situation. In the initial state of nature, way back before anybody owned anything, the entire earth was a common-pool resource – which is to say, it “belonged” to everybody. Generations later, here in modern times, land is owned by specific people who enjoy exclusive rights to it, while everyone else is forbidden to use it or access it without the owners’ permission. How did we get from Point A to Point B? Well, if you ask a modern-day landowner where their exclusive entitlement to their land came from, they’ll probably tell you that they bought it fair and square from the previous owner; and if you ask that previous owner the same question, they’ll probably give you the same answer; and you can keep going like that for centuries back into the past, with each owner buying the right to the land from the previous owner. But what about when you reach the very beginning of that chain? How did the very first owners of that land come to acquire it? Well, by all accounts, there wasn’t any kind of formalized process to it at all; they simply went over and asserted that they were now the exclusive owners of the land, and that everyone else would be excluded, by force if necessary. That is, it wasn’t a matter of mutually contracting with everyone else or obtaining unanimous consent from the community; they just seized the land from the commons and declared their willingness to fight off anyone who would challenge them on it. In those days before laws and governments became widespread, to say that a piece of property was “owned” by somebody simply meant that it was something they were willing and able to hold by force. In other words, their exclusive entitlement to the land they held wasn’t some kind of natural right bestowed upon them by God or whatever; it was something they were actively forcing the rest of the population to give up to them without consent.
Needless to say, this is a bit awkward for the anti-statists’ position that private property must be regarded as completely inviolable because coercing people to give it up would be unjust. Their argument is that taxing private property is wrong because it constitutes a kind of “theft” – but in this case, the property itself is the product of a kind of theft from the commons; so how can it really be the case that someone who seizes it from the rest of the community in this way should be regarded as the only one rightly entitled to its fruits?
The philosopher Pierre-Joseph Proudhon was the first one to really bring this point into the mainstream political discourse, inverting the “Taxation is theft” slogan with his equally pithy assertion that, actually, “property is theft.” But he wasn’t the only one to have noticed this issue, which has since become known as “the problem of initial acquisition.” In fact, as Matt Bruenig argues, he was just following the logic of the situation to its natural conclusion:
In Proudhon’s period, the animating assumption of essentially all works on property was that God created the earth and gave it to mankind in common to use. This was the long-standing Christian view, which was notably reflected in Thomist thought and even received a ringing endorsement from John Locke. The Christianist idea of the common ownership of all of the earth is Proudhon’s starting point.
From there, a question arises: if God initially gave the entire earth to mankind to own in common, then how can you ever have individual property? Or, to borrow a line from Locke, since the earth is given to mankind in common, “it seems to some a very great difficulty, how any one should ever come to have a property in any thing.” The correct answer to this question, reached by Proudhon but not Locke, is that the only way to move from universal common ownership to individual private ownership is through theft. When an individual appropriates pieces of the earth (e.g. land) out of the commons and into private ownership, that individual steals from everyone else. Everyone else’s ownership share in that piece of the earth is taken from them, violently and without their consent.
[…]
At this point, someone might try to get out of this outcome by saying that they don’t believe in initial common ownership. But that’s not really crucial to the argument.
Even under the hypothetical stories libertarians tell (“fact-defective potential explanations” in [Robert] Nozick’s parlance) about how property can originate, the fact is that at the initial point in time, everyone can access and use every single piece of the earth at their will. There are no restrictions. You can move about the world freely. Nobody can stop you. You truly have negative liberty in the sense that it would be wrong to interfere with your bodily movements.
But then something curious happens. Somehow (regardless of how it’s justified), individuals are permitted to appropriate pieces of the world privately. The upshot of such appropriation is that everyone else’s previously-existing ability to access and use the appropriated piece of the world is stolen from them without their consent. Those who do not go along with having their access and use stolen from them are met with violence. This is theft. Access and use, both valuable things, are taken from people at the barrel of a gun.
It’s often argued that philosophies like libertarianism and anarchism can be boiled down to a simple attitude of “You leave me alone, I’ll leave you alone.” But the fundamental problem with this idea, according to Proudhon, is that simply by claiming to be the sole owner of a piece of land or some other natural resource, you aren’t leaving others alone; you’re actively taking from them without their permission. As commenter dominosci puts it:
Even libertarian anarchists are inconsistent. The problem is that they claim to both
1. Oppose the initiation of force.
2. Support the institution of private property.
These two are in direct opposition. When someone claims private property they are claiming the right to exclude others by force. This “right” was not contractually acquired. They did not enter into an agreement with anyone. Rather, they seek to force this obligation (to give up access to the property) on others without their consent.
To be clear: I support private property. But a moral justification for property cannot be rooted in the kind of contractual framework libertarians (anarchist or not) claim to adhere to.
If you really want to argue that it’s right and just for someone to own a piece of property, you have to provide some legitimate reason why they’re the ones who should be solely entitled to it, aside from just “Might makes right.” If the property in question is something that they’ve created entirely through their own labor, for instance, then that might give you a good argument for why they should be entitled to the full value of what they’ve created. But land and other natural resources aren’t created by anyone’s labor; the only way of acquiring them is by taking them from the commons. So it’s hard to come up with a compelling reason why anyone should be able to claim the right to exclude everyone else from their use without providing them with any kind of compensation. In fact, as David Friedman notes, even if landowners could somehow demonstrate that all their land had initially been appropriated from the commons in a just and fair way, that still wouldn’t even come close to proving that here in the modern day, centuries later, the distribution of this land was still just and fair, since our species’ history of perpetual war and conquest and theft would practically guarantee that it would have been unjustly seized from its rightful owners by force (and thereby become illegitimately-held property) at some point along the way:
[One] difficulty with moral accounts of rights, in particular of property rights, is the degree to which the property rights that people actually respect seem to depend on facts that are morally irrelevant. This difficulty presents itself in libertarian accounts of property as the problem of initial acquisition. It is far from clear even in principle how unowned resources such as land can become private property. Even if one accepts an account, such as that of Locke, of how initial acquisition might justly have occurred, that account provides little justification for the existing pattern of property rights, given the high probability that any piece of property has been unjustly seized at least once since it was first cleared. Yet billions of people, now and in the past, base much of their behavior on respect for property claims that seem either morally arbitrary or clearly unjust.
As Michael Sandel writes (citing Nozick), the question of whether a piece of private property can be considered legitimate fundamentally comes down to a simple two-part test:
Nozick […] argues that distributive justice depends on two requirements—justice in initial holdings and justice in transfer.
The first asks if the resources you used to make your money were legitimately yours in the first place. (If you made a fortune selling stolen goods, you would not be entitled to the proceeds.) The second asks if you made your money either through free exchanges in the marketplace or from gifts voluntarily bestowed upon you by others. [Only] if the answer to both questions is yes [are you] entitled to what you have.
But by Proudhon’s reasoning, all instances of privately-held land and natural resources fail this test. So what are the implications of this? Should we just abolish all private ownership of land and natural resources and completely return everything to the commons? This seems like a bad idea, for all the reasons I laid out in my last post. Having said that, though, it also seems clear that if we’re going to allow people to take resources away from the rest of the population, they should at least be required to provide some kind of compensation for the resources they’re taking away – because otherwise they really would just be committing flat-out theft. As commenter Glory2Hypnotoad puts it:
Land is unique in that it’s not something you can just make; it can only be taken from the commons. So it makes sense that when a person takes a public resource and turns it into private property, they should give something back to the commons.
If we actually care about people’s right not to have their property forcefully taken away from them without their consent – the thing that anti-statists point to as the absolute foundation of their philosophy – then we should want to ensure that those whose property is taken away are reimbursed for what they’ve lost, and that those who took the property are the ones to provide that reimbursement – even if they have to be coerced into doing so – simply as a matter of basic justice. But what this means, ironically enough for the anti-statist position, is that taxing privately-held land and natural resources and redistributing their value to the broader population might not actually constitute theft at all, but on the contrary, might be a legitimate corrective measure rectifying the actual theft, and returning the value to those who were rightly entitled to it all along. In other words, it would be the precise opposite of theft. As DePonySum writes:
According to the non-aggression principle one should never interfere with the person or legitimate property of another without their permission, unless they have initiated aggression against one first. The non-aggression principle is sometimes taken to be a master argument for libertarian views against the redistribution of money or property – e.g., left wing proposals to redistribute money from the rich to the poor. I won’t argue either for or against the principle of nonaggression, as there are far more pressing ethical issues. Instead I’ll be contending that the non-aggression principle tells us nothing, at least directly, about the topic of redistribution.
In the definition of the non-aggression principle I insisted that the non-aggression principle applies to legitimate property. I’m not trying to smuggle anything especially controversial in here, by insisting on the term legitimate I’m simply insisting that you actually have to rightfully own the thing in question, it’s not enough to simply proclaim that one owns it. A moment’s reflection will show that this stipulation is necessary, if one owned everything one proclaimed one owned then many things would have multiple inconsistent ownership claims.
Consider the case of Bob. Bob passionately claims that he owns the Atlantic Ocean, he actually seems to believe this, and insists that no one should cross the Atlantic without his permission. When asked to justify this, he responds by saying that crossing his ocean without his permission is aggression, and everyone should accept an ethical norm against aggression. When confronted with this argument, there is no need to say anything for or against the non-aggression principle, one simply has to say that the Atlantic Ocean is not actually Bob’s, therefore no aggression against Bob has occurred.
This is where the champion of the non-aggression principle as a basis for libertarianism hits a problem. The supporter of redistributive taxation typically does not accept that the goods and monies to be redistributed are, in fact, the legitimate property of those they are being taken from. They hold, on the basis of a differing theory of distributive justice than that held by the libertarian, that they are the rightful property of someone else.
The libertarian will respond by insisting that, yes, the prior owner is the legitimate owner of the goods or monies in question, but notice that the argument has now strayed beyond the issue of non-aggression into a debate about who owns what. Our point is simple then, non-aggression tells us nothing about redistribution unless we assume that redistribution is a process of removing something from its rightful owner and giving it to someone else but this is part of what is under dispute in debates about distributive justice. The debate is really about who is the rightful owner of what, and unless one can win this debate, one might as well be Bob insisting that he owns the Atlantic. Just as there is no aggression against Bob implicit in sailing across the Atlantic Ocean and ‘breaching’ his sovereignty over that ocean, so perhaps there is no aggression in ‘taking’ money off [a rich person] to pay for redistribution, if the recipients of that redistribution are already the rightful owners of that money.
Put simply, taking your stuff is not aggression unless it actually does rightfully belong to you, and the whole project of the advocate for redistribution is to try and prove that, in some cases, it doesn’t.
In fact if the supporter of redistribution is correct about who rightfully owns what, then the non-aggression principle would imply that action resisting redistribution is impermissible, as it would be a form of aggression.
Now of course the libertarian has responses to the advocate for redistribution. They can critique the arguments in favour of redistribution and propound their own theories of who owns what that do not allow much of a role for redistribution, for example, as Nozick does in Anarchy, State, and Utopia. However such arguments are not primarily appeals to non-aggression, rather they are total theories of who owns what. Nonaggression simply doesn’t cut at the difference between the libertarian and the redistributivist.
[…]
I think it’s useful to take a breath and clear our mind when we think about property. A lot of people imagine property as somehow metaphysically tied to a specific owner by intangible golden threads, and it’s worthwhile to remind ourselves that this is not so.
Never forget that ultimately there are just objects. Tables, chairs, parts of land, and people, which are a special kind of object. What is property then? Property is a kind of social arrangement giving certain people certain bundles of permissions regarding certain objects, and denying those permissions to everyone else. In the final analysis then, like all permissions and refusals, property is a collection of threats of social sanction, including violence.
It seems deeply unlikely to me that we will ever be free of property understood in this way, or that this is even desirable. Even a communist state wouldn’t want people trespassing in the nuclear power reactor without the right expertise – and what is the right to collectively exclude all people who lack special permission from a site but a kind of collective property?
Essential though it may be, re-framing property as the threat of sanction and violence, and not some metaphysical linkage, brings it into a new perspective. From this standpoint there is nothing especially ‘non coercive’ about, say, anarcho-capitalism, unless you take it as given that the claims it makes about who is entitled to what are ethically just.
To their credit, the libertarians and anarcho-capitalists are right about one thing: Given a particular distribution of wealth and natural resources, the free market really is a remarkably effective mechanism for enabling the people who own that wealth to transact with each other in such a way that they’re all made better off than they would have been otherwise. What the market doesn’t guarantee, though, is that the original distribution of that wealth will have been allocated fairly or justly in the first place. And even a passing familiarity with the history of land ownership here in the real world makes it abundantly clear that our original allocation of property most certainly did not meet that standard. Historically speaking, the distribution of property rights over land and other natural resources was largely just decided by whoever was the most powerful and dangerous and could seize the most of it for themselves by force. And even by the anti-statists’ own standard, such forcible seizure of property can’t rightly be regarded as legitimate. As commenter Deonatus puts it, “The idea that someone simply claims something as theirs first and therefore it’s theirs to protect by force seems pretty arbitrary.” We can easily see how obvious this is in the above example of Bob claiming to own the Atlantic Ocean; but as bluerepublik points out (harking back to our island analogy from earlier), it’s just as true of the land as it is of the sea and the air and the sky:
As a thought experiment… imagine you were on a sinking ship, and managed to swim to an island. 10 meters in front of you is another swimmer, and he reaches the island before you. By the time you get there, he tells you to stop and turn around. “The land is mine, I own it now. Go drown in the sea.” Is this fair, that he is able to stake a permanent and exclusionary right to land simply by arriving on it 30 seconds before you? What about a few hours? What about a year? 10 years? At what point does a man have the right to decide the earth is his forever, and that all who trespass upon it lose all of their own rights?
While obviously this is an extreme example, and not one we expect to see regularly in our own lives, the basic moral principle on negative rights holds true. You are demanding that others not exist on a certain portion of the earth. All Georgists say is that if you want such exclusionary rights, you need to pay a tax – no nationalization, no public ownership, just a tax on the rent (the scarcity value of the land) – and have the rents given back [to the rest of the population].
That term “Georgist” near the end there, by the way, is referring to Henry George, the 19th-century political economist who was most responsible for popularizing this idea of a land tax. But although he’s the one whose name has become most associated with the concept, it’s an idea with a long history here in the US; and in fact, as a general principle, it goes as far back as the founding itself. Here, for instance, is Thomas Jefferson on the subject:
Whenever there are in any country uncultivated lands and unemployed poor, it is clear that the laws of property have been so far extended as to violate natural right. The earth is given as a common stock for man to labor and live on. If for the encouragement of industry we allow it to be appropriated, we must take care that other employment be provided to those excluded from the appropriation.
And here’s Thomas Paine, who was especially outspoken on the matter:
It is a position not to be controverted that the earth, in its natural, uncultivated state was, and ever would have continued to be, the common property of the human race. In that state every man would have been born to property. He would have been a joint life proprietor with the rest in the property of the soil, and in all its natural productions, vegetable and animal.
But the earth in its natural state […] is capable of supporting but a small number of inhabitants compared with what it is capable of doing in a cultivated state. And as it is impossible to separate the improvement made by cultivation from the earth itself, upon which that improvement is made, the idea of landed property arose from that inseparable connection; but it is nevertheless true, that it is the value of the improvement, only, and not the earth itself, that is individual property.
Every proprietor, therefore, of cultivated lands, owes to the community a ground-rent (for I know of no better term to express the idea) for the land which he holds.
[…]
In advocating the case of the persons thus dispossessed, it is a right, and not a charity, that I am pleading for.
Paine insisted that [taxing land to fund government benefits for the rest of the population] did not represent an abandonment of his principles of private property and free markets. Individualist to the last, Paine justified his social insurance system on strict Lockean property principles. Revenues for social insurance would come from an inheritance tax, which in his day amounted to a land tax. This was just, because landowners, in enclosing a part of the earth that was originally held in common by all, had failed to compensate everyone else for their taking. Even if they had mixed their labor with the land in the original appropriation, this entitled them only to the value their labor added to the land. They could not claim to deserve the value of the raw natural resources, or the value of surrounding uses that enhanced the market price of land. Each member of society was entitled to their per capita share of these values. So, landowners still owed a rent to everyone else. By this reasoning, Paine justified social insurance as a universal right, not a charity.
And like I said, Paine was far from alone in his views. Commenter Lbuntu provides a nice list of quotations here from all kinds of prominent thinkers expressing support for land value taxation on the same basis – from free-market economists like Adam Smith and Milton Friedman, to liberals like Joseph Stiglitz and Ralph Nader, to heads of state like Winston Churchill and Franklin Roosevelt, to various other luminaries like Albert Einstein, Aldous Huxley, Leo Tolstoy, Mark Twain, Henry Ford, Bertrand Russell, and Frank Lloyd Wright.
Alexander condenses the whole premise into this simple summary:
Capitalists deserve to keep the value they create, but they also owe rent on common resources which they enclose and monopolize (eg land, raw materials). That rent gets paid to the State (as representative of the people who are denied use of the commons) in the form of taxes. The State then redistributes it to all the people who would otherwise be able to enjoy the monopolized resources – eg everybody. I think this process where businesses pay off the government for their raw materials is pretty similar to the process where they pay off the investors for their seed money, and that the whole thing fits within capitalism pretty nicely.
Now, at some point along the way here, a couple of potential objections might have occurred to you. First, you might think, “Wait a minute, this all sounds fine if we’re talking about taxing the people who first appropriated the land from the commons hundreds of years ago – but those people are long dead now, and the people who currently own the land are a whole other set of people; so how would any of these considerations still apply to them? Are we expecting them to be the ones to pay back the full value of the land even though they aren’t the ones who originally appropriated it?” But as those last few quotations above indicate, this isn’t really the right way to understand what a land value tax is. It’s not so much that the people who currently own land are in possession of property that was never paid for, and so they now have to foot the entire bill themselves; rather, it’s that land was never truly the legitimate property of anyone other than the community in the first place – it always rightfully belonged to the community even when it was solely being occupied by one person – and so rather than making a one-time lump-sum payment for the entire purchase price of the land, those who were holding it should have been paying rent on it to the community for that entire span of time. Unfortunately, the people who controlled the land back in pre-taxation times managed to get away with never paying the rent that they rightfully owed to the community. But those who own it today can pay the rent on it; that is, they can pay an annual tax based on the rental value of the land (as opposed to the full selling price) in exchange for being allowed to continue having exclusive rights over that land for another year.
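To make the rent-versus-purchase-price distinction a bit more concrete, here’s a minimal sketch with made-up numbers. It uses the standard rent-capitalization relationship (a plot’s selling price roughly reflects its future rents discounted at some interest rate); the specific figures and the chosen tax rate are my own illustrative assumptions, not anything prescribed by George or the writers quoted above:

```python
# Illustrative only: made-up numbers, standard perpetuity (rent capitalization) formula.
annual_land_rent = 12_000   # what the bare site would rent for each year
discount_rate = 0.05        # assumed interest/discount rate

# A plot's selling price roughly capitalizes its future rents:
# price ~= annual_rent / discount_rate for a perpetual rent stream.
selling_price = annual_land_rent / discount_rate       # 240,000

# A land value tax doesn't demand that price as a one-time lump sum;
# it collects (some fraction of) the annual rent, year after year.
assumed_lvt_rate = 0.85                                 # fraction of land rent collected
annual_tax = assumed_lvt_rate * annual_land_rent        # 10,200 per year

print(f"Capitalized land price: ${selling_price:,.0f}")
print(f"Annual land value tax:  ${annual_tax:,.0f}")
```

(One wrinkle worth noting: a tax that captures most of the land rent would itself lower the selling price, since only the untaxed portion of the rent gets capitalized into the price – but the point here is just the difference between paying an annual rent and paying a one-time purchase price.)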
Naturally, one logical consequence of this is that the higher a particular piece of land’s rental value is, the more its owners will be expected to pay their community in taxes. But it makes perfect sense that this should be the case – because after all, the reason why some pieces of land are more valuable than others in the first place has much less to do with whatever improvements their private owners might make to them than with what kind of value is added by the communities in which they’re located. That is, the reason why pieces of property in San Francisco and Manhattan are so much more valuable than those in the middle of nowhere is not because the landowners have built such nice buildings on top of them; in fact, the owners of the less valuable land might just as well have built identical buildings on their own properties, and those properties still wouldn’t be worth a fraction as much. (The buildings would increase the value some, of course, but they wouldn’t be the main factor.) Rather, the reason why the properties in thriving cities are so much more valuable than those in rural backwaters is precisely because they’re located in such attractive communities, which not only have all the important necessities like accessible roads, reliable utilities, well-equipped police and fire departments, etc., but also offer access to the best available schools, hospitals, restaurants, bars, movie theaters, parks, libraries, and opportunities for employment, dating, networking, and social life (not to mention, for business owners, proximity to customers, suppliers, and other businesses). For landowners in these cities, all these things increase their property values irrespective of whether they’ve actually done anything at all themselves to increase them.

As George points out, someone could move out onto some land in the middle of nowhere, with no one else around for hundreds of miles, and that property would initially be worth very little – but if a few more people eventually moved in nearby and built (say) a grocery store, a drugstore, and a gas station, suddenly the land would be worth quite a bit more; and if still more people started moving in and making still more improvements to the budding community, the value of the land would increase further still; and it could keep on increasing like that as the area grew more and more developed until eventually the land was worth a fortune – and yet none of this would have required any contribution or effort whatsoever from the landowner, who could have just been sitting on their hands the entire time. The value that was created, in other words, would have had nothing to do with their own labor, but would have been created solely by those in the community around them – and so, because people are entitled to the value that they create by their own labor, that means that the members of the community would be the ones entitled to the additional land value that was produced, not the landowner themselves. As a matter of basic justice and fairness, then, it would only be right for the landowner to pay them a tax reflecting that value.
Taxes on windfall gains arising through no effort are popular and just. The tax system should target windfalls, not work, whenever possible. This is the aim of the land value tax. […] It targets [an] annual windfall that at present is hardly taxed at all. The lion’s share of this goes to powerful and privileged freeloaders who fight tooth and nail to keep every penny. In doing so they harm the economy and […] damage the environment.
Who are these freeloaders? Nobody has explained this better than Winston Churchill in a speech in 1909: “Roads are made, streets are made, railway services are improved, electric light turns night into day, electric trams glide swiftly to and fro, water is brought from reservoirs a hundred miles off in the mountains – and all the while the landlord sits still… To not one of these improvements does the land monopolist as a land monopolist contribute, and yet by every one of them the value of his land is sensibly enhanced.”
Churchill knew that landowners cannot change the value of a plot of land. Its value depends only on location and size. Is it near a station? A park? Good schooling? All of these factors are determined by the community, not the landowner. The landowner can increase the value of the property, by building on it, or extending existing structures. But any increase in the value per square foot of the plot on which the buildings stand is a free ride, and any profit made from this is pure freeloading on the efforts of the community.
Again, it’s worth noting that under a true Georgist land value tax, landowners would still be entitled to any value that they created through their own efforts, like any crops they might grow or any buildings they might construct on the land. Everyone would still be entitled to the fruits of their own labor. The taxes they were paying would simply represent the rent that they owed to the rest of the community for occupying the land in the process of earning that income; in other words, they’d only be giving up the portion of their wealth that they hadn’t earned themselves, as is only fair.
(Note: I’m hoping to write a whole separate post on Georgism here eventually – but in the meantime, if you want to read more about it, you can check out Lars A. Doucet’s rundown here.)
But this brings us to the second potential objection that might have occurred to you, which is that the land value tax, as reasonable as it might sound, isn’t actually the kind of tax that most governments currently impose on their citizens. I initially framed this discussion as a question of how taxation might be justified in a hypothetical scenario where we were starting completely from scratch on an uninhabited island – and okay, sure, talking about why a land value tax would be justifiable might make sense in that context – but if we actually want to defend taxation in the real world, don’t we need to justify a bunch of other taxes, like income taxes and sales taxes and so on? Here in real life, people are taxed on all kinds of things other than just the land and natural resources that they use – so merely justifying land taxes alone won’t really cut it, will it?
Well, to some extent this is kind of a moot point, since in the broadest strokes, this whole assortment of other taxes tends to roughly approximate the effects of a land value tax, with rich people who have lots of assets and properties generally paying more in taxes, and poor people generally paying less. So we can’t plausibly insist that none of the tax revenue currently being collected by these governments is legitimate, since the bulk of it would still be collected under a land value tax anyway. Still though, the overlap is far from perfect; there are a lot of people out there right now paying considerably more in taxes than they would be if land value taxes were the only form of taxation around, and a lot of other people paying considerably less. So all those other taxes still need some kind of justification if we’re going to argue that they should continue existing, right?
I don’t think it’s too hard to find fairly strong justification for most such taxes simply on the basis of things like the “consent through acceptance of benefits” rationale mentioned earlier, as well as more straightforwardly utilitarian considerations, which I’ll get into momentarily. Having said that, though, I should also say that as it happens, I’m not entirely convinced that all our current taxes necessarily should be defended; I think that in all likelihood, we really would be better off if we got rid of a lot of them and replaced them with land value taxes instead. But what does this really mean? How many kinds of taxation could actually be replaced in this way, exactly? Well, funny you should ask – because believe it or not, there’s reason to think that the answer might in fact be as many of them as we want. According to some economists, the revenue generated by a land value tax, even if it were the only tax in effect, would actually be sufficient to fund all government services entirely on its own – so in theory, if we were ever able to phase out our current grab bag of assorted taxes while simultaneously phasing in a universal land value tax, that might be the only tax we’d need to justify having at all. (It’s why George’s original version of the land value tax was just called the “single tax.”) Stiglitz developed an economic proof in the 1970s showing just how this would be possible (which came to be known, appropriately enough, as the “Henry George theorem”), and the idea has since been condensed into the modern acronym ATCOR – “All Taxes Come Out of Rents” – which, in short, says that reducing taxes on things other than land increases land values, and conversely, increasing them reduces land values, so that ultimately, one will always end up perfectly displacing the other. I’m not an economist myself, so I can’t claim to have personally confirmed all the math behind this assertion or anything like that (although Stiglitz is a Nobel Prize winner, for whatever that’s worth) – but I will say that it does make sense to me just in basic logical terms. Imagine, for instance, if there was a particular city or state you were considering moving to, and you were willing to pay $20,000 per year to live on a piece of land there. If that jurisdiction one day decided to change its laws such that you’d have to pay an additional $5,000 per year in sales taxes or income taxes, the prospect of living in that jurisdiction would now be $5,000 less attractive to you – in other words, the rental value of the land there would have decreased to $15,000 per year. And conversely, if the jurisdiction lowered your income taxes or sales taxes by $5,000 per year, living there would now be $5,000 more attractive to you, so the rental value of the land would now have increased to $25,000. What this means, then, is that if the jurisdiction completely did away with its income taxes and/or sales taxes, and replaced them outright with a full land value tax, it would end up with exactly as much tax revenue as before, since the two would completely offset; eliminating the old taxes would increase the land rent by a particular amount, then that amount would be collected by the government as a land value tax. The only difference would be that unlike the old taxes, which would have inevitably created various market inefficiencies – decreasing people’s incentive to create value and earn money, for instance, by taxing them on sales and income – a land value tax wouldn’t create any such market inefficiencies at all. 
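If it helps to see that offsetting logic laid out mechanically, here’s a minimal sketch – purely my own toy illustration using the hypothetical numbers from the example above, not an actual economic model – of how every dollar of other taxes displaces a dollar of collectible land rent, leaving total revenue unchanged:

```python
# A toy illustration of the ATCOR ("All Taxes Come Out of Rents") logic from the
# example above. Assumption (mine, for illustration only): a tenant will pay a
# fixed total of $20,000/yr to occupy the site, and the community collects the
# full land rent as a land value tax. Every dollar of other taxes then simply
# displaces a dollar of collectible rent, so total revenue never changes.

WILLINGNESS_TO_PAY = 20_000  # total yearly cost the tenant will bear to live there


def land_rent(other_taxes: float) -> float:
    """Rent the site can command once income/sales taxes are accounted for."""
    return WILLINGNESS_TO_PAY - other_taxes


def total_revenue(other_taxes: float) -> float:
    """Other taxes plus a land value tax that captures the full land rent."""
    return other_taxes + land_rent(other_taxes)


for other in (5_000, 2_500, 0):
    print(f"other taxes ${other:>5}: rent ${land_rent(other):>6}, "
          f"total revenue ${total_revenue(other):>6}")

# other taxes $ 5000: rent $ 15000, total revenue $ 20000
# other taxes $ 2500: rent $ 17500, total revenue $ 20000
# other taxes $    0: rent $ 20000, total revenue $ 20000
```

However the $20,000 gets split between “other taxes” and land rent, the total available for the community to collect stays the same – which, in this stylized setting, is all that ATCOR is claiming.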
As Doucet explains:
Today land value tax is widely considered to be the only tax that doesn’t suffer from Deadweight Loss.
Deadweight Loss is the lost economic activity or value caused by some policy. It’s often summarized by the phrase “If you want less of something, tax it.”
The place where the demand curve and supply curve meet [in a standard supply-and-demand diagram] is the equilibrium point that the market naturally tends towards. But if we impose a price control lower than what the market will bear, [the area between the two curves represents] economic activity that can’t happen. If you put price controls on gasoline, for instance, you’ll get shortages because there’s more demand than supply, and supply can’t profitably rise to meet the extra bit of demand that’s willing to pay a little more.
But here’s how things look with a land value tax: notice that the supply curve is vertical – that’s weird, what does that mean?
A vertical supply curve means no matter what the price of land is, the same amount will always be supplied. This is because you can’t make land – the supply is effectively fixed.
[…]
The supply of land being fixed has some really interesting properties. By contrast, consider oil, the supply of which is not fixed. If we tax oil, some of the more marginal wells will be too expensive to operate and make a profit, so producers shut those down and the supply of oil decreases. Deadweight loss comes from a producer’s ability to change the amount of product they supply in response to price signals. You’ll notice the above graph of land tax has no deadweight loss at all!
Since nobody produces land, it’s the one thing you can tax without getting less of it. This drives out speculators entirely. Speculators can no longer distort rents by bidding up the price of land and holding it out of use, and can no longer compete with those who actually intend to use the land. This restores the proper balance of land, labor, and capital.
Now if you work harder, or invest more capital, you can actually expect to see an increasing return without it all being gobbled up by ever-increasing rent.
If you think about it this way, land value tax has negative deadweight loss, because it eliminates the speculative distortion that is the unearned privilege of landownership.
Okay, but won’t the landlords just pass the land tax on to their tenants?
By George, no. Rent is a price, and price is governed by supply and demand. Supply of land is fixed, so land value tax has no effect on supply. What about demand? Except in cases where it causes the economy to boom (a good thing), land value tax won’t increase land value – what it always does, however, is reduce the demand for land by speculators. If it costs nothing to hold on to land, of course I’m going to want to grab some and [hold it]. If the rent I could hope to gain is taxed away, I won’t bother.
Consider the case of oil again, where a tax reduces the supply. Reduced supply, given unchanged demand, causes a rise in price. And you’ll find the increase in price tracks very closely with the amount of tax.
Land value tax is just about the only kind of tax that can’t be passed off to someone else. For more on deadweight loss and the land value tax, see Welfare Economics of the Land Value Tax by BlueRepublik.
So does this mean there can never be profitable landlords ever again? Of course not – they just have to earn their living honestly like everyone else. Remember, we don’t tax the improvements, just the “ground rent.” So [a landlord who actually works to improve and maintain the properties she manages] still gets paid for all her honest work and judicious investments, but [a landlord who makes no such efforts] doesn’t make a dime until he gets off his lazy butt and does something productive.
This is really important, because aside from speculation, the principal cause of land value increase is the productivity of your neighbors. An empty lot in the middle of nowhere is worthless, but an otherwise identical empty lot in the middle of New York City is priceless. As they say in real estate – “location, location, location.” The reason location is valuable is because of the activity and contributions of the community, and yet the landlord claims the right to seize it all as rent.
Modern economists have some interesting things to say about George’s ideas, too. In 1977 Joseph Stiglitz demonstrated that land rents have a tendency to almost perfectly equal the value of investment in public goods. He called this the Henry George Theorem. Milton Friedman famously called land value tax the “Least Worst” tax.
But one of my all-time favorite endorsements will always be that one time the economist Ramin Shokrizade unwittingly re-derived land value tax from first principles to (successfully!) fix recessions in EVE Online.
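To put some rough numbers on the contrast Doucet is drawing, here’s a small sketch – my own toy model with made-up figures, not anything from his post – comparing a per-unit tax on a producible good (where supply can shrink) with the same tax on land (where it can’t):

```python
# A rough sketch contrasting a tax on a producible good with a tax on something
# in fixed supply. Demand is linear in both cases: buyers' willingness to pay
# for the q-th unit is P(q) = A - B*q. All numbers are hypothetical.

A, B = 100.0, 1.0            # demand intercept and slope
TAX = 20.0                   # per-unit tax


def demand_price(q: float) -> float:
    return A - B * q


# Case 1: a producible good supplied at constant marginal cost C (think oil wells).
C = 40.0
q_before = (A - C) / B                       # buy until willingness to pay falls to cost
q_after = (A - (C + TAX)) / B                # the tax raises the cost of supplying each unit
deadweight_loss = 0.5 * TAX * (q_before - q_after)  # surplus lost on units no longer traded
print("producible good:",
      f"quantity {q_before:.0f} -> {q_after:.0f},",
      f"buyer price {C:.0f} -> {C + TAX:.0f},",
      f"deadweight loss {deadweight_loss:.0f}")

# Case 2: land, with supply fixed at Q_LAND no matter the price.
Q_LAND = 50.0
price_to_users = demand_price(Q_LAND)        # set by demand alone, tax or no tax
landowner_keeps = price_to_users - TAX       # the tax comes out of the owner's rent
print("fixed-supply land:",
      f"quantity {Q_LAND:.0f} -> {Q_LAND:.0f},",
      f"user price {price_to_users:.0f} -> {price_to_users:.0f},",
      f"owner's net rent {price_to_users:.0f} -> {landowner_keeps:.0f},",
      "deadweight loss 0")
```

In the first case the tax shows up in the buyer’s price and wipes out some trades entirely; in the second, the quantity and the price users pay are untouched, and the tax simply comes out of what the owner would otherwise have pocketed as rent.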
For all the reasons we’ve discussed, it seems clear that the land value tax has the strongest philosophical grounding of any tax, just in terms of being able to justify it as legitimate and fair and just and so on – which is why it’s incredibly convenient and satisfying that it also seems to be the one that creates the least inefficiency and the fewest distortions in the broader economy. What’s more, if the ATCOR principle really is true, it would mean that we could have the land value tax be the only tax we collected at all, if we were so inclined. But does all this mean that it’s therefore the only tax we should collect? Is it the only tax we should even bother trying to justify? That’s a whole other question – and the answer to that question, in my opinion, is no. Doucet talked about how land value taxes are not only non-distortive, but could in a sense be considered anti-distortive, since they effectively counteract the distortions that would otherwise occur as landowners claimed unearned wealth at their communities’ expense. But landownership isn’t actually the only area where this kind of thing can happen; in fact, there are quite a few circumstances in which some individuals can impose costs on others without their consent – and in such cases, as with land value taxation, it can make sense to impose taxes in order to offset these distortions. So what kinds of situations are we talking about, exactly? Well, let’s consider a few examples.
III.
To start us off, here’s Tim Harford on some of the negative side effects of automobile use:
Washington DC, London, Tokyo, Atlanta, Los Angeles, and Bangkok, and indeed any of the world’s great cities, are full of cars, buses, and trucks. Those vehicles seriously damage the happiness of innocent bystanders. They cause severe air pollution. Admittedly, London’s current air pollution is not as severe as the “Great Stink” of the 1850s, in which tens of thousands died of cholera. But still, air pollution from traffic is not trivial: many thousands of people die because other people want to drive. Around seven thousand people a year die prematurely because of traffic pollution in Britain, a little more than one in ten thousand. In the United States, the Environmental Protection Agency estimates that fifteen thousand people die prematurely because of the particulate matter produced from sources such as diesel engines. Within urban areas like London, the costs of delays from congestion are even worse, if you consider the number of hours spent sitting in traffic as being in any way a significant loss of productive or enjoyable life. Then there is the noise, the accidents and the “barrier effect,” which discourages people, and particularly children, from walking to school, the local stores, or even to meet their neighbors across the street.
People are not fools: it’s almost certainly true that anyone taking a trip in a car is benefiting from driving. But they are doing so at the expense of everyone else around them—the other drivers stuck in traffic, the parents who dare not let their children walk to school, the pedestrians who risk their lives dashing across the street because they are tired of waiting for the light to change, the office workers who even in the sweltering summer cannot open their windows because of the roar of the traffic.
Because each driver who gets into his car is creating misery for other people, the free market cannot deliver a solution to the problem of traffic. The external effects of congestion and pollution are important departures from the [ideal of a perfectly efficient market]. In [such a market,] every act of selfish behavior is turned to the common good. I selfishly buy underwear because I want it, but in doing so channel resources into the hands of underwear manufacturers, and do nobody any harm. Textile workers in China, where the underwear is made, selfishly look for the best job, while manufacturers selfishly look for the most capable employees. All of this works to everyone’s benefit: goods are manufactured only if people want them, and they are manufactured only by the most appropriate people to do the job. Self-centered motives are put to work for everybody.
Drivers are in a different situation. They do not offer compensation for the cost they inflict on other people. When I buy underwear, the money I spend is compensation for all of the costs incurred in making it and selling it to me. When I take the car for a drive then I do not even need to think about the costs incurred by the rest of society as I avail myself of the free roads.
In other words, whenever there’s a transaction that only involves two parties – just a buyer and a seller – the free market is an excellent mechanism for helping them coordinate and set prices at a level that makes them both better off. But in cases where a transaction has secondary effects that extend beyond just the two parties directly engaging in it, the standard market pricing mechanism often fails to account for all the relevant costs (sometimes dramatically so). The negative side effects, instead of being factored into the costs of the transaction, are simply externalized onto outside third parties – hence why they’re known as “external costs” or “externalities.” But the affected third parties never get any say in the decision-making process – they never give their consent to bear any of these external costs, nor do they receive any compensation for doing so; they’re simply forced to bear someone else’s costs. The “market price” in such a transaction, then, no longer serves as an accurate reflection of the true cost; it’s an unjust distortion. And so it makes sense to impose a tax that ensures all the costs actually are accounted for and all the affected parties can be properly compensated – not only in terms of justice and fairness, but also in terms of overall economic efficiency and market functioning. Anti-government critics will often argue that all taxes are bad because they can only ever impose new costs on taxpayers; but unlike many other taxes, taxes on externalities – known as Pigovian taxes (after the economist Arthur Cecil Pigou) – don’t impose new costs on anyone so much as they recognize the costs that already exist and are being unduly forced onto innocent bystanders, and re-assign those costs to the people who are actually responsible for producing them – something that proper market functioning demands. Again, as with land value taxes, this is one case where if you care about keeping the market efficient, and you care about protecting people’s rights and respecting their consent, you should want there to be some kind of government intervention in the market (whether it be a tax or a regulation or some other such measure) – because without one, the distortionary effects of these externalities would persist unabated. And this is something that even such ardent defenders of the market economy as Milton Friedman and Thomas Sowell fully acknowledge. As Sowell writes:
Economic decisions made through the marketplace are not always better than decisions that governments can make. Much depends on whether those market transactions accurately reflect both the costs and the benefits that result. Under some conditions, they do not.
When someone buys a table or a tractor, the question as to whether it is worth what it cost is answered by the actions of the purchaser who made the decision to buy it. However, when an electric utility company buys coal to burn to generate electricity, a significant part of the cost of the electricity-generating process is paid by people who breathe the smoke that results from the burning of the coal and whose homes and cars are dirtied by the soot. Cleaning, repainting and medical costs paid by these people are not taken into account in the marketplace, because these people do not participate in the transactions between the coal producer and the utility company.
Such costs are called “external costs” by economists because such costs fall outside the parties to the transaction which creates these costs. External costs are therefore not taken into account in the marketplace, even when these are very substantial costs, which can extend beyond monetary losses to include bad health and premature death. While there are many decisions that can be made more efficiently through the marketplace than by government, this is one of those decisions that can be made more efficiently by government than by the marketplace. Even such a champion of free markets as Milton Friedman acknowledged that there are “effects on third parties for which it is not feasible to charge or recompense them.”
Clean air laws can reduce harmful emissions by legislation and regulations. Clean water laws and laws against disposing of toxic wastes where they will harm people can likewise force decisions to be made in ways that take into account the external costs that would otherwise be ignored by those transacting in the marketplace.
While externalities are a serious consideration in determining the role of government, they do not simply provide a blanket justification or a magic word which automatically allows economics to be ignored and politically attractive goals to be pursued without further ado. Both the incentives of the market and the incentives of politics must be weighed when choosing between them on any particular issue.
And this is a fair point. When trying to figure out how best to handle a negative externality, it’s often tempting to skip right past taxing it and jump straight to the most extreme solution of just banning it outright. And to be sure, in some cases, this actually is the best approach. In many cases, though, it can make more sense to allow the externality to continue but levy a charge for it – i.e. a Pigovian tax – or to take some similarly lighter-handed approach. As Timothy Taylor explains, there are a number of different options that might be appropriate depending on the situation:
The central economic concept here is an “externality,” which occurs when a party other than the immediate buyer and seller is directly affected by a transaction. The idea of a free market is based in part on the notion that buyers and sellers will act in their own best interests. However, when a market transaction adversely affects a third party—one who didn’t choose to be involved in the transaction—the argument that free markets will benefit all parties does not hold as well.
Externalities can be positive or negative. As an example, imagine your next-door neighbor is throwing a party and hires a really loud band. Your neighbor is happy to have music; the band is happy to be hired. You, as the external party, could go either way. If you like the music, great! Free concert! If you don’t like the music, not so great. You’ll have to suffer through it (or call the police). Either way, the deal between your neighbor and the band didn’t take you into account.
Pollution is the most important example of a negative externality. In an unfettered market transaction, the firm looks only at the private costs of production of a good. Social costs, the costs of production that the firm doesn’t pay for, don’t figure into the calculation. If a firm doesn’t have to pay anything to dump its garbage, it’s likely to generate a lot of garbage. But if firms have to pay for garbage disposal, you can be sure they’ll find ways to reduce their waste. Similarly, public policies concerning pollution seek to make those who create pollution face its costs and take them into account.
“Command and control” is the name economists give to the kind of regulatory policies that specify a maximum amount of pollution that can legally be emitted. Early environmental regulation in the United States in the 1970s took this approach with the passage of the original Clean Air Act and Clean Water Act, and they were effective. According to U.S. Environmental Protection Agency statistics, between 1970 and 2001, the level of particulates in the air fell by 76 percent, sulfur dioxide by 44 percent, volatile organic compounds by 38 percent, and carbon monoxide by 19 percent. The level of lead in the air—which is particularly harmful to growing children—fell by 98 percent, mainly from the use of unleaded gasoline. It’s harder to measure water quality consistently, but the widespread construction of better sewage treatment plants and better provisions for disposing of wastewater has made a huge difference over the past four decades.
Despite this good news, command-and-control environmental regulations have some prominent weaknesses. One obvious weakness of any regulatory system is that the regulators may start acting in the interests of industry—[a phenomenon known as] regulatory capture. […] In addition, command-and-control regulatory standards are typically inflexible. They often specify exactly what technology must be used to reduce a certain kind of pollution. Command-and-control regulation doesn’t reward innovative ways of avoiding pollution in the first place or reducing pollution below the legal standard.
The alternative to command-and-control regulation marches under the broad heading of market-oriented environmental policies. These policies seek to work with market incentives, rather than ordering firms to take certain actions. These policies come in several flavors. One is a pollution tax imposed on producers per unit of pollution. For those allergic to the word “tax,” it can instead be called a pollution “charge.” Such a charge creates an obvious incentive to reduce pollution, and unlike in a command-and-control system, it encourages firms to keep seeking ways to reduce pollution—rather than cutting pollution to just a hair below the legal limit. A pollution charge is also highly flexible, allowing producers to determine the best way to clean up their act.
Another market-oriented environmental policy is a marketable permit system. Marketable permits give polluters the legal right to emit a certain amount of pollution; often the permissible levels are set to decline over time. If the polluter can reduce pollution by more than the amount of the permit, then the permit can be sold to someone else—hence the word “marketable.” If a new producer wants to enter the market, it has to purchase a pollution permit from some existing firm. The United States has had some success with marketable permits—to reduce lead in gasoline, for example. Permits provide the same reason to reduce pollution and create cleaner technology as the pollution tax, but instead of reducing a tax, cleaning up their act enables producers to make a profit. In recent years, the European Union has sought to use marketable permits as a way of reducing carbon emissions into the atmosphere, and the U.S. Congress has debated similar measures.
Yet another alternative for [applying] market-oriented environmentalism is the use of property rights as an incentive. Think about the problem of protecting elephants or rhinoceroses in Africa. If no one owns the animals, they are vulnerable to poachers and a shrinking habitat. But if you declare their habitat a protected park, and everyone who lives around the park has an economic incentive from tourism to protect the park, then the people surrounding the animals have a sound economic reason to protect them.
Over the past twenty to thirty years, environmental policy has moved away from pure command and control and toward market-oriented mechanisms. In general, economists have tended to favor these mechanisms.
One of the biggest environmental issues of our time is the threat of global warming due to emissions of carbon dioxide and other gases. It’s a controversial topic, to say the least, from both economic and policy standpoints. Here’s my take, as an economist who has no claim to any specialized knowledge about climate science. A number of prominent climate scientists clearly believe that our present level of carbon emissions raises a risk of severe worldwide environmental damage. The probability and size of this risk is hard to measure, but when faced with a real possibility of a severe risk, it’s often worth taking out some insurance. In this case, one form of “insurance” would mean finding ways to limit the amount of carbon in the atmosphere. We could have command-and-control rules, for example, about maximum carbon emissions and minimum gas mileages for all cars. We could pass rules about carbon emissions from factories and other pollution sources. We could, alternatively or in addition, institute a carbon tax. We could issue marketable permits for carbon emissions to factories, refineries, automobile manufacturers, and the like. We could invest in research and development for technologies that remove carbon from the air or to spur development of energy sources that don’t emit carbon. It’s no challenge to come up with ways to reduce carbon emissions; the real trick is to do it in a market-oriented and flexible way that limits carbon emissions at the lowest possible economic cost.
For many environmentalists, all these ways to address pollution miss the point because they don’t lead to zero pollution. Wearing my hardheaded economist hat, I have to declare that zero pollution is not a realistic or a useful policy goal. Zero pollution would mean shutting down most industry and most of the economy. All our policy options—both command-and-control and market-oriented environmental policies—involve allowing some pollution. The argument for absolutely zero pollution is neither viable nor intellectually serious. The reasonable policy goal is to balance the benefits of production with the costs of pollution, or, to put it another way, bring the social costs and social benefits of production into line with each other.
The activities that create comfort and prosperity—such as manufacturing and transportation and heating our homes—will always have some environmental costs. The most logical way to balance growth and environmental responsibility is to build the price of pollution into the activities that cause it. Rational people respond to prices, and rational prices should reflect the true “social cost” of any activity. Coal is not a “cheap” source of energy when its environmental impacts are taken into account.
Pollution taxes, particularly a tax on carbon emissions, would encourage cost-effective conservation; consumers and companies can respond in whatever ways make the most economic sense to them. Any tax on pollution would also make cleaner sources of energy more economically viable. The market is a remarkably powerful phenomenon for creating sane environmental policies—if we give participants the right price signals, which we have failed to do so far.
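To make the permit idea a bit more concrete, here’s a small sketch – made-up numbers of my own, just to illustrate the cost-saving logic Taylor describes, not a model of any real permit scheme – of why letting firms trade achieves the same total pollution reduction more cheaply than a uniform rule:

```python
# A small sketch of the cost-saving logic behind marketable permits, using
# made-up numbers. Each firm currently emits 100 units; the cap requires 100
# units of total abatement. Firm A can abate cheaply, firm B only expensively.

COST_PER_UNIT = {"A": 10.0, "B": 40.0}   # constant marginal abatement cost, $/unit
REQUIRED_ABATEMENT = 100                 # total units of pollution to eliminate


def total_cost(abatement: dict[str, float]) -> float:
    """Total abatement spending across firms for a given allocation of cuts."""
    return sum(COST_PER_UNIT[firm] * units for firm, units in abatement.items())


# Command and control: every firm must cut the same amount.
uniform = {"A": 50, "B": 50}

# Tradable permits: firm B buys permits from firm A, so A (the cheap abater)
# does the cutting while B pays; the cap is still met exactly.
traded = {"A": 100, "B": 0}

print("uniform standard cost:", total_cost(uniform))   # 10*50 + 40*50 = 2500
print("permit trading cost:  ", total_cost(traded))    # 10*100 + 40*0 = 1000
```

And if each firm is initially handed permits covering half the allowed emissions, then at any permit price between firm A’s $10 and firm B’s $40 marginal abatement cost, both firms also end up spending less than they would under the uniform standard – which is a big part of why economists tend to favor these market-oriented mechanisms.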
Again, this point about making sure that prices accurately reflect costs – not just the immediate costs to buyers and sellers, but all relevant costs – is the key here. It’s true that under Pigovian taxes, prices for certain goods would be higher than they would otherwise be – but that’s the entire point; without the tax, the prices of those goods wouldn’t be high enough to reflect their true costs. The private cost of the goods wouldn’t be in line with their social cost. The Pigovian tax resolves this disparity and un-distorts the price. As Joseph Heath explains:
[The argument that] taxes always distort incentives in undesirable ways […] is not actually true of all taxes. […] Pigovian taxes […] are taxes that are imposed upon goods that are associated with significant negative externalities (a bit like so-called sin taxes on tobacco and alcohol). In this case, the incentive effects of the tax are themselves desirable from an efficiency perspective. Take, for example, the gasoline tax. The general problem with gasoline is that the market price is too low—the purchaser does not fully compensate those who are inconvenienced by her consumption. In particular, those who suffer as a result of the atmospheric pollution generated are not compensated. As a result, the price of gasoline is much lower than it would be in an ideal market economy (in which we all lived in bubbles and were able to charge other people for introducing foul emissions into our airspace).
The result, among other things, is that too much gasoline will be consumed. When the state imposes a tax upon gasoline, the incentive effect of the tax is to discourage gasoline consumption, and therefore to bring total consumption down closer to what it should be (closer to the point where the social cost is warranted by the private benefit). Thus the tax itself can be efficiency-promoting. (I say “can” because of the Second Best Theorem, which shows that, in an economy with multiple price distortions, fixing one price will not necessarily lead to a more efficient outcome; the argument must be made on a case-by-case basis.) Most important, the beneficial effects of the tax can be achieved regardless of what the government does with the revenue raised (even if it were to stuff it in bottles and bury them in abandoned mines). Indeed, many people think the government should impose a range of Pigovian taxes, then just turn around and give the money back to people. One sophisticated strategy is to push for the introduction of “green taxes” on a revenue-neutral basis, by matching them with cuts in the income tax. The general slogan: “Tax bads, not goods!”
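To see the mechanism Heath is describing with some toy numbers (mine, purely for illustration – not estimates of any real-world costs), here’s a minimal sketch of how a tax equal to the per-gallon external cost pulls consumption back toward the level where the private benefit actually covers the full social cost:

```python
# A minimal sketch of the gasoline example in Heath's passage, with made-up
# numbers. Each successive gallon is worth a bit less to the driver; each gallon
# also imposes a fixed external cost (pollution, congestion) on bystanders.

PRICE = 3.0           # market price per gallon (assumed equal to production cost), $
EXTERNAL_COST = 2.0   # harm per gallon borne by third parties, $


def marginal_benefit(gallon: int) -> float:
    """Value to the driver of the n-th gallon (declining as n grows)."""
    return 10.0 - gallon


def gallons_bought(cost_per_gallon: float) -> int:
    """Driver keeps buying as long as the next gallon is worth what it costs her."""
    return sum(1 for g in range(1, 11) if marginal_benefit(g) >= cost_per_gallon)


def social_surplus(gallons: int) -> float:
    """Driver's benefit minus production cost minus harm to bystanders."""
    return sum(marginal_benefit(g) - PRICE - EXTERNAL_COST for g in range(1, gallons + 1))


untaxed = gallons_bought(PRICE)                  # driver ignores the external cost
taxed = gallons_bought(PRICE + EXTERNAL_COST)    # Pigovian tax equal to the external cost

print(f"without the tax: {untaxed} gallons, social surplus {social_surplus(untaxed)}")
print(f"with the tax:    {taxed} gallons, social surplus {social_surplus(taxed)}")
```

The tax revenue itself is just a transfer rather than a loss, so the only real effect on total surplus comes from the gallons that are no longer consumed – precisely the ones that were doing more harm than good.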
And among economists – even conservative economists – this is an uncontroversial idea. As Wheelan notes:
Basic economics—the same study of markets that conservatives typically extol—tells us that most environmental problems are “market failures,” meaning that producers and consumers do not take the cost of pollution into account when they are making private decisions. This is one of those relatively rare circumstances in which markets do not align private behavior in ways that are consistent with what is good for society overall. As a result, economists across the political spectrum have embraced pollution taxes, such as a carbon tax, as a better way to raise government revenue than taxing productive activities like work, savings, and investment.
Gary Becker—a Nobel Prize winner from the University of Chicago, a disciple of Milton Friedman, and one of the most articulate contemporary proponents of free markets—is on record as favoring a carbon tax. So is former Federal Reserve chairman Alan Greenspan (who was a close friend of Ayn Rand while she was alive). Another persistent and persuasive advocate for some kind of carbon tax is Harvard economist Gregory Mankiw, who is the author of one of the most popular economics textbooks in America. More important in this context, Mankiw was the chair of the Council of Economic Advisers under George W. Bush.
The Booth School of Business at the University of Chicago polled an ideologically diverse group of prominent economists about their views on a carbon tax; 96 percent of the economists polled answered either “agree” or “strongly agree” that a twenty-dollar-per-ton tax on carbon emissions would be better for the U.S. economy than an income tax increase that raised the same amount of revenue.
It’s not often that an economic policy idea produces such a strong consensus; usually, the debate is firmly split into the pro-government side on the one hand, which supports intervention in the market, and the pro-market side on the other, which opposes anything that might make the market less efficient. In the case of Pigovian taxes, though, it’s no surprise that there’s no such split – because in this case, the two sides are actually one and the same. When it comes to negative externalities, intervening in the market is the very thing that helps keep the market efficient in the first place.
It’s the same sort of dynamic that we saw with land value taxation; by imposing a tax on a particular economic activity, we aren’t unduly violating people’s consent and thereby decreasing efficiency – we’re correcting for violations of consent and thereby preserving efficiency. We’re ensuring that the market is properly pricing in all the relevant costs, so that prices are actually accurate. And so in this context, government intervention – the act of taxing and redistributing money – isn’t some kind of outside force that can only ever get in the way of the proper functioning of the market; it’s an integral part of a well-functioning market. The government itself is simply an actor within the market economy, just like any other – it’s just that instead of only comprising one person or company, it comprises the entire populace. And when it collects taxes (assuming it’s actually collecting only the amount of tax that it’s justly entitled to, and not imposing excessive taxes), it’s doing nothing more than collecting, on its citizens’ behalf, the payments they’re rightly owed for everything they’ve given up to the people being taxed – their land, their clean air and water, etc.
In a sense, we might regard the government as basically just a giant real estate company, which is collectively owned by the entire population. It rents out the land that naturally belongs to it (i.e. the commons, which belong to everyone), and in exchange for doing so, it receives rent from the people who move onto that land and assume the exclusive right to use it (i.e. also everyone). In turn, it provides them with services like roads, police, education, and so on, in the same way that the owner of an apartment building provides residents with housing, access to utilities, and amenities like a pool, a fitness center, etc. There isn’t really some fundamental economic difference between what the government does here and what private companies do; the same kind of transaction is occurring in both cases. In the end, whether we call it the “public sector” or the “private sector” is largely irrelevant; at the most basic level, it’s all just people acting in groups to achieve their economic goals. As Heath writes:
[There is] a surprisingly pervasive error that I refer to as the “government as consumer” fallacy.
The picture underlying this fallacy is relatively straightforward. Government services, such as health care, education, national defense, and so on, “cost” us as a society. We are able to pay for them only because of all the wealth that we generate in the private sector, which we transfer to the government in the form of taxes. A government that taxes the economy too heavily stands accused of “killing the goose that lays the golden eggs” by disrupting the mechanism that generates the wealth that it itself relies upon in order to provide its services.
Thus the government gets treated as a consumer of wealth, while the private sector is regarded as a producer. This is totally confused. The state in fact produces exactly the same amount of wealth as the market, which is to say, it produces none at all. People produce wealth, and people consume wealth. Institutions, such as the state or the market, neither produce nor consume anything. They simply constitute mechanisms through which people coordinate their production and consumption of wealth. Furthermore, the value of what a person produces has nothing to do with who pays his salary. The services of a security guard make the same contribution to the real wealth of the nation regardless of whether he is called a “police officer” and works for the state or is called a “rent-a-cop” and works for a private security firm.
At the end of the day, calling some of our economic activities “government” doesn’t change the fact that they’re still occurring within a market economy, and are therefore inescapably a part of the market themselves. (It’s a bit like how we refer to human civilization and technology as “artificial” rather than “natural,” even though we humans are ourselves a product of nature, so by extension, everything we do is “natural” in a sense.) Anarchists and libertarians will often try to come up with ways of conducting certain economic activities (like addressing externalities) without government – but the best solutions they devise typically turn out to be small-scale “governments” of a sort themselves, just without the explicit “government” label. And there’s a reason for that: the label itself is fundamentally artificial. It’s no coincidence that the “market mechanisms” they come up with are functionally just mini-governments, because government itself – the thing they think they’re trying to avoid – is just such a market mechanism in its own right. If you’re going to have a well-functioning market economy that does things like account for externalities, some kind of government (whether you use that label for it or not) will inextricably be a part of it. As Alexander writes:
Suppose […] that I sell my house to an amateur wasp farmer. Only he’s not a very good wasp farmer, so his wasps usually get loose and sting people all over the neighborhood every couple of days.
This trade between the wasp farmer and myself has benefited both of us, but it’s harmed people who weren’t consulted; namely, my neighbors, who are now locked indoors clutching cans of industrial-strength insect repellent. Although the trade was voluntary for both the wasp farmer and myself, it wasn’t voluntary for my neighbors.
[Are there] libertarian ways to solve externalities [like this one] that don’t involve the use of force?
To some degree, yes. You can, for example, refuse to move into any neighborhood unless everyone in town has signed a contract agreeing not to raise wasps on their property.
But getting every single person in a town of thousands of people to sign a contract every time you think of something else you want banned might be a little difficult. More likely, you would want everyone in town to unanimously agree to a contract saying that certain things, which could be decided by some procedure requiring less than unanimity, could be banned from the neighborhood – sort of like the existing concept of neighborhood associations.
But convincing every single person in a town of thousands to join the neighborhood association would be near impossible, and all it would take would be a single holdout who starts raising wasps and all your work is useless. Better, perhaps, to start a new town on your own land with a pre-existing agreement that before you’re allowed to move in you must belong to the association and follow its rules. You could even collect dues from the members of this agreement to help pay for the people you’d need to enforce it.
But in this case, you’re not coming up with a clever libertarian way around government, you’re just reinventing the concept of government. There’s no difference between a town where to live there you have to agree to follow certain terms decided by association members following some procedure, pay dues, and suffer the consequences if you break the rules – and a regular town with a regular civic government.
As far as I know there is no loophole-free way to protect a community against externalities besides government and things that are functionally identical to it.
IV.
As much as anti-statists might want to just do away with government so everything can be left to the free market, the truth is that any genuinely competitive and well-functioning market needs government in order to work in the first place. In fact, as Milton and Rose Friedman point out, the very concept of private property itself couldn’t exist without some kind of overarching authority defining it and establishing the basic ground rules for its use:
No voluntary exchange that is at all complicated or extends over any considerable period of time can be free from ambiguity. There is not enough fine print in the world to specify in advance every contingency that might arise and to describe precisely the obligations of the various parties to the exchange in each case. There must be some way to mediate disputes. Such mediation itself can be voluntary and need not involve government. In the United States today, most disagreements that arise in connection with commercial contracts are settled by resort to private arbitrators chosen by a procedure specified in advance. In response to this demand an extensive private judicial system has grown up. But the court of last resort is provided by the governmental judicial system.
This role of government also includes facilitating voluntary exchanges by adopting general rules—the rules of the economic and social game that the citizens of a free society play. The most obvious example is the meaning to be attached to private property. I own a house. Are you “trespassing” on my private property if you fly your private airplane ten feet over my roof? One thousand feet? Thirty thousand feet? There is nothing “natural” about where my property rights end and yours begin. The major way that society has come to agree on the rules of property is through the growth of common law, though more recently legislation has played an increasing role.
According to the British philosopher Jeremy Bentham, “property and law are born together and die together. Before the laws there was no property; take away the laws, all property ceases.” Every first-year law student learns that private property is not an “object” or a “thing” but a complex bundle of rights. Property is a legally constructed social relation, a cluster of legislatively and judicially created and judicially enforceable rules of access and exclusion. Without government, capable of laying down and enforcing compliance with such rules, there would be no right to use, enjoy, destroy, or dispose of the things we own. This is obviously true for rights to intangible property (such as bank accounts, stocks, or trademarks), for the right to such property cannot be asserted by taking physical possession, only by an action at law. But it is equally true of tangible property. If the wielders of the police power are not on your side, you will not successfully “assert your right” to enter your own home and make use of its contents. Property rights are meaningful only if public authorities use coercion to exclude nonowners, who, in the absence of law, might well trespass on property that owners wish to maintain as an inviolable sanctuary. Moreover, to the extent that markets presuppose a reliable system of recordation, protecting title from never-ending challenge, property rights simultaneously presuppose the existence of many competent and honest and adequately paid civil servants outside the police force. My rights to enter, use, exclude from, sell, bequeath, mortgage, and abate nuisances threatening “my” property palpably presuppose a well-organized and well-funded court system.
A liberal government must refrain from violating rights. It must “respect” rights. But this way of speaking is misleading because it reduces the government’s role to that of a nonparticipant observer. A liberal legal system does not merely protect and defend property. It defines and thus creates property. Without legislation and adjudication there can be no property rights in the way Americans understand that term. Government lays down the rules of ownership specifying who owns what and how particular individuals acquire specific ownership rights. It identifies, for instance, the maintenance and repair obligations of landlords and how jointly owned property is to be sold. It therefore makes no more sense to associate property rights with “freedom from government” than to associate the right to play chess with freedom from the rules of chess. Property rights exist because possession and use are created and regulated by law.
Government must obviously help maintain owner control over resources, predictably penalizing force and fraud and other infractions of the rules of the game. Much of the civil law of property and tort is designed to carry out this business. And the criminal justice system channels considerable public resources to the deterrence of crimes against property: larceny, burglary, shoplifting, embezzlement, extortion, the forging of wills, receiving stolen goods, blackmail, arson, and so forth. The criminal law (inflicting punishments) and the civil law (exacting restitution or compensation) conduct a permanent, two-front, and publicly financed war on those who offend against the rights of owners.
David Hume, the Scottish philosopher, liked to point out that private property is a monopoly granted and maintained by public authority at the public’s expense. As the English jurist William Blackstone, following Hume, also explained, property is “a political establishment.” In drawing attention to the relation between property and law—which is to say, between property and government—Bentham was making the very same point. The private sphere of property relations takes its present form thanks to the political organization of society. Private property depends for its very existence on the quality of public institutions and on state action, including credible threats of prosecution and civil action.
What needs to be added to these observations is the correlative proposition that property rights depend on a state that is willing to tax and spend. Property rights are costly to enforce. To identify the precise monetary sum devoted to the protection of property rights, of course, raises difficult issues of accounting. But this much is clear: a state that could not, under specified conditions, “take” private assets could not protect them effectively, either. The security of acquisitions and transactions depends, in a rudimentary sense, on the government’s ability to extract resources from private citizens and apply them to public purposes.
Wheelan observes the same thing, noting that national economies without decent governments aren’t exactly known for functioning well:
Government does not just fix the rough edges of capitalism; it makes markets possible in the first place. You will get a lot of approving nods at a cocktail party by asserting that if government would simply get out of the way, then markets would deliver prosperity around the globe. Indeed, entire political campaigns are built around this issue. Anyone who has ever waited in line at the Department of Motor Vehicles, applied for a building permit, or tried to pay the nanny tax would agree. There is just one problem with that cocktail party sentiment: It’s wrong. Good government makes a market economy possible. Period. And bad government, or no government, dashes capitalism against the rocks, which is one reason that billions of people live in dire poverty around the globe.
To begin with, government sets the rules. Countries without functioning governments are not oases of free market prosperity. They are places in which it is expensive and difficult to conduct even the simplest business. Nigeria has one of the world’s largest reserves of oil and natural gas, yet firms trying to do business there face a problem known locally as BYOI—bring your own infrastructure.
By contrast, he continues, good government makes it possible for commerce to be conducted easily and efficiently:
Government lowers the cost of doing business in the private sector in all kinds of ways: by providing uniform rules and regulations, such as contract law; by rooting out fraud; by circulating a sound currency. Government builds and maintains infrastructure—roads, bridges, ports, and dams—that makes private commerce less costly. E-commerce may be a modern wonder, but let’s not lose sight of the fact that after you order khakis from Gap.com, they are dispatched from a distribution center in a truck barreling along an interstate. In the 1950s and 1960s, new roads, including the interstate highway system, accounted for a significant fraction of new capital created in the United States. And that investment in infrastructure is associated with large increases in productivity in industries that are vehicle-intensive.
Effective regulation and oversight make markets more credible. Because of the diligence of the Securities and Exchange Commission (SEC), one can buy shares in a new company listed on the NASDAQ with a reasonable degree of certainty that neither the company nor the traders on the stock exchange are engaging in fraud. In short, government is responsible for the rule of law. (Failure of the rule of law is one reason why nepotism, clans, and other family-centered behavior are so common in developing countries; in the absence of binding contractual agreements, business deals can be guaranteed only by some kind of personal relationship.) Jerry Jordan, former president of the Federal Reserve Bank of Cleveland, once mused on something that is obvious but too often taken for granted: Our sophisticated institutions, both public and private, make it possible to undertake complex transactions with total strangers. He noted:
It seems remarkable, when you think about it, that we often take substantial amounts of money to our bank and hand it over to people we have never met before. Or that securities traders can send millions of dollars to people they don’t know in countries they have never been in. Yet this occurs all the time. We trust that the infrastructure is set in place that allows us not to worry that the person at the bank who takes our money doesn’t just pocket it. Or that when we use credit cards to buy a new CD or tennis racquet over the Internet, from a business that is located in some other state or country, we are confident we will get our merchandise, and they are confident they will get paid.
Shakespeare may have advised us to get rid of all the lawyers, but he was a playwright, not an economist. The reality is that we all complain about lawyers until we have been wronged, at which point we run out and hire the best one we can find. Government enforces the rules in a reasonably fair and efficient manner. Is it perfect? No. But rather than singing the praises of the American justice system, let me simply provide a counterexample from India. Abdul Waheed filed a lawsuit against his neighbor, a milk merchant named Mohammad Nanhe, who had built several drains at the edge of his property that emptied into Mr. Waheed’s front yard. Mr. Waheed did not like the water draining onto his property, in part because he had hoped to add a third room to his cement house and he was worried that the drains would create a seepage problem. So he sued. The case came to trial in June 2000 in Moradabad, a city near New Delhi.
There is one major complication with this civil dispute: The case had been filed thirty-nine years earlier; Mr. Waheed was dead and so was Mr. Nanhe. (Their relatives inherited the case.) By one calculation, if no new cases were filed in India, it would still take 324 years to clear all the existing cases from the docket. These are not just civil cases. In late 1999, a seventy-five-year-old man was released from a Calcutta jail after waiting thirty-seven years to be tried on murder charges. He was released because the witnesses and investigating officer were all dead. (A judge had declared him mentally incompetent to stand trial in 1963 but the action was somehow lost.) Bear in mind that by developing world standards, India has relatively good government institutions. In Somalia, these kinds of disputes are not resolved in the courts.
All the while, government enforces antitrust laws that forbid companies from conspiring together in ways that erase the benefits of competition. Having three airlines that secretly collude when setting fares is no better than having one slovenly monopoly. The bottom line is that all these institutions form the tracks on which capitalism runs. Thomas Friedman, foreign affairs columnist for the New York Times, once made this point in a column. “Do you know how much your average Russian would give for a week of [the U.S. Department of Justice] busting Russia’s oligarchs and monopolists?” he queried. He pointed out that with many of the world’s economies plagued by endemic corruption, particularly in the developing world, he has found that foreigners often envy us for . . . hold on to your latte here . . . our Washington bureaucrats; “that is, our institutions, our courts, our bureaucracy, our military, and our regulatory agencies—the SEC, the Federal Reserve, the FAA, the FDA, the FBI, the EPA, the IRS, the INS, the U.S. Patent Office and the Federal Emergency Management Agency.”
Ultimately, it’s all this government infrastructure – not just the physical infrastructure, but the socio-civil infrastructure as well – that enables our private sector to flourish in the first place. Without government-constructed roads for shipping products, government protection of property rights, government enforcement of contracts between buyers and sellers, government-issued currency that’s both stable and universally accepted, government arbitration of disputes in impartial courts, government checks against monopolization and collusion, and so on, private actors simply wouldn’t be able to do business efficiently. Of course, all this infrastructure has to be paid for – which means that taxes have to be collected – but the crucial point here is that all the private-sector wealth that goes to pay these taxes couldn’t have been accumulated in the first place without the government collecting and spending taxes, because government-funded infrastructure was the very thing that allowed the wealth to originally be produced. In this sense, then, as David Harvey puts it, such government expenditures functionally pay for themselves:
[The] vast infrastructure that constitutes the built environment is a necessary material precondition for capitalist production, circulation and accumulation to proceed. This infrastructure, furthermore, requires constant and adequate maintenance to keep it in good working order.
[…]
It is here that the state […] has to enter into the picture and play a central role. To do this it needs to extract taxes. The theory of productive state expenditures pioneered in Second Empire Paris by the Saint-Simonian financiers and later generalised by [John Maynard] Keynes suggests that the tax base should increase as private capital responds positively to possibilities generated by new infrastructural provisions. The result is a form of state-capital circulation in which state investments not only pay for themselves but also earn extra revenues to be put into more infrastructures.
This goes back to the Georgist point from before, about how a lot of the value held by private property owners comes not from anything they’ve done personally, but from all the positive attributes of their surrounding community. If someone happens to live in a jurisdiction where property rights are well-protected, contracts are properly enforced, and so on, they’ll be able to create a lot more wealth than they’d be able to create in the opposite scenario – even if their own personal efforts are exactly the same in both cases – simply because of all these advantages provided to them by the jurisdiction in which they live. Their government will serve as an income amplifier for them, such that even if they later have to give back some of their earnings as taxes (in repayment for all these advantages they’re being given), they’ll still be better off than they would be if no such government existed – since in the latter case, their overall level of wealth, though untaxed, would be considerably lower. The fact that they’re having to pay taxes is, in David Cay Johnston’s words, nothing but “the price of maintaining the civilization that has made their success possible.”
V.
It’s true that collecting taxes and performing the necessary functions of government does require the ability to coerce people. If someone is unwilling to pay what they rightly owe, or is unwilling to follow the rightful laws of the land, then the government has to have the ability to compel them to do so. But it’s important to recognize that just because something is coercive doesn’t automatically make it unjust; after all, as we’ve already established, there are plenty of contexts in which even the most hardline libertarians will agree that coercion is justified – like for instance, if someone steals your property and you have to coerce them into giving it back (or reimbursing you), or if someone tries to harm you physically and you have to use physical force to defend yourself. The Non-Aggression Principle only prohibits the proactive initiation of force; so if someone is already violating someone else’s consent, there’s nothing wrong with using coercion to stop or redress that violation. In other words, while anti-statists will often push the line that (in Walter Williams’s words) “it’s immoral to take one person’s money and give it to another person, for any reason,” even they will admit when pressed that they don’t actually mean any reason; there are certain cases in which coercively redistributing money is not just fine, but morally obligatory.
What’s more, I’d be willing to go even further than that; I think that most reasonable people (including anti-statists) would agree that there are a number of contexts in which coercion can be acceptable even aside from just these narrow cases of counteracting other coercive acts. We’ve only been focusing on those counter-coercion type cases so far, but if we’re going to talk about the moral permissibility of taxation in general terms, we also have to consider the concepts of coercion and consent in more general terms. And in that broader context, I don’t think it’s too hard to find situations in which violating someone’s consent can be justified even if they aren’t already violating anyone else’s consent themselves. Just to take the most extreme possible case, for instance, imagine if there was an incoming meteor threatening to destroy all life on Earth, and (for some reason) the only way to stop it involved taking a single cent out of Bill Gates’s bank account without his consent. I think it’s safe to say that pretty much everyone would consider this to be morally acceptable (aside from perhaps a handful of stubborn anti-statists who were just trying to make an ideological point – and even they, I would think, would cave if they were actually somehow forced to make the call in real life). But this example isn’t like those others mentioned above, in which one person is actively violating someone else’s consent and is being taxed as a corrective. In this case, taking that one penny away from Bill Gates is nothing but an act of pure utilitarianism; the benefits significantly outweigh the harms, and that’s all there is to it. It’s a clear-cut violation of the Non-Aggression Principle – and yet it’s also very obviously the right thing to do. Not only that, but even if we tweaked the scenario’s parameters to be somewhat less extreme – e.g. if we made it so it involved taking $10 or $100 out of Bill Gates’s bank account instead of just one cent, or if we made it so the meteor was only threatening the lives of a few million people instead of all life on Earth – taking the money would still obviously be the right thing to do. Of course, if we pushed the parameters further still, we’d eventually reach a point where this was no longer the case; if we forcibly took away everything Bill Gates owned, for instance, and spent it on something completely trivial and unimportant, this would clearly no longer be a net positive by any reasonable standard. But the point here is just that it’s not a black-and-white kind of thing, where anything that coerces anyone for any reason whatsoever is categorically forbidden; the Non-Aggression Principle isn’t an absolute. The line between permissible coercion and impermissible coercion lies somewhere along a spectrum. Our question is just where that threshold lies.
Let’s consider another example to help clarify the point. Say there was a big forest where a bunch of people were fleeing from a rampaging Tyrannosaurus or something, and the only way for them to survive was to keep quiet enough to escape its notice. If one of them happened to be a huge loudmouth who was constantly shouting all the time, it seems obvious that it would be permissible for the rest of them to compel that person to be quiet (even by force if necessary) in order to save their own lives. This would be a clear-cut case of one person imposing a negative externality onto the rest of the group without their consent, so the rest of the group would be justified in using coercion to counteract that externality. But now let’s imagine that the group has to make another choice: Say they’re fleeing from the Tyrannosaurus and they come across the only safe refuge in the entire forest – a fortified cabin whose owner has conveniently gone on vacation for a few days. Hiding in the cabin would save all of their lives – but let’s say the owner has left a note on the front door saying that they don’t want anyone to come into their cabin because it might get their carpet slightly dirty and they don’t want to have to deal with the minor annoyance of vacuuming it. Would it still be morally permissible for the group to hide in the cabin for a few minutes until the Tyrannosaurus has passed? Again, I think it’s obvious that it would be. Sure, it would create a slight imposition on the cabin owner – and violating someone’s preferences in this way isn’t a good thing – but what’s even less of a good thing is having a bunch of people die horribly for the sake of keeping some silly carpet clean. The value of their lives would clearly outweigh the value of the cabin owner’s preference not to have to spend five minutes vacuuming their carpet; so the immorality of keeping them out of the cabin would clearly outweigh the immorality of ignoring the cabin owner’s preference in this one case.
The kind of one-dimensional thinking that says it’s impermissible to ever initiate any kind of non-consensual action against another person or their property for any reason – i.e. the Non-Aggression Principle – misses the most important point of scenarios like this: Yes, it’s true that coercing people is morally bad – but it’s also not the only thing that’s morally bad. It’s possible for things to happen that are even more egregious than using someone’s property without their consent. And in such cases, if we have no other choice but to choose the lesser of two evils, it’s okay to opt for the coercive option if doing so will prevent an even greater harm – even if it means impinging on someone’s private property, and in fact even if it means seizing their property outright.
Let’s consider one more variation on the Tyrannosaurus example. Say that instead of finding a fortified cabin, the group instead comes across a cache of high-powered tranquilizer guns capable of knocking out the Tyrannosaurus for long enough for them to escape. And let’s say that once again, these tranquilizer guns are owned by someone who’s off on vacation and hasn’t given their consent for anyone else to use them. Let’s also say that there are thousands of guns in the cache – far more than any one person could ever use themselves – and the owner would scarcely even notice the loss of one of them. In this case, would it not still be morally permissible to take one shot from one of the guns in order to save the lives of everyone in the group? If you were there yourself, would you not take the shot? Or would you insist that doing so without the owner’s consent would be unacceptable, and watch with folded arms as all your loved ones were horrifically killed in front of you before succumbing to the same fate yourself? Again, I think that for any reasonable person, the answer is obvious: In these kinds of situations, seizing someone’s property – even without their consent (and even if it’s not possible to reimburse them later) – is totally acceptable. (And this would be even more true if, say, the group of people fleeing from the Tyrannosaurus had been the ones who provided the gun owner with all the infrastructure needed to acquire the guns in the first place, and had never received any sort of payment for that service – in which case they could justifiably say that they were just reclaiming the share of the gun owner’s wealth that they themselves were responsible for creating.)
But of course, this is exactly what’s happening when a government collects ordinary taxes. If a country is (say) being threatened by a hostile neighbor, it might choose to tax some of its citizens (particularly the wealthier ones with more money to spare) so it can mount a national defense to keep its population safe. Or if there’s a raging epidemic of violent crime within the country’s borders, it might tax its citizens so it can fund a police force to stem the violence. In such cases, the government is using its coercive powers to take away people’s property without their consent – which is not a good thing in itself – but the alternative outcome if it didn’t collect these taxes would be even worse. And again, that’s not to say that it’s never possible for a government to tax its citizens for less worthy reasons; it’s absolutely possible, and in such cases, we can rightly say that the government’s actions are less justifiable (more on this point momentarily). But the point here is just that this is all a matter of degree, not a matter of some absolute black-and-white principle that forbids coercion for any reason in any context whatsoever. Once we’ve established that there are situations in which coercion can be justified, it’s just a question of where we draw the line. Unfortunately, a lot of people with anti-statist leanings, even despite acknowledging that exceptions have to be made in some cases, will still insist on treating it as an all-or-nothing kind of binary anyway – and this often leads them to draw hard lines in places that, frankly, just end up seeming kind of arbitrary.
What do I mean by “arbitrary,” exactly? Well, just to refer back to our Tyrannosaurus scenario one last time, we’ve established that most people (anti-statist or not) would agree that taking one of the tranquilizer guns without the owner’s consent would be justifiable if it meant saving a whole group of people from being killed. The same would be true if (for instance) instead of having one giant rampaging monster threatening everyone’s lives, it was a bunch of smaller monsters like Velociraptors or killer hornets or something; the size and number of the monsters wouldn’t be relevant, because it would be the same basic situation regardless. This is likewise why most people would generally agree that taxation would be permissible in order to stop a genocidal foreign army from invading and decimating the population, or to stop violent gangs from doing the same thing within their own communities. It wouldn’t matter how big the threatening individuals were or what species they belonged to; it would be permissible to take some resources from those who had plenty to spare in order to stop the threat and save people’s lives. And yet, if we tweaked the scenario so the threatening individuals were even smaller and more numerous still, and were still threatening to kill people by the millions, there would actually be a lot of people who would all of a sudden insist that coercive taxation absolutely could not be used under any circumstances to protect the victims, and that everyone would have to either find their own means of keeping themselves safe, or else just be out of luck. Does this sound hard to believe? Unfortunately, it’s not a hypothetical; it’s basically the current reality of the healthcare debate in the US. The monsters that are constantly threatening to kill millions of people are called “microbes,” and the weapon used to fight them off is called “medicine” – but aside from these names, the situation is no different from any of the scenarios mentioned above. A bunch of dangerous organisms are attacking us and threatening to kill us – just the same as if we were being attacked by a pack of Velociraptors or a swarm of killer hornets – but because they’re just an inch or so smaller than hornets, a lot of people have decided that now it’s suddenly no longer permissible to protect against them if it means taking away anyone’s property without their consent.
This is what I mean by “arbitrary.” When deciding where the Non-Aggression Principle can be applied, anti-statists will often concede that exceptions have to be made for things like physical security; if we’re under threat from dangerous monsters or armies or terrorists or what have you, and the private sector is unable to fully fend them off on its own, they’ll grant that it might be permissible to take people’s property without their consent just in these special cases. And yet, if the looming threat is deadly disease, they’ll insist that protecting people is somehow no longer a matter of their “physical security,” and therefore doesn’t fall under the same criteria – even though it’s fundamentally the same dynamic at play. By insisting that government can only be allowed to collect and spend tax money on things like military defense, and not on things like healthcare, anti-statists are defending a distinction that doesn’t actually have any principled basis. And the result, when they get their way, is often a lot of needless suffering. As Alexander writes in an FAQ (responding to arguments from a hypothetical libertarian questioner):
[Q]: There’s a difference [between providing military defense and providing healthcare]. [In the latter case,] people may die, but it’s not your job to save them. The government’s job is only to protect people and property from force, not to protect people from the general unfairness of life.
Who died and made you the guy who decides what the government’s job is? Or, less facetiously: on what rational grounds are you making that decision?
Currently, several trillion dollars are being spent to prevent terrorism. This seems to fall within the area of what libertarians would consider a legitimate duty of government, since terrorists are people who initiate force and threaten our safety and the government needs to stop this. However, terrorists only kill an average of a few dozen Americans per year.
Much less money is being spent on preventing cardiovascular disease, even though cardiovascular disease kills 800,000 Americans per year.
Let us say, as seems plausible, that the government can choose to spend its money either on fighting terrorists, or on fighting CVD. And let us say that by spending its money on fighting terrorists, it saves 40 lives, and by spending the same amount of money on fighting CVD, it saves 40,000 lives.
All of these lives, presumably, are equally valuable. So there is literally no benefit to spending the money on fighting terrorism rather than CVD. All you are doing is throwing away 39,960 lives on an obscure matter of principle. It’s not even a good principle – it’s the principle of wanting to always use heuristics even when they clearly don’t apply because it sounds more elegant.
There’s a reason this is so tempting. It’s called the Bad Guy Bias, and it’s an evolutionarily programmed flaw in human thinking. People care much more about the same amount of pain when it’s inflicted by humans than when it’s inflicted by nature. Psychologists can and have replicated this in the lab, along with a bunch of other little irrationalities in human cognition. It’s not anything to be ashamed of; everyone’s got it. But it’s not something to celebrate and raise to the level of a philosophical principle either.
[Q]: Stop calling principles like “don’t initiate force” heuristics! These aren’t some kind of good idea that works in a few cases. These are the very principles of government and morality, and it’s literally impossible for them to guide you wrong!
Let me give you a sketch of one possible way that a libertarian perfect world that followed all of the appropriate rules to the letter could end up as a horrible dystopia. There are others, but this one seems most black-and-white.
Imagine a terrible pandemic, the Amazon Death Flu, strikes the world. The Death Flu is 100% fatal. Luckily, one guy, Bob, comes up with a medicine that suppresses (but does not outright cure) the Death Flu. It’s a bit difficult to get the manufacturing process right, but cheap enough once you know how to do it. Anyone who takes the medicine at least once a month will be fine. Go more than a month without the medicine, and you die.
In a previous version of this FAQ, Bob patented the medicine, and then I got a constant stream of emails saying (some) libertarians don’t believe in patents. Okay. Let’s say that Bob doesn’t patent the medicine, but it’s complicated to reverse engineer, and it would definitely take more than a month. This will become important later.
Right now Bob is the sole producer of this medicine, and everyone in the world needs to have a dose within a month or they’ll die. Bob knows he can charge whatever he wants for the medicine, so he goes all out. He makes anyone who wants the cure pay one hundred percent of their current net worth, plus agree to serve him and do anything he says. He also makes them sign a contract promising that while they are receiving the medicine, they will not attempt to discover their own cure for the Death Flu, or go into business against him. Because this is a libertarian perfect world, everyone keeps their contracts.
A few people don’t want to sign their lives away to slavery, and refuse to sign the contract. These people receive no medicine and die. Some people try to invent a competing medicine. Bob, who by now has made a huge amount of money, makes life difficult for them and bribes biologists not to work with them. They’re unable to make a competing medicine within a month, and die. The rest of the world promises to do whatever Bob says. They end up working as peons for a new ruling class dominated by Bob and his friends.
If anyone speaks a word against Bob, they are told that Bob’s company no longer wants to do business with them, and denied the medicine. People are encouraged to inform on their friends and families, with the promise of otherwise unavailable luxury goods as a reward. To further cement his power, Bob restricts education to the children of his friends and strongest supporters, and bans the media, which he now controls, from reporting on any stories that cast him in a negative light.
When Bob dies, he hands over control of the medicine factory to his son, who continues his policies. The world is plunged into a Dark Age where no one except Bob and a few of his friends has any rights, material goods, or freedom. Depending on how sadistic Bob and his descendants are, you may make this world arbitrarily hellish while still keeping perfect adherence to libertarian principles.
Compare this to a similar world that followed a less libertarian model. Once again, the Amazon Death Flu strikes. Once again, Bob invents a cure. The government thanks him, pays him a princely sum as compensation for putting his cure into the public domain, opens up a medicine factory, and distributes free medicine to everyone. Bob has become rich, the Amazon Death Flu has been conquered, and everyone is free and happy.
[Q]: This is a ridiculously unlikely story with no relevance to the real world.
I admit this particular situation is more a reductio ad absurdum than something I expect to actually occur the moment people start taking libertarianism seriously, but I disagree that it isn’t relevant.
The arguments that libertarianism will protect our values and not collapse into an oppressive plutocracy require certain assumptions: there are lots of competing companies, zero transaction costs, zero start-up costs, everyone has complete information, everyone has free choice whether or not to buy any particular good, everyone behaves rationally, et cetera. The Amazon Death Flu starts by assuming the opposite of all of these assumptions: there is only one company, there are prohibitive start-up costs, a particular good absolutely has to be bought, et cetera.
The Amazon Death Flu world, with its assumptions, is not the world we live in. But neither is the libertarian world. Reality lies somewhere between the “capitalism is perfect” of the one, and the “capitalism leads to hellish misery” of the other.
There’s no Amazon Death Flu, but there are things like hunger, thirst, unemployment, normal diseases, and homelessness. In order to escape these problems, we need things provided by other people or corporations. This is fine and as it should be, and as long as there’s a healthy free market with lots of alternatives, in most cases these other people or corporations will serve our needs and society’s needs while getting rich themselves, just like libertarians hope.
But this is a contingent fact about the world, and one that can sometimes be wrong. We can’t just assume that the heuristic “never initiate force” will always turn out well.
Even Huemer, thoroughgoing anarchist that he is, agrees that the Non-Aggression Principle can’t be treated as an absolute; the only way of making it even remotely credible, as he points out, is to add so many exceptions that it ultimately ceases to function as a firm rule at all:
In libertarian lore, [the simplest version of] the Non-Aggression Principle says something like this:
[…]
NAP0 (a.k.a. “pacifism”): It is always wrong to use force against others.
That’s obviously false, and almost no one (including libertarians) believes it, because there are cases of justified self-defense and defense of innocent third parties involving use of force. So we make our first qualification, resulting in:
NAP1: It is wrong to use force against others, unless doing so is necessary to stop someone else from using force against others.
But in fact, almost no one, not even libertarians, believes NAP1 either. It too is obviously false. There are many examples showing this.
Example 1: I promise to mow Ayn Rand’s lawn in exchange for her grading some of my papers. Rand grades the papers, with copious helpful comments (pointing out where students are evading reality, hating the good for being the good, etc.), but then I don’t mow the lawn. I also refuse to do anything to make amends for my failure. Haha.
Almost everyone, including libertarians, thinks that the state can force me to mow the lawn or otherwise make amends (e.g., pay the money value of a mowed lawn).
Example 2: I start deliberately spreading false rumors that Walter Block is a Nazi. This causes him to be ostracized, lose his job, and be blacklisted by the SJW culture that is academia.
Almost everyone, including most libertarians, agrees that Walter should be able to sue me for defamation in court, and collect damages, coercively enforced, of course.
Example 3: Hans-Hermann Hoppe owns a large plot of land around his house. One day, when he emerges from his house to collect his copy of Reason, he sees me sleeping peacefully in a corner of his lawn. Though I haven’t hurt anyone and (being very thin and meek) pose no physical threat to anyone, Hans is nevertheless irritated, and demands that I get off his lawn. I just plug my ears and go back to sleep.
Almost everyone (especially Hans) thinks that Hans can use force to expel me from his lawn.
Example 4: I have become seriously injured by a hit-and-run driver, and I need to be taken to the hospital immediately. The only available car is Murray Rothbard’s car, but Murray is not around to give permission. (I am also not sure he would authorize saving me.)
Almost everyone thinks it is permissible for me to break into Murray’s car (thus initiating force against his property?) to get to the hospital.
Example 5: Powerful space aliens are going to bomb the Earth, killing 3 billion people, unless you deliver to them one recently-plucked hair from the head of Harry Binswanger. Harry, unfortunately, cannot be persuaded to part with the hair for any amount of money. (He really wants to live up to his name, you see.)
Almost everyone thinks it is permissible to forcibly steal the hair.
Of course, all of these examples are easily accommodated: we just have to add appropriate qualifications and clarifications to our NAP. Once we add the needed qualifications, we arrive at:
NAP6: It is wrong to use force against others, unless: (i) Doing so is necessary to stop them from using force against others (unless: (a) their force was itself justified by one of the conditions listed herein, in which case it is still wrong to use force against them), or (ii) It is necessary to enforce a contract, or (iii) It is necessary to force someone to pay compensation for defamation, or (iv) It is necessary to stop someone from using someone’s property without the property owner’s consent (unless: (b) the use of the property is necessary, in an emergency situation, to prevent something much worse from happening, in which case it is still wrong to use force against that person), or (v) It is necessary to prevent some vastly greater harm from occurring.
We’ve added five main qualifications ((i)-(v)) and two meta-qualifications ((a) and (b)). Is NAP6 true at last? Well, it’s hard to say whether we’ve included all needed qualifications and sub-qualifications. At least this latest NAP is no longer obviously false. But it’s so complex that it’s hard to claim that it’s obviously true either, and it’s really unclear why someone else cannot propose another qualification, say, “… or (vi) it is necessary to stop the poor from going without health care.”
Libertarians will disagree with qualification (vi), but they don’t have a good reason to resist it, if their libertarianism just rests on a NAP. We added (i)-(v) and (a) and (b) in order to accommodate our ethical intuitions (as rationality demands), but then why can’t a leftist add (vi) to accommodate their intuitions?
In other words, NAP6 might be true, but you don’t establish anything interesting by appealing to it, since someone with different starting intuitions can just as reasonably modify NAP6 to fit those intuitions.
(And just to affirm what he’s saying here, I personally don’t even see why qualification (vi) would need to be its own point; to me, it seems like something that falls squarely under qualification (v).)
With all this being said, of course, it’s worth reiterating that just because it’s possible to justify coercion in some cases doesn’t mean that everyone who claims to have a good reason for violating someone else’s consent is in fact justified. Most of the time, when someone takes someone else’s property without their consent, it’s just called “theft,” because their reasons don’t meet any of the criteria above. They might claim to have justifiable reasons for their actions – and they might even believe it themselves – but if our only condition for allowing people to coerce each other was just that they thought they had a justifiable reason to do so, it’d turn all of society into a total mess. Far better to have some kind of formal, coordinated system through which we can decide which acts of coercion truly are justifiable, and can implement them in a way that’s consistent and minimally disruptive, rather than just having a free-for-all in which coercive acts are committed by random individuals on a completely haphazard basis regardless of whether or not they’re actually justified. To quote Alexander again:
The right to property [is an important one]. On the individual scale, taking someone else’s property makes them very unhappy, as you know if you’ve ever had your bike stolen. On the larger scale, abandoning belief in private property has disastrous results for an entire society, as the experiences of China and the Soviet Union proved so conclusively. So it’s safe to say there’s a right to private property.
Is it ever acceptable to violate that right? In the classic novel Les Miserables, Jean Valjean’s family is trapped in bitter poverty in 19th century France, and his nephew is slowly starving to death. Jean steals a loaf of bread from a rich man who has more than enough, in order to save his nephew’s life. This is a classic moral dilemma: is theft acceptable in this instance?
We can argue both sides. A proponent might say that the good consequences to Jean and his family were very great – his nephew’s life was saved – and the bad consequences to the rich man were comparatively small – he probably has so much food that he didn’t even miss it, and if he did he could just send his servant to the bakery to get another one. So on net the theft led to good consequences.
The other side would be that once we let people decide whether or not to steal things, we are on a slippery slope. What if we move from 19th century France to 21st century America, and I’m not exactly starving to death but I really want a PlayStation? And my rich neighbor owns like five PlayStations and there’s no reason he couldn’t just go to the store and buy another. Is it morally acceptable for me to steal one of his PlayStations? The same argument that applied in Jean Valjean’s case above seems to suggest that it is – but it’s easy to see how we go from there to everyone stealing everyone’s stuff, private property becoming impossible, and civilization collapsing. That doesn’t sound like a very good consequence at all.
If everyone violates moral heuristics whenever they personally think it’s a good idea, civilization collapses. If no one ever violates moral heuristics, Jean Valjean’s nephew starves to death for the sake of a piece of bread the rich man never would have missed.
We need to bind society by moral heuristics, but also have some procedure in place so that we can suspend them in cases where we’re exceptionally sure of ourselves without civilization instantly collapsing. Ideally, this procedure should include lots of checks and balances, to make sure no one person can act of her own accord. It should reflect the opinions of the majority of people in society, either directly or indirectly. It should have access to the best minds available, who can predict whether violating a heuristic will be worth the risk in this particular case.
Thus far, the human race’s best solution to this problem has been governments. Governments provide a method to systematically violate heuristics in a particular area where it is necessary to do so without leading to the complete collapse of civilization.
If there was no government, I, in Jean Valjean’s situation, absolutely would steal that loaf of bread to save my nephew’s life. Since there is a government, the government can set a certain constant amount of theft per year, distribute the theft fairly among people whom it knows can bear the burden, and then feed starving children and do other nice things. The ethical question of “is it ethical for me to steal/kill/stab in this instance?” goes away, and society can be peaceful and stable.
He sums up this principle – that people’s consent shouldn’t be violated unless it can be done in a formal, systematic way – with the general phrase “Be nice, at least until you can coordinate meanness”:
A friend (I can’t remember who) once argued that “be nice” provides a nigh-infallible ethical decision procedure. For example, enslaving people isn’t very nice, so we know slavery is wrong. Kicking down people’s doors and throwing them in prison for having a joint of marijuana isn’t very nice, so we know the drug war is wrong. Not letting gays marry isn’t very nice, so we know homophobia is wrong.
I counterargue that even if we ignore the ways our notion of “nice” itself packs in pre-existing moral beliefs, this heuristic fails in several important cases:
1. Refusing the guy who is begging you to give his driver’s license back, saying that without a car he won’t be able to visit his friends and family or have any fun, and who is promising that he won’t drive drunk an eleventh time.
2. Forcibly restraining a screaming baby while you jam a needle into them to vaccinate them against a deadly disease.
3. Sending the police to arrest a libertarian rancher in Montana who refuses to pay taxes for reasons of conscience.
4. Revoking the credential of (and thus destroying the future job prospects of) a teacher who has sex with one of her underage students.
Sure, you could say that each of these “leads toward a greater niceness”, like that you’re only refusing the alcoholic his license in order to be nice to potential drunk driving victims. But then you’ve lost all meaningful distinction between the word “nice” and the word “good” and reinvented utilitarianism. And reinventing utilitarianism is pretty cool, but after you do that you no longer have such an easy time arguing against the drug war – somebody’s going to argue that it leads to the greater good of there being fewer drugs.
We usually want to avoid meanness. In some rare cases, meanness is necessary. I think one check for whether a certain type of meanness might be excusable is – it’s less likely to be excusable if it’s not coordinated.
Consider: society demands taxes to pay for communal goods and services. This does sometimes involve not-niceness, as in the example of the rancher in (3). But what makes it tolerable is that it’s done consistently and through a coordinated process. If the rule was “anybody who has a social program they want can take money from somebody else to pay for it,” this would be anarchy. Some libertarians say “taxation is theft”, but where arbitrary theft is unfair, unpredictable, and encourages perverse incentives like living in fear or investing in attack dogs, taxation has none of these disadvantages.
By the rule “be nice, at least until you can coordinate meanness”, we should not permit individuals to rob each other at gunpoint in order to pay for social programs they want, but we might permit them to advocate for a coordinated national taxation policy.
Or: society punishes people for crimes, including the crime of libel. Punishment is naturally not-nice, but this seems fair; we can’t just have people libeling each other all the time with no consequences. But what makes this tolerable is that it’s coordinated – done through the court system according to carefully codified libel law that explains to everybody what is and isn’t okay. Remove the coordination aspect, and you’ve got the old system where if you say something that offends my honor then I get some friends and try to beat you up in a dark alley. The impulse is the same: deploy not-niceness in the worthy goal of preventing libel. But one method is coordinated and the other isn’t.
This is very, very far from saying that coordinated meanness is a sure test that means something’s okay – that would be the insane position that anything legal must be ethical, something most countries spent the past few centuries disproving spectacularly. This is the much weaker claim that legality sets a minimum bar for people attempting mean policies.
As far as I can tell there are two things we want in a legal system. First, it should have good laws that produce a just society. But second, it should at least have clear and predictable laws that produce a safe and stable society.
For example, the first goal of libel law is to balance people’s desire to protect their reputation with other people’s desire for free speech. But the second goal of libel law should be that everybody understands what is and isn’t libel. If a system achieves the second goal, nobody will end up jailed or dead because they said something they thought was totally innocent but somebody else thought was libel. And nobody will spend years and thousands of dollars entangled in an endless court case, hiring a bunch of lawyers to debate whether some form of speech was acceptable or not.
So coordinated meanness is better than uncoordinated meanness not because it necessarily achieves the first goal of justice, but because it achieves the second goal of safety and stability. Everyone knows exactly when to expect it and what they can do to avoid it. I may not know what speech will or won’t offend a violent person with enough friends to organize a goon squad, but I can always read the libel law and try to stay on the right side of it.
Coordination is especially helpful when it comes to things like taxation because it lets people know in advance that they’re going to be taxed every year, so they can plan their finances accordingly rather than being caught off guard by haphazard seizures. But aside from making coercion more predictable, coordination also has the advantage of ensuring that fewer acts of coercion are committed in general. Alexander continues:
The second reason that coordinated meanness is better than uncoordinated meanness is that it is less common. Uncoordinated meanness happens whenever one person wants to be mean; coordinated meanness happens when everyone (or 51% of the population, or an entire church worth of Puritans, or whatever) wants to be mean. If we accept theories like the wisdom of crowds or the marketplace of ideas – and we better, if we’re small-d democrats, small-r republicans, small-l liberals, or basically any word beginning with a lowercase letter at all – then a big group of people all debating with each other will be harder to rile up than a single lunatic.
As a Jew, if I heard that skinheads were beating up Jews in dark alleys, I would be pretty freaked out; for all I know I could be the next victim. But if I heard that skinheads were circulating a petition to get Congress to expel all the Jews, I wouldn’t be freaked out at all. I would expect almost nobody to sign the petition (and in the sort of world where most people were signing the petition, I hope I would have moved to Israel long before anyone got any chance to expel me anyway).
Trying to coordinate meanness is not in itself a mean act – or at least, not as mean as actual meanness. If Westboro Baptist Church had just published lots of pamphlets saying we should pass laws against homosexuality, maybe it would have made some gay people feel less wanted, but it would have been a lot less intense than picketing funerals. If people who are against promiscuity want to write books about why we should all worry about promiscuity, it might get promiscuous people a little creeped out, but a lot less so than going up to promiscuous people and throwing water on them and shouting “YOU STRUMPET!”
This is my answer to people who say that certain forms of speech make them feel unsafe, versus certain other people who demand the freedom to express their ideas. We should all feel unsafe around anybody who relishes uncoordinated meanness – beating people in dark alleys, picketing their funerals, shaming them, harassing them, doxxing them, getting them fired from their jobs. I have no tolerance for these people – I am sometimes forced to accept their existence because of the First Amendment, but I won’t do anything more.
On the other hand, we should feel mostly safe around people who agree that meanness, in the unfortunate cases where it’s necessary, must be coordinated. There is no threat at all from pro-coordination skinheads except in the vanishingly unlikely possibility they legally win control of the government and take over.
I admit that this safety is still only relative. It hinges on the skinheads’ inability to convert 51% of the population. But until the Messiah comes to enforce the moral law directly, safety has to hinge on something. The question is whether it should hinge on the ability of the truth to triumph in the marketplace of ideas in the long-term across an entire society, or whether it should hinge on the fact that you can beat me up with a baseball bat right now.
(if you want pre-Messianic absolute safety, there are some super-democratic mechanisms that might help. America’s Bill of Rights seems pretty close to this; anyone wanting to coordinate meanness against a certain religion has to clear not only the 50% bar, but the much higher level required of Constitutional amendments. Visions of more complete protection remain utopian but alluring. For example, in an Archipelago you might well have absolute safety. The skinheads can’t say “Let’s beat up Jews right now”, they can’t even say “Let’s start an anti-Jew political party and gradually win power”. They can, at best, say, “Let’s go found our own society somewhere else without any Jews”, in which case you need say nothing but “don’t let the door hit you on your way out”. In this case their coordination of meanness cannot possibly hurt anyone.)
Of course, even if we accept that coordinated coercion is better overall than uncoordinated coercion, and that using a formal mechanism like government is generally better than not doing so in such cases, it’s still true that government won’t always get things right every time. Sometimes, a government will tax its citizens and spend their money in a way that doesn’t serve any higher justification, but simply funnels the money into politicians’ pockets, or wastes it on things that the population doesn’t actually want. In such cases, the government can no longer be said to be acting legitimately – so in order to rectify the situation, it makes sense that there should be some kind of formal mechanism through which citizens can declare (in a systematic, coordinated manner) that they no longer recognize their government’s legitimacy to continue governing them, and can oust it in favor of a government that better represents them. And that’s exactly what democratic elections are for; the whole function of the voting process is to enable citizens to reject political leaders who they feel are no longer using their powers of coercion for justifiable ends, and to replace them with leaders who will use those powers properly. We’ll get more into this topic later, when we discuss why democracy generally works better as a system of government than all the other non-democratic alternatives out there. For now, though, the point is just to recognize that we need to have some form of government in place in order to help us handle issues of coercion – both to redress instances in the private sector in which people are unjustifiably violating each other’s consent, and to allow us to coordinate in areas where violations of consent are justifiable, so we can ensure that they’re all done in as fair and just a manner as possible. Simply refusing to accept any kind of government at all isn’t an adequate answer to these problems.
VI.
In fairness, it’s not hard to understand why anti-statists reject government. They see it committing acts of coercion against its citizens – collecting taxes and imposing on their freedom in other ways without their consent – and they don’t like what they see, so they oppose it. People’s rights and freedom shouldn’t be violated; what could be more straightforward than that? In their ideal world, no one’s rights or freedom would ever be violated, for any reason at all.
Unfortunately though, as appealing as the idea might be in the abstract, it’s not actually possible in practice. I’ve talked about this before in other posts, but I’ll just repeat it here: We often act as if freedom is some purely binary thing that you either have or don’t have; you’re either totally free, or your rights are being infringed (and therefore you’re being oppressed). But in truth, it isn’t possible to have absolute freedom at all times, for the simple reason that people’s freedoms often conflict with each other; it’s often the case that the only way to have a particular right or freedom is by violating some other right or freedom. As we discussed earlier with negative externalities, one person’s right to use their property as they see fit (say, by building a factory) might conflict with another person’s right to life and health (if the pollution from the factory would damage their lungs). Or to take another example, a newspaper’s right to freely publish news stories might conflict with their subjects’ right to privacy (if the newspaper published details of those people’s personal lives). In order to protect people’s rights in such situations, other people’s rights sometimes have to be impinged upon. And sure, this is conceptually unsatisfying; it feels a lot less compelling to say that everything is a tradeoff and that we always have to weigh various interests against each other than to simply have a hard-and-fast rule like “no coercion” that can be followed at all times without exception. But reality is rarely so simple and clear-cut; so if we want to navigate it successfully, we have to deal with it as it actually is. As commenter NoIAmNumber4 writes:
We have been trained to seek out flawless, mathematically complete solutions that are all upside and no downside. The plethora of police shows is an example of this, but so is politics – vote for me and all your problems will be magically solved.
The problem is that there is very rarely any such thing as a perfect solution, only tradeoffs of moral priorities.
[…]
The best we can do is decide what values we think are more paramount than others and accept the ethical tradeoff we are making. “Sometimes in the defense of liberty and freedom, people die in terrorist attacks” is a good example.
But this kind of nuanced thinking is difficult, painful and doesn’t fit into a 5 second sound bite – and requires considered thought to accept or defend. Since we are never taught that, especially in school, it is not common.
Our world is getting more and more complex. Decades ago we could have gotten away with avoiding the responsibility. The more complex the world gets, the less true that becomes.
[Q]: Freedom is incredibly important to human happiness, a precondition for human virtue, and a value almost everyone holds dear. People who have it die to protect it, and people who don’t have it cross oceans or lead revolutions in order to gain it. But government policies all infringe upon freedom. How can you possibly support this?
Freedom is one good among many, albeit an especially important one.
In addition to freedom, we value things like happiness, health, prosperity, friends, family, love, knowledge, art, and justice. Sometimes we have to trade off one of these goods against another. For example, a witness who has seen her brother commit a crime may have to decide between family and justice when deciding whether to testify. A student who likes both music and biology may have to decide between art and knowledge when choosing a career. A food-lover who becomes overweight may have to decide between happiness and health when deciding whether to start a diet.
People sometimes act as if there is some hierarchy to these goods, such that Good A always trumps Good B. But in practice people don’t act this way. For example, someone might say “Friendship is worth more than any amount of money to me.” But she might continue working a job to gain money, instead of quitting in order to spend more time with her friends. And if you offered her $10 million to miss a friend’s birthday party, it’s a rare person indeed who would say no.
In reality, people value these goods the same way they value every good in a market economy: in comparison with other goods. If you get the option to spend more time with your friends at the cost of some amount of money, you’ll either take it or leave it. We can then work backward from your choice to determine how much you really value friendship relative to money. Just as we can learn how much you value steel by learning how many tons of steel we can trade for how many barrels of oil, how many heads of cabbages, or (most commonly) how many dollars, so we can learn how much you value friendship by seeing when you prefer it to opportunities to make money, or see great works of art, or stay healthy, or become famous.
Freedom is a good much like these other goods. Because it is so important to human happiness and virtue, we can expect people to value it very highly.
But they do not value it infinitely highly. Anyone who valued freedom from government regulation infinitely highly would move to whichever state has the most lax regulations (Montana? New Hampshire?), or go live on a platform in the middle of the ocean where there is no government, or donate literally all their money to libertarian charities or candidates on the tiny chance that it would effect a change.
Most people do not do so, and we understand why. People do not move to Montana because they value aspects of their life in non-Montana places – like their friends and families and nice high paying jobs and not getting eaten by bears – more than they value the small amount of extra freedom they could gain in Montana. Most people do not live on a platform in the middle of the ocean because they value aspects of living on land – like being around other people and being safe – more than they value the rather large amount of extra freedom the platform would give them. And most people do not donate literally all their money to libertarian charities because they like having money for other things.
So we value freedom a finite amount. There are trade-offs of a certain amount of freedom for a certain amount of other goods that we already accept. It may be that there are other such trade-offs we would also accept, if we were offered them.
For example, suppose the government is considering a regulation to ban dumping mercury into the local river. This is a trade-off: I lose a certain amount of freedom in exchange for a certain amount of health. In particular, I lose the freedom to dump mercury into the river in exchange for the health benefits of not drinking poisoned water.
But I don’t really care that much about the freedom to dump mercury into the river, and I care a lot about the health benefits of not drinking poisoned water. So this seems like a pretty good trade-off.
And this generalizes to an answer to the original question. I completely agree freedom is an extremely important good, maybe the most important. I don’t agree it’s an infinitely important good, so I’m willing to consider trade-offs that sacrifice a small amount of freedom for a large amount of something else I consider valuable. Even the simplest laws, like laws against stealing, are of this nature (I trade my “freedom” to steal, which I don’t care much about, in exchange for all the advantages of an economic system based on private property).
The arguments above are all attempts to show that some of the trade-offs proposed in modern politics are worthwhile: they give us enough other goods to justify losing a relatively insignificant “freedom” like the freedom to dump mercury into the river.
[Q]: But didn’t Benjamin Franklin say that those who would trade freedom for security deserve neither?
No, he said that those who would trade essential liberty for temporary security deserved neither. Dumping mercury into the river hardly seems like essential liberty. And when Franklin was at the Constitutional Convention he agreed to replace the minimal government of the Articles of Confederation with a much stronger centralized government just like everyone else.
The fact that our rights and freedoms aren’t absolute, and have to be balanced against each other when they come into conflict, means that we can’t just rely on simplistic rules like the Non-Aggression Principle to help us weigh all the relevant tradeoffs – because categorical rules of that kind fail to acknowledge that such tradeoffs even exist. By their logic, the only factor that ever needs to be accounted for is whether the government is violating someone’s consent, end of story. But of course, this isn’t always the only factor; other factors come into play all the time, and sometimes they conflict with each other. When they do, we can’t expect to automatically resolve every difficulty by simply saying “Don’t infringe people’s rights” and going no further than that – because as Alexander explains, this response alone doesn’t cut it:
When push comes to shove the Non-Aggression Principle just isn’t strong enough to solve hard problems. It usually results in a bunch of people claiming conflicting rights and judges just having to go with whatever seems intuitively best to them.
For example, a person has the right to live where he or she wants, because he or she has “a right to personal self-determination”. Unless that person is a child, in which case the child has to live where his or her parents say, because…um…the parents have “a right to their child” that trumps the child’s “right to personal self-determination”. But what if the parents are evil and abusive and lock the child in a fetid closet with no food for two weeks? Then maybe the authorities can take the child away because…um…the child’s “right to decent conditions” trumps the parents’ “right to their child” even though the latter trumps the child’s “right to personal self-determination”? Or maybe they can’t, because there shouldn’t even be authorities of that sort? Hard to tell.
Another example. I can build an ugly shed on my property, because I have a “right to control my property”, even though the sight of the shed leaves my property and irritates my neighbor; my neighbor has no “right not to be irritated”. Maybe I can build a ten million decibel noise-making machine on my property, but maybe not, because the noise will leave my property and disturb my neighbor; my “right to control my property” might or might not trump my neighbor’s “right not to be disturbed”, even though disturbed and irritated are synonyms. I definitely can’t detonate a nuclear warhead on my property, because the blast wave will leave my property and incinerate my neighbor, and my neighbor apparently does have a “right not to be incinerated”.
If you’ve ever seen people working within our current moral system trying to solve issues like these, you quickly realize that not only are they making it up as they go along based on a series of ad hoc rules, but they’re so used to doing so that they no longer realize that this is undesirable or a shoddy way to handle ethics.
Nor are these kinds of situations particularly rare or unusual. As Holmes and Sunstein illustrate, pretty much anything involving property rights can give rise to these kinds of dilemmas involving conflicting interests that have to be weighed against each other:
Does the accidental finder of goods have a legal right to judicial protection? Does a purchaser acquire an ownership right to property bought for value and in good faith from a thief? What rights against a present occupant belong to the owner of a future interest in real property? How many years of wrongful possession destroy the title of the original owner? Can an illegitimate child inherit from its natural parents by intestate succession? What happens if one joint owner sells his portion of jointly owned property? Can I, without notice, cut off branches from my neighbor’s tree if they overhang my land? Do I have a right to pile a mountain of garbage in my front yard? Can I build an electrical fence around my land with voltage high enough to kill trespassers? Can I erect a building that cuts off my neighbor’s vista? Can I advertise the free viewing of pornographic videos in my front window? Can I stick posters on my neighbor’s fence? Under what conditions is copyright assignable? How much do which creditors collect in case of bankruptcy? What rights do pawnbrokers have over goods left to them upon pledge?
Milton Friedman sums up, bringing back one more example from earlier for good measure:
The notion of property, as it has developed over centuries and as it is embodied in our legal codes, has become so much a part of us that we tend to take it for granted, and fail to recognize the extent to which just what constitutes property and what rights the ownership of property confers are complex social creations rather than self-evident propositions. Does my having title to land, for example, and my freedom to use my property as I wish, permit me to deny to someone else the right to fly over my land in his airplane? Or does his right to use his airplane take precedence? Or does this depend on how high he flies? Or how much noise he makes? Does voluntary exchange require that he pay me for the privilege of flying over my land? Or that I must pay him to refrain from flying over it? The mere mention of royalties, copyrights, patents; shares of stock in corporations; riparian rights, and the like, may perhaps emphasize the role of generally accepted social rules in the very definition of property. It may suggest also that, in many cases, the existence of a well specified and generally accepted definition of property is far more important than just what the definition is.
Again, in these kinds of situations, simply saying “Don’t violate anyone’s rights” is unhelpful, because it’s unclear what that would even entail, and where each party’s rights should begin and end in the first place. Anti-statists might insist that it’s not actually that complicated at all, and that as long as we just get government out of the picture, the rest will sort itself out; but if we actually care about preserving people’s rights, that just doesn’t seem like a sufficient answer, as Paul Kienitz notes:
When absolutist beliefs move from the speech and the pamphlet to the legislatures and the courts, and try to cope with real life, they will inevitably run sooner or later into paradoxes, where the overly simplified principles end up working against their own supposed purposes. When this happens, the choice comes down to either waffling on principle for the sake of having things work out reasonably in practice (which undermines the presumption of moral necessity behind the new system and invites any amount of further twiddling), or sticking by the rules regardless of the consequences, even though somebody gets fucked over. The really dedicated man of principle will, of course, choose the latter. But not with my support.
For one example of how a paradox can arise from Libertarian principles as they have been explained to me so far, consider whether a citizen has a right to travel. I think most people would agree that to be held prisoner in one place is a violation probably second only to that of a physical attack on the body, more fundamental than a theft of property is. (Our penal system reserves imprisonment for more drastic crimes than the ones it punishes by seizing property.) I think this means that most of us would agree that the right to get up and go somewhere else is fundamental. Now, Libertarians are all for repealing laws that restrict travel, such as immigration quotas. But they also generally favor the privatization of all public roads, so that instead of being paid for by taxes, they would be paid by fees or subscriptions of those who use them. For roads to work on a private basis without taxes, it is of course necessary that the property rights of road owners should not be watered down; any requirement that a road owner has to allow travel by everyone greatly undermines both the financial attractiveness of road maintenance and the true elimination of centralized government coercion. So the person who owns the road outside your front door has every right to refuse to allow you to step onto it, if you don’t have the toll fee or if he just doesn’t like your kind of person. And it isn’t just outside your front door, but wherever you go; you can hardly take a step in any direction unless you first ensure that you have an invitation or permission of the person controlling the land under your feet. We already have laws enough that make it a crime for a destitute person to lie down and sleep. Now the proposal is to make it a crime also for them to stand up and walk around, putting a hike along the shoulder of the road on the same footing with a hike through your back yard.
Libertarianism and other schemes involving maximum privatization are a completely untried experiment in seeing how human beings could get along with no public space. There has never been a society, to the best of my knowledge, that has found any way to live as a community without some kind of common public space. I would predict that the need for it is so great that eventually something must give way: either we would have some landowners throwing up their hands and ceding their holdings to free public use just because somebody’s got to do it, or people will get so uncomfortable with restrictions that there would be civil disobedience of private property laws in areas that people had the strongest public-land feeling about. I don’t think people can really manage without some amount of public space, especially not if they take the right of travel seriously.
It might well be that some public-minded landowners would contribute their land to be free public space, but then the cost of this falls on just one person. There’s a tremendous incentive to hold back and wait for someone else to blink first. How many other amenities that we take for granted would be hard to find if each one required that some individual chooses to sacrifice the profit he could make?
[…]
Another paradox arises when you take the right to bear arms to an absolutist extreme. According to Libertarian thought, citizens are not truly free from the threat of tyranny until they are privately armed at a level that can match the government. These days, that doesn’t just mean machine guns and grenades, it means fighter jets and nuclear missiles. With any lesser armament, a tyranny can always end the fight in its favor, so if you truly don’t compromise, you can’t stop short of this. When nobody has the right to blow your entire city to Hell but everybody has the right to be ready to do so (if they can afford it), and your only recourse if somebody decides to let fly is to either hire a private anti-missile battalion and pray, or just sue the attacker after you’re dead… what you’ve got isn’t freedom, it’s a reign of terror. A cold war on every block? Once is enough, include me out. At least when weapons of mass destruction are handled through a democratic government, you can hold them accountable, limit their construction, and put rules on their use before people get slaughtered. Though a government monopoly on hydrogen bombs is hardly to the liking of those uncomfortable with the power it gives to the evil State, it’s probably the reason we’re still alive. There is no man less free than a dead guy.
There is one way around these kinds of paradoxes: people have to be willing to compromise on hard-and-fast rules of dogma, and cease treating them as moral absolutes. In the case of guns, for instance, we must recognize that somewhere there is a cutoff level beyond which it is suicide to have most of society decide one way, and a minority decide the other way. We made it through the cold war alive because we had the capacity to make one decision about whether to launch or not to launch, without fearing (much) that some faction would decide otherwise and take matters into its own hands. We can argue later about how bad a weapon should be before we either ban it or restrict it to collective use instead of individual use, but first we had better acknowledge that some division point must exist. Until the Libertarian movement recognizes the need to permit and practice such compromises, and qualifies their morally righteous stance with some admission that the rules they propose will need to be made less pure and exacting in practice, I cannot support them.
To be sure, nobody likes it when they’re the ones who have to make the compromise. Nobody wants to be the ones who have to forgo their freedom of action or give up their property by paying taxes, especially if those taxes are then used to fund laws and regulations that (however necessary they might be on the broader scale) feel like further impositions still. As Wheelan puts it:
To protect the neighborhood [via e.g. a fire department], we may have to invest public resources in saving the house of the guy who was smoking in bed—who may or may not learn his lesson. Or we may have to impose regulations on homeowners before there is a fire (e.g., sprinklers and smoke detectors) to protect the neighborhood. Both are unpopular. No one likes regulations before a crisis, when they seem intrusive and expensive. And no one likes bailing out bad actors once a crisis unfolds.
But as annoying as it can sometimes feel to have to pay taxes and comply with laws and regulations, the fact that we make such concessions is what allows us to have a functional society with things like rights and property in the first place. It may be a cliché to say that “freedom isn’t free” – but it also happens to be true, in the most literal sense. As Holmes and Sunstein point out:
Rights cost money. Rights cannot be protected or enforced without public funding and support.
[…]
Individuals enjoy rights, in a legal as opposed to a moral sense, only if the wrongs they suffer are fairly and predictably redressed by their government. This simple point goes a long way toward disclosing the inadequacy of the negative rights/positive rights distinction [i.e. the distinction between the right to be left alone by others on the one hand, and the right to some positive claim over a particular resource or service (like education or healthcare or legal protection) on the other]. What it shows is that all legally enforced rights are necessarily positive rights.
Rights are costly because remedies are costly. Enforcement is expensive, especially uniform and fair enforcement; and legal rights are hollow to the extent that they remain unenforced. Formulated differently, almost every right implies a correlative duty, and duties are taken seriously only when dereliction is punished by the public power drawing on the public purse. There are no legally enforceable rights in the absence of legally enforceable duties, which is why law can be permissive only by being simultaneously obligatory. That is to say, personal liberty cannot be secured merely by limiting government interference with freedom of action and association. No right is simply a right to be left alone by public officials. All rights are claims to an affirmative governmental response. All rights, descriptively speaking, amount to entitlements defined and safeguarded by law. A cease-and-desist order handed down by a judge whose injunctions are regularly obeyed is a good example of government “intrusion” for the sake of individual liberty. But government is involved at an even more fundamental level when legislatures and courts define the rights that such judges protect. Every thou-shalt-not, to whomever it is addressed, implies both an affirmative grant of right by the state and a legitimate request for assistance addressed to an agent of the state.
If rights were merely immunities from public interference, the highest virtue of government (so far as the exercise of rights was concerned) would be paralysis or disability. But a disabled state cannot protect personal liberties, even those that seem wholly “negative,” such as the right against being tortured by police officers and prison guards. A state that cannot arrange prompt visits to jails and prisons by taxpayer-salaried doctors, prepared to submit credible evidence at trial, cannot effectively protect the incarcerated against tortures and beatings. All rights are costly because all rights presuppose taxpayer funding of effective supervisory machinery for monitoring and enforcement.
The most familiar government monitors of wrongs and enforcers of rights are the courts themselves. Indeed, the notion that rights are basically “walls against the state” often rests upon the confused belief that the judiciary is not a branch of government at all, that judges (who exercise jurisdiction over police officers, executive agencies, legislatures, and other judges) are not civil servants living off government salaries. But American courts are “ordained and established” by government; they are part and parcel of the state. Judicial accessibility and openness to appeal are crowning achievements of liberal state-building. And their operating expenses are paid by tax revenues funneled successfully to the court and its officers; the judiciary on its own is helpless to extract those revenues. Federal judges in the United States have lifetime tenure, and they are quite free from the supervisory authority of the public prosecutor. But no well-functioning judiciary is financially independent. No court system can operate in a budgetary vacuum. No court can function without receiving regular injections of taxpayers’ dollars to finance its efforts to discipline public or private violators of rights, and when those dollars are not forthcoming, rights cannot be vindicated. To the extent that rights enforcement depends upon judicial vigilance, rights cost, at a minimum, whatever it costs to recruit, train, supply, pay, and (in turn) monitor the judicial custodians of our basic rights.
Fortunately, as we’ve been highlighting throughout this discussion, the upside of all this is that having a government capable of protecting and assuring citizens’ rights is what allows those citizens to become prosperous enough to have the money to pay taxes in the first place – so in that sense, government-protected rights largely pay for themselves. Holmes and Sunstein continue:
Some rights, although costly up front, increase taxable social wealth to such an extent that they can reasonably be considered self-financing. The right to private property is an obvious example. The right to education is another. Even protecting women from domestic violence may be viewed in this way, if it helps bring once-battered wives back into the productive workforce. Public investment in the protection of such rights helps swell the tax base upon which active rights protection, in other areas as well, depends.
But irrespective of how much wealth our rights and freedoms might produce, the point here is just that in order for them to exist in the first place, government enforcement mechanisms must also exist, and in order for government enforcement mechanisms to exist, they must receive funding from the public, in the form of taxes. Holmes and Sunstein conclude:
American law protects the property rights of owners not by leaving them alone but by coercively excluding nonowners (say, the homeless) who might otherwise be sorely tempted to trespass. Every creditor has a right to demand that the debtor repay his debt; in practice, this means that the creditor can instigate a two-party judicial procedure against a defaulting debtor in which a delict is ascertained and a sanction imposed. And he can also count on the sheriff to “levy upon” the personal property of the debtor, to sell it, and then to pay the delinquent’s debts from the proceeds. The property rights of creditors, like the property rights of landowners, would be empty words without such positive actions by publicly salaried officials.
The financing of basic rights through tax revenues helps us see clearly that rights are public goods: taxpayer-funded and government-managed social services designed to improve collective and individual well-being. All rights are positive rights.
VII.
You might or might not buy into Holmes and Sunstein’s framing that “all rights are positive rights.” But even if you reject it and want to maintain a distinction between positive rights and negative rights, the point they’re making still stands: Our negative right not to be interfered with by other people does depend on having a positive right to certain mechanisms of enforcement, provided by government, which prevent those people from interfering with us. If we didn’t have such mechanisms, we’d be facing coercion all the time. Sure, it wouldn’t be government coercion, but government isn’t the only entity capable of coercing people; private-sector actors are all too capable of violating people’s consent, and without any counterbalancing force to keep them in check, they’d have free rein to do so at every opportunity. As Balioc writes:
The basic pure-anarchist [philosophy] that says “just let people do whatever they want and things will work out for the best” — isn’t very popular. The pitfalls are too obvious and too pronounced. Notably, in the pure-anarchist world some people invariably choose to exercise their free choice by becoming bandits and warlords, who go around forcibly restricting the choices of others (often in especially unpleasant ways).
So [we] tend to conjure up large powerful public institutions, Hobbesian leviathans, charged with defending us from each other and thereby safeguarding our greater freedoms.
[…]
The idea is that we use public coercion to eliminate private coercion.
Or in other words, we use mechanisms of coercion that are actually accountable to the public and the democratic process to eliminate forms of coercion that aren’t accountable to anyone or anything except the coercers. This may not be as good as having no coercion at all, but seeing as how the latter isn’t an actual option, it’s the best alternative available.
Anti-statists, of course, won’t necessarily agree with this. They’ll often argue that, whatever the consequences might be, the most important thing is simply that people have as few constraints on their action as possible – that’s what freedom means. And in a sense, their definition is right; the fewer constraints a person has on their choices, the freer they are. Again though, it bears repeating that government isn’t the only thing that can constrain a person’s choices. If someone is being coerced by a private actor – or even if they aren’t being coerced by anyone at all, but are simply having their choices constrained by the circumstances they’re in (and don’t have any government services to help them out) – then those things too can make them less free. A person stranded on a desert island, for instance, with no resources available to them aside from rocks and coconuts, might be totally “free” in the negative sense – that is, they’re free from all government laws and regulations – but it’s very hard to argue that their situation makes them more free in an absolute sense than someone living in a wealthy First World country with all its abundance and possibilities. Granted, they aren’t having their choices constrained by other people, but their choices are still being constrained by external forces, namely their circumstances and nature itself. It’s not that anyone is actively stopping them from making whatever choices they might want to make – but the space of choices available to them is so severely limited that the “freedom” they have within that limited space barely amounts to anything. Even though they’re technically allowed to do whatever they want, the number of things they’re actually able to do is minuscule compared to the array of options available to the person in the rich country, who might have more things they aren’t allowed to do, but who also has far more things that they’re able to do in the first place, so they’re actually far less constrained in their space of choices overall. In short, “freedom from” isn’t really worth anything without “freedom to.”
These considerations don’t just apply to people on hypothetical desert islands, either – they’re just as true for people here in the real world. The more resources someone has – the more things they can actually choose to do if they want to – the freer they are; and conversely, the fewer resources they have – and the fewer options they have – the less free they are. We might say that the jobs they take and the places they live and the transactions they engage in are all voluntarily chosen, and are therefore “free” choices; but if their array of options was so limited to begin with that they never really had any other alternatives to choose from, then a “free choice” never really existed for them at all – they weren’t truly free in any meaningful sense. As Sandel writes:
[There’s an argument to be made] that, for those with limited alternatives, the free market is not all that free.
Consider an extreme case: A homeless person sleeping under a bridge may have chosen, in some sense, to do so; but we would not necessarily consider his choice to be a free one. Nor would we be justified in assuming that he must prefer sleeping under a bridge to sleeping in an apartment. In order to know whether his choice reflects a preference for sleeping out of doors or an inability to afford an apartment, we need to know something about his circumstances. Is he doing this freely or out of necessity?
The same question can be asked of market choices generally—including the choices people make when they take on various jobs.
True individual freedom cannot exist without economic security and independence. […] Necessitous men are not free men.
If someone is fortunate enough to have abundant resources at their disposal, they accordingly also have more freedom to choose which transactions they want to engage in and which they want to reject; they can be more selective. But those who aren’t so advantaged enjoy less freedom of choice; they simply have to take what they can get. And this is important not only because of how it affects their ability to make their way in the world as individuals, but because of how it affects their interactions with each other as well. More specifically, it means that when you bring together both types of people – the advantaged and the disadvantaged – and have them transact with each other in the same economy, there will naturally be a disparity in bargaining power between them. The richer people will have a marked advantage over the poorer ones in terms of power and autonomy. And this disparity will tend to blur the line between choices that are made entirely freely and choices that are compelled by necessity. The poorer people will often have little choice but to accept whatever terms the richer people offer them, simply because they have no better alternatives. What does it really mean, then, to say that they’re “choosing freely” in such situations? Does true freedom of choice really exist when someone is under this kind of duress from their circumstances? Commenter haalidoodi shares some thoughts on the matter:
The discussion of “is money power” is an interesting one in its own right. I’ve worked with the IRB (the official ethics review board) at my university to design human experiments, and one of the things they explicitly forbid is “excessive” compensation for experiments. If you’re performing surgery or giving an experimental drug you can be cleared to give thousands of dollars in compensation, but if you’re running some rudimentary psychological experiment the board is very hesitant to approve compensation any higher than $20 an hour or so. The justification? Higher amounts would be “coercive”.
This might seem like strange reasoning to those in economics: after all, in economics we typically look at prices as a matter of cost-benefit analysis. If the price is higher than the perceived cost, then the transaction is performed voluntarily and both sides benefit. So why does the IRB forbid excessive compensation?
I like to use a simple analogy to give students of economics a slightly different perspective on this matter. Imagine being confronted by a gun-wielding mugger, who demands your wallet. Clearly a coercive situation, right? But let’s break down exactly what’s going on, in oversimplified economic terms. Here’s what’s happening: an external actor who you have no control over is manipulating the costs and benefits you’re facing (in this case, raising your cost of not cooperating) until it’s in your rational self-interest to behave in the way they want you to. Power, it can therefore be said, is simply the ability to manipulate the costs and benefits other people face until they “voluntarily” go along with what you want.
Yet when you break it down like this, it turns out that lots of situations that we consider free and voluntary are similar to this mugging situation: a price change in an essential good, an employer threatening to sack an unproductive employee, etc. But we all know intuitively that a change in the market price, for example, is still somehow qualitatively different in its morality than a mugger stealing from you.
So what is that difference? Why does your average Joe call the mugger a thief, but scoffs when someone says “taxation is theft”? I suggest that it’s a matter of moral legitimacy: “coercion” and “theft” are morally charged terms that suggest moral “badness”, a qualitative judgment outside the purview of both empirical science and economics. Nearly all social situations involve some sort of manipulation of costs and benefits external to the self, but affecting the self, yet we only get mad about some of them. I suggest that this is because certain bodies, individuals, etc. are popularly recognized as having some vaguely defined “moral legitimacy”, some form of implicit consent that Locke described as the “social contract”. It’s a pretty wishy-washy subject which makes lots of folks uncomfortable, but I believe it’s how the world works.
Anyways, back to the central point: money can be said to translate to power because it allows you to manipulate the “profit functions” of others to a very large degree, mainly by raising the compensation for doing something until the opportunity cost of not doing it becomes unbearably high and they are functionally “coerced” into doing what you want.
In real life, persons are situated differently. This means that differences in bargaining power and knowledge are always possible. And as long as this is true, the fact of an agreement does not, by itself, guarantee the fairness of an agreement. This is why actual contracts are not self-sufficient moral instruments. It always makes sense to ask, “But is it fair, what they have agreed to?”
Of course, having said all this, I should also say that I don’t think the mere fact that such power asymmetries exist means that the optimal solution is therefore to do like haalidoodi’s IRB and just prohibit those with more resources from ever being able to make overwhelmingly compelling offers to those with fewer resources. After all, the whole reason why these offers are considered “coercive” in the first place (or at least, more coercive than they would be if they were being made to richer people) is because the people on the receiving end of them feel like they have no choice but to accept them – and the reason why they feel that way is because the offers make them appreciably better off than any other available alternative. If all we do is ban such transactions, then, we aren’t making anyone better off; we’re simply constraining the poorer people’s freedom of choice even further, and keeping them from being able to make even the slightest improvement to their situation. And that’s not something that even they themselves would want; for them, even a less-than-ideal alternative is better than no alternative at all. So what’s the better solution, then? It’s pretty simple: We ensure that they actually do have enough material security that they don’t ever feel like they have no other alternative (whether that be by providing them with resources like education and healthcare, giving them alternative offers of employment, or using some other means). We give them enough positive freedom that they don’t ever feel compelled to accept some grossly unfair offer out of pure necessity. As Balioc explains:
[In contrast to the pure-anarchist model], the left-liberal model posits that no one can make truly free choices without a certain level of material security and comfort. Because those things are so fundamentally important to such an overwhelming proportion of people, anyone who doesn’t have them or can’t count on them is fundamentally under the power of anyone who can offer them. So it is that power dynamics allow those people to get pulled into arrangements that are not genuinely free and are overall bad for the world (or at least markedly sub-optimal).
The solution is to have the state ensure, directly, that everyone has enough resources. Which probably entails coercively redistributing money away from the people who have tons and towards everyone else, or at least towards the people who have the least.
It’s true, having a government that’s able to use laws and taxes to ensure that everyone has a basic level of material security and freedom of choice does involve some degree of coercion; by implementing those laws and collecting those taxes, the government is imposing certain constraints on its citizens. It is, strictly speaking, reducing their negative freedom. But in so doing, it’s increasing their positive freedom by expanding the space of opportunities that are available to them, so that on net, they’re able to do more than they’d otherwise be able to – i.e. they have more autonomy. As commenter Epistaxis writes:
For a […] positive refutation of the anti-government strain of libertarianism, consider Amartya Sen’s Development as Freedom. His argument is that if our goal is to maximize individual freedom, that actually requires social-welfare programs like public education and health care, because lower taxes don’t make you free in any meaningful sense if you’re confined to dead-end jobs due to poor education, hospitals due to poor preventive medicine, or illness and bankruptcy due to not being able to afford health care in the first place.
[It’s important to acknowledge] the role of government in promoting affirmative liberties. A young person from a poor family who does not need to incur crippling debt to attend university is a freer person. A low-income mother who cannot afford to pay the doctor attains a new degree of freedom when she and her children are covered by Medicaid. A worker who might be compelled to choose between his job and his physical safety becomes freer if government health and safety regulations are enforced. The employee of a big-box store who can take paid family leave when a child gets sick is freer than one whose entire life is at the whim of the boss; likewise a worker with a union contract that provides protection from arbitrary dismissal or theft of wages. An elderly person saved from destitution by a government-organized Social Security pension has a lot more liberty than one bagging groceries at age 80 to make ends meet, or one choosing between supper and filling a prescription. An aspiring homeowner who doesn’t need to spend countless hours making sure that the mortgage won’t explode is freer to spend leisure time on other activities if government is certifying which financial products are sound and is prohibiting other kinds.
Anderson sums up the whole issue of positive freedom versus negative freedom (plus a third kind of freedom which she calls republican freedom) like this:
[Critics of government often depict] a zero-sum trade-off between the liberties of the state and those of its citizens. But there are at least three concepts of freedom: negative, positive, and republican. If you have negative freedom, no one is interfering with your actions. If you have positive freedom, you have a rich menu of options effectively accessible to you, given your resources. If you have republican freedom, no one is dominating you—you are subject to no one’s arbitrary, unaccountable will. These three kinds of freedom are distinct. A lone person on a desert island has perfect negative and republican freedom, but virtually no positive freedom, because there is nothing to do but eat coconuts. An absolute monarch’s favorites may enjoy great negative and positive freedom if he has granted them generous privileges and well-paid sinecures. But they still lack republican freedom, since he can take their perks away and toss them into a dungeon on a whim. Citizens of prosperous social democracies have considerable positive and republican freedom, but are subject to numerous negative liberty constraints, in the form of complex state regulations that constrain their choices in numerous aspects of their lives.
All three kinds of freedom are valuable. There are sound reasons to make trade-offs among them. If we focus purely on negative liberty, and purely concerning rival goods, it might seem that [the critics of government are] correct that the size of the liberty pie is fixed: one agent’s liberty over rival good G would seem to preclude another’s liberty over it. But this is to confuse negative liberties with exclusive rights. There is nothing incoherent about a Hobbesian state of nature, in which everyone has the negative liberty to take, or compete for possession of, every rival good. That would be a social state of perfect negative liberty: it is a state of anarchist communism, in which the world is an unregulated commons. Such a condition would also be catastrophic. Production would collapse if anyone were free to take whatever anyone else had worked to produce. Even the natural resources of the earth would rapidly be depleted in an unregulated commons. Without property rights—rights to exclude others—people would therefore be very poor and insecure. Opportunities—positive liberties—are vastly greater with the establishment of a system of property rights.
This is a standard argument for a regime of private property rights. It is impeccable. Yet its logical entailments are often overlooked. Every establishment of a private property right entails a correlative duty, coercively enforceable by individuals or the state, that others refrain from meddling with another’s property without the owner’s permission. Private property rights thus entail massive net losses in negative liberty, relative to the state of maximum negative liberty. If Lalitha has private property in a parcel of land, her liberty over that parcel is secured by an exclusive right at the cost of the identical negative liberty of seven billion others over that parcel. If we are good libertarians and insist that the justification of any constraint on liberty must appeal to some other more important liberty, then the libertarian case for private property depends on accepting that positive liberty very often rightly overrides negative liberty. It follows that even massive state constraints on negative liberty (in the form of enforcements of private property rights) can increase total liberty (in an accounting that weights positive liberty more highly than negative, as any accounting that can justify private property in terms of freedom must).
State-enforced constraints on negative liberty can also increase total liberty through their enhancement of republican freedom. This is a venerable argument from the republican tradition: without robust protection of private property rights (which, as we have seen, entail massive net losses of negative liberty), a republican form of government is insecure, because the state is liable to degenerate into despotism, exercising arbitrary power over its subjects. This argument has been carried over in modern libertarian writing.
This form of argument is equally applicable to substate [entities like private firms, which might be regarded in their own right as kinds of small-scale] private governments. If one finds oneself subject to private government—a state of republican unfreedom—one can enhance one’s freedom by placing negative liberty constraints on the power of one’s private governors to order one around or impose sanctions on one’s refusal to comply. This may involve state regulation of private governments. For example, a state’s imposition of a requirement on employers that they refrain from discriminating against employees on the basis of their sexual orientation or identity enhances the republican and negative freedom of workers to express their sexual identities and choose their sexual and life partners. It also enhances their positive liberties, by enabling more people to move out of the closet, and thereby increasing opportunities for LGBT people to engage with others of like sexual orientation. The state’s imposition of negative liberty constraints on some people can thereby enhance all three liberties of many more.
Anderson’s last point there, about how we might consider private firms to be kinds of miniature governments in themselves, is a particularly salient one, and one that I want to devote a whole separate post to in the future; but for now, let’s just take a brief moment to address it before we move on. We’ve already discussed in this post how the reverse point is true – how governments are essentially like private-sector firms, only owned by everyone. But the similarity applies in both directions; in a sense, private firms are a lot like small-scale governments, with the owners and executives being the governors and the workers being the citizens. Of course, unlike the traditional governments we’re most familiar with, most such private governments aren’t remotely democratic; they operate through wholly authoritarian top-down hierarchy, with the owners and executives telling their subordinates what to do, and the subordinates either obeying those orders or being permanently banished (i.e. fired from the firm). And this dynamic is extremely relevant when it comes to our discussion of coercion and consent – because while public-sector institutions are the ones that receive the most scrutiny and criticism for their infringements on people’s liberty, most people experience far more constraints on their freedom of action when they’re on the clock than when they aren’t. Chris Bertram, Cory Robin, and Alex Gourevitch illustrate this point:
Considering that all these examples are taken from places where governments already exist to limit employers’ infringement on workers’ freedoms, it’s not hard to imagine how much worse things would be if workers didn’t have any government protections at all. (Actually, it’s not even necessary to imagine; we need only look to the poorest countries to see what working conditions are like for workers who don’t have strong governments protecting their rights.) Of course, many anti-statists won’t acknowledge that there’s even any problem here in the first place; workers are free to quit any job they don’t like, they’ll say, so what’s the issue? Here are Bertram, Robin, and Gourevitch again in response:
[Many anti-statists] believe that workplace coercion is not coercion (or at least not impermissible coercion) for two reasons.
First, workers freely consent to work for their employer. […] Many libertarians take any voluntary contract, no matter how desperate the circumstances of the worker, as a proxy for consent.
[…]
Second, workers are free to quit any job not to their liking. […] Assuming, presumably, some kind of tacit consent theory, [anti-statists] conclude that any worker who performs a specific action at the behest of her boss—peeing in a cup, say, while the boss stands outside the stall, or peeing in her pants because she’s not allowed to go to the bathroom—is acting freely.
[But] the limitations of exit as an instrument of freedom can be illustrated by a simple analogy. Suppose Canada were a dictatorship, but the United States welcomed anyone who wished to leave, paid for her ticket and promised her a job. Would that mean that anyone who stayed behind was free? Or think about the implicit contract at the heart of ethnic cleansing: exit and live; stay and die. Now it’s undoubtedly true that exit is better than no exit—ethnic cleansing being better than genocide—in that it limits the reach of coercion. But it’s not true that exit lessens coercion and increases freedom among those who stay. Surely we don’t want to claim that those Jews who refused to flee the pogroms of tsarist Russia were somehow free. To be clear: the point is not that the workplace is as unfree as a dictatorship or the shtetl but that just because an employee can leave doesn’t mean she is free at work.
[Anti-statists often] appear to be claiming that wherever individuals are free to exit a relationship, authority cannot exist within it. This is like saying that Mussolini was not a dictator, because Italians could emigrate. While emigration rights may give governors an interest in voluntarily restraining their power, such rights hardly dissolve it.
Needless to say, as these writers acknowledge, the analogy here isn’t a perfect one; after all, it’s considerably more difficult for most people to leave their country than to leave their job (even if they don’t have absolute freedom of choice in either case) – so if we’re weighing the relative merits of letting governments set their own rules versus letting private entities set their own rules, it really does make more sense to maintain strict limits on the governments’ ability to impose on people’s freedoms than to maintain equally stringent limits on the private entities’ ability to do the same. As Alexander writes, a good rule of thumb here might be to say that in general, institutions (whether they be public ones like governments or private ones like companies, clubs, or religious organizations) should be allowed to impose constraints on their members’ freedoms only inasmuch as those members are participating in the institutions voluntarily (and can freely leave for an alternative whenever they want) – since by opting to remain with the institutions, members are thereby indicating their willingness to accept those constraints as a part of their membership:
I think we would have to make an argument based on what kind of characteristics an institution needs to be more like a corporation or intentional community (which have the rights to be strict) vs. a national government (which should be erring on the side of permissiveness). To me, the key differences seem to be things like:
exit rights and transaction costs of leaving
number of other options
ease of forming a new one
degree to which membership is voluntary vs. hereditary
So to give an example, most people have the intuition that the US government banning pork for religious reasons is bad, but also that if you go into a mosque and demand they let you eat pork there you’re in the wrong. I think this is because:
the people in the mosque have the option to very easily not be in the mosque
if you don’t like the mosque, you can always go to a church or an atheist meetup
you can always start your own mosque, with blackjack and hookers
most people in the mosque chose to be there because they agree with the mosque’s principles
But:
it’s hard to leave the US if you don’t like it
there aren’t that many other countries and you might not be able to find one you like
it’s very hard to start a new country
most US citizens are only citizens because they were born here, and didn’t necessarily sign on to any philosophical commitments
This is ignoring some important issues, like whether banning pork is the ethically correct action, or whether the majority of the people in each community support the ban. It’s just trying to give a completely formal, meta-level account of why our intuitions might be different for these two cases.
Like I said, this seems to me like a pretty good general approach for weighing how much institutions like governments and private companies should be allowed to impose constraints on their members’ freedom. Having said that, though, I should also add that I don’t think it would be complete without an acknowledgement that for many workers, their jobs fall more toward the “costly to leave and difficult to find any better alternatives” end of the scale than the “easy to leave and easy to find better alternatives” end. Sure, if they’re highly educated and highly skilled, then leaving their jobs might not be such a big deal, since they can always be sure of having plenty of alternative job prospects to choose from – but for those without such advantages, their options will be much more limited, and they’ll accordingly have to accept much worse working conditions if they don’t have any other source of support. As Anderson writes:
The suggestion that enhancing exit rights alone would be sufficient to deal with the problems [faced by less advantaged workers] is not credible. What jobs are [these] workers supposed to exit to? When 90 percent of waitresses experience sexual harassment, they have no reliable place to escape it, other than by leaving their industry-specific skills behind, and even then, not so much, since sexual harassment exists in all industries. Add to this the problems of unemployment, underemployment, ineligibility for unemployment insurance for “voluntary” quits, and it’s easy to see how unhelpful “why don’t you just leave?” is as advice to workers. When workers have only exit rights and no voice, this amounts to a grant to the dictatorial employer to harvest the entire “producer’s surplus” – all the benefits that make their job better than workers’ next best alternative – that would otherwise accrue to workers before the job gets so intolerable that they quit. Indeed, given the uncertainties about whether conditions would be better elsewhere (extremely difficult for employees to determine), and the steep costs of job loss under any realistic scenario, an exit-rights-only regime in effect grants to dictatorial employers the power to appropriate considerably more than the workers’ producer surplus before they leave.
[Tyler] Cowen argues that employers have to compete for talent, and this makes them respect workers’ autonomy and dignity. “The desire to attract and keep talent is the single biggest reason why companies try to create pleasant and tolerant atmospheres for their workers.” I agree with his statement: when workers are respected by their employers, this is the main reason why. It doesn’t follow that all workers do get respected by their employers. Rather, the amount of respect, standing, and autonomy they get is roughly proportional to their market value. Employers don’t have to compete for workers who aren’t scarce: those who are unskilled, inexperienced, living in areas with high unemployment, or with other liabilities, such as arrest record or disability. That’s a lot of workers. Blacks, for example, who are about 12 percent of the labor force, suffer from virtually permanent double-digit unemployment rates. Workers of all races who live in towns devastated from plant closures due to competition from abroad also suffer from high unemployment, because their mobility is low. Much of the time, the entire economy operates in periods of substantial unemployment or underemployment, affecting workers generally: even if they have a job, the cost of job loss is so high they have to put up with nearly any abuse just to hang on to an income. Meanwhile, employers use their power to design workplaces to create a fine-grained division of labor in which workers are deskilled and thus easily replaceable.
Alexander further examines just how disempowered these workers really are in relation to their employers:
It is frequently proposed that workers and bosses are equal negotiating partners bargaining on equal terms, and that it is only excessive government intervention on the side of labor that makes the negotiating table unfair. After all, both need something from one another: the worker needs money, the boss labor. Both can end the deal if they don’t like the terms: the boss can fire the worker, or the worker can quit the boss. Both have other choices: the boss can choose a different employee, the worker can work for a different company. And yet, strange to behold, having proven the fundamental equality of workers and bosses, we find that everyone keeps acting as if bosses have the better end of the deal.
During interviews, the prospective employee is often nervous; the boss rarely is. The boss can ask all sorts of things, like that the prospective employee pay for her own background check, or pee in a cup so the boss can test the urine for drugs; the prospective employee would think twice before daring to make even so reasonable a request as a cup of coffee. Once the employee is hired, the boss may ask on a moment’s notice that she work a half hour longer or else she’s fired, and she may not dare to even complain. On the other hand, if she were to so much as ask to be allowed to start work thirty minutes later to get more sleep or else she’ll quit, she might well be laughed out of the company. A boss may, and very often does, yell at an employee who has made a minor mistake, telling her how stupid and worthless she is, but rarely could an employee get away with even politely mentioning the mistake of a boss, even if it is many times as unforgivable.
The naive economist who truly believes in the equal bargaining position of labor and capital would find all of these things very puzzling.
Let’s focus on the last issue; a boss berating an employee, versus an employee berating a boss. Maybe the boss has one hundred employees. Each of these employees only has one job. If the boss decides she dislikes an employee, she can drive her to quit and still be 99% as productive while she looks for a replacement; once the replacement is found, the company will go on exactly as smoothly as before.
But if the employee’s actions drive the boss to fire her, then she must be completely unemployed until such time as she finds a new job, suffering a long period of 0% productivity. Her new job may require a completely different life routine, including working different hours, learning different skills, or moving to an entirely new city. And because people often get promoted based on seniority, she probably won’t be as well paid or have as many opportunities as she did at her old company. And of course, there’s always the chance she won’t find another job at all, or will only find one in a much less tolerable field like fast food.
We previously proposed a symmetry between a boss firing a worker and a worker quitting a boss, but actually they could not be more different. For a boss to fire a worker is at most a minor inconvenience; for a worker to lose a job is a disaster. The Holmes-Rahe Stress Scale, a measure of the comparative stress level of different life events, puts being fired at 47 units, worse than the death of a close friend and nearly as bad as a jail term. Tellingly, “firing one of your employees” failed to make the scale.
This fundamental asymmetry gives capital the power to create more asymmetries in its favor. For example, bosses retain a level of control on workers even after they quit, because a worker may very well need a letter of reference from a previous boss to get a good job at a new company. On the other hand, a prospective employee who asked her prospective boss to produce letters of recommendation from her previous workers would be politely shown the door; we find even the image funny.
The proper negotiating partner for a boss is not one worker, but all workers. If the boss lost all workers at once, then she would be at 0% productivity, the same as the worker who loses her job. Likewise, if all the workers approached the boss and said “We want to start a half hour later in the morning or we all quit”, they might receive the same attention as the boss who said “Work a half hour longer each day or you’re all fired”. [Hence the existence of labor unions.]
But getting all the workers together presents coordination problems. One worker has to be the first to speak up. But if one worker speaks up and doesn’t get immediate support from all the other workers, the boss can just fire that first worker as a troublemaker. Being the first worker to speak up has major costs – a good chance of being fired – but no benefits – all workers will benefit equally from revised policies no matter who the first worker to ask for them is.
Or, to look at it from the other angle, if only one worker sticks up for the boss, then intolerable conditions may well still get changed, but the boss will remember that one worker and maybe be more likely to promote her. So even someone who hates the boss’s policies has a strong selfish incentive to stick up for her.
The ability of workers to coordinate action without being threatened or fired for attempting to do so is the only thing that gives them any negotiating power at all, and is necessary for a healthy labor market. Although we can debate the specifics of exactly how much protection should be afforded each kind of coordination, the fundamental principle is sound.
[Q]: But workers don’t need to coordinate. If working conditions are bad, people can just change jobs, and that would solve the bad conditions.
About three hundred Americans commit suicide for work-related reasons every year – this number doesn’t count those who attempt suicide but fail. The reasons cited by suicide notes, survivors and researchers investigating the phenomenon include on-the-job bullying, poor working conditions, unbearable hours, and fear of being fired.
I don’t claim to understand the thought processes that would drive someone to do this, but given the rarity and extremity of suicide, we can assume for every worker who goes ahead with suicide for work-related reasons, there are a hundred or a thousand who feel miserable but not quite suicidal.
If people are literally killing themselves because of bad working conditions, it’s safe to say that life is more complicated than the ideal world in which everyone who didn’t like their working conditions quits and gets a better job elsewhere.
[…]
I note in the same vein stories from the days before labor regulations when employers would ban workers from using the restroom on jobs with nine-hour shifts, often ending in the workers wetting themselves. This seems like the sort of thing that provides so much humiliation to the workers, and so little benefit to the bosses, that a free market would eliminate it in a split second. But we know that it was a common policy in the 1910s and 1920s, and that factories with such policies never wanted for employees. The same is true of factories that literally locked their workers inside to prevent them from secretly using the restroom or going out for a smoking break, leading to disasters like the Triangle Shirtwaist Fire, in which nearly 150 workers died when the building they were locked inside burnt down. And yet even after this fire, the practice of locking workers inside buildings only stopped when the government finally passed regulation against it.
Even just a simple test of common sense confirms that most workers aren’t truly free to leave their jobs at will. After all, if nobody ever felt compelled by their circumstances to accept a bad job (or to stay in a bad job they already had), then the very concept of someone being stuck in a job they hated wouldn’t exist; people would always have a free choice in the matter, so there would be no reason for them to stay in such a position if they didn’t want to. But in fact, what we see in the real world is that most low-level workers are in just such a situation; they would happily quit their jobs if they could, but no better option is available to them, so they’re forced to spend their lives doing things they don’t actually want to be doing. Their space of choices, in short, is being constrained by their circumstances – and as a result, they are less free. Without any real resources at their disposal, all they can do is resign themselves to doing whatever work those who do have resources want them to do. In other words, they just have to go along with the will of the more powerful, whatever that might be.
VIII.
Of course, as unfavorable as these power asymmetries can be here in the modern-day US, they could be a whole lot worse – and in many places and times throughout history, they have been. The only reason why we aren’t, say, having to submit to the absolute rule of private feudal warlords nowadays (as opposed to just having to deal with private companies offering lousy working conditions) is because we have a government strong enough to keep such would-be oppressors in check. Without any kind of government protection at all, we’d find ourselves in a much less stable – and much nastier – situation; and unfortunately, there’s a long list of countries (including modern ones) whose examples can attest to this. As David Atkins writes:
[When] all central authority and protection break down completely, […] power localizes into the hands of local criminals and feudal/tribal warlords with little compunction about abusing and terrorizing the local population (e.g., feudal France, Afghanistan, Somalia, western Pakistan, etc.). As I said before:
Feudalism is the inevitable historical consequence of the decline of a centralized cosmopolitan state. That’s because the exercise of power by those in a position to wield it does not end with the elimination of federal authority: rather, it simply shifts to those of a more localized, more tyrannical, and less democratically accountable bent.
Urban street gangs in under-policed neighborhoods, mafias in under-taxed countries, and groups like Hezbollah in Lebanon invariably step in to fill the void where government fails.
Without a government capable of protecting property and human rights, press freedoms and business contracts, antitrust laws and consumer demands, a society gets not the rule of law but rule of the strong. If one wanted to see what the absence of government produces, one need only look at Africa—it is not a free-market paradise.
From the comfort of our current position here in the First World, where things like outright violence and subjugation have largely been reduced to rare exceptions to the general norm of peaceful coexistence, we can easily be lulled into thinking that maybe we don’t really need all this government intervention to keep people from turning to coercion and abuse. After all, most people behave well enough and don’t have any problem getting along and respecting each other’s autonomy, right? So why should we expect that this would be any different in the absence of government? Well, it may be true that most people who’ve been raised in an environment where violence and subjugation have been mostly stamped out won’t generally be inclined to engage in those behaviors even if given the free opportunity to do so. But what most people are inclined to do isn’t really the thing we have to worry about here. There are unfortunately plenty of people for whom it would feel perfectly natural and rational to use coercion and even physical force to get what they wanted if there was nothing stopping them. So if peaceful coexistence is what we prefer, we have to have some mechanism through which we can stop such people from running roughshod over everybody else. In other words, we have to have a government that’s capable of using its coercive abilities to stand in the way of theirs. As Steven Pinker writes:
[Thomas] Hobbes’s analysis of the causes of violence, borne out by modern data on crime and war, shows that violence is not a primitive, irrational urge, nor is it a “pathology” except in the metaphorical sense of a condition that everyone would like to eliminate. Instead, it is a near-inevitable outcome of the dynamics of self-interested, rational social organisms.
But Hobbes is famous for presenting not just the causes of violence but a means of preventing it: “a common power to keep them all in awe.” His commonwealth was a means of implementing the principle “that a man be willing, when others are so too … to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himself.” People vest authority in a sovereign person or assembly who can use the collective force of the contractors to hold each one to the agreement, because “covenants, without the sword, are but words, and of no strength to secure a man at all.”
A governing body that has been granted a monopoly on the legitimate use of violence can neutralize each of Hobbes’s reasons for quarrel. By inflicting penalties on aggressors, the governing body eliminates the profitability of invading for gain. That in turn defuses the Hobbesian trap in which mutually distrustful peoples are each tempted to inflict a preemptive strike to avoid being invaded for gain. And a system of laws that defines infractions and penalties and metes them out disinterestedly can obviate the need for a hair trigger for retaliation and the accompanying culture of honor. People can rest assured that someone else will impose disincentives on their enemies, making it unnecessary for them to maintain a belligerent stance to prove they are not punching bags. And having a third party measure the infractions and the punishments circumvents the hazard of self-deception, which ordinarily convinces those on each side that they have suffered the greater number of offenses. These advantages of third-party intercession can also come from nongovernmental methods of conflict resolution, in which mediators try to help the hostile parties negotiate an agreement or arbitrators render a verdict but cannot enforce it. The problem with these toothless measures is that the parties can always walk away when the outcome doesn’t come out the way they want.
Adjudication by an armed authority appears to be the most effective general violence-reduction technique ever invented. Though we debate whether tweaks in criminal policy, such as executing murderers versus locking them up for life, can reduce violence by a few percentage points, there can be no debate on the massive effects of having a criminal justice system as opposed to living in anarchy. The shockingly high homicide rates of pre-state societies, with 10 to 60 percent of the men dying at the hands of other men, provide one kind of evidence. Another is the emergence of a violent culture of honor in just about any corner of the world that is beyond the reach of the law. Many historians argue that people acquiesced to centralized authorities during the Middle Ages and other periods to relieve themselves of the burden of having to retaliate against those who would harm them and their kin. And the growth of those authorities may explain the hundredfold decline in homicide rates in European societies since the Middle Ages. The United States saw a dramatic reduction in urban crime rates from the first half of the nineteenth century to the second half, which coincided with the formation of professional police forces in the cities. The causes of the decline in American crime in the 1990s are controversial and probably multifarious, but many criminologists trace it in part to more intensive community policing and higher incarceration rates of violent criminals.
The inverse is true as well. When law enforcement vanishes, all manner of violence breaks out: looting, settling old scores, ethnic cleansing, and petty warfare among gangs, warlords, and mafias. This was obvious in the remnants of Yugoslavia, the Soviet Union, and parts of Africa in the 1990s, but can also happen in countries with a long tradition of civility. As a young teenager in proudly peaceable Canada during the romantic 1960s, I was a true believer in Bakunin’s anarchism. I laughed off my parents’ argument that if the government ever laid down its arms all hell would break loose. Our competing predictions were put to the test at 8:00 a.m. on October 17, 1969, when the Montreal police went on strike. By 11:20 a.m. the first bank was robbed. By noon most downtown stores had closed because of looting. Within a few more hours, taxi drivers burned down the garage of a limousine service that had competed with them for airport customers, a rooftop sniper killed a provincial police officer, rioters broke into several hotels and restaurants, and a doctor slew a burglar in his suburban home. By the end of the day, six banks had been robbed, a hundred shops had been looted, twelve fires had been set, forty carloads of storefront glass had been broken, and three million dollars in property damage had been inflicted, before city authorities had to call in the army and, of course, the Mounties to restore order. This decisive empirical test left my politics in tatters (and offered a foretaste of life as a scientist).
The generalization that anarchy in the sense of a lack of government leads to anarchy in the sense of violent chaos may seem banal, but it is often overlooked in today’s still-romantic climate. Government in general is anathema to many conservatives, and the police and prison system are anathema to many liberals. Many people on the left, citing uncertainty about the deterrent value of capital punishment compared to life imprisonment, maintain that deterrence is not effective in general. And many oppose more effective policing of inner-city neighborhoods, even though it may be the most effective way for their decent inhabitants to abjure the code of the streets. Certainly we must combat the racial inequities that put too many African American men in prison, but as the legal scholar Randall Kennedy has argued, we must also combat the racial inequities that leave too many African Americans exposed to criminals. Many on the right oppose decriminalizing drugs, prostitution, and gambling without factoring in the costs of the zones of anarchy that, by their own free-market logic, are inevitably spawned by prohibition policies. When demand for a commodity is high, suppliers will materialize, and if they cannot protect their property rights by calling the police, they will do so with a violent culture of honor. (This is distinct from the moral argument that our current drug policies incarcerate multitudes of nonviolent people.) Schoolchildren are currently fed the disinformation that Native Americans and other peoples in pre-state societies were inherently peaceable, leaving them uncomprehending, indeed contemptuous, of one of our species’ greatest inventions, democratic government and the rule of law.
Of course, Pinker also acknowledges that government power has to have reasonable limits, qualifying his arguments with the important caveat that we can’t just allow government to use coercive force against anyone at any time, lest it become an even bigger problem than the ones it was meant to solve:
Where Hobbes fell short was in dealing with the problem of policing the police. In his view, civil war was such a calamity that any government — monarchy, aristocracy, or democracy — was preferable to it. He did not seem to appreciate that in practice a leviathan would not be an otherworldly sea monster but a human being or group of them, complete with the deadly sins of greed, mistrust, and honor. […] (This became the obsession of the heirs of Hobbes who framed the American Constitution.) Armed men are always a menace, so police who are not under tight democratic control can be a far worse calamity than the crime and feuding that go on without them. In the twentieth century, according to the political scientist R. J. Rummel in Death by Government, 170 million people were killed by their own governments. Nor is murder-by-government a relic of the tyrannies of the middle of the century. The World Conflict List for the year 2000 reported:
The stupidest conflict in this year’s count is Cameroon. Early in the year, Cameroon was experiencing widespread problems with violent crime. The government responded to this crisis by creating and arming militias and paramilitary groups to stamp out the crime extrajudicially. Now, while violent crime has fallen, the militias and paramilitaries have created far more chaos and death than crime ever would have. Indeed, as the year wore on mass graves were discovered that were tied to the paramilitary groups.
The pattern is familiar from other regions of the world (including our own) and shows that civil libertarians’ concern about abusive police practices is an indispensable counterweight to the monopoly on violence we grant the state.
This is absolutely right; as I hope I made clear in my last post, having a government that’s completely unconstrained in its reach and power is one of the biggest mistakes a civilization can make. Having said that, though, it’s also important to recognize that just because it’s possible to have too much of something doesn’t mean that we shouldn’t have any amount of it at all. If that were the case, then we could just as easily point to endless examples of people who’ve been harmed by, say, technology – victims of car crashes and factory accidents and aerial bombings and so forth – and conclude that technology is therefore evil and we shouldn’t have any form of it whatsoever. Obviously, that would be a misguided conclusion to leap to, since the benefits of using technology well significantly outweigh the negative effects of using it badly. The fact that technology can have negative effects along with positive ones doesn’t mean that we should abandon its use entirely; it just means that we should keep working to make it better, safer, and more effective. And the same is true of government. If we see instances of bad governments producing bad results, that doesn’t mean that the solution is to stop having any kind of government at all; it just means that we should work to make those governments better. To quote John Atcheson: “The solution to bad government is good government, not no government.” And if we allow ourselves to forget that principle – if we buy into the anti-statist idea that political institutions aren’t really necessary at all – we set ourselves up for disaster, not just in terms of outright violence in the streets but in terms of practically every other kind of social and civic dysfunction we might imagine. Again, this isn’t just hypothetical speculation; it’s something we can see for ourselves in all the real-world cases where political institutions have collapsed. As Francis Fukuyama explains:
The kinds of minimal or no-government societies envisioned by dreamers of the Left and Right are not fantasies; they actually exist in the contemporary developing world. Many parts of sub-Saharan Africa are a libertarian’s paradise. The region as a whole is a low-tax utopia, with governments often unable to collect more than about 10 percent of GDP in taxes, compared to more than 30 percent in the United States and 50 percent in parts of Europe. Rather than unleashing entrepreneurship, this low rate of taxation means that basic public services like health, education, and pothole filling are starved of funding. The physical infrastructure on which a modern economy rests, like roads, court systems, and police, are missing. In Somalia, where a strong central government has not existed since the late 1980s, ordinary individuals may own not just assault rifles but also rocket-propelled grenades, antiaircraft missiles, and tanks. People are free to protect their own families, and indeed are forced to do so. Nigeria has a film industry that produces as many titles as India’s famed Bollywood, but films have to earn a quick return because the government is incapable of guaranteeing intellectual property rights and preventing products from being copied illegally.
The degree to which people in developed countries take political institutions for granted was very much evident in the way that the United States planned, or failed to plan, for the aftermath of its 2003 invasion of Iraq. The U.S. administration seemed to think that democracy and a market economy were default conditions to which the country would automatically revert once Saddam Hussein’s dictatorship was removed, and seemed genuinely surprised when the Iraqi state itself collapsed in an orgy of looting and civil conflict. U.S. purposes have been similarly stymied in Afghanistan, where ten years of effort and the investment of hundreds of billions of dollars have not produced a stable, legitimate Afghan state.
Political institutions are necessary and cannot be taken for granted. A market economy and high levels of wealth don’t magically appear when you “get government out of the way”; they rest on a hidden institutional foundation of property rights, rule of law, and basic political order. A free market, a vigorous civil society, the spontaneous “wisdom of crowds” are all important components of a working democracy, but none can ultimately replace the functions of a strong, hierarchical government. There has been a broad recognition among economists in recent years that “institutions matter”: poor countries are poor not because they lack resources, but because they lack effective political institutions.
IX.
The truth is, as much as we might chafe at the idea of being compelled to do something we wouldn’t otherwise choose to do, having an overarching authority that’s capable of requiring everybody to follow certain rules can make us all significantly better off in the long run. As counterintuitive as it might sound, we can benefit greatly from voluntarily accepting (on a meta level) the possibility of some involuntary coercion (at the object level).
So just to take an example that’s unrelated to government, consider this one described by James Surowiecki:
Back in the nineteen-seventies, an economist named Thomas Schelling, who later won the Nobel Prize, noticed something peculiar about the N.H.L. At the time, players were allowed, but not required, to wear helmets, and most players chose to go helmet-less, despite the risk of severe head trauma. But when they were asked in secret ballots most players also said that the league should require them to wear helmets. The reason for this conflict, Schelling explained, was that not wearing a helmet conferred a slight advantage on the ice; crucially, it gave the player better peripheral vision, and it also made him look fearless. The players wanted to have their heads protected, but as individuals they couldn’t afford to jeopardize their effectiveness on the ice. Making helmets compulsory eliminated the dilemma: the players could protect their heads without suffering a competitive disadvantage. Without the rule, the players’ individually rational decisions added up to a collectively irrational result. With the rule, the outcome was closer to what players really wanted.
Dilemmas like this are known as “collective action problems” – situations where (in the absence of any higher authority) each individual has an incentive to act in a particular way for their own self-interest, but then when they all act in accordance with that incentive, it ends up producing a group-wide outcome that’s not in anybody’s interest. And they can crop up in all kinds of different areas of life, not just in artificially constructed situations like hockey games. As Pinker puts it:
[These are] actions that make sense to the individual choosing them but are costly to society when everyone chooses them. Examples include overfishing a harbor, overgrazing a commons, commuting on a bumper-to-bumper freeway, or buying a sport utility vehicle to protect oneself in a collision because everyone else is driving a sport utility vehicle.
It’s worth explaining each of these examples in a little more depth – so here’s Peter Singer, for starters, elaborating on the traffic example:
Suppose I live in the suburbs and work in the city. I could drive my car to work, or take the bus. I prefer not to wait around for the bus, and so I take my car. Fifty thousand other people living in my suburb face the same choice and make the same decision. The road to town is choked with cars. It takes each of us an hour to travel ten miles.
In this situation, according to the [negative] conception of freedom, we have all chosen freely. No one deliberately interfered with our choices. Yet the outcome is something none of us want. If we all went by bus, the roads would be empty and we could cover the distance in twenty minutes. Even with the inconvenience of waiting at the bus stop, we would all prefer that. We are, of course, free to alter our choice of transportation, but what can we do? While so many cars slow the bus down, why should any individual choose differently? The [negative] conception of freedom has led to a paradox: we have each chosen in our own interests, but the result is in no one’s interest. Individual rationality, collective irrationality.
The solution, obviously, is for us all to get together and make a collective decision. As individuals we are unable to bring about the situation we desire. Together we can achieve what we want, subject only to the physical limits of our resources and technology. In this example, we can all agree to use the bus.
[But now let’s imagine a group of people trying to voluntarily coordinate their commutes in this way.] They hold a meeting. All agree that it would be better to leave their cars at home. They part, rejoicing at the prospect of no more traffic jams. But in the privacy of their own homes, some reason to themselves as follows: ‘If everyone else is going to take the bus tomorrow, the roads will be empty. So I’ll take my car. Then I’ll have the convenience of door-to-door transportation and the advantage of a traffic-free run which will get me to work in less time than if I took the bus.’ From a self-interested point of view this reasoning is correct. As long as most take the bus, a few others can obtain the benefits of the socially minded behaviour of the majority, without giving up anything themselves.
What should the majority do about this? Should they leave it up to the individual conscience to decide whether to abuse the system in this manner? If they do, there is a risk that the system will break down – once a few take their own cars, others will soon follow, for no one likes to be taken advantage of. Or should the majority attempt to coerce the minority into taking the bus? That is the easy way out. It can be done in the name of freedom for all; but it may lead to freedom for none.
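To see how tight the trap is, it can help to put some numbers on Singer’s scenario. Here’s a minimal sketch in Python – the travel times are invented figures of my own, chosen only to mirror the shape of his example – showing that each commuter’s drive beats her bus ride no matter what everyone else does, even though the all-drive outcome is the worst one for everybody:

```python
# Toy model of Singer's commuting dilemma. All of the numbers here are invented
# purely for illustration; only the qualitative shape of the situation matters.

COMMUTERS = 50_000

def car_minutes(drivers):
    """Door-to-door car commute: 20 minutes on empty roads, 60 in a full jam."""
    return 20 + 40 * drivers / COMMUTERS

def bus_minutes(drivers):
    """Bus commute: a 10-minute wait plus a ride slowed by the same car traffic."""
    return 10 + 20 + 40 * drivers / COMMUTERS

for drivers in (0, 25_000, 50_000):
    print(f"{drivers:>6} drivers: car {car_minutes(drivers):.0f} min, "
          f"bus {bus_minutes(drivers):.0f} min")

# Output:
#      0 drivers: car 20 min, bus 30 min
#  25000 drivers: car 40 min, bus 50 min
#  50000 drivers: car 60 min, bus 70 min
```

Driving is faster in every row, so the equilibrium is everyone stuck in the sixty-minute jam, even though everyone would prefer the thirty-minute all-bus world – which is exactly Singer’s point about individual rationality producing collective irrationality.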
Singer is right to point out, in the last sentence of that passage, that in some cases the collective benefits of using coercion, while appreciable, aren’t worth the costs in terms of individual freedom. This particular commuting example, for instance, despite being a good hypothetical for illustrative purposes, isn’t necessarily one in which government coercion would automatically be justifiable. Here in the real world, we’ve decided not to impose such constraints on people’s ability to drive their own individual cars – and for perfectly valid reasons. As a result, we unfortunately have to endure horrible traffic jams fairly often – but that’s a tradeoff we’ve chosen to accept.
Having said that, though, there are plenty of cases in which the tradeoff does more clearly favor using government coercion to resolve the problem – with the most obvious of these cases, of course, being all the laws against outright theft and violence and so on. People might find that they’re able to benefit on an individual basis from being able to abuse and steal from others – but if they all acted in accordance with this individual incentive, society as a whole would be plunged into chaos; so therefore we’ve opted to have government require everyone to coexist peacefully, even if it means using coercion where necessary. As Fukuyama explains in his discussion of the original formation of the state in China:
In Leviathan, Thomas Hobbes argues that the sovereign derives his legitimacy from an unwritten social contract by which each individual gives up his natural liberty to do as he pleases in order to secure his own natural right to life, which would otherwise be threatened by the “warre of every man against every man.” If we substitute “group” for “man,” it is clear that many premodern societies operated on the basis of such a social contract, China’s included. Human beings were willing to give up a huge amount of freedom and delegate a corresponding amount of discretion to an emperor who would rule them and guarantee social peace. They found this preferable to a state of war, which they had experienced repeatedly in their history, when powerful oligarchs fought each other and exploited their own people without restraint.
Preventing rampant violence and subjugation in this way is a pretty elementary use of government authority. But in addition to such basic cases, there are also plenty of other kinds of collective action problems which don’t involve any kind of overtly belligerent behavior on anyone’s part, but can still benefit from government intervention to bring everyone together under a common set of rules. One of the best-known of these is what’s called “the tragedy of the commons,” which includes the aforementioned examples of overgrazing and overfishing. In such situations, where there’s some kind of renewable common-pool good like a natural resource (e.g. grazing pastures), everyone has an individual incentive to use up as much of it for themselves as they can – but then, when they all do that, it ends up depleting the natural resource beyond the point of recovery, leaving them all worse off in the end. If they could all commit to limiting their usage of the natural resource, they could use it in a more sustainable way, and could continue reaping the benefits indefinitely – but none of them has any individual incentive to do so unless it means everyone else will do so as well. So the only two ways of ensuring that the resource isn’t depleted are to either take it out of the commons entirely (i.e. privatize it and put it in the hands of specifically designated owners) or to have an overarching authority impose enforceable limits on how much of it each individual may use at a time (i.e. use government coercion) – or both. Wheelan breaks it down by comparing it to a classic prisoner’s dilemma:
Some of the most interesting problems in economics involve situations in which rational individuals acting in their own best interest do things that make themselves worse off. Yet their behavior is entirely logical.
The classic example is the prisoner’s dilemma, a somewhat contrived but highly powerful model of human behavior. The basic idea is that two men have been arrested on suspicion of murder. They are immediately separated so that they can be interrogated without communicating with one another. The case against them is not terribly strong, and the police are looking for a confession. Indeed, the authorities are willing to offer a deal if one of the men rats out the other as the trigger man.
If neither man confesses, the police will charge them both with illegal possession of a weapon, which carries a five-year jail sentence. If both of them confess, then each will receive a twenty-five-year murder sentence. If one man rats out the other, then the snitch will receive a light three-year sentence as an accomplice and his partner will get life in prison. What happens?
The men are best off collectively if they keep their mouths shut. But that’s not what they do. Each of them starts thinking. Prisoner A figures that if his partner keeps his mouth shut, then he can get the light three-year sentence by ratting him out. Then it dawns on him: His partner is almost certainly thinking the same thing—in which case he had better confess to avoid having the whole crime pinned on himself. Indeed, his best strategy is to confess regardless of what his partner does: It either gets him the three-year sentence (if his partner stays quiet) or saves him from getting life in prison (if his partner talks).
Of course, Prisoner B has the same incentives. They both confess, and they both get twenty-five years in prison when they might have served only five. Yet neither prisoner has done anything irrational.
The amazing thing about this model is that it offers great insight into real-world situations in which unfettered self-interest leads to poor outcomes. It is particularly applicable to the way in which renewable natural resources, such as fisheries, are exploited when many individuals are drawing from a common resource. For example, if Atlantic swordfish are harvested wisely, such as by limiting the number of fish caught each season, then the swordfish population will remain stable or even grow, providing a living for fishermen indefinitely. But no one “owns” the world’s swordfish stocks, making it difficult to police who catches what. As a result, independent fishing boats start to act a lot like our prisoners under interrogation. They can either limit their catch in the name of conservation, or they can take as many fish as possible. What happens?
Exactly what the prisoner’s dilemma predicts: The fishermen do not trust each other well enough to coordinate an outcome that would make them all better off. Rhode Island fisherman John Sorlien told the New York Times in a story on dwindling fish stocks, “Right now, my only incentive is to go out and kill as many fish as I can. I have no incentive to conserve the fishery, because any fish I leave is just going to be picked up by the next guy.” So the world’s stocks of tuna, cod, swordfish, and lobster are fished away. Meanwhile, politicians often make the situation worse by bailing out struggling fishermen with assorted subsidies. This merely keeps boats in the water when some fishermen might otherwise quit.
Sometimes individuals need to be saved from themselves. One nice example of this is the lobstering community of Port Lincoln on Australia’s southern coast. In the 1960s, the community set a limit on the number of traps that could be set and then sold licenses for those traps. Since then, any newcomer could enter the business only by buying a license from another lobsterman. This limit on the overall catch has allowed the lobster population to thrive. Ironically, Port Lincoln lobstermen catch more than their American colleagues while working less. Meanwhile, a license purchased in 1984 for $2,000 now fetches about $35,000. As Aussie lobsterman Daryl Spencer told the Times, “Why hurt the fishery? It’s my retirement fund. No one’s going to pay me $35,000 a pot if there are no lobsters left. If I rape and pillage the fishery now, in ten years my licenses won’t be worth anything.” Mr. Spencer is not smarter or more altruistic than his fishing colleagues around the world; he just has different incentives. Oddly, some environmental groups oppose these kinds of licensed quotas because they “privatize” a public resource. They also fear that the licenses will be bought up by large corporations, driving small fishermen out of business.
So far, the evidence strongly suggests that creating private property rights—giving individual fishermen the right to a certain catch, including the option of selling that right—is the most effective tool in the face of collapsing commercial fisheries. A 2008 study of the world’s commercial fisheries published in Science found that individual transferable quotas can stop or even reverse the collapse of fishing stocks. Fisheries managed with transferable quotas were half as likely to collapse as fisheries that use traditional methods.
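Since the whole argument turns on the structure of those payoffs, it may help to lay them out explicitly. Here’s a minimal sketch in Python using the sentences from Wheelan’s example (with “life in prison” stood in for by 99 years – an assumption of mine, purely so the options can be compared numerically):

```python
# Payoff table for Wheelan's prisoner's dilemma, in years of prison time.
# "Life in prison" is represented as 99 years purely so it can be compared.
SENTENCES = {                        # (A's choice, B's choice): (A's years, B's years)
    ("quiet",   "quiet"):   (5, 5),      # both charged with weapons possession
    ("quiet",   "confess"): (99, 3),     # A gets life, the snitch gets 3 years
    ("confess", "quiet"):   (3, 99),
    ("confess", "confess"): (25, 25),    # both get the 25-year murder sentence
}

def best_response_for_A(b_choice):
    """A's sentence-minimizing choice, given what B does."""
    return min(("quiet", "confess"),
               key=lambda a_choice: SENTENCES[(a_choice, b_choice)][0])

for b_choice in ("quiet", "confess"):
    print(f"If B chooses {b_choice!r}, A's best move is {best_response_for_A(b_choice)!r}")

# Output:
# If B chooses 'quiet', A's best move is 'confess'
# If B chooses 'confess', A's best move is 'confess'
# (3 years beats 5 in the first case; 25 years beats "life" in the second.)
```

Confessing is the better move for each prisoner no matter what the other one does, so both end up serving 25 years when mutual silence would have cost them only 5 – and that same payoff structure is what pushes each fishing boat to take every fish it can rather than leave any for next season.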
In this example, the tragedy of the commons is avoided thanks to the government allocating private property rights to individual fishermen, and then setting limits on how much each of them can catch at a time. In other such cases, like that of grazing land, the solution might be an even more straightforward form of privatization, with the government simply splitting up the land, allocating a piece of it to each person, and then letting them use it however they want, without any need for further regulation. In still other cases, though, this kind of privatization might have major drawbacks – and if those drawbacks are significant enough, the best solution might not be to privatize at all, but to just keep the resource as a common-pool good and have the government regulate its use. As Heath explains:
For a long time, social scientists were remarkably inattentive to the economic benefits of [common-pool goods]. For example, economists routinely engaged in blanket condemnation of “common property” arrangements, on the grounds that they generated collective action problems. The story of the “tragedy of the commons” is a good example. According to the received wisdom, the common-field system in England, in which all the peasants in a community were entitled to graze their animals in a shared pasture, led to systematic overgrazing. Since most of the cost of grazing one’s own animals on the pasture was borne by other users, the private benefit of grazing always exceeded the private cost, and so everyone kept grazing until the field was destroyed. (The same dynamic has led to the destruction of many common resources, such as the cod fishery on the Grand Banks off the coast of northeastern North America.) The solution typically proposed is, when feasible, to divide up the common property into individual holdings and then limit individuals to the use of their own plots. If the only place to graze your animals is in your own backyard, then the incentive to overgraze disappears, since the full cost is now “internalized,” or borne by the individual making the decision.
There is a lot to be said for this analysis. However, it is often suggested that the transition to a private-property regime is an unqualified Pareto improvement. In many cases, particularly agricultural ones, this is not the case, because enclosure also has the effect of “unpooling” various types of risk. If some portion of the land is afflicted by a blight or ruined by hail or eaten by insects, a common-field system automatically divides the loss up among all members of the community. With individual plots of land, on the other hand, some individuals may find themselves with nothing, while others are completely unaffected. Thus private property may increase productivity and improve the incentives for land management, but it will also increase the variability of returns. Whether or not this makes it worthwhile depends upon a large number of factors, particularly environmental ones.
This is not to say that sharing and mutual aid are never based upon purely altruistic sentiment. It is, however, easy to underestimate the extent to which they are based upon rational self-interest, narrowly conceived. Many of the social movements and institutional arrangements that we traditionally think of as “socialist” have a lot more to do with risk-pooling than with equality or distributive justice. And in cases where these movements succeed in generating permanent institutional reform, it is usually because the arrangements they promote involve win-win efficiency gains, not win-lose redistributions of income.
It is interesting to reconsider the history of nineteenth- and twentieth-century “class struggle” in this light. Canadians sometimes express puzzlement over the fact that Saskatchewan, which is traditionally the most socialist province in the country, sits right next to Alberta, which is traditionally the most conservative. Yet this is hardly so mysterious when one thinks of socialism in terms of social insurance. The left-right political divide coincides near-perfectly with the shift from an economy based almost entirely upon farming, in Saskatchewan, to one based increasingly on ranching (and later oil), in Alberta. Since the growing season on the prairies is barely long enough to support one crop, and rainfall is both low and highly variable, there is a long-standing tradition of mutual aid among farmers. The socialist movement prospered in Saskatchewan primarily because of the risk-pooling arrangements that it helped to create, first in response to price volatility and crop failure, and eventually in response to other shared risks (most important, in health care, through the introduction of “socialized” government insurance). Ranchers, on the other hand, are not affected by very many “exogenous” risks, such as bad weather. Theft is usually their biggest issue, leading to a political emphasis upon property rights, rather than upon solidarity and mutual aid. That’s why a rancher is more likely to shoot you if he finds you on his land, whereas a farmer might invite you to stay for dinner.
The advantages of pooling risk in this way are often considerable, and can provide a strong reason to want to maintain a common-property arrangement rather than opting for complete privatization. In addition to that, though, another relevant factor favoring the former over the latter is that sometimes privatization just isn’t possible at all, even in theory. As Alexander writes in his broad overview of coordination problems, such a situation might arise in a context like, for instance, aquaculture, in which fish are farmed in a lake (as opposed to Wheelan’s example of catching them in the wild):
[Q]: What are coordination problems?
Coordination problems are cases in which everyone agrees that a certain action would be best, but the free market cannot coordinate them into taking that action.
As a thought experiment, let’s consider aquaculture (fish farming) in a lake. Imagine a lake with a thousand identical fish farms owned by a thousand competing companies. Each fish farm earns a profit of $1000/month. For a while, all is well.
But each fish farm produces waste, which fouls the water in the lake. Let’s say each fish farm produces enough pollution to lower productivity in the lake by $1/month.
A thousand fish farms produce enough waste to lower productivity by $1000/month, meaning none of the fish farms are making any money. Capitalism to the rescue: someone invents a complex filtering system that removes waste products. It costs $300/month to operate. All fish farms voluntarily install it, the pollution ends, and the fish farms are now making a profit of $700/month – still a respectable sum.
But one farmer (let’s call him Steve) gets tired of spending the money to operate his filter. Now one fish farm worth of waste is polluting the lake, lowering productivity by $1. Steve earns $999 profit, and everyone else earns $699 profit.
Everyone else sees Steve is much more profitable than they are, because he’s not spending the maintenance costs on his filter. They disconnect their filters too.
Once four hundred people disconnect their filters, Steve is earning $600/month – less than he would be if he and everyone else had kept their filters on! And the poor virtuous filter users are only making $300. Steve goes around to everyone, saying “Wait! We all need to make a voluntary pact to use filters! Otherwise, everyone’s productivity goes down.”
Everyone agrees with him, and they all sign the Filter Pact, except one person who is sort of a jerk. Let’s call him Mike. Now everyone is back using filters again, except Mike. Mike earns $999/month, and everyone else earns $699/month. Slowly, people start thinking they too should be getting big bucks like Mike, and disconnect their filter for $300 extra profit…
A self-interested person never has any incentive to use a filter. A self-interested person has some incentive to sign a pact to make everyone use a filter, but in many cases has a stronger incentive to wait for everyone else to sign such a pact but opt out himself. This can lead to an undesirable equilibrium in which no one will sign such a pact.
The most profitable solution to this problem is for Steve to declare himself King of the Lake and threaten to initiate force against anyone who doesn’t use a filter. This regulatory solution leads to greater total productivity for the thousand fish farms than a free market could.
The classic libertarian solution to this problem is to try to find a way to privatize the shared resource (in this case, the lake). I intentionally chose aquaculture for this example because privatization doesn’t work. Even after the entire lake has been divided into parcels and sold to private landowners (waterowners?) the problem remains, since waste will spread from one parcel to another regardless of property boundaries.
[Q]: Even without anyone declaring himself King of the Lake, the fish farmers would voluntarily agree to abide by the pact that benefits everyone.
Empirically, no. This situation happens with wild fisheries all the time. There’s some population of cod or salmon or something which will be self-sustaining as long as it’s not overfished. Fishermen come in and catch as many fish as they can, overfishing it. Environmentalists warn that the fishery is going to collapse. Fishermen find this worrying, but none of them want to fish less because then their competitors will just take up the slack. Then the fishery collapses and everyone goes out of business. The most famous example is the Collapse of the Northern Cod Fishery, but there are many others in various oceans, lakes, and rivers.
If not for resistance to government regulation, the Canadian government could have set strict fishing quotas, and companies could still be profitably fishing the area today. Other fisheries that do have government-imposed quotas are much more successful.
[Q]: I bet [extremely complex privatization scheme that takes into account the ability of cod to move across property boundaries and the migration patterns of cod and so on] could have saved the Atlantic cod too.
Maybe, but left to their own devices, cod fishermen never implemented or recommended that scheme. If we ban all government regulation in the environment, that won’t make fishermen suddenly start implementing complex privatization schemes that they’ve never implemented before. It will just make fishermen keep doing what they’re doing while tying the hands of the one organization that has a track record of actually solving this sort of problem in the real world.
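Alexander’s arithmetic can also be checked by simulating the lake directly. The sketch below (Python, using his numbers: 1,000 farms, $1,000/month base profit, a $300/month filter, and each unfiltered farm costing every farm $1/month) simply lets each farmer keep switching to whichever option pays better given what everyone else is doing:

```python
# Best-response dynamics for the lake example, using the figures quoted above.
N_FARMS, BASE_PROFIT, FILTER_COST, WASTE_COST = 1000, 1000, 300, 1

def profit(uses_filter, num_polluting_farms):
    """One farm's monthly profit, given how many farms in total run no filter."""
    return (BASE_PROFIT
            - num_polluting_farms * WASTE_COST
            - (FILTER_COST if uses_filter else 0))

filters = [True] * N_FARMS          # start from the cooperative, all-filter outcome
changed = True
while changed:                      # keep going until no farmer wants to switch
    changed = False
    for i in range(N_FARMS):
        others_polluting = sum(1 for j in range(N_FARMS) if j != i and not filters[j])
        with_filter    = profit(True,  others_polluting)       # my waste is filtered out
        without_filter = profit(False, others_polluting + 1)   # my waste hits everyone too
        wants_filter = with_filter >= without_filter
        if wants_filter != filters[i]:
            filters[i], changed = wants_filter, True

print(sum(filters), "farms still filtering")        # -> 0
print("profit per farm:", profit(False, N_FARMS))   # -> 0 (versus 700 if all had filtered)
```

No farmer behaves irrationally at any step, yet the process slides from $700/month each down to nothing – which is why Steve’s Filter Pact can’t hold without something to enforce it.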
Alexander continues his discussion by showing how similar considerations can apply to even bigger coordination problems as well, like global warming, business regulation, and charitable spending:
[Q]: How do coordination problems justify environmental regulations?
Consider the process of trying to stop global warming. If everyone believes in global warming and wants to stop it, it’s still not in any one person’s self-interest to be more environmentally conscious. After all, that would make a major impact on her quality of life, but a negligible difference to overall worldwide temperatures. If everyone acts only in their self-interest, then no one will act against global warming, even though stopping global warming is in everyone’s self-interest. However, everyone would support the institution of a government that uses force to make everyone more environmentally conscious.
Notice how well this explains reality. The government of every major country has publicly declared that they think solving global warming is a high priority, but every time they meet in Kyoto or Copenhagen or Bangkok for one of their big conferences, the developed countries would rather the developing countries shoulder the burden, the developing countries would rather the developed countries do the hard work, and so nothing ever gets done.
The same applies mutatis mutandis to other environmental issues like the ozone layer, recycling, and anything else where one person cannot make a major difference but many people acting together can.
[Q]: How do coordination problems justify regulation of ethical business practices?
The normal libertarian belief is that it is unnecessary for government to regulate ethical business practices. After all, if people object to something a business is doing, they will boycott that business, either incentivizing the business to change its ways, or driving them into well-deserved bankruptcy. And if people don’t object, then there’s no problem and the government shouldn’t intervene.
A close consideration of coordination problems demolishes this argument. Let’s say Wanda’s Widgets has one million customers. Each customer pays it $100 per year, for a total income of $100 million. Each customer prefers Wanda to her competitor Wayland, who charges $150 for widgets of equal quality. Now let’s say Wanda’s Widgets does some unspeakably horrible act which makes it $10 million per year, but offends every one of its million customers.
There is no incentive for a single customer to boycott Wanda’s Widgets. After all, that customer’s boycott will cost the customer $50 (she will have to switch to Wayland) and make an insignificant difference to Wanda (who is still earning $99,999,900 of her original hundred million). The customer takes significant inconvenience, and Wanda neither cares nor stops doing her unspeakably horrible act (after all, it’s giving her $10 million per year, and only losing her $100).
The only reason it would be in a customer’s interests to boycott is if she believed over a hundred thousand other customers would join her. In that case, the boycott would be costing Wanda more than the $10 million she gains from her unspeakably horrible act, and it’s now in her self-interest to stop committing the act. However, unless each boycotter believes 99,999 others will join her, she is inconveniencing herself for no benefit.
Furthermore, if a customer offended by Wanda’s actions believes 100,000 others will boycott Wanda, then it’s in the customer’s self-interest to “defect” from the boycott and buy Wanda’s products. After all, the customer will lose money if she buys Wayland’s more expensive widgets, and this is unnecessary – the 100,000 other boycotters will change Wanda’s mind with or without her participation.
This suggests a “market failure” of boycotts, which seems confirmed by experience. We know that, despite many companies doing very controversial things, there have been very few successful boycotts. Indeed, few boycotts, successful or otherwise, ever make the news, and the number of successful boycotts seems much less than the amount of outrage expressed at companies’ actions.
The existence of government regulation solves this problem nicely. If >51% of people disagree with Wanda’s unspeakably horrible act, they don’t need to waste time and money guessing how many of them will join in a boycott, and they don’t need to worry about being unable to conscript enough defectors to reach critical mass. They simply vote to pass a law banning the action.
[Q]: How do coordination problems justify government spending on charitable causes?
Because failure to donate to a charitable cause might also be because of a coordination problem.
How many people want to end world hunger? I’ve never yet met someone who would answer with a “not me!”, but maybe some of those people are just trying to look good in front of other people, so let’s make a conservative estimate of 50%.
There’s a lot of dispute over what it would mean to “end world hunger”, all the way from “buy and ship food every day to everyone who is hungry that day” all the way to “create sustainable infrastructure and economic development such that everyone naturally produces enough food or money”. There are various estimates about how much these different definitions would cost, all the way from “about $15 billion a year” to “about $200 billion a year” – permanently in the case of shipping food, and for a decade or two in the case of promoting development.
Even if we take the highest possible estimate, it’s still well below what you would make if 50% of the population of the world donated $1/week to the cause. Now, certainly there are some very poor people in the world who couldn’t donate $1/week, but there are also some very rich people who could no doubt donate much, much more.
So we have two possibilities. Either the majority of people don’t care enough about world hunger to give a dollar a week to end it, or something else is going on.
That something else is a coordination problem. No one expects anyone else to donate a dollar a week, so they don’t either. And although somebody could shout very loudly “Hey, let’s all donate $1 a week to fight world hunger!” no one would expect anyone else to listen to that person, so they wouldn’t either.
When the government levies tax money on everyone in the country and then donates it to a charitable cause, it is often because everyone in the country supports that charitable cause but a private attempt to show that support would fall victim to coordination problems.
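The arithmetic behind the Wanda’s Widgets boycott is worth pausing on, because it’s where the argument bites. A quick sketch (Python again, using Alexander’s figures from the passage above) shows both how large the boycott has to be before it changes Wanda’s calculus, and why an individual customer who expects the boycott to succeed still has no reason to join it:

```python
# The Wanda's Widgets boycott, using the figures from the example quoted above.
CUSTOMERS     = 1_000_000
PRICE_WANDA   = 100          # what each customer pays Wanda per year
PRICE_WAYLAND = 150          # the equally good but pricier competitor
ACT_PROFIT    = 10_000_000   # what the unspeakably horrible act earns Wanda per year

# At $100/year apiece, it takes this many defecting customers for the lost revenue
# to equal the act's $10 million (and one more than this to actually exceed it):
critical_mass = ACT_PROFIT // PRICE_WANDA
print("customers needed to match the act's profit:", critical_mass)   # -> 100000

# What joining the boycott costs any single customer, whether it succeeds or fails:
cost_of_joining = PRICE_WAYLAND - PRICE_WANDA
print(f"personal cost of switching to Wayland: ${cost_of_joining}")   # -> $50

# If a customer expects 100,000 *others* to boycott, the act stops with or without
# her, so joining buys her nothing extra and costs her $50; staying put dominates.
```

A hundred thousand people each need to believe a hundred thousand others will show up, and even then each of them does better by quietly staying home – which is why, as Alexander notes, boycotts of this kind almost never reach critical mass.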
His list of coordination problems goes on, including the whole discussion of labor coordination quoted earlier. What all these examples demonstrate is that there are some things we just can’t do very efficiently if we’re all acting as isolated individuals, compared to if we’re all acting as a collective unit through the coordination mechanism of government. And in fact, there are some things that we can only really do as a collective unit – including a lot of the things that are most crucial to our well-being. It’s just one more way in which the coercive power of government, despite being restrictive in one sense, can nevertheless make us freer in a broader sense by helping us fulfill our interests in a way that just wouldn’t be possible without it.
X.
Indeed, government isn’t just valuable because of its ability to set rules and regulations for the private sector; in some cases, the private sector isn’t capable of adequately providing certain important goods and services at all – and in those cases, having a government that can directly provide those goods and services itself is vital. As Stiglitz puts it:
[It’s often] simply assumed that markets arise quickly to meet every need, when in fact, many government activities arise because markets have failed to provide essential services.
To be sure, the market is an extraordinary mechanism for supplying ordinary consumer goods like electronics, groceries, haircuts, and so on. But there are some goods which, for a variety of economic reasons (including the kinds of collective action problems we’ve been discussing), can only really be provided by government if they’re to be provided at all. As Paul Krugman writes:
Like all advanced nations, America mainly relies on private markets and private initiatives to provide its citizens with the things they want and need, and hardly anyone in our political discourse would propose changing that. The days when it sounded like a good idea to have the government directly run large parts of the economy are long past.
Yet we also know that some things more or less must be done by government. Every economics textbook talks about “public goods” like national defense and air traffic control that can’t be made available to anyone without being made available to everyone, and which profit-seeking firms, therefore, have no incentive to provide.
Do you drive on a highway to get to work? If your home catches fire, do you expect someone to answer when you dial 911? You might not think of roads and fire departments as goods, but economists do. There are some goods that many of us rely on every day, but it is difficult to imagine buying the quantities we desire from a group of private firms competing in the market. Some classic examples are national defense, funding basic research and development, roads and highways, and police and fire protection. These fall into the category of what economists call “public goods.”
Public goods share two key characteristics: they are nonrivalrous and nonexcludable. “Nonrivalrous” means that the good itself is not diminished as more people use it. When you have a private good, such as a slice of pizza, if Max eats the pizza, Michelle can’t. Compare that to, say, national defense. Max being protected by the armed forces doesn’t diminish the amount of protection Michelle receives. “Nonexcludable” means a seller cannot exclude those who did not pay from using the good. That slice of pizza is excludable; you don’t buy it, you don’t get to eat it. But if someone doesn’t wish to be protected by the armed forces, there’s no realistic way to exclude them.
It’s important to remember that this term “public good” has a very specific meaning to economists. It does not refer to everything that is both provided by government and (arguably) good. It’s also important to recognize that categorizing something as not a public good does not mean that there is no economic argument for public policy in that area. Many of the things we call public goods aren’t perfectly nonrivalrous or perfectly nonexcludable, but they are close enough to make it difficult for a private market to provide them. For example:
Public health measures, such as vaccinations, are nonrivalrous because a rise in population doesn’t diminish the benefit of reducing infectious disease, and they are nonexcludable because the benefit extends to the whole population.
A good road system offers society all sorts of benefits. Barring toll roads, it’s hard to exclude people from using it. Barring extremes of traffic congestion, my use of a highway doesn’t stop you from using it, too.
Scientific research—in fact, ideas in general—are nonrivalrous. As Thomas Jefferson (1813) put it, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”
Many of the benefits of education aren’t just to the person who is educated; there are benefits to all of us from living in a society in which the overwhelming majority of adults can read and understand basic mathematical calculations.
When some people receive the benefits from public goods without paying their fair share of the costs, economists call it a free-rider problem. Imagine the difficulties that could arise if you told people you wanted them to pay for their roads the same way they pay for their groceries. People know that, in the industrialized world at least, odds are the road will be built whether they agree to pay for it or not, and that the government can’t or won’t stop them from using the road once it’s built. Guided by self-interest, most people would prefer that everyone else living nearby chip in for road construction, but that they themselves take a free ride. Since roads are mostly nonexcludable and nonrivalrous (with the exception of toll roads and congested roads), people may decide to be free riders. But if everyone makes this self-interested decision, no road gets built and no one benefits.
The free-rider problem is important to economic analysis. For the most part, economics argues that producers and consumers following their own self-interest offer many benefits for society. But in a situation of public goods, if everyone follows individual narrow self-interest, the result is actually worse for everyone.
How can public goods be provided if a self-interested market works against them? A variety of social mechanisms can help solve the problem. For example, how do public radio and public television survive? They typically use a combination of social pressure (pledge drives, mass mailings) and incentives (thank-you gifts for your pledge, member benefits and events, special programming) to persuade you to contribute. They’re trying to overcome the free-rider problem with a mixture of public recognition for contributors and mild shaming for those who don’t contribute.
The government uses taxes to require citizens to pay for a public good—whether each individual citizen would want that quantity of that good or not. This applies to goods the government provides directly (such as a standing army or a court system) or indirectly, via private contractors (as with road and building construction). When we say that government supplies a public good, we’re actually saying that the government collects the money to pay for the good; it’s an open question whether public workers or the private sector provides the good.
Taxes overcome the free-rider problem by force: if you don’t pay your taxes for the public good, you go to jail. These benefits and costs are part of an implicit social contract. If members of society don’t find a way to come together to provide public goods—through either political or social mechanisms—they all lose out.
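To make the free-rider logic concrete, here’s one more small sketch – this time a textbook-style linear public-goods game rather than anything drawn from the passage above, with dollar figures that are entirely my own invention: ten neighbors can each put $60 toward a road, and every dollar contributed generates $1.50 of benefit spread evenly across all ten households:

```python
# A standard linear public-goods game. All numbers are invented for this sketch.
NEIGHBORS, CONTRIBUTION, MULTIPLIER = 10, 60, 1.5

def net_benefit(i_contribute, num_other_contributors):
    """One neighbor's net gain: her share of the road's value, minus what she put in."""
    pot = CONTRIBUTION * (num_other_contributors + (1 if i_contribute else 0))
    my_share = pot * MULTIPLIER / NEIGHBORS
    return my_share - (CONTRIBUTION if i_contribute else 0)

for others in (0, 5, 9):
    print(f"{others} others chip in: contribute -> {net_benefit(True, others):+.0f}, "
          f"free-ride -> {net_benefit(False, others):+.0f}")

# Output:
# 0 others chip in: contribute -> -51, free-ride -> +0
# 5 others chip in: contribute -> -6, free-ride -> +45
# 9 others chip in: contribute -> +30, free-ride -> +81
```

Free-riding pays better in every row, so voluntary funding unravels toward nothing; but a mandatory $60 levy on all ten households puts every one of them at the +30 outcome, which beats the zero they get when everyone free-rides. That, in miniature, is the case for letting taxes do what pledge drives can’t.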
We already discussed earlier how the collection of taxes can be justifiable; in the case of taxes like Georgist land value taxes and Pigovian externality taxes, in particular, it’s a straightforward matter of reimbursing people for costs that have been nonconsensually imposed on them by other private-sector actors. But one thing we didn’t really discuss was how to take the next step after those taxes are collected – i.e. how to redistribute them to the population, and whether that redistribution should take the form of direct cash transfers or some other form. A natural first thought might be that if these taxes are supposed to be reimbursing people for the costs they’ve nonconsensually incurred, it would only make sense to have them take the form of simple cash compensation – and in some cases, this may in fact be the right choice. But given everything we know about how public goods work, and how much more efficient it is to provide them via government than via the private sector, it’s not hard to see how a better general approach might be to have the government use at least some of the tax revenue it collects to provide essential public goods, rather than just handing the cash over to the population and leaving it to each individual citizen to find a way of obtaining them privately (which, of course, would result in those citizens either having to pay significantly more for them or not being able to obtain them at all).
This is an uncontroversial point among economists, including strongly pro-market ones. In fact, aside from true dyed-in-the-wool anarchists, even most anti-statists (i.e. those of a more run-of-the-mill libertarian bent) will grudgingly concede that government spending on some public goods like national defense may be justifiable. Still, these anti-statists will often insist that government spending on other public goods can’t be considered legitimate in the same way – that allowing the government to spend tax revenues on anything other than physical and legal protection for citizens and property (and maybe one or two other things) is an unacceptable violation of taxpayers’ rights. As Heath points out, though, this stance can’t really be justified on any solid theoretical basis, since the economics are fundamentally the same in either case:
Taxes, as we all know, are coercive. They restrict individual liberty. Yet the libertarian is forced to admit that taxation—at least at a certain level—is not only necessary, but desirable (in order to institute the foundations of the free market). How is one to justify overriding the freedom of the individual in this case? Coercion is justified here because it is necessary in order to resolve a collective action problem. Thus it is not freedom in general that is being denied by such taxes, only the freedom to free ride, which is not really a desirable form of freedom in the first place.
It should be noted that the level of taxation required to institute even the most bare-bones system of property rights and commercial exchange is not negligible. Anyone who has ever bought a piece of land and has been through the rigmarole of title searches and so forth knows that meticulous public records are essential for determining who owns what. Indeed, one of the reasons that “possession is nine-tenths of the law” is that it is prohibitively expensive to keep definitive records of who actually owns what. In their book The Cost of Rights: Why Liberty Depends on Taxes, Stephen Holmes and Cass Sunstein estimate that in 1997 the United States federal government spent $203 million on property-records management. That was just to keep track of things. Protecting and enforcing those rights cost the federal government more than $6.5 billion—and that does not include any of the costs associated with the federal justice system (more than $5 billion), much less “police protection and criminal corrections” in the nation as a whole ($73 billion in 1992).
We have a special term for these types of services when they are offered by government and paid for through taxation. They are called social programs. (They are also known as public goods, although this is a somewhat loose way of speaking.) […] Here we can see the fundamental problem with the libertarian or conservative vision of a minimal state. It has no principled basis: it is simply a list of social programs that people with certain personal preferences and animosities happen to favor. Once the libertarian makes the crucial concession—that it is legitimate for the state to impose taxes in order to provide goods and services that, because of collective action problems, would not otherwise be provided—it’s difficult to explain why there shouldn’t be other social programs as well, to resolve other sorts of collective action problems.
Conservatives, we are told, support government spending on “law and order,” national defense, maybe highways, and perhaps a few other things. But why just these programs? Why not public housing, public education, public health care, state pension plans, unemployment insurance, environmental legislation, and so on? The basic argument for all of these social programs is that state provision is necessary in order to resolve collective action problems. They are, in this respect, no different than the military or the criminal justice system. Of course, the details of these arguments are all controversial. The point is that the libertarian is now forced to consider arguments for each of these social programs on a case-by-case basis. Sweeping denunciations of government “interference” with the market or with individual liberty are no longer credible. The scope of state action and the appropriate level of taxation cannot be settled at the level of political ideology; they now depend upon the answer to empirical questions concerning the occurrence and severity of collective action problems and the effectiveness of government in resolving them.
In response to this, critics of government will often retort that although they admittedly might not have a good basis for categorically rejecting all government spending, they’re still justified in wanting to reject as much of it as possible, due to its inherent wastefulness. They’ll criticize government programs that they perceive as unacceptably inefficient, arguing that since no private-sector firm that failed to make a profit would ever survive, no government program that runs at a loss should be allowed to survive either. And to be fair, they’re right to point out that there certainly are some government programs that are unacceptably wasteful and ought to be eliminated. But saying that programs should be eliminated just because they’re unprofitable is a mistake – because in many cases, the whole reason they’re being done by government in the first place, and not by the private sector, is precisely that they wouldn’t be workable as for-profit ventures. As Alexander writes:
In cases where state-run corporations are unprofitable, this is often not due to some negative effect of being state-run, but because the corporation was put under state control precisely because it was something so unprofitable no private company would touch it, but still important enough that it had to be done. For example, the US Post Office has a legal mandate to ship affordable mail in a timely fashion to every single god-forsaken town in the United States; obviously it will be out-competed by a private company that can focus on the easiest and most profitable routes, but this does not speak against it. Amtrak exists despite passenger rail travel in the United States being fundamentally unprofitable, but within its limitations it has done a relatively good job: on-time rates better than that of commercial airlines, 80% customer satisfaction rate, and double-digit year-on-year passenger growth every year for the past decade.
If such services could be provided on a for-profit basis, the private sector would already be providing them. But the fact that this isn’t the case doesn’t mean that they aren’t worth having at all – just that the private market isn’t the perfect mechanism for providing literally every single good and service we might need. In cases where the market is unable to adequately provide important goods, then, we really are better off letting the government step in to fill the gap – and the economic reasons for this are perfectly understandable.
XI.
It’s also worth mentioning that the provision of public goods isn’t the only area in which private markets can operate less than perfectly. Market failure, as economists call it, can in fact happen for all kinds of reasons – monopolization, information asymmetry, cartelization, cost externalization, adverse selection, moral hazard, prohibitively high barriers to market entry… the list goes on. In all these cases, there are entirely valid economic reasons why it might be justifiable for government to get involved – if not to provide the goods and services directly itself, then at least to set corrective rules and regulations where necessary to keep the market failures from becoming too egregious.
So to take the issue of monopolization, for instance: We’ve already discussed why government is necessary to provide the basic physical and civil infrastructure for large-scale markets to exist in the first place – but in addition to this, it’s also vital for ensuring that once these markets do exist, they’ll remain competitive (and therefore efficient) instead of just being dominated by monopolies all the time. You can just imagine what it would be like if, say, there were only one place to get groceries, which controlled the entire market. Customers would have nowhere else to go to buy their basic necessities, so the grocery store would be able to charge practically whatever it wanted (and skimp on product quality as much as it wanted), and the customers would still have no choice but to pay. The entire rationale for having the goods and services supplied by the free market – keeping product quality high and prices low – would no longer apply, because there would no longer be a genuine competitive market; the grocery store would just be dictating terms for everyone unilaterally like a one-party state. For that reason, then, government intervention would be justifiable in order to avert such an outcome and ensure that free choice and competition were preserved.
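For anyone who wants to see the textbook version of this point, here’s a small sketch comparing how a single dominant seller prices against how a competitive market prices, using a simple linear demand curve. All the specific numbers are made-up assumptions:

```python
# Standard textbook comparison: competitive firms get pushed down to pricing at
# marginal cost, while a monopolist restricts quantity to push the price up
# (choosing output where marginal revenue equals marginal cost).
# Demand here is linear: price = A - B * quantity.

A, B = 10.0, 1.0     # demand curve parameters (illustrative assumptions)
MARGINAL_COST = 2.0  # cost of supplying one more unit of groceries

# Competitive market: price is driven down to marginal cost.
competitive_price = MARGINAL_COST
competitive_quantity = (A - competitive_price) / B

# Monopoly: pick quantity where marginal revenue (A - 2*B*Q) equals marginal cost.
monopoly_quantity = (A - MARGINAL_COST) / (2 * B)
monopoly_price = A - B * monopoly_quantity

print(competitive_price, competitive_quantity)  # 2.0, 8.0 -- cheap and plentiful
print(monopoly_price, monopoly_quantity)        # 6.0, 4.0 -- triple the price, half the output
```

The monopolist in this sketch isn’t producing anything more efficiently than anyone else; it’s simply withholding output it could profitably sell, because the resulting scarcity is what lets it charge more.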
But why would government have to be involved at all? Why couldn’t the problem instead just be solved by having other grocery stores enter the market and compete on their own? Well, normally this is the go-to solution; under ordinary market conditions, where barriers to entry are low and things like economies of scale aren’t so overwhelming as to make competition impossible, the standard way of keeping would-be monopolies in check is to just let them all keep each other in check, competing with each other for customers and thereby forcing each other to keep their product quality high and their prices low. In some cases, though, companies will figure out ways to avoid this kind of competition. Maybe they’ll collude with each other to fix their prices at artificially high levels, or maybe they’ll come together to form a cartel and coordinate their actions in other ways (bid rigging, reducing output, etc.). Or alternatively, maybe one company will take measures to prevent any competitors from being able to emerge and challenge their market dominance in the first place, so the question of whether to collude or compete with rival firms never even comes up. This was something that happened in the case of Kodak, for instance, in the early 20th century:
In the early 1900’s Kodak monopolized the American film industry, controlling 96% of the market. They were required by the American federal government to stop coercing retail stores to sign exclusivity deals with them as they had a hold on a large portion of the market. This prevented entry into the market by other corporations, therefore Kodak was using their market power to minimise competition in the market.
Even though Kodak was operating as a private market actor, its actions were decidedly anti-market. It wasn’t just trying to prevail in a fair competition – it was trying to rig the situation so there was no competition at all.
This kind of thing isn’t just limited to companies dictating blatantly one-sided terms for the products they sell, either. The examples we’ve been discussing so far typify the classic types of monopolies we’re all familiar with, in which a single seller dominates the market and is thereby able to dictate prices to buyers. But this dynamic can run in the other direction, too; under certain circumstances, it might be the case that there’s only one buyer dictating prices to its suppliers – a situation that economists refer to as a monopsony. So for instance, if we go back to our hypothetical grocery monopoly, we might imagine the grocery store requiring its suppliers (i.e. the farmers who produce the food) to sign contracts agreeing to only sell their food to it and no one else – thereby choking off any potential competitors before they can even enter the market, and simultaneously making itself so indispensable to its suppliers that they have no choice but to accept its every demand or else go out of business themselves. Similarly, if we consider a company’s employees to also be “suppliers” of a sort – specifically, suppliers of labor – we can see how the same kind of thing might play out in the context of hiring. If our hypothetical grocery monopoly were the only major employer in a remote town, for instance, we can imagine how it might be able to exploit its position to exercise monopsony power over the local labor market, grossly underpaying its employees because it knows there’s nowhere else for them to go for work. The workers might strongly prefer to take their services elsewhere, in theory – but because there simply isn’t any such alternative available to them (absent some outside party like government coming to the rescue), they have no choice but to accept terms that are much worse than what they’d get in a free and competitive market. It’s the same story for every kind of monopolization, whether it involves a company’s relationship with its customers, its suppliers, its employees, or anyone else.
Of course, not all cases of monopoly are especially overt or deliberate. In some cases, a company might attain monopoly status in its particular market niche not because of anything it’s actively doing to prevent competitors from entering the market, but just due to things like so-called network effects, in which customers are effectively locked into a particular product by mere virtue of the fact that everyone else is using that same product as well. Social media and e-commerce sites are prime examples of this phenomenon in action, as Alexander describes:
Individual sites like blogs and little storefronts are in decline and conversation and commerce have moved to a couple of giant corporations: Facebook, Twitter, Reddit, Amazon, Paypal.
These companies aren’t exactly monopolies. To some degree, if you’re unsatisfied with Facebook you can move to Twitter. But they’re not exactly competitors either – there are a lot of things Facebook is good for that Twitter fails completely, and vice versa. It’s like Coca-Cola vs. milk: in theory you’ve always got the choice to drink either in place of the other; in practice you usually know which one you need at any given time. In that sense, there’s no real Facebook competitor except eg Orkut or Diaspora, which no one uses.
Which suggests one reason why these sites are so dominant: their main selling point is their size. Facebook is the best because all of your friends are on it; if I made a much better Facebook clone tomorrow no one would go unless everyone else was already there (Google found this out the hard way). Amazon is the best because you can buy pretty much everything you want there; Paypal is the best because most sites take PayPal. So not only do they have no competitors, but it’s really hard to imagine one ever arising. In order to compete with Facebook, you not only need a better product, you need a product that’s so much better that everybody decides to switch en masse at the same time. The only example I can think of where this ever worked was the Great Digg Exodus, where Digg screwed up their product so thoroughly that everyone simultaneously said “@#!$ this” and moved to Reddit.
So instead of “let a thousand nations bloom”, it ended up more like “let five or six big nations bloom that we can never get rid of”.
This is yet another example of a classic coordination problem; even if everyone preferred in theory to leave Facebook for a superior competitor, in practical terms (barring extreme circumstances) no such competitor would ever be able to get enough of a foothold in the market to start pulling users away from Facebook in the first place, because no individual user would have any incentive to make the switch unless everyone else had already done so. It’s an all-or-nothing situation, so whichever company holds the dominant market position is essentially assured of keeping it.
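If you want to see just how lopsided this coordination problem is, here’s a toy simulation in which each user only switches to a new platform once enough other people have already switched. The threshold range and the seed sizes are purely illustrative assumptions:

```python
import random
random.seed(0)

N = 10_000
# Each user switches to the new platform only once some critical share of everyone
# else has already switched; the threshold range is an illustrative assumption.
thresholds = [random.uniform(0.2, 0.9) for _ in range(N)]

def final_share(seed_share, steps=50):
    """Fraction of users on the new platform once everyone has stopped reacting."""
    adopted = [i < int(seed_share * N) for i in range(N)]  # seeded early adopters
    for _ in range(steps):
        share = sum(adopted) / N
        # Switching is one-way here: once you've moved, you stay.
        adopted = [a or (t <= share) for a, t in zip(adopted, thresholds)]
    return sum(adopted) / N

print(final_share(0.01))  # 1% early adopters: the switch stalls at ~1% forever
print(final_share(0.70))  # a coordinated mass exodus: snowballs to ~100%
```

A marginally better product with a handful of early adopters goes nowhere, while a Digg-style mass exodus snowballs; the quality of the competitor matters far less than whether everyone moves at once.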
Similarly, even in situations that don’t involve network effects, if a particularly large company is dominating its market, it may be able to maintain that dominance simply by virtue of its size. As I mentioned in my last post, the larger a company grows, the more it’s able to take advantage of efficiency enhancers like economies of scale and more specialized divisions of labor. As Taylor explains:
“Economies of scale” is the jargon for saying that, in certain cases, a larger firm can produce at a lower average cost than a smaller firm. A tiny factory that produces only one hundred cars a year will have much larger production costs per car than a factory making ten thousand cars, which can take advantage of specialization and assembly line production.
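Taylor’s car-factory example is easy to put numbers on. Here’s a quick sketch; the fixed cost of the factory and the per-car cost of labor and materials are made-up figures, purely for illustration:

```python
# Average cost per car = (fixed cost of the factory, spread over every car built)
#                        + (variable cost of actually building each car).
# Both figures below are made-up assumptions.

FIXED_COST = 50_000_000  # building the factory, tooling, etc.
VARIABLE_COST = 20_000   # labor and materials per car

def average_cost_per_car(cars_per_year):
    return FIXED_COST / cars_per_year + VARIABLE_COST

print(average_cost_per_car(100))     # $520,000 per car for the tiny factory
print(average_cost_per_car(10_000))  # $25,000 per car once the fixed cost is spread out
```

The big producer’s advantage here has nothing to do with being better run; it comes entirely from having more cars over which to spread the same fixed cost.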
This can be a good thing for customers in the most immediate sense, since it means larger companies can sell their products more cheaply than they otherwise could. The downside, though, is that it can make it impossible for smaller firms without those economies of scale to compete on price – so even if they’re more efficient than the larger firms in every other regard, and would therefore be able to sell their products more cheaply if they ever got the opportunity to grow to a comparable size, they never get that chance, since the larger companies use their scale advantage to drive them out of business long before then. The effect on customers is that despite whatever savings they might enjoy from the big corporations’ economies of scale in the short term, they may still be made worse off in the long run. To offset this effect, then, some kind of government intervention – taxing the larger companies, subsidizing the smaller ones, or otherwise leveling the playing field – can help keep the market more competitive, and therefore more efficient. As Alexander puts it:
A tax on large corporations proportional to their size [can be used] to approximately balance economies of scale and give small mom-and-pop stores and start-ups ability to compete on an equal footing.
This kind of thing might not make the big companies too happy, naturally, but that shouldn’t necessarily be a reason not to do it – because after all, a good government’s job isn’t to just cater to individual businesses; it’s to protect the functioning of the market as a whole. As Harford puts it:
Economists believe there’s an important difference between being in favor of markets and being in favor of business, especially particular businesses. A politician who is in favor of markets believes in the importance of competition and wants to prevent businesses from getting too much scarcity power. A politician who’s too influenced by corporate lobbyists will do exactly the reverse.
In short, then, when monopolies threaten to undermine the health and competitiveness of the market, the government may be justified in taking measures to rein them in – either by imposing regulations to keep them from wielding too much power or, ideally, by keeping them from forming in the first place.
Of course, in some cases, monopolization might be unavoidable. There are some goods and services – particularly ones like physical infrastructure networks – that can only really be provided by some kind of monopoly. In such cases, though, it’s even more clear that simply leaving matters to the market is fundamentally unworkable as a one-size-fits-all approach. Friedman points to roads as just one illustration of this:
In [the] case [of highways], it is technically possible to identify and hence charge individuals for their use of the roads and so to have private operation. However, for general access roads, involving many points of entry and exit, the costs of collection would be extremely high if a charge were to be made for the specific services received by each individual, because of the necessity of establishing toll booths or the equivalent at all entrances. The gasoline tax is a much cheaper method of charging individuals roughly in proportion to their use of the roads. This method, however, is one in which the particular payment cannot be identified closely with the particular use. Hence, it is hardly feasible to have private enterprise provide the service and collect the charge without establishing extensive private monopoly.
In cases like this, then, the government may be warranted in taking a direct role – either by imposing strict regulations on private companies to keep them from abusing their monopoly power and overcharging their customers, or by cutting out the middleman and just directly providing the services in question itself. As Jeffrey Sachs writes:
There are many kinds of infrastructure, especially networks like power grids, roads, and other transport facilities—airports and seaports—which are characterized by increasing returns to scale. If left to private markets, these sectors would tend to be monopolized, so they are called natural monopolies. If such capital investments are left to the private sector, the privately owned monopolies would overcharge for their use, and the result would be too little utilization of this kind of capital. Potential users would be rationed out of the market. It is more efficient, therefore, for a public monopoly to provide network infrastructure and set an efficient price below the one that would be set by a private monopolist.
In some industries, market competition isn’t likely to work well. Instead, it leads to a situation in which all firms can suffer enormous and unsustainable losses. Back in the late nineteenth century, the U.S. railroad industry seemed to be booming. The biggest outlay that firms had was the cost of laying track; once that was done, the cost of moving goods along those tracks was low. If a company had the only tracks in a given area, it could charge high prices for shipping goods and use the profits to pay high dividends, which attracted more investors, which gave that company the money to lay more track, and so on. By 1882 almost 90,000 miles of track had been laid by competing railroads. But then competition drove shipping prices way down, and firms were unable to pay the bills incurred from building those tracks. By 1900, half the railroad tracks built by private firms were being operated by the bankruptcy courts. As a result, for most of the twentieth century, the U.S. government regulated the railroads—and, later, for similar reasons, the airlines.
Competition doesn’t work very well among public utilities, either. Why not? Try to imagine a city with four separate water companies; that’s four sets of pipes, one for each company, under every building in the city. It’s not viable. Imagine four times the number of electrical lines running down your street, or four times the number of railroad lines crisscrossing a city. Many water and electrical companies are technically privately owned, but they are closely regulated by the government.
These regulated industries share a common underlying characteristic: they rely upon networks of some kind. The cost of building the overall network tends to be high, whereas the cost of running it tends to be low. If you leave these big companies alone, you’ll tend to end up with a monopoly. Alternatively, two or more such firms in competition, once their infrastructure is in place, may compete each other into ruin—or to a merger, in which case there’s a monopoly again. This situation is referred to as “natural monopoly,” because the way in which the good is produced, with high fixed costs of building the network and low costs of delivering services afterward, can so easily lead to a monopoly outcome.
Again, it’s important to stress that just because a natural monopoly exists doesn’t necessarily mean the government should always opt for the maximum possible response and take over the entire industry outright. It will often be the case that private firms can still play a central role – so inasmuch as it’s possible to maintain competitive private markets for a particular product, the dynamism of the market mechanism should be embraced. The point is not to insist on a 100% government-only approach, any more than we should want to insist on a 100% market-only approach; our goal should just be to follow what the economics tells us is the best approach in any given situation. As Taylor concludes:
Even in cases where some regulation is needed, regulated industries might have one or more parts that could be carved off and left to competitive market forces. Maybe the best example of this process is the breakup of the former telephone monopolist AT&T. AT&T’s long-distance, equipment, and research arms certainly became more innovative in the aftermath of competition. The local phone companies left in the wake of the breakup proved a bit slower to compete, but with the spread of smarter cell phones and Web-based technology, competition is rising. Other settings in which some level of competition might help include garbage collection, in which independent firms could bid for neighborhood contracts; and the service industries that support city and county governments, such as cleaning services, maintenance and repair services, cafeteria services, and building management.
Electricity has long been thought of as a natural monopoly and has been regulated as a public utility, thanks to the physical network of the grid. But arguments about the grid don’t focus on how the electrical power is generated. The grid might be publicly owned and regulated, but firms could compete to supply energy—including energy from alternative sources such as solar and wind farms. The United Kingdom has been experimenting with energy markets since 1989, and a number of U.S. states tried electricity deregulation in the 1990s, with some successes (Pennsylvania) and some disasters (California). There are as many lessons to be learned from what didn’t work as from what did.
Broadband Internet access has some of the traits of a natural monopoly. Again, it can be provided by setting up a network that has high fixed costs—running cable to everybody’s home. Therefore, some jurisdictions have argued that it should be provided by a regulated monopoly. But the broadband Internet industry also has potential for competition through the various delivery methods that have become viable over the past decade: cable, fiber optics, even wireless. With a rapidly evolving technology, it’s often better to encourage a multiplicity of technologies than for government to anoint one technology and then regulate it.
The forces of competition can encourage innovation and efficiency and benefit consumers. But in certain well-defined circumstances, when competition can’t or won’t work well, government has a useful role as a referee of economic competition. Government is also a logical arbiter of safety standards, financial honesty, and information disclosure. The real challenge when the outcomes of market forces seem undesirable is to identify the specific underlying problem and design the policy response accordingly. Is the problem a monopoly, a cartel, a restrictive business practice, a natural monopoly, a regulated industry that doesn’t need regulation anymore, or low-income people needing access to a certain service? Rather than locking yourself into a mental box—either vehemently for or against regulation—it’s often wise to take a case-by-case approach. Regulation works poorly when it assumes that government can simply dictate the outcome; regulation is more likely to work well when it respects the power of incentives and market forces.
XII.
Now, while monopolization might be the best-known way in which asymmetries of bargaining power can exist between buyers and sellers, it’s not the only one. In much the same way that companies can exploit their monopoly status to overcharge their customers – taking advantage of inflexibilities in the supply side of the equation – they can also find ways of overcharging their customers by taking advantage of inflexibilities in the demand side of the equation. If there’s a particular good or service that customers feel they have to have, and for which their demand accordingly remains consistently high regardless of how expensive it gets – what economists refer to as price inelasticity of demand – it can open up the possibility for sellers to raise the price far beyond what they’d otherwise be able to charge. Mind you, this won’t necessarily be a problem under normal market conditions, since competition between sellers will generally keep prices down – but in circumstances where the space for such competition is constrained in some way, it can produce similar effects to monopoly pricing even if the sellers aren’t actually monopolies themselves, simply because the urgency of customers’ needs makes it so they might as well be monopolies.
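There’s a standard textbook relationship, the Lerner index, that makes the link between inelastic demand and pricing power explicit: a profit-maximizing seller with pricing power marks its price up over marginal cost by roughly 1/|elasticity|. Here’s a quick sketch of what that implies; the $100 marginal cost and the elasticity values are arbitrary assumptions for illustration:

```python
# Lerner rule: (price - marginal_cost) / price = 1 / |elasticity|,
# which rearranges to price = marginal_cost * |e| / (|e| - 1).
# The $100 marginal cost and the elasticity values are illustrative assumptions.

def profit_maximizing_price(marginal_cost, elasticity):
    e = abs(elasticity)
    if e <= 1:
        # Demand this inelastic puts no finite ceiling on the markup at all;
        # only the buyer's total ability to pay constrains the price.
        return float("inf")
    return marginal_cost * e / (e - 1)

for e in (5.0, 2.0, 1.2, 1.05):
    print(f"|elasticity| = {e}: ${profit_maximizing_price(100, e):,.0f}")
# |elasticity| = 5.0:  $125    (plenty of substitutes: thin markup)
# |elasticity| = 2.0:  $200
# |elasticity| = 1.2:  $600
# |elasticity| = 1.05: $2,100  (buyers who can barely say no)
```

The cases below are, in effect, the limit of that last row: demand so inelastic that the seller’s constraint is no longer the market price but the buyer’s entire net worth.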
So for instance, imagine if you found yourself in some emergency situation – like, say, your house was burning down – and there was no such thing as a public fire department, only private fire brigades that roved around town at random. (Let’s also assume for the sake of this example that you’ve just moved in, so you haven’t already arranged for fire protection or insurance in advance.) As soon as one of these private fire brigades arrived at your burning house, it would be able to charge you practically however much it wanted to put the fire out, even up to the full value of the damaged house, and you’d have no choice but to pay, since your only alternative would be to lose everything. You wouldn’t have time to haggle over the price or shop around for a better deal, since your house would be burning away by the second, and for all you know the second-closest fire brigade might be stuck in traffic and unable to reach you in time to save it. You’d just have to pay whatever price the fire brigade named, because your need for its services would be completely inelastic.
The first ever Roman fire brigade was created by Crassus. Fires were almost a daily occurrence in Rome, and Crassus took advantage of the fact that Rome had no fire department, by creating his own brigade—500 men strong—which rushed to burning buildings at the first cry of alarm. Upon arriving at the scene, however, the firefighters did nothing while Crassus offered to buy the burning building from the distressed property owner, at a miserable price. If the owner agreed to sell the property, his men would put out the fire; if the owner refused, then they would simply let the structure burn to the ground. After buying many properties this way, he rebuilt them, and often leased the properties to their original owners or new tenants.
Two thousand years later, we’ve solved this problem by opting for public fire departments over private ones – but even now, attempts are sometimes made to switch from a universal public model to one based on individual customer payments, and the results have predictably been less than ideal (as in the case of a Tennessee area that adopted a “No Pay, No Spray” policy, which resulted in firefighters standing idly by while a family’s house burned down – with their pets still inside – because they hadn’t paid for firefighting services in advance).
Firefighting isn’t the only area in which this kind of unequal bargaining power can come into play, either. Another example that might immediately come to mind when you hear “unequal bargaining power” is labor. We already discussed earlier how such power asymmetries can force workers to accept horribly one-sided job offers even if they don’t really want to, due simply to their lack of better options. But that discussion framed the issue mostly in terms of supply – i.e. the lack of employers offering good jobs. We could just as easily frame it in terms of demand – i.e. how much the workers need those jobs in the first place. If the workers are independently wealthy and comfortable enough that they don’t truly need to accept any job, their demand for work will accordingly be highly elastic – so employers will have to offer genuinely high-quality terms of employment if they want to hire them. On the other hand, if the workers are utterly broke and desperate, their demand for work will be highly inelastic – so they’ll be a lot more inclined to accept any job they’re offered, even if the pay and working conditions are terrible, simply because they’ll have to do so in order to survive.
In a similar vein, another high-stakes area where demand inelasticity can be a major factor is healthcare. If you’re suddenly struck with an acute injury or some other urgent medical problem that requires immediate attention, you probably won’t have the luxury of being able to shop around for the best price or hold out for a better deal if no affordable options are immediately available. Your need for treatment – especially if your issue is life-threatening – will be unequivocally inelastic – which means that your only option under our current privatized healthcare system will be to simply get yourself to the nearest hospital or doctor’s office (which might be the only one nearby if you live in a rural area), cross your fingers that your health insurance will cover whatever treatment you might need (if you’re lucky enough to even have health insurance at all), and then only find out after the fact what the price of that treatment actually was and how much of it you’ll have to pay yourself. Maybe it’ll turn out that the cost is so high that it completely bankrupts you – in fact, this outcome is far from uncommon; about two-thirds of bankruptcies in the US are due to medical bills (and of those, about four-fifths are cases in which the patients have insurance, but are bankrupted anyway due to the combination of co-payments, deductibles, uncovered services, and in some cases becoming so sick that they lose their job and their insurance along with it). But when the only alternative is death or permanent disability, it’s not hard to see why most people simply pay whatever their medical bills end up costing them, even if it costs them everything they have. “Your money or your life” doesn’t really seem to have a hard upper bound.
Healthcare providers, of course, fully understand this situation – and as a result, they’re able to charge far more for their services than they’d be able to if the demand for those services were more elastic, as Ezra Klein notes:
Health care is an unusual product in that it is difficult, and sometimes impossible, for the customer to say “no.” In certain cases, the customer is passed out, or otherwise incapable of making decisions about her care, and the decisions are made by providers whose mandate is, correctly, to save lives rather than money.
In other cases, there is more time for loved ones to consider costs, but little emotional space to do so — no one wants to think there was something more they could have done to save their parent or child. It is not like buying a television, where you can easily comparison shop and walk out of the store, and even forgo the purchase if it’s too expensive. And imagine what you would pay for a television if the salesmen at Best Buy knew that you couldn’t leave without making a purchase.
“In my view, health is a business in the United States in quite a different way than it is elsewhere,” says Tom Sackville, who served in Margaret Thatcher’s government and now directs the [International Federation of Health Plans]. “It’s very much something people make money out of. There isn’t too much embarrassment about that compared to Europe and elsewhere.”
The result is that, unlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability.
And because American healthcare providers do charge so much more than those in other advanced industrial countries (where healthcare is paid for by government), the upshot is that we Americans end up paying roughly twice as much for our healthcare as those other countries do, while receiving roughly the same outcomes. So what could we be doing to improve our situation here? Well, the most obvious answer is that we could simply switch to doing what all those other countries are doing – have our healthcare universally paid for as a public service, in the same way that our fire departments are. This wouldn’t necessarily mean that the government would have to directly hire all the doctors and provide the medical services itself, as in the UK; it could just as well leave the doctors and hospitals operating in the private sector and simply provide everyone with universal health insurance to pay for their services, as in Canada. Or it could mean some other variation still; every advanced industrial country besides the US has its own unique way of providing universal healthcare to its citizens. What they’ve all recognized, though, is that having government involved in the provision of healthcare allows them to keep their citizens healthy more cheaply and efficiently than leaving everything to the private sector, simply as a matter of basic economics.
XIII.
But price inelasticity of demand is just one reason why trying to provide services like healthcare through the private sector can be problematic. An even bigger one is the problem of asymmetric information – where the seller knows more about the transaction than the buyer does, or vice versa. In the case of healthcare, this can be something as basic as patients not fully understanding all the arcane details of which medical services they should accept, how those services will be charged, how much their insurance will actually cover, and so on – which, in many cases, can screw them over hard when it turns out they weren’t as covered as they thought they were. As Aaron Carroll notes:
Most people don’t find out that they’re not getting “good service” [on their health insurance] until they’re sick. Healthy people don’t make much use of their insurance, so they don’t know how bad it is. They only find out after they’re ill, and then it’s too late.
But patients aren’t the only ones who have to deal with information asymmetries; their doctors and insurance providers often face equally daunting information problems themselves. In fact, the problem of information asymmetry pervades practically every aspect of the healthcare sector – and a number of other sectors as well. Wheelan provides a full rundown:
Information matters, particularly when we don’t have all that we need. Markets tend to favor the party that knows more. (Have you ever bought a used car?) But if the imbalance, or asymmetry of information, becomes too large, then markets can break down entirely. This was the fundamental insight of 2001 Nobel laureate George Akerlof, an economist at the University of California, Berkeley. His paper entitled “The Market for Lemons” used the used-car market to make its central point. Any individual selling a used car knows more about its quality than someone looking to buy it. This creates an adverse selection problem. […] Car owners who are happy with their vehicles are less likely to sell them. Thus, used-car buyers anticipate hidden problems and demand a discount. But once there is a discount built into the market, owners of high-quality cars become even less likely to sell them—which guarantees the market will be full of lemons. In theory, the market for high-quality used cars will not work, much to the detriment of anyone who may want to buy or sell such a car. (In practice, such markets often do work for reasons explained by the gentlemen with whom Mr. Akerlof shared his Nobel prize; more on that in a moment.)
“The Market for Lemons” is characteristic of the kinds of ideas recognized by the Nobel committee. It is, in the words of the Royal Swedish Academy of Sciences, “a simple but profound and universal idea, with numerous implications and widespread applications.” Health care, for example, is plagued with information problems. Consumers of health care—the patients—almost always have less information about their care than their doctors do. Indeed, even after we see a doctor, we may not know whether we were treated properly. This asymmetry of information is at the heart of our health care woes.
Under any “fee for service” system, doctors charge a fee for each procedure they perform. Patients do not pay for these extra tests and procedures; their insurance companies (or the federal government, in the case of older Americans who are eligible for Medicare) do. At the same time, medical technology continues to present all kinds of new medical options, many of which are fabulously expensive. This combination is at the heart of rapidly rising medical costs: Doctors have an incentive to perform expensive medical procedures and patients have no reason to disagree. If you walk into your doctor’s office with a headache and the doctor suggests a CAT scan, you would almost certainly agree “just to be sure.” Neither you nor your doctor is acting unethically. When cost is not a factor, it makes perfect sense to rule out brain cancer even when the only symptom is a headache the morning after the holiday office party. Your doctor might also reasonably fear that if she doesn’t order a CAT scan, you might sue for big bucks later if something turns out to be wrong with your head.
Medical innovation is terrific in some cases and wasteful in others. Consider the current range of treatments for prostate cancer, a cancer that afflicts many older men. One treatment option is “watchful waiting,” which involves doing nothing unless and until tests show that the cancer is getting worse. This is a reasonable course of action because prostate cancer is so slow-growing that most men die of something else before the prostate cancer becomes a serious problem. Another treatment option is proton radiation therapy, which involves shooting atomic particles at the cancer using a proton accelerator that is roughly the size of a football field. Doing nothing essentially costs nothing (more or less); shooting protons from an accelerator costs somewhere in the range of $100,000.
The cost difference is not surprising; the shocking thing is that proton therapy has not been proven any more effective than watchful waiting. An analysis by the RAND Corporation concluded, “No therapy has been shown superior to another.”
Health maintenance organizations were designed to control costs by changing the incentives. Under many HMO plans, general practitioners are paid a fixed fee per patient per year, regardless of what services they provide. Doctors may be restricted in the kinds of tests and services they can prescribe and may even be paid a bonus if they refrain from sending their patients to see specialists. That changes things. Now when you walk into the doctor’s office (still at a disadvantage in terms of information about your own health) and say, “I’m dizzy, my head hurts, and I’m bleeding out my ear,” the doctor consults the HMO treatment guidelines and tells you to take two aspirin. As exaggerated as that example may be, the basic point is valid: The person who knows most about your medical condition may have an economic incentive to deny you care. Complaints about too much spending are replaced by complaints about too little spending. Every HMO customer has a horror story about wrangling with bureaucrats over acceptable expenses. In the most extreme (and anecdotal) stories, patients are denied lifesaving treatments by HMO bean counters.
Some doctors are willing to do battle with the insurance companies on behalf of their patients. Others simply break the rules by disguising treatments that are not covered by insurance as treatments that are. (Patients aren’t the only ones suffering from an asymmetry of information.) Politicians have jumped into the fray, too, demanding things like disclosure of the incentives paid to doctors by insurance companies and even a patient’s bill of rights.
The information problem at the heart of health care has not gone away: (1) The patient, who does not pay the bill, demands as much care as possible; (2) the doctor maximizes income and minimizes lawsuits by delivering as much care as possible; (3) the insurance company maximizes profits by paying for as little care as possible; (4) technology has introduced an array of massively expensive options, some of which are miracles and others of which are a waste of money; and (5) it is very costly for either the patient or the insurance company to prove the “right” course of treatment. In short, information makes health care different from the rest of the economy. When you walk into an electronics store to buy a big-screen TV, you can observe which picture looks clearest. You then compare price tags, knowing that the bill will arrive at your house eventually. In the end, you weigh the benefits of assorted televisions (whose quality you can observe) against the costs (that you will have to pay) and you pick one. Brain surgery really is different.
The fundamental challenge of health care reform is paying for the “right” treatment—the “product” that makes the most sense relative to what it costs. This is an exercise that consumers perform on their own everywhere else in the economy. Bean counters should not automatically say no to super-expensive treatments; some may be wonderfully effective and worth every penny. They should say no to expensive treatments that are not demonstrably better than less expensive options. They should also say no to doing some tests “just to be sure,” both because these diagnostics are expensive, but also because when administered to healthy people they tend to generate “false positives,” which can breed expensive, unnecessary, and potentially dangerous follow-up care.
There is an old aphorism in advertising: “I know I’m wasting half my money; I just wish I knew which half.” Health care is similar, and if the goal of health care reform is to restrain rapidly rising costs, then any policy change will have to focus on quality and outcomes rather than just paying for inputs. New York Times financial columnist David Leonhardt describes the treatment for prostate cancer (where fabulously expensive technology does not appear to be delivering better health) as his own “personal litmus test” for health care reform. He writes, “The prostate cancer test will determine whether President Obama and Congress put together a bill that begins to fix the fundamental problem with our medical system: the combination of soaring costs and mediocre results. If they don’t, the medical system will remain deeply troubled, no matter what other improvements they make.”
—
But we’re not done with health care yet. The doctor may know more about your health than you do, but you know more about your long-term health than your insurance company does. You may not be able to diagnose rare diseases, but you know whether or not you lead a healthy lifestyle, if certain diseases run in your family, if you are engaging in risky sexual behavior, if you are likely to become pregnant, etc. This information advantage has the potential to wreak havoc on the insurance market.
Insurance is about getting the numbers right. Some individuals require virtually no health care. Others may have chronic diseases that require hundreds of thousands of dollars of treatment. The insurance company makes a profit by determining the average cost of treatment for all of its policyholders and then charging slightly more. When Aetna writes a group policy for 20,000 fifty-year-old men, and the average cost of health care for a fifty-year-old man is $1,250 a year, then presumably the company can set the annual premium at $1,300 and make $50—on average—for each policy underwritten. Aetna will make money on some policies and lose money on others, but overall the company will come out ahead—if the numbers are right.
Is this example starting to look like […] the used-car market? It should. The $1,300 policy is a bad deal for the healthiest fifty-year-old men and a very good deal for the overweight smokers with a family history of heart disease. So, the healthiest men are most likely to opt out of the program; the sickest guys are most likely to opt in. As that happens, the population of men on which the original premium was based begins to change; on average, the remaining men are less healthy. The insurance company studies its new pool of middle-aged men and reckons that the annual premium must be raised to $1,800 in order to make a profit. Do you see where this is going? At the new price, more men—the most healthy of the unhealthy—decide that the policy is a bad deal, so they opt out. The sickest guys cling to their policies as tightly as their disease-addled bodies will allow. Once again the pool changes and now even $1,800 does not cover the cost of insuring the men who sign up for the program. In theory, this adverse selection could go on until the market for health insurance fails entirely.
That does not actually happen. Insurance companies usually insure large groups whose individuals are not allowed to select in or out. If Aetna writes policies for all General Motors employees, for example, then there will be no adverse selection. The policy comes with the job, and all workers, healthy and unhealthy, are covered. They have no choice. Aetna can calculate the average cost of care for this large pool of men and women and then charge a premium sufficient to make a profit.
Writing policies for individuals, however, is a much scarier undertaking. Companies rightfully fear that the people who have the most demand for health coverage (or life insurance) are those who need it most. This will be true no matter how much an insurance company charges for its policies. At any given price—even $5,000 a month—the individuals who expect their medical costs to be higher than the cost of the policy will be the most likely to sign up. Of course, the insurance companies have some tricks of their own, such as refusing coverage to individuals who are sick or likely to become sick in the future. This is often viewed as some kind of cruel and unfair practice perpetrated on the public by the insurance industry. On a superficial level, it does seem perverse that sick people have the most trouble getting health insurance. But imagine if insurance companies did not have that legal privilege. A (highly contrived) conversation with your doctor might go something like this:
DOCTOR: I’m afraid I have bad news. Four of your coronary arteries are fully or partially blocked. I would recommend open-heart surgery as soon as possible.
PATIENT: Is it likely to be successful?
DOCTOR: Yes, we have excellent outcomes.
PATIENT: Is the operation expensive?
DOCTOR: Of course it’s expensive. We’re talking about open-heart surgery.
PATIENT: Then I should probably buy some health insurance first.
DOCTOR: Yes, that would be a very good idea.
Insurance companies ask applicants questions about family history, health habits, smoking, dangerous hobbies, and all kinds of other personal things. When I applied for term life insurance, a representative from the company came to my house and drew blood to make sure that I was not HIV-positive. He asked whether my parents were alive, if I scuba dive, if I race cars. (Yes, yes, no.) I peed in a cup; I got on a scale; I answered questions about tobacco and illicit drug use—all of which seemed reasonable given that the company was making a commitment to pay my wife a large sum of money should I die in the near future.
Insurance companies have another subtle tool. They can design policies, or “screening” mechanisms, that elicit information from their potential customers. This insight, which is applicable to all kinds of other markets, earned Joseph Stiglitz, an economist at Columbia University and a former chief economist of the World Bank, a share of the 2001 Nobel Prize. How do firms screen customers in the insurance business? They use a deductible. Customers who consider themselves likely to stay healthy will sign up for policies that have a high deductible. In exchange, they are offered cheaper premiums. Customers who privately know that they are likely to have costly bills will avoid the deductible and pay a higher premium as a result. (The same thing is true when you are shopping for car insurance and you have a sneaking suspicion that your sixteen-year-old son is an even worse driver than most sixteen-year-olds.) In short, the deductible is a tool for teasing out private information; it forces customers to sort themselves.
—
Any insurance question ultimately begs one explosive question: How much information is too much? I guarantee that this will become one of the most nettlesome policy problems in coming years. Here is a simple exercise. Pluck one hair from your head. (If you are totally bald, take a swab of saliva from your cheek.) That sample contains your entire genetic code. In the right hands (or the wrong hands), it can be used to determine if you are predisposed to heart disease, certain kinds of cancer, depression, and—if the science continues at its current blistering pace—all kinds of other diseases. With one strand of your hair, a researcher (or insurance company) may soon be able to determine if you are at risk for Alzheimer’s disease—twenty-five years before the onset of the disease. This creates a dilemma. If genetic information is shared widely with insurance companies, then it will become difficult, if not impossible, for those most prone to illness to get any kind of coverage. In other words, the people who need health insurance most will be the least likely to get it—not just the night before surgery, but ever. Individuals with a family history of Huntington’s disease, a hereditary degenerative brain disorder that causes premature death, are already finding it hard or impossible to get life insurance. On the other hand, new laws are forbidding insurance companies from gathering such information, leaving them vulnerable to serious adverse selection. Individuals who know that they are at high risk of getting sick in the future will be the ones who load up on generous insurance policies.
An editorial in The Economist noted this looming quandary: “Governments thus face a choice between banning the use of test results and destroying the industry, or allowing their use and creating an underclass of people who are either uninsurable or cannot afford to insure themselves.” The Economist, which is hardly a bastion of left-wing thought, suggested that the private health insurance market may eventually find this problem intractable, leaving government with a much larger role to play. The editorial concluded: “Indeed, genetic testing may become the most potent argument for state-financed universal health care.”
Any health care reform that seeks to make health insurance both more accessible and more affordable, particularly for those who are sick or likely to get sick, will have devastating adverse selection problems. Think about it: If I promise that you can buy affordable insurance, regardless of whether or not you are already sick, then the optimal time to buy that insurance is in the ambulance on the way to the hospital. The only fix for this inherent problem is to combine guaranteed access to affordable insurance with a requirement that everyone buy insurance—healthy and sick, young and old—a so-called “personal mandate.” The insurance companies will still lose money on the policies that they are forced to sell to bad risks, but those losses can be offset by the profits earned from healthy people who are forced to buy insurance. (Any country with a national health care system effectively has a personal mandate; all citizens are forced to pay taxes, and in return they all get government-funded health care.)
This is the approach that Massachusetts took as part of a state plan to provide universal access to health insurance. State residents who can afford health insurance but don’t buy it are fined on their state tax return. Hillary Clinton supported a personal mandate in the 2008 Democratic presidential primaries; Barack Obama did not, though that arguably had more to do with distinguishing himself from his toughest Democratic opponent than it did with his analysis of adverse selection. Obviously, forcing healthy people to buy something that they would otherwise not buy is a heavy-handed use of government; it’s also the only way to pool risk (which is the purpose of insurance) when the distribution of risk is not random.
Here are the relevant economics: (1) We know who is sick; (2) increasingly we know who will become sick; (3) sick people can be extremely expensive; and (4) private insurance doesn’t work well under these circumstances. That’s all straightforward. The tough part is philosophical/ideological: To what extent do we want to share health care expenses anyway (if at all), and how should we do it? Those were the fundamental questions when Bill Clinton sought to overhaul health care in 1993, and again when the Obama administration took it up in 2009.
Shortly after Wheelan wrote this, of course, the Obama administration did in fact pass a healthcare plan that included a personal mandate for every citizen to get health insurance, along with a rule that insurance companies could no longer deny coverage to people with pre-existing conditions. In practice, though, actually enforcing this policy has proven difficult – it’s hard to force people to buy something if their whole problem is that they don’t feel they can afford it – so millions of Americans still remain uninsured today.
The alternative to this kind of patchwork approach is universal coverage – a policy of having one big insurance pool that includes everyone in the country. The larger an insurance pool, the more its risks average out and the cheaper each individual member becomes to insure – so naturally, a pool that encompasses everyone will be cheapest of all. Every advanced industrial country except the US has figured this out and adopted some form of universal health insurance, and they’ve enjoyed massive savings on their healthcare as a result – meanwhile, the US continues to waste billions of dollars on its inefficient privatized system that doesn’t even cover everybody. As Taylor writes:
[One] thing insurance companies can do to mitigate risk is to draw upon a larger pool of customers. The larger the customer pool, the more likely it is to contain a good percentage of low-risk participants to offset the high-risk ones. Thus, health insurance can be less expensive if purchased through your employer than for you as an individual, and health insurance is often less expensive per person for a large company than a small one. In the car insurance market, almost every state requires every car owner to get insurance, so low-risk drivers can’t drop out of the market, which again moderates the whole pool’s risk.
In most of the industrialized world, the imperfect information problems inherent to the health insurance market have been addressed by a nationalized health care system. Countries have structured their national programs in a variety of ways, but they share a belief that the problem of imperfect information is so great in health care that a free market cannot cope with it. Governments around the world—except in the United States—address the [problem of potential over-treatment] by controlling the amount of care provided, when care should be delivered, and how much it should cost. They address the adverse selection problem by bringing the whole country into the insurance pool.
As you may know, the United States spends much more on health care than any other industrialized nation in the world. According to the World Health Organization, in 2007 U.S. health care spending (both privately and publicly funded) was about $7,300 per person. In comparison, Canada, France, Germany, Japan, and the United Kingdom spent between $2,700 and $3,900 per person. As a share of gross domestic product, health care spending in the United States was 15.7 percent; in Canada, France, and Germany it was between 10 and 11 percent of GDP; and in Japan and the United Kingdom it was roughly 8 percent of GDP. In short, the United States is spending twice as much per person on health care as do other nations with comparable economies.
A standard explanation for this pattern is the extraordinary quality of both health care and health care research in the United States. Innovation—whether in pharmaceuticals or equipment—is better rewarded, and doctors and nurses are better compensated for their hard work and long years of schooling. Yet it doesn’t appear that the quality of U.S. health care is twice as good as in these other nations. The United States doesn’t seem to be seeing a big enough payoff in terms of health for this high level of spending, especially given that through the mid-2000s, 40 million people in the country had no health insurance at all.
Despite the US being the only holdout in this area, American conservatives still have strong objections to the idea of getting government more involved in the provision of healthcare. But as Alexander demonstrates, their objections all have straightforward responses:
[Q]: Government would do a terrible job in health care. We should avoid government-run “socialized” medicine unless we want cost overruns, long waiting times, and death panels.
Government-run health systems empirically do better than private health systems, while also costing much less money.
Let’s compare, for example, Sweden, France, Canada, the United Kingdom, and the United States. The first four all have single-payer health care (a version of a government-run health system); the last has a mostly private health system (although it shouldn’t matter, we’ll use statistics from before Obamacare took effect). We’ll look at three representative statistics commonly used to measure quality of health care: infant mortality, life expectancy, and cancer death rate.
Infant mortality is the rate at which babies die within their first year of life, usually a good measure of pediatric and neonatal care. Of the five countries, Sweden has the lowest infant mortality at 2.56 deaths per 1,000 births, followed by France at 3.54, followed by the UK at 4.91, followed by Canada at 5.22, with the United States last at 6.81. (source)
Life expectancy, the average age to which a person born today can expect to live, is a good measurement of lifelong and geriatric care. Here Sweden is again first at 80.9, France and Canada tied for second at 80.7, the UK next at 79.4, and the United States once again last at 78.3. (source)
Taking cancer deaths per 100,000 people per year as representative of deaths from serious disease, here we find the UK doing best at 253.5 deaths, Sweden second at 268.2, France in third at 286.1, and the United States again in last place at 321.9 deaths (source: OECD statistics; data for Canada not available).
So we notice that the United States does worse than all four countries with single-payer health systems, even though America is wealthier per capita than any of them. This is not statistical cherry-picking: any way you look at it, the United States has one of the least effective health systems in the developed world.
[Q]: Government-run health care would be bloated, bureaucratic, and unnecessarily expensive, as opposed to the sleek, efficient service we get from the free market.
Actually, government-run health care is empirically more efficient than market health care. For example, Blue Cross New England employs more people to administer health insurance for its 2.5 million customers than the Canadian health system employs to administer health insurance for 27 million Canadians. Health care spending per person (public + private) in Canada is half what it is in America, yet Canadians have longer life expectancy, lower infant mortality, and are healthier by every objective standard.
Remember those five countries from the last question?
The UK spends $1,675 per person per year on health care. Canada spends $1,939. Sweden, which you’ll remember did best on most of the statistics, spends $2,125. France spends $2,288. Americans spend on average $4,271 – almost three times as much as Britain, a country which delivers better health care.
When this argument gets put in graph form, it becomes even clearer that US health inefficiency is literally off the chart.
If these were companies in the free market, the company that charges three times as much to provide a worse service would have gone bankrupt long ago. That company is American-style private health care.
[Q]: In government-run health care, people are relegated to “waiting lists”, where they have to wait months or even years for doctor visits, surgeries, and other procedures. Sometimes people die on these waiting lists. Obviously, this is unacceptable and a knock-down argument against government-run health care.
The laws of supply and demand apply in health care as much as anywhere else: people would like to see doctors as quickly as possible, but doctors are a scarce resource that must be allocated somehow.
In a private system, doctor access is allocated based on money; this has the advantage of incentivizing the production of more doctors and of ensuring that people with enough money can see doctors quickly. These are also its disadvantages: assuming more people want to see a doctor than need to do so, costs will spiral out of control and poor people will have limited or no access.
In a public system, doctor access is allocated based on medical need. Although no one will be turned away from a doctor in an emergency situation, people may have to wait a long time for elective surgeries so that other, sicker people – including poor people who would not be seen at all in a private system – can be seen first.
The relative effectiveness of the two systems can once again be seen in the infant mortality, life expectancy, and cancer death rate statistics.
[Q]: Government-run health care inevitably includes “death panels” who kill off expensive patients in order to save money on health care costs.
The private system as it exists now in America also has bodies that make these kinds of rationing decisions. Health care rationing is not some sinister conspiracy but a reasonable response to limited resources. The complete argument is here, but I can sum up the basics:
Insurance providers, whether they are a government agency or a private corporation, have a finite amount of money; they can only spend money they have. In one insurance company, customers might pay a hundred million dollars in fees each year, so the total amount of money the insurance company can spend on all its customers that year is a hundred million dollars. In reality, since it is a business, it wants to make a profit. Let’s say it wants a profit of ten percent. That means the total amount of money it has to spend is ninety million dollars.
But as a simplified example, let’s reduce this to an insurance company with one hundred customers, each of whom pays $1. This insurance company wants 10% profit, so it has $90 to spend (instead of our real company’s $90 million). Seven people on the company’s plan are sick, with seven different diseases, each of which is fatal. Each disease has a cure. The cures cost, in order, $90, $50, $40, $20, $15, $10, and $5.
We are far too nice to ration health care with death panels; therefore, we have decided to give everyone every possible treatment. So when the first person, the one with the $90 disease, comes to us, we gladly spend $90 on their treatment; it would be inhuman to just turn them away. Now we have no money left for anyone else. Six out of seven people die.
The fault here isn’t with the insurance company wanting to make a profit. Even if the insurance company gave up its ten percent profit, it would only have $10 more; enough to save the person with the $10 disease, but five out of seven would still die.
A better tactic would be to turn down the person with the $90 disease. Instead, treat the people with $5, $10, $15, $20, and $40 diseases. You still use only $90, but only two out of seven die. By refusing treatment to the $90 case, you save four lives. This solution can be described as more cost-effective; by spending the same amount of money, you save more people. Even though “cost-effectiveness” is derided in the media as being opposed to the goal of saving lives, it’s actually all about saving lives.
If you don’t know how many people will get sick next year with what diseases, but you assume it will be pretty close to the number of people who get sick this year, you might make a rule for next year: Treat everyone with diseases that cost $40 or less, but refuse treatment to anyone with diseases that cost $50 or more.
This rule remains true in the case of the $90 million insurance company. In their case, no one patient can use up all the money, but they still run the risk of spending money in a way that is not cost-effective, causing many people to die. Like the small insurance company, they can increase cost-effectiveness by creating a rule that they won’t treat people with diseases that cost more than a certain amount.
So, as one commentator pointed out, “death panels” should be called “life panels”: they aim to maximize the total number of lives that can be saved with a certain limited amount of resources.
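To make the budget arithmetic concrete, here’s a short sketch that just replays the toy example above (a $90 budget and cures costing $90, $50, $40, $20, $15, $10, and $5), comparing a treat-whoever-arrives-first policy with a cheapest-first, cost-effectiveness rule:

```python
# Replays the toy example: $90 budget, seven fatal diseases, one cure each.
BUDGET = 90
CURE_COSTS = [90, 50, 40, 20, 15, 10, 5]   # cost of curing each of the seven patients

def lives_saved(treatment_order):
    """Treat patients in the given order until the budget runs out."""
    remaining, saved = BUDGET, 0
    for cost in treatment_order:
        if cost <= remaining:
            remaining -= cost
            saved += 1
    return saved

print("Treat whoever shows up first:", lives_saved(CURE_COSTS), "of 7 saved")          # 1 of 7
print("Treat cheapest cases first:  ", lives_saved(sorted(CURE_COSTS)), "of 7 saved")  # 5 of 7
```

Same $90 either way; the cost-effectiveness rule simply saves five lives instead of one.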
[Q]: Why is government-run health care so much more effective?
A lot of it is economies of scale: if the government is insuring the entire population of a country, it can get much better deals than a couple of small insurance companies. But a lot of it is more complicated, and involves people’s status as irrational consumers of health products. A person sick with cancer doesn’t want to hear a cost-benefit analysis suggesting that the latest cancer treatment is probably not effective. He wants that treatment right now, and the most successful insurance companies and hospitals are the ones that will give it to him. Here’s a good article explaining some of the systematic flaws in the economics of health care under the American system.
It could also be that really good health care and the profit motive don’t mix: studies show that for-profit hospitals are more expensive, and have poorer care (as measured in death rates) than not-for-profit hospitals.
There are all kinds of other factors that we could get into as well (and it’s a big enough topic that it could fill a whole separate post of its own one day). The simplest takeaway here, though, is that there are some goods that are simply more efficient to pay for as a collective than as separate individuals. This is true for health insurance, for all the reasons we’ve been discussing; but for similar reasons, it’s also true for other goods, like, say, insurance against the potentially debilitating effects of old age – i.e. Social Security. Heath provides perhaps the best summary of the dynamic as a whole, and how it can apply to all different kinds of goods and services:
Despite the agreeable homophony between “public good” and “public sector,” most of what governments are in the business of providing is not public goods, but rather what economists call club goods. This term was introduced by the economist James M. Buchanan to bridge what he described as “the awesome Samuelson gap between the purely private and the purely public good.” Every good, Buchanan pointed out, has what might be referred to as an “optimal sharing group.” Your toothbrush, for instance, probably has an optimal sharing group of one, making it a good candidate for treatment as a purely private good. But other things are not like this. For instance, it’s not a great idea to spend too much money on exercise equipment. While it is convenient to have an elliptical trainer in the basement so you can work out in the privacy of your own home, this very expensive piece of equipment is likely to sit unused 362 days of the year. If your neighbor has an equally unused StairMaster, and someone else a stationary bike, then there are obvious efficiency gains to be had from sharing exercise equipment. One could organize a complicated rotation scheme among neighbors, or one could do what most people do, which is simply to take out a gym membership.
A “gym” is basically an arrangement through which individuals collectively purchase and share a variety of different types of fitness equipment. Such an arrangement is advantageous because use of this equipment is relatively nonrival. The equipment is quite durable, and so is not noticeably eroded in the short term through multiple use. Furthermore, the amount of time that any one person wants to spend using it represents a relatively small fraction of the day, which makes it well suited for sharing. Thus the way that we typically organize consumption is by charging people a flat fee for access to the club, which then gives them “free” access to all the machines within.
There are a couple of things worth noting about this arrangement. The first is that the use of a flat fee for payment can have the unfortunate effect of obscuring the nature of the underlying economic transaction. For instance, people who join a gym often don’t realize that they’re paying for everything—the treadmill, the sauna, the swimming pool—regardless of whether they actually use it. They think the fee goes to the club, and the club buys the equipment (along with the services of those who work there). They don’t realize that the club is just an intermediary, and that it is really the members, collectively, who are doing the purchasing.
The second important point is that club purchasing often involves a significant reduction of consumer choice. When I go out to buy exercise equipment in the market, I pay for exactly what I want to use, and I don’t pay for anything else. When I join a club, the fee structure usually ensures that I have to pay for a share of everything, regardless of whether I use it. This is why people who like to swim usually get a better deal out of gym memberships than anyone else. Since the swimming pool is by far the most expensive item to maintain, there is almost always cross-subsidization among members of clubs that have a pool—an effect that clubs sometimes seek to diminish by imposing a surcharge, such as a towel or locker fee, on those who use the pool.
This cross-subsidization among members is clearly one of the disadvantages of many club-purchased goods. It is partially attenuated by the fact that different clubs will arise that offer different mixes of goods, and so consumers can shop around for one that most closely caters to their preferences (for example, someone who doesn’t like to swim should not join a club with a pool). Although in theory one could get perfect efficiency here, in practice the amount of variety on display is fairly limited (as anyone who has compared fitness clubs can attest). This shows that the efficiency gains arising from the collective purchase (that is, the formation of an optimal sharing group) are sufficiently great that they outweigh the losses caused by the bundle of goods being less tailored to the needs of the individual consumer.
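A quick toy calculation (all numbers invented, not Heath’s) makes both points at once – how cheap shared access is compared with owning equipment yourself, and how a flat fee quietly makes the non-swimmers subsidize the pool:

```python
# Invented figures for a 500-member club, 100 of whom actually use the pool.
ANNUAL_COSTS = {"cardio machines": 20_000, "weights": 10_000, "sauna": 15_000, "pool": 80_000}
MEMBERS = 500
POOL_USERS = 100
PRIVATE_MACHINE = 2_000   # rough yearly cost of buying one machine for your basement

flat_fee = sum(ANNUAL_COSTS.values()) / MEMBERS        # what each member pays for access to everything
pool_share = ANNUAL_COSTS["pool"] / MEMBERS            # the slice of that fee that funds the pool
pool_if_swimmers_paid = ANNUAL_COSTS["pool"] / POOL_USERS

print(f"Flat club fee per member:          ${flat_fee:,.0f}/year")              # $250
print(f"Owning a single machine privately: ${PRIVATE_MACHINE:,}/year")          # $2,000
print(f"Pool share inside the flat fee:    ${pool_share:,.0f}/year")            # $160
print(f"Pool cost if only swimmers paid:   ${pool_if_swimmers_paid:,.0f}/year") # $800
```

Every member pays $160 a year toward a pool that would cost its actual users $800 a head on their own – which is exactly the cross-subsidization (and the rationale for towel and locker surcharges) described above.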
One can see a similar phenomenon in the case of condominiums. Each building offers its members a mix of “private” amenities (living unit, parking space) and “public” ones (elevators, security, heating). The latter are paid for through flat monthly condo fees, and are essentially available to everyone in the building “for free.” Again, each condominium offers a different bundle of public (or club) benefits, with options such as a swimming pool, garbage removal, even concierge service. In some cases, this is because the goods are relatively nonrival, and so can easily be enjoyed by all. In other cases, it’s because the goods are relatively nonexcludable, and so everyone must be forced to pay in order to avoid a collective action problem. (This is the case with the security guard at the entrance, whose presence automatically confers a benefit upon everyone in the building.)
—
Hopefully this all seems quite plausible as an account of how health clubs and condominiums are typically organized. Now suppose someone comes along and says, “How high should condo fees be?” or “How much should a gym membership cost?” The answer, of course, is that it depends. The only limit in principle is the amount that people are willing to pay. How much they are willing to pay will be determined entirely by how much they want to consume the type of goods that are best purchased collectively. If the members of a health club want a new sauna, or the residents of a condominium want expanded parking facilities, they should expect an increase in fees in order to finance these purchases.
Does it make sense, when shopping for a condominium, to find the building with the lowest possible fees? Again, it depends. Some people are not particularly interested in having a swimming pool in the building or timely repairs to the elevators, so they might be perfectly happy living in a building with rock-bottom fees. Other people, who happen to have more of a taste for the sort of goods that are best purchased collectively, will want to live in a building with higher fees and more amenities. What matters, in other words, is not the absolute level of fees, but rather the value that one gets for them.
This may seem crushingly obvious, and it usually is in the case of condominium fees. Unfortunately, when it comes to the subject of taxes, people tend to get all confused. In the same way that members of a club pay a fee for admission and then enjoy a certain range of goods “for free” once inside, as citizens of a country we all pay fees and then enjoy certain other goods “for free.” In the latter case, we call the fee a “tax,” but it has essentially the same structure as a club fee.
The fact that some goods are provided for by the state, financed through taxes, is a reflection of the optimal sharing group for those goods. In some cases, the optimal sharing group is everyone (consider national defense, the sewage and water system, highways). In this case, the good is provided by the “club of everyone,” which is to say, the state. The police are no different in principle from the security guard at the front desk of a condo. Furthermore, payment for these services—in the form of taxes—is mandatory for the same reason that condo fees are mandatory: The benefits are relatively nonexcludable (which is to say, unreasonably costly to exclude people from).
One of the goods that can often be purchased most efficiently through taxes is insurance. Since the benefits of an insurance scheme come from the pooling of risks, the size of the gain is often proportional to the size of the pool. As a result, it is in our interest in many cases to purchase insurance using the mechanism of universal taxation and public provision. This is basically how the health care system in Canada works. I pay taxes, and what I get in return is a basic health-insurance policy, provided by the state. So if Canadians want to consume more health care or a new subway or better roads, what are their options? The situation is the same as with the condo residents who want a new sauna: If people want to buy more of this stuff (and are willing to buy less of something else), then they should vote to raise taxes and buy more of it. It doesn’t necessarily impose a drag on the economy to raise taxes in this way, any more than it imposes a drag on the economy when the residents of a condo association vote to increase their condo fees.
One can see, then, the absurdity of the view that taxes are intrinsically bad, or that lower taxes are necessarily preferable to higher taxes. The absolute level of taxation is unimportant; what matters is how much individuals want to purchase through the public sector (the “club of everyone”), and how much value the government is able to deliver. This is why low-tax jurisdictions are not necessarily more “competitive” than high-tax jurisdictions (any more than low-fee condominiums are necessarily more attractive places to live than high-fee condominiums). Furthermore, the government does not “consume” the money collected in taxes—this is a fundamental fallacy; it is merely the vehicle through which we organize our spending. In this respect, taxation is basically a form of collective shopping. Needless to say, how much shopping we do collectively, and in what size of groups, is a matter of fundamental indifference from the standpoint of economic prosperity.
There is enormous confusion on this point. Every year, in dozens of countries around the world, right-wing anti-tax groups calculate and then solemnly declare a “Tax Freedom Day,” in order to let people know what day they “stop working for the government and start working for themselves.” But it would make just as much sense to declare an annual “mortgage freedom day,” in order to let homeowners know what day they “stop working for the bank and start working for themselves.” It takes the average homeowner at least a couple months of work each year to pay off his or her annual mortgage bill. But who cares? Homeowners are not really “working for the bank;” they’re merely financing their own consumption. After all, they’re the ones living in the house, not the bank manager. It’s the same thing with taxes. You’re not really “working for the government” when your kids are going to public school, you’re commuting on public roads, and you expect the government to pay your hospital bills when you’re old and infirm. You’re simply financing your own consumption.
One can find a similar fallacy at work in the widespread belief that tax cuts “stimulate” the economy. This is the same as believing that a legislated reduction in condo fees would stimulate the economy. Naturally, if condo fees go down across the board, it will result in people having more money in their pockets to spend. But it will also result in condo boards having less money to spend. The result will simply be a shift away from the sort of goods that are provided on a club basis toward the sort that are provided on a private basis. Tax cuts have the same effect. They just mean less money spent on schools and health care, more spent on cars and homes. Absent some effect on savings, the increased demand that occurs in one sector is necessarily offset by decreased demand in some other. (An exception to the rule occurs when the government has no money, and so has to borrow to make up the shortfall in tax revenue. In this case, the tax cut is not really a tax cut—it’s more like a mandatory personal consumption loan taken out by the state for each citizen. Either way, the same effect could be achieved by having the state spend the borrowed money on health care, pollution abatement, highway construction, or any other form of publicly organized consumption.)
There are, of course, certain costs associated with the use of the taxation system as a way of purchasing goods and services. For various reasons, taxes can’t be imposed as “flat fees” the way that club fees are usually imposed (Margaret Thatcher tried, in the U.K., but didn’t get very far with it). This means that they must be collected in other ways, such as income and consumption taxes, which distort economic incentives and generate all sorts of counterproductive tax-avoidance behavior (such as individuals hiring crafty accountants to discover and exploit tax loopholes). Yet this is not a phenomenon that is unique to taxation. Private markets also have transaction costs, such as exposure to the risk of fraud or the need to hire lawyers to look over contracts. That doesn’t mean that no one should hire a lawyer, and it doesn’t mean that no one should pay taxes. The underlying problem is that people behave non-cooperatively. As a result, we need to worry about being defrauded or taken advantage of in commercial transactions. We also need to worry about being assaulted and about having our property stolen. In order to decide what’s best, we have to weigh the benefits of the contemplated transaction against the costs of organizing it in a particular way. How much does it cost to hire a lawyer versus how much does it cost to get defrauded? How much does it cost to pay taxes (in terms of deadweight losses) versus how much does it cost to live with market failure?
The question, in other words, is simply whether the benefits that come from the formation of an optimal sharing group outweigh the costs that are associated with the particular sharing arrangements adopted. To say, as Milton Friedman once did, that any tax cut is a good tax cut is simply to articulate an arbitrary preference against a particular type of purchasing arrangement. It’s like saying that the best condo fee is the lowest condo fee. A lot of first-time buyers do have this attitude, but they usually come to regret it.
—
One of the things that tend to muddy the waters when it comes to understanding club goods is that we often give things different names when they are purchased collectively and when they are purchased individually. To take a somewhat exotic example, consider the case of the life annuity. This is basically an insurance product that people buy in order to protect themselves against the risk of outliving their savings. Although the precise details are usually complex, the basic idea is that one pays a flat sum up front in return for a fixed periodic payment starting at the age of retirement and continuing until death (for example, one might pay $1,000 now in return for a guaranteed payment of $10 a month from retirement until death).
Why might someone choose to buy an annuity? Statistics can tell us, on average, how long each of us can expect to live, and we can infer from this how much we will need to save for our retirements. But unfortunately, there is a fair degree of variation around this mean. As a result, we all face the risk of either saving too much or, more important, saving too little. For any one individual, it may not make sense to save enough to maintain a comfortable lifestyle until the age of 90, but the fact is, lots of people live that long. One solution, therefore, is to buy an annuity. If you die young, you don’t get much out of it, but if you live for a long time, it’s guaranteed to keep paying out. When an insurance company sells annuities to hundreds or thousands of customers, the ones who die young are likely to balance out those who live longer, so the total of payments made is likely to be quite close to what one could anticipate simply by looking at mortality tables and average life expectancy.
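The pooling logic is easy to check with a small simulation. The sketch below uses a made-up distribution of years lived past retirement (a mean of about 15 years with a wide spread – illustrative parameters, not actuarial data): for a single buyer the total payout is wildly unpredictable, but the average payout per member of a large pool stays close to the mortality-table mean, which is what lets the insurer price the product:

```python
import random

def years_in_retirement():
    # Stand-in distribution: roughly 15 years past retirement on average, wide spread, never negative.
    return max(0.0, random.gauss(15, 8))

def per_member_payout_years(pool_size, trials=200):
    """For each simulated pool, the average number of payout-years per member."""
    return [sum(years_in_retirement() for _ in range(pool_size)) / pool_size
            for _ in range(trials)]

for pool_size in (1, 100, 10_000):
    samples = per_member_payout_years(pool_size)
    print(f"pool of {pool_size:>6}: per-member payouts range over ~{max(samples) - min(samples):.1f} years")
```

The bigger the pool, the narrower that range, and the closer the insurer’s total liability comes to a sure thing.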
There is a problem, however, in the market for life annuities. It’s known as “adverse selection.” Basically, a life annuity is a good deal if you expect to live for a very long time, but a bad deal if you expect to die young. As a result, the only people with an incentive to buy them will be those who, for some reason or another, expect to live for longer than the average. In some cases, the reasons people have will be obvious. Women, for instance, typically have a life expectancy five years longer than men, so the value to them of a life annuity is significantly greater. As a result, any insurer that sets a “unisex” price for life annuities would tend to attract only female customers. This would in turn result in greater-than-expected liabilities.
Women, therefore, have to pay more for life annuities than men do (typically, the same up-front payment purchases a lower periodic payment). Yet there are many other factors affecting life expectancy that are not so easily detectable by insurance companies (for example, whether or not you smoked when you were young). As a result, insurers tend to attract precisely the sort of customers that they least want. In effect, the mere fact that a person is interested in buying a life annuity is cause for suspicion, since it suggests that he expects to live for a long time. This “adverse selection” effect needs to be taken into consideration when setting the price of the annuities. But because of this, many of the “good risks”—people who are likely to die young—will be priced out of the market, since they cannot credibly identify themselves to the insurer as good risks. Annuities will simply be too expensive for it to be worth their while to purchase them.
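Here is a tiny sketch of that unraveling, using a made-up list of ten would-be buyers and the number of years each expects to collect payouts (invented numbers, not Heath’s): pricing off the population average attracts only the longer-lived half, re-pricing to break even on them drives out the next tier, and so on until almost no one is left:

```python
PAYOUT_PER_YEAR = 1_000
# Hypothetical expected years of payouts for ten would-be buyers:
EXPECTED_YEARS = [5, 8, 10, 12, 14, 16, 18, 20, 22, 25]

price = PAYOUT_PER_YEAR * sum(EXPECTED_YEARS) / len(EXPECTED_YEARS)   # priced at the population average

for round_number in range(4):
    buyers = [y for y in EXPECTED_YEARS if PAYOUT_PER_YEAR * y >= price]   # who still comes out ahead
    avg_payout = PAYOUT_PER_YEAR * sum(buyers) / len(buyers)
    print(f"round {round_number}: price ${price:,.0f}, {len(buyers)} of 10 buy, "
          f"average payout ${avg_payout:,.0f}")
    price = avg_payout   # re-price to break even on the people who actually bought
```

Selling to a group that is a representative sample of the population, as described next, is what short-circuits this spiral.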
This problem is somewhat attenuated, however, if people go shopping for annuities as a group. For example, if an employer approaches an insurance company and says, “I’d like to buy life annuities for all my employees,” this is inherently less suspicious. After all, since very few companies make employment decisions based upon anticipated longevity, a company’s employees are likely to be a fairly representative sample of the population (for each one who lives a very long time, there is likely to be one who dies young). The insurer can therefore sell life annuities to the group at a better rate than it can to individuals. A life annuity is thus the type of good that has an optimal sharing group larger than one: It is best purchased not as a private good, but as a club good.
Unfortunately, when we purchase life annuities as a group through an employer, they are no longer called annuities. Instead, they are called “defined benefit pension schemes.” This change in nomenclature creates all sorts of confusion; there is an inclination to regard pension schemes as savings arrangements, rather than as insurance products. Nevertheless, an annuity is essentially what is being purchased: In return for an up-front payment (the “pension contribution”), the employer guarantees a fixed periodic payment from the time of retirement until death.
Of course, if it pays to shop for annuities as a group, then the bigger the group, the better. The benefit, in this case, comes from belonging to a group that can credibly claim to be a representative sample of the population. And of course, the best sample of the population is the population itself. As a result, the optimal sharing group for life annuities is the entire country. No surprise then that the state also “purchases” life annuities for its citizens, in the form of public pensions (the Canada Pension Plan, Social Security in the United States).
The failure to appreciate that what is being provided here is a life annuity creates considerable confusion. In the debates over “privatization” of Social Security in the United States, for instance, people routinely compared the rate of return of money that was saved and invested in the stock market with the rate of return of money paid into Social Security. Yet analyzing the latter in terms of rate of return involved a category error. It amounted to comparing an investment to an insurance policy. In this respect, it is like calculating the rate of return on your car insurance: If you had no accidents, your rate of return looks terrible; if you crashed your new Mercedes in the first month of your policy, the rate of return looks great. The same is true with Social Security: If you live until you’re 120, the rate of return is going to be spectacular. But that misses the point. People buy insurance not because they hope to get a payout, but because of the peace of mind that comes from being protected against particular risks. With state pension plans and other annuity products, knowing that you can’t outlive your savings serves as the primary source of benefit (among other things, it relieves people of the need to have so many children).
Thus what proponents of “privatization” of Social Security in the United States were recommending was not really privatization of the system. Privatization would involve individuals purchasing life annuities privately, rather than collectively—which would be a transparently bad deal. What they were actually recommending was that individuals stop purchasing insurance entirely, and instead simply save for their own retirements. (Many proponents of “defined contribution” over “defined benefit” pension plans are making the same recommendation.) In other words, their goal was simply to undo a mutually beneficial risk-pooling arrangement, for no particularly good reason other than an ideological hostility to government. No wonder the idea didn’t go anywhere.
—
There is, of course, one big difference between paying fees to a club and paying taxes to the government. It is an almost inevitable consequence of shared consumption that it reduces consumer choice. No condominium is likely to provide exactly the mix of “public” amenities that you most value. No health club is likely to have exactly the equipment that you would have bought for yourself. And yet with clubs the consumer still has some choice. Not only can you shop around to find the one that is the best fit, you also have the option of leaving, if, say, decisions get made that are too far contrary to your desires. People may hate condo and gym membership fees, but they are not forced to pay them.
In the case of the state, however, this exit option is typically absent (even if you do leave, you may not find any other state willing to take you). Thus the mix of goods you get by virtue of membership is likely to be crudely mismatched to your needs (“Public schools—what do I need those for? I have no kids …”). The provision of public goods by the state is not just a case of collective shopping; it is also a case of compulsory collective shopping. So while the economic character of the transaction may be the same in the two cases, in the case of the state there is an interference with individual liberty that is felt by many to be a rather keen insult.
What is to be said here? Of course, the observation is correct. You have to pay your taxes, and you have to pay for a wide variety of public goods even if you don’t use them. Because of this, state provision should be considered only in cases of egregious market failure, when a one-size-fits-all provision is better than the alternative. This is why the state is often more successful in dealing with relatively homogenous goods, such as insurance, where differences in consumer preference are not all that significant. (Compare this to food or clothing, where the advantages of being able to shop around are obviously much greater.) A more general way of putting it would be to say that the optimal sharing group will tend to be larger for goods where consumer preference is more homogeneous, because the losses caused by “preference mismatch” will tend to be smaller.
All that having been said, it is important to note that the amount of uniformity in the package of benefits offered by government is often overstated. This is particularly true in federal states, where the major functions of the welfare state are often discharged at a more local level and where there are very few internal barriers to mobility. Like different health clubs offering a different mix of fitness equipment, each U.S. state or Canadian province offers a different mix of club goods. If you want subsidized daycare, move to Quebec, not Alberta. If you want state-funded universities, move to California, not South Carolina.
Even in a nonfederal system, every country delivers a large number of public services at the level of the municipality, and municipalities compete with one another for both people and businesses by trying to offer an attractive mix of taxation and public services. If you’re willing to make certain idealizing assumptions about geography and mobility, the economist Charles Tiebout has shown that municipal governments are able to achieve the same level of efficiency as private markets. Of course, this all needs to be taken with a grain of salt. What it shows, however, is that people who argue for the superiority of private over public provision on the grounds of choice often overstate the amount of choice that actually exists in private markets, while understating the amount of choice that exists in the public sector.
It is worth observing as well that in most cases, when the welfare state provides a certain good, it does so at a very low level, leaving consumers free to “top up” their entitlements by purchasing more of the good on private markets. This is true in all the major categories of welfare-state expenditure: pensions, education, security, disability insurance, health insurance, communications (such as postal services), and sometimes even transportation networks. Thus it is the poor who suffer most acutely from restrictions on consumer choice—but they are not likely to complain, since the one-size-fits-all package that they receive is of much greater value than anything they would have been able to afford on their own.
Finally, it is important to distinguish those who want to exercise an exit option from those who simply want a free ride. In the case of a gym membership, you are always free to quit if you don’t think you’re getting your money’s worth. But the gym also has the right to kick you out—to exclude you from its benefits—if it doesn’t like the way you are behaving. States, on the other hand, cannot kick people out (barring certain exceptional cases). The state health care provider cannot discontinue your policy after you develop diabetes, the way a private insurer can. There is an obvious quid pro quo here: a limited right of exit is coupled with a limited right of exclusion. The latter provides a benefit—in the form of peace of mind—that is often taken for granted.
In the face of all these points favoring public insurance, conservatives will sometimes fall back on the argument that giving everyone insurance coverage, whether it be insurance against poor health or old age or whatever else, is irresponsible because it encourages people to engage in riskier or more careless behavior – the so-called problem of moral hazard. As Taylor explains it:
Moral hazard […] means that having insurance leads people to take fewer steps to avoid or prevent the bad event from happening in the first place. The insured have a little bit less incentive to change the habits or improve the conditions that make them more vulnerable to a negative event. For example, a company with good fire insurance might worry less about preventing fires at an older factory. Someone with insurance against theft might be less likely to buy a security system. Someone with health insurance is more likely to head for the doctor for every sniffle and cough than someone without health insurance. As a result of this disincentive, the existence of insurance makes the payouts from insurance higher than they would otherwise be.
This is a legitimate concern, and one that any public insurance plan has to account for. As Heath points out, though, it’s also one that any private insurance plan has to account for, because it’s an issue inherent to insurance in general, not just to government-sponsored insurance; so it doesn’t really make sense to use it as an argument against public insurance unless you’re also arguing against the whole concept of insurance itself, in every form:
[There is a] fallacy underlying the “personal responsibility” crusade of the right. Conservatives blame government handouts for undermining the spirit of self-reliance. This is just a moralizing way of describing a generic problem with insurance systems, where indemnity (“handouts”) tends to generate moral hazard (“irresponsibility”). What conservatives fail to realize is that the moral hazard effect in question is a generic feature of any type of insurance system—it has nothing to do with the question of public or private ownership. There is, however, a prior selection effect that gets ignored. Because private insurance markets are so prone to failure in the face of information asymmetries, the type of insurance that is usually prone to moral hazard or adverse selection tends to be feasible only when provided by “the insurer of last resort”: the state. So it doesn’t make much sense to blame government for the moral hazard. It’s usually because of the moral hazard problem that the government is running the program in the first place.
When criticizing various aspects of the “social safety net,” conservatives chronically make the error of drawing an invidious comparison between the moral hazard effects of government insurance and the moral hazard effects of no insurance at all. It’s no surprise that the latter wins. This is a clear instance of the “add up the costs, ignore the benefits” fallacy. Having no insurance means you don’t have to suffer the losses caused by moral hazard, but it also means that you have to suffer the losses of having no insurance. Naturally, self-insurance has lower costs; the problem is that it also has no benefits. Thus the only way to make it look good is to consider only the costs and to disregard the forgone benefits.
Unfortunately, the no-insurance option is routinely described in public debate as “privatization,” not as “abolition” of the insurance scheme. For it to be real privatization, the comparison would have to be between government provision and private provision of the same type of insurance. Privatizing Social Security, for instance, would mean sending individuals out to buy their own life annuities on the open market, not having them invest in mutual funds. When the comparison is constructed in this way, government provision often comes out looking quite good.
Consider the case of health care. American critics of “socialized” medicine often regard it as a synthetic a priori truth that public provision will generate overconsumption, followed by rationing of care. “If you think health care is expensive now,” they say, “wait until you see how much it costs when it’s free.” If the government started giving away free cheese, people would start eating too much cheese. So why would anyone want to give away free health care?
This line of reasoning, however, contains two obvious errors. First of all, socialized medicine on the single-payer model doesn’t mean socialized health care, it means socialized health insurance. In Canada, for example, health care provision is almost entirely private. (I can assure you, since my wife is a surgeon. Not only is she privately incorporated, I am happy to report that her corporation is quite profitable.) Obviously, the fact that health insurance is provided to all citizens “for free” does not generate overconsumption of insurance, since it’s provided in the form of a standard, universal benefit. Second, except in the most unusual of cases, Americans don’t pay for health care out of pocket. They also participate in various risk-pooling arrangements, ranging from private insurance and health management organizations to government-supplied Medicare and Medicaid. Thus the typical health care consumer faces exactly the same incentives at the front end in Canada and the United States. Health care is all “free” at the point of purchase.
Thus the conservative critique of socialized medicine is not actually a critique of public ownership and provision. It is a critique of health insurance in general, both public and private. One can see this in the remedy that conservatives offer for the problems of moral hazard in the health insurance system, namely, health savings accounts. These sorts of proposals are usually based on the assumption that the reason for government involvement in the health care sector is that not everyone can afford it. (This is already a mistake, since the rationale for public health insurance is not distributive justice, but rather market failure. It is only the non-actuarial structure of the premiums in social insurance schemes that is motivated by considerations of distributive justice.) Thus the proposed solution is to give each individual citizen a yearly grant, to be kept in a special savings account. The individual would save or spend this money on health care, as needed. Because individuals would start paying for health care out of pocket, the argument goes, they would lose whatever incentive they may have to overspend.
The problem with these proposals is that they are grotesquely inefficient. Health care spending in our society generally follows what’s known as the 80/20 rule: 20% of the population is responsible for roughly 80% of the health care spending. Thus the figure of $2 trillion in annual health care spending in the United States, or $6,700 per person, is slightly misleading. A better aggregate picture would be to imagine one person spending $26,800 per year, along with four others spending only $1,675. So what happens if the government gives everyone $6,700 in a special little account? How much should you save? Well, I guess that depends on whether or not you intend to develop diabetes … But of course, you don’t know if you are going to get diabetes, just as most other people don’t know what their future health care needs are going to be. Furthermore, since spending is going to be highly uneven across the population, giving everyone a grant of the same size guarantees that the state will give too much to most people and not enough to virtually everyone who is really going to need it.
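Heath’s arithmetic here is easy to verify; the sketch below just recomputes his five-person illustration from the $6,700 average and the 80/20 split:

```python
AVERAGE_SPEND = 6_700          # average annual health care spending per person
GROUP = 5                      # the illustrative group of five
total = AVERAGE_SPEND * GROUP  # $33,500 spent by the group as a whole

heavy_user = 0.8 * total                 # 20% of people account for 80% of spending
light_user = 0.2 * total / (GROUP - 1)   # the other four split the remaining 20%

print(f"Heavy user's actual spending: ${heavy_user:,.0f}")                   # $26,800
print(f"Each light user's spending:   ${light_user:,.0f}")                   # $1,675
print(f"Flat grant per person:        ${AVERAGE_SPEND:,}")
print(f"Shortfall for the heavy user: ${heavy_user - AVERAGE_SPEND:,.0f}")   # $20,100
print(f"Surplus for each light user:  ${AVERAGE_SPEND - light_user:,.0f}")   # $5,025
```

A flat grant leaves four people over-funded by about $5,000 each and the one person who actually gets sick short by more than $20,000 – which is why the pooling argument is so overwhelming.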
But across large population groups, health care spending (not to mention rates of disease) is highly predictable. This means that there is an overwhelming efficiency argument to be made for the pooling of health care savings. This also means that, when you examine any of the more mainstream proposals for health savings accounts, their bark tends to be much worse than their bite. Recognizing that the only rational way to organize the bulk of health care spending is through insurance, what conservatives typically propose is a set of relatively small grants at the front end (perhaps $2,000) coupled with a “catastrophic coverage” insurance mechanism at the back end. When all is said and done, the savings account winds up being just a heavy-handed way of discouraging parents from bringing little Johnny to the emergency room every time he gets the sniffles. It’s the back-end insurance mechanism that does all the heavy lifting, covering the cost of all major procedures and accounting for the bulk of health care spending. The same policy objectives could be achieved within a socialized medicine system simply by imposing a small user fee on hospital visits (as they do in Sweden), or in a regular private insurance system by having a large deductible.
So where does all this leave the issue of personal responsibility? Conservatives are not wrong to think that there is a fundamental tension between the old-fashioned ideals of individual liberty, responsibility, and self-reliance and the practices of what European intellectuals like to call the “risk society.” Yet they blame the decline of personal responsibility on government, whereas what they should be blaming is the rise of insurance. As François Ewald (perhaps the most original French thinker on the subject of the welfare state) argues, the decisive rupture with the old-fashioned ideal of personal responsibility occurred in the nineteenth century, with the development of actuarial science and the rise of the private insurance industry. Government in the twentieth century, in developing the social safety net, was simply borrowing techniques that had been developed in the private sector. “Personal responsibility” was dead long before they got to it.
Whether or not one regards this as a good thing overall, it’s important to keep in mind that the development of comprehensive insurance systems coincides with the emergence of capitalism as a relatively stable economic system. Insurance does a lot more than simply keep the taxis on the street in Hong Kong. Every single aspect of our financial and commercial system, every transaction that we engage in, is underwritten at some level by insurance. It truly is the all-purpose economic lubricant. Unfortunately, it also requires us to be somewhat less fastidious when it comes to holding people responsible for their actions. But what can you do? Welcome to the modern world.
For the sake of discussion, we can ask what might happen if we did simply abolish all these forms of social insurance and just left it to individuals to bear full financial responsibility for their own healthcare and their own retirement and so on, without engaging in any kind of collective risk-pooling. But the clear answer, in short, is that it wouldn’t go well. As Pinker explains, human nature just isn’t built to optimize perfectly for our long-term interests as individuals in every instance – even when we have resources to spare (which not all of us do), we’re often bad at saving them – so in order to keep ourselves on track, it can make sense for us to put matters in the hands of some separate entity (like government) which can handle a portion of the long-term planning and saving on our behalf:
An important challenge to conservative political theory has come from behavioral economists such as Richard Thaler and George Akerlof, who were influenced by the evolutionary cognitive psychology of Herbert Simon, Amos Tversky, Daniel Kahneman, Gerd Gigerenzer, and Paul Slovic. These psychologists have argued that human thinking and decision making are biological adaptations rather than engines of pure rationality. These mental systems work with limited amounts of information, have to reach decisions in a finite amount of time, and ultimately serve evolutionary goals such as status and security. Conservatives have always invoked limitations on human reason to rein in the pretense that we can understand social behavior well enough to redesign society. But those limitations also undermine the assumption of rational self-interest that underlies classical economics and secular conservatism. Ever since Adam Smith, classical economists have argued that in the absence of outside interference, individuals making decisions in their own interests will do what is best for themselves and for society. But if people do not always calculate what is best for themselves, they might be better off with the taxes and regulations that classical economists find so perverse.
For example, rational agents informed by interest rates and their life expectancies should save the optimal proportion of their wages for comfort in their old age. Social security and mandatory savings plans should be unnecessary—indeed, harmful—because they take away choice and hence the opportunity to find the best balance between consuming now and saving for the future. But economists repeatedly find that people spend their money like drunken sailors. They act as if they think they will die in a few years, or as if the future is completely unpredictable, which may be closer to the reality of our evolutionary ancestors than it is to life today. If so, then allowing people to manage their own savings (for example, letting them keep their entire paycheck and investing it as they please) may work against their interests. Like Odysseus approaching the island of the Sirens, people might rationally agree to let their employer or the government tie them to the mast of forced savings.
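A simple compound-interest calculation illustrates the stakes. The wage, the savings rates, and the 3 percent real return below are all invented for illustration; the point is only that the steady contributions a mandatory plan enforces compound into a very different retirement than the sporadic saving people tend to do when left to themselves:

```python
ANNUAL_WAGE = 50_000
REAL_RETURN = 0.03     # assumed real annual return on savings
WORKING_YEARS = 40

def nest_egg(savings_rate):
    """Value at retirement of saving a fixed share of wages every year."""
    balance = 0.0
    for _ in range(WORKING_YEARS):
        balance = balance * (1 + REAL_RETURN) + ANNUAL_WAGE * savings_rate
    return balance

print(f"Saving 10% of pay every year: ${nest_egg(0.10):,.0f}")   # roughly $377,000
print(f"Saving  2% of pay every year: ${nest_egg(0.02):,.0f}")   # roughly $75,000
```

Letting people keep and invest their entire paycheck only helps if they actually make the deposit every year – which is precisely the behavior the evidence says many of us fail at.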
The economist Robert Frank has appealed to the evolutionary psychology of status to point out other shortcomings of the rational-actor theory and, by extension, laissez-faire economics. Rational actors should eschew not only forced retirement savings but other policies that ostensibly protect them, such as mandatory health benefits, workplace safety regulations, unemployment insurance, and union dues. All of these cost money that would otherwise go into their paychecks, and workers could decide for themselves whether to take a pay cut to work for a company with the most paternalistic policies or go for the biggest salary and take higher risks on the job. Companies, in their competition for the best employees, should find the balance demanded by the employees they want.
The rub, Frank points out, is that people are endowed with a craving for status. Their first impulse is to spend money in ways that put themselves ahead of the Joneses (houses, cars, clothing, prestigious educations), rather than in ways that only they know about (health care, job safety, retirement savings). Unfortunately, status is a zero-sum game, so when everyone has more money to spend on cars and houses, the houses and cars get bigger but people are no happier than they were before. Like hockey players who agree to wear helmets only if a rule forces their opponents to wear them too, people might agree to regulations that force everyone to pay for hidden benefits like health care that make them happier in the long run, even if the regulations come at the expense of disposable income. For the same reason, Frank argues, we would be better off if we implemented a steeply graduated tax on consumption, replacing the current graduated tax on income. A consumption tax would damp down the futile arms race for ever more lavish cars, houses, and watches and compensate people with resources that provably increase happiness, such as leisure time, safer streets, and more pleasant commuting and working conditions.
Whether or not such a consumption tax would be exactly the right mechanism for this kind of reallocation of resources is, of course, debatable (see our whole long discussion earlier on the pros and cons of different kinds of taxes). But however the necessary funds might be collected, what seems clear is that it’s possible for us to improve our society as a whole by using our power of collective action to overcome some of our less-than-perfectly-optimal tendencies as individuals. We may not be able to change our own nature, but with a bit of ingenuity, we can create institutions that help offset some of the undesirable side effects of our occasional irrationality. Some traditional conservative economists might want to deny that the propensity for such irrationality is even a part of our nature at all – and conclude that there’s therefore nothing to fix – but as Alexander points out, the facts seem to indicate otherwise; so the best thing we can do for ourselves, if we want to maximize our own interests, is to set up our society in a way that’s capable of recognizing and correcting for those tendencies:
[Q]: What do you mean by “irrational choices”?
A company (Thaler, 2007) gives its employees the opportunity to sign up for a pension plan. They contribute a small amount of money each month, and the company will also contribute some money, and overall it ends up as a really good deal for the employees and gives them an excellent retirement fund. Only a small minority of the employees sign up.
The libertarian would answer that this is fine. Although some outsider might condescendingly declare it “a really good deal”, the employees are the most likely to understand their own unique financial situation. They may have a better pension plan somewhere else, or mistrust the company’s promises, or expect not to need much money in their old age. For some outsider to declare that they are wrong to avoid the pension plan, or, worse, to try to force them into it for their own good, would be the worst sort of arrogant paternalism, and an attack on the employees’ dignity as rational beings.
Then the company switches tactics. It automatically signs the employees up for the pension plan, but offers them the option to opt out. This time, only a small minority of the employees opt out.
That makes it very hard to spin the first condition as the employees rationally preferring not to participate in the pension plan, since the second condition reveals the opposite preference. It looks more like they just didn’t have the mental energy to think about it or go through the trouble of signing up. And in the latter condition, they didn’t have the mental energy to think about it or go through the trouble of opting out.
If the employees were rationally deciding whether or not to sign up, then some outsider regulating their decision would be a disaster. But if the employees are making demonstrably irrational choices because of a lack of mental energy, and if people do so consistently and predictably, then having someone else who has considered the issue in more depth regulate their choices could lead to a better outcome.
[Q]: So what’s going on here?
Old-school economics assumed choice to be “revealed preference”: an individual’s choices will invariably correspond to their preferences, and imposing any other set of choices on them will result in fewer preferences being satisfied.
In some cases, economists have gone to absurd lengths to defend this model. For example, Bryan Caplan says that when drug addicts say they wish that they could quit drugs, they must be lying, since they haven’t done so. Seemingly unsuccessful attempts to quit must be elaborate theater, done to convince other people to continue supporting them, while they secretly enjoy their drugs as much as ever.
But the past fifty years of cognitive science have thoroughly demolished this “revealed preference” assumption, showing that people’s choices result from a complex mix of external compulsions, internal motivations, natural biases, and impulsive behaviors. These decisions usually approximate fulfilling preferences, but sometimes they fail in predictable and consistent ways.
The gist of this research, as it relates to the current topic, is that people don’t always make the best choice according to their preferences. Sometimes they consistently make the easiest or the most superficially attractive choice instead. It may be best not to think of these as “choices” at all, but as reflexive reactions to certain circumstances, which often but not always conform to rationality.
Such possibilities cast doubt on the principle that every trade that can be voluntarily made should be voluntarily made.
If people’s decisions are not randomly irrational, but systematically irrational in predictable ways, that raises the possibility that people who are aware of these irrationalities may be able to do better than the average person in particular fields where the irrationalities are more common – which in turn suggests that paternalism can sometimes be justified.
[Q]: Why should the government protect people from their own irrational choices?
By definition of “irrational”, people will be happier and have more of their preferences satisfied if they do not make irrational choices. By the principles of the free market, as people make more rational decisions the economy will also improve.
If you mean this question in a moral sense, more like “How dare the government presume to protect me from my own irrational choices!”, see the [earlier discussion on the morality of coercion and freedom].
[Q]: What is the significance of predictably irrational behavior?
It justifies government-mandated pensions, some consumer safety and labor regulations, advertising regulations, concern about addictive drugs, and public health promotion, among other things.
XIV.
As Alexander rightly notes, these problems of asymmetric information and misaligned market behavior aren’t just limited to areas like insurance and long-term savings. Even ordinary off-the-shelf consumer products are often accompanied by similar issues, and this can make it more difficult for people to make perfect market decisions on their own. In order to correct for consumers’ inability to ever be 100% fully informed about 100% of the products they buy, then, government can play a valuable role in performing that function on their behalf, using its regulatory power to protect them against being misled and helping them avoid buying products they wouldn’t actually want to buy if they knew better. Here’s Alexander again:
[Q]: What do you mean by “lack of information”?
Many economic theories start with the assumption that everyone has perfect information about everything. For example, if a company’s products are unsafe, these economic theories assume consumers know the product is unsafe, and so will buy less of it.
No economist literally believes consumers have perfect information, but there are still strong arguments for keeping the “perfect information” assumption. These revolve around the idea that consumers will be motivated to pursue information about things that are important to them. For example, if they care about product safety, they will fund investigations into product safety, or only buy products that have been certified safe by some credible third party. The only case in which a consumer would buy something without information on it is if the consumer had no interest in the information, or wasn’t willing to pay as much for the information as it would cost, in which case the consumer doesn’t care much about the information anyway, and it is a success rather than a failure of the market that it has not given it to her.
In nonlibertarian thought, people care so much about things like product safety and efficacy, or the ethics of how a product is produced, that the government needs to ensure them. In libertarian thought, if people really care about product safety, efficacy and ethics, the market will ensure them itself, and if they genuinely don’t care, that’s okay too.
[Q]: And what’s wrong with the libertarian position here?
[The previous section described] how we can sometimes predict when people will make irrational choices. One of the most consistent irrational choices people make is buying products without spending as much effort to gather information as the amount they care about these things would suggest. So in fact, the nonlibertarians are right: if there were no government regulation, people who care a lot about things like safety and efficacy would consistently be stuck with unsafe and ineffective products, and the market would not correct these failures.
[Q]: Is this really true? Surely people would investigate the safety, ethics, and efficacy of the products they buy.
Below follows a list of statements about products. Some are real, others are made up. Can you identify which are which?
1. Some processed food items, including most Kraft cheese products, contain methylarachinate, an additive which causes a dangerous anaphylactic reaction in 1/31000 people who consume it. They have been banned in Canada, but continue to be used in the United States after intense lobbying from food industry interests.
2. Commonly used US-manufactured wood products, including almost all plywood, contain formaldehyde, a compound known to cause cancer. This has been known in scientific circles for years, but was only officially reported a few months ago because of intense chemical industry lobbying to keep it secret. Formaldehyde-containing wood products are illegal in the EU and most other developed nations.
3. Total S.A., an oil company that owns fill-up stations around the world, sometimes uses slave labor in repressive third-world countries to build its pipelines and oil wells. Laborers are coerced to work for the company by juntas funded by the corporation, and are shot or tortured if they refuse. The company also helps pay for the military muscle needed to keep the juntas in power.
4. Microsoft has cooperated with the Chinese government by turning over records from the Chinese equivalents of its search engine “Bing” and its hotmail email service, despite knowing these records would be used to arrest dissidents. At least three dissidents were arrested based on the information and are currently believed to be in jail or “re-education” centers.
5. Wellpoint, the second largest US health care company, has a long record of refusing to provide expensive health care treatments promised in some of its plans by arguing that their customers have violated the “small print” of the terms of agreement; in fact they make the terms so technical that almost all customers violate them unknowingly, then only cite the ones who need expensive treatment. Although it has been sued for these practices at least twice, both times it has used its legal muscle to tie the cases up in court long enough that the patients settled for an undisclosed amount believed to be a fraction of the original benefits promised.
6. Ultrasonic mosquito repellents like those made by GSI, which claim to mimic frequencies produced by the mosquito’s natural predator, the bat, do not actually repel mosquitoes. Studies have shown that exactly as many mosquitoes inhabit the vicinity of such a mosquito repellent as anywhere else.
7. Listerine (and related mouth washes) probably do not eliminate bad breath. Although it may be effective at first, in the long term it generally increases bad breath by drying out the mouth and inhibiting the salivary glands. This may also increase the population of dental bacteria. Most top dentists recommend avoiding mouth wash or using it very sparingly.
8. The most popular laundry detergents, including most varieties of Tide and Method, have minimal to zero ability to remove stains from clothing. They mostly just make clothing smell better when removed from the laundry. Some of the more expensive alkylbenzenesulfonate detergents have genuine stain-removing action, but aside from the cost, these detergents have very strong smells and are unpopular.
[Q]: Okay, I admit I’m not sure of most of these. What’s your point?
This is a complicated FAQ about complicated philosophical issues. Most likely its readers are in the top few percentiles in terms of intelligence and education.
And we live in a world where there are many organizations, both private and governmental, that exist to evaluate products and disseminate information about their safety.
And all of the companies and products above are popular ones that most American consumers have encountered and had to make purchasing decisions about. I tried to choose safety issues that were extremely serious and carried significant risks of death, and ethical issues involving slavery and communism, which would be of particular importance to libertarians.
If the test was challenging, it means that the smartest and best-educated people in a world full of consumer safety and education organizations don’t bother to look up important life-or-death facts specifically tailored to be relevant to them about the most popular products and companies they use every day.
And if that’s the case, why would you believe that less well-educated people in a world with less consumer safety information, trying to draw finer distinctions between more obscure products, will reliably seek out the consumer information necessary to allow them to avoid unsafe, unethical, or ineffective products?
The above test is an attempt at experimental proof that people don’t seek out even the product information that is genuinely important to them, but instead take the easy choice of buying whatever’s convenient based on information they get from advertising campaigns and the like.
[Q]: Fine, fine, what are the answers to the test?
Four of them are true and four of them are false, but I’m not saying which are which, in the hopes that people will observe their own thought processes when deciding whether or not it’s worth looking up.
[Q]: Right, well of course people don’t look up product information now because the government regulates that for them. In a real libertarian society, they would be more proactive.
All of the four true items on the test above are true in spite of government regulation. Clearly, there are still significant issues even in a regulated environment.
If you honestly believe you have no incentive to look up product information because you trust the government to take care of that, then you’re about ten times more statist than I am, and I’m the guy writing the Non-Libertarian FAQ.
[Q]: What other unexpected consequences might occur without consumer regulation?
It could destroy small business.
In the absence of government regulation, you would have to trust corporate self-interest to regulate quality. And to some degree you can do that. Wal-Mart and Target are both big enough and important enough that if they sold tainted products, it would make it into the newspaper, there would be a big outcry, and they would be forced to stop. One could feel quite safe shopping at Wal-Mart.
But suppose on the way to Wal-Mart, you see a random mom-and-pop store that looks interesting. What do you know about its safety standards? Nothing. If they sold tainted or defective products, it would be unlikely to make the news; if it were a small enough store, it might not even make the Internet. Although you expect the CEO of Wal-Mart to be a reasonable man who understands his own self-interest and who would enforce strict safety standards, you have no idea whether the owner of the mom-and-pop store is stupid, lazy, or just assumes (with some justification) that no one will ever notice his misdeeds. So you avoid the unknown quantity and head to Wal-Mart, which you know is safe.
Repeated across a million people in a thousand cities, big businesses get bigger and small businesses become unsustainable.
[Q]: What is the significance of lack of information?
It justifies some consumer and safety regulations, and the taxes necessary to pay for them.
In the absence of government machinery capable of detecting and remedying misrepresentation and false dealing, free exchange would be an even more risky business than it is. The act of buying and selling is often worrisome in the absence of reliable means to counteract the asymmetry of knowledge between buyer and seller. The seller frequently knows something the buyer needs to know. That is one reason why the risk-averse fear commercial exchanges as possible scams, why they cling to suppliers they know personally rather than shopping around for bargains. Public officials can discourage this kind of clinging, promote market ordering, and discourage swindlers by insuring against any damage arising from the asymmetry of information between buyers and sellers. To help consumers make rational choices about where to obtain credit, for instance, the Consumer Credit Protection Act forces any organization that extends credit to disclose its finance charges and annual percentage rate. Just so, consumers benefit from competitive markets in restaurants because, as voters and taxpayers, they have created and funded sanitation boards that allow them to range adventurously beyond a restricted circle of personally known and trusted establishments. The enforcement of disclosure rules or antifraud statutes is no less a taxpayer-funded spur to market behavior than government inspection of food handlers.
The appropriate level of federal spending and government oversight will remain controversial. Nothing said above is intended as a defense of any particular program; some existing programs should undoubtedly be scaled down. What cannot be denied is that enforceable antifraud legislation is a common good, embodying biblically simple moral principles (keep your promises, tell the truth, cheating is wrong). Moreover, the benefits of antifraud law cannot be captured by a few but are diffused widely throughout society. It is a public service, collectively provided, and serving to reduce transaction costs and promote a free-wheeling atmosphere of buying and selling that would be very unlikely to arise if “caveat emptor!” were the sole rule.
Anti-statists will often argue that in a purely free market, all that’s really needed to ensure that products are safe and effective is to allow companies to compete with each other for customers; their drive to maximize profits will be all the incentive necessary for them to provide customers with the most desirable products possible. But while it’s certainly true that companies are fundamentally motivated to maximize profits by convincing as many customers as possible to buy their products, there’s no law of economics that says this can only be done by providing products that are safe and effective; in some cases, companies can increase their profits by doing exactly the opposite, cutting corners on product quality and simply deceiving their customers into thinking they’re getting better products than they actually are. And in fact, this is exactly what we see in underdeveloped countries that lack good government regulation; without a capable public overseer to serve as referee and ensure that everyone is doing business safely and honestly, the market is dominated by sellers who don’t do business safely and honestly – shady companies selling defective products, snake oil salesmen peddling phony cure-alls with dangerous ingredients, scammers duping people out of their money with misleading pitches, etc. And we can even see the same thing here in the US when particular industries are able to avoid regulations to keep them in line – as we saw, for instance, in the financial industry leading up to the crisis of 2008. It’s simply a natural outcome of the system working within the constraints (or lack thereof) that have been imposed on it, as James K. Galbraith writes:
Without reliable standards, without clear guidelines as to what is safe and reasonable and what is not, the overall efficiency of the market declines because search and transaction costs rise beyond all reasonable limits. The efficiency of the market becomes limited by the fear of fraud. Shopping, far from being the exercise of market freedom, becomes itself an endless absorber of time and attention, whether one is discussing clothes, electronics, cell phone contracts, a mortgage application, or airline seats from New York to Pittsburgh.
Meanwhile, the capacities to manipulate, change, innovate, differentiate, discriminate, and exploit become in their turn the defining characteristics of corporate success. And those who pursue such strategies the most aggressively are, naturally, the heroes of the corporate world. It is not that there is a thin line between meeting consumers’ needs and presenting them with complex choices intended to make them easier to fleece. It is that there is no line at all: one practice bleeds over into the other. Success in meeting needs and success in screwing the vulnerable cannot be readily distinguished. […] The interaction of complexity and deregulation transforms and actually criminalizes the market.
The fact that sellers naturally turn to this kind of chicanery in the absence of regulatory oversight is not, we should note, just a matter of them being greedy or evil or otherwise flawed as human beings. Many of them certainly may be greedy or flawed, of course, but as Richard Wolff writes:
When capitalists [engage in antisocial business practices], those are behaviors prompted in them by the realities of the system within which they work and for which they are rewarded and praised. Many capitalists do these things without being greedy or evil. When capitalists do display greed or other character flaws, those flaws are less causes than results of a system that requires certain actions by capitalists who want to survive and prosper.
The harsh reality is that in a market system, the constant need to compete for customers will always compel companies to do whatever they can – that is, whatever they’re allowed to do – to maximize their share of the market, lest they be forced out of business altogether. As Alexander puts it:
Companies in an economic environment of sufficiently intense competition are forced to abandon all values except optimizing-for-profit or else be outcompeted by companies that optimized for profit better and so can sell the same service [or what customers think is the same service] at a lower price.
This is why it’s crucial to have a government that can constrain the space of “whatever sellers are allowed to do” to not include unsafe or deceptive practices that take advantage of unknowing customers – because if such rules aren’t put in place, sellers can and will exploit those practices to their fullest possible extent.
Companies’ ability to engage in ethically dubious tactics like this is only exacerbated by the “limited liability” structure under which modern corporations are organized. Under this model, a company’s ownership rights may be dispersed among multiple stockholders (as opposed to just one owner) – which has the positive benefit of enabling people to establish businesses without the concern that they’ll personally lose everything if these ventures fail, but which also has the negative consequence of enabling corporations to pursue harmful business practices while at least partially shielding the owners from paying for those actions. As the term “limited liability” suggests, under the corporate model each individual stockholder is only partially responsible for whatever misconduct might occur within their businesses — so no one ever really pays the full price for it. (Ambrose Bierce’s tongue-in-cheek Devil’s Dictionary defines the corporation as “an ingenious device for obtaining individual profit without individual responsibility.”) And although this is an understandable approach for ensuring that innocent stockholders aren’t punished for wrongdoing they had nothing to do with, it also has the negative side effect of making such wrongdoing easier to pull off in the first place for companies that are so inclined; it’s that old familiar problem of moral hazard again. In order to preclude this possibility, then, it makes sense to have the government impose certain standards that companies must follow if they want to organize themselves under a corporate structure. As commenter trisweb writes:
“Free Market” proponents opposed to regulation of corporations have got it completely wrong. You can do business without incorporating, and accept full responsibility for your own decisions and actions… although you’ll be considered a fool for doing so. If you accept the protections, then you are obliged to accept such restrictions as society may impose for its own protection. We can certainly discuss the value of particular regulations – whether they are excessive or insufficient, whether they accomplish what is intended or not, whether they are being properly enforced or not – but not whether or not regulation is appropriate. If you take the benefits, you get the drawbacks too. […] Regulation is the price paid for doing business under the protections of a corporate entity.
A common criticism of government regulation is that it impedes the market process and makes it less efficient. But as Tim Wu points out:
When [deceptive behavior on the part of businesses] confuses, misleads, or fools customers, it does not aid the market process, or indeed any process premised on informed choice, but instead defeats it.
By contrast, when the government puts regulations in place to ensure that transactions are being conducted safely and transparently, it actually improves the market process. Alexander elaborates:
[Q]: Give an example of a government intervention that actually improves a market.
Food labeling.
Let’s say you learn you’re at risk for osteoporosis, a bone disease. You go to the supermarket and buy a block of cheese that advertises itself as being “high calcium!”, a frozen pizza that says “Helps fight osteoporosis!”, a frozen steak that says “May increase bone density!” and a supplement that says “Clinically proven to prevent osteoporosis!”
What if the cheese is high calcium compared to, say, dirt, but has no more calcium than any other block of cheese, and perhaps less? What if the pizza contains such a tiny amount of osteoporosis-fighting chemicals as to be a thousand times less than the clinically significant level, even though supposedly every little bit “helps”? What if the frozen steak contains no calcium at all, and the “may increase bone density” claim ought to have been read as “Well, for all we know it may increase bone density, although we have no evidence for this.” What if the supplement was “proven” to work in a sham study conducted by the supplement manufacturer and rejected by all qualified scientists?
Government regulation of food is currently spotty, and different standards are applied to different kinds of claims and different kinds of food. But in cases where the government doesn’t regulate food labels, this is exactly the kind of thing that happens. You can’t sue the companies for false advertising, because they’ve made a special effort not to say anything that’s literally false. But they’ve managed to throw off the free market and confuse your ability to honestly purchase healthy food anyway.
[Q]: But the free market would solve this problem. Without government, if people really want to buy healthy foods, private industry will invent a better certification procedure that health-conscious people can choose to follow.
I chose this example precisely because government regulation is so spotty. There are many areas the government completely fails to regulate, has failed to regulate for decades, and in that time, nothing even resembling a trustworthy well-known independent certification agency has arisen.
There are a couple of reasons why this might be. First, some large companies have tried to invent their own certification system, promising that they will only put “low fat” stickers on their food if it has less than a certain number of grams of fat. But it’s impossible to compare these systems to one another: a product that qualifies as “low fat” under Kraft’s system might have more fat than a product that doesn’t qualify as “low fat” under Kashi’s system. There are significant incentives for a company to create its own system such that its flagship products qualify as “low fat”, and companies have taken these incentives. Although it would be nice if consumers exerted pressure on these companies to use an independent standardized certification agency, after decades of lax government regulation this has yet to happen, and there’s no reason to believe it will happen in the future.
Also, most consumers either aren’t very skeptical or just plain don’t have enough time to dedicate to researching the latest information on competing food labelling practices. If a box of cookies says “low fat” on a colorful star-shaped sticker, that’s good enough for us. Did you investigate further last time you bought a low-fat or high-vitamin or loaded-with-anti-oxidants product? It is all well and good to wish that people were more rational in their buying decisions, but wishing just doesn’t help.
Consider also the list of nutrients, calories, and so on prominently featured on most food items sold in US supermarkets. Do you find that useful? I do. I like to know how many calories food has before I buy it. I’ve even based buying decisions entirely on that information.
Do you think companies would voluntarily list that information even in the absence of government regulations forcing them to do so? If your answer is “yes”, how come large restaurant chains, which aren’t regulated, practically never list nutritional information on menus? It’s not that there’s no demand: voters in several American cities have supported laws to force restaurants to provide such information, so obviously a lot of people want it. Restaurants have just concluded that it’s more profitable for them to avoid it. I think they’re probably right.
It’s also worth noting that in countries with less strict supermarket labelling laws, supermarket labels contain much less information: this is true even in countries with flourishing free markets.
What all this means is that the existence of government regulations on food labelling practices actually makes an improvement over free market conditions. Consumers get more information and purchase goods more efficiently than they would under laissez-faire conditions.
Now, if you’re a real hard-liner, you might acknowledge all of these points and still reject the idea of government regulation anyway, on the simple grounds that however capable or incapable customers might be of informing themselves about their purchases, the responsibility for doing so should be theirs alone, because the transactions are just between them and the sellers and no one else. The thing is, though, transactions aren’t always just between the buyers and sellers alone – they’ll often have side effects that affect the rest of society as well (recall our whole discussion earlier of negative externalities) – and in such cases, it’s only reasonable that the rest of society should therefore also have some say in the terms of those transactions. Alexander (writing in the midst of the post-2008 recession) continues on this point:
[Q]: Government regulation denies corporations the right to make choices that affect only them and their consenting customers.
In many cases this is true, and in some of those cases I oppose the regulation. However, other regulations exist precisely because cases that appear to affect only the corporation and its customers actually affect everyone.
Take the example of regulating banks. Banks made some really poor loan decisions. That means that the banks collapse and go bankrupt and their executives end up on the street (at least, that’s what would happen if we didn’t keep bailing them out all the time!). Not good, but the executives deserve it, and the people who took out loans from those banks knew what they were getting into. Therefore, the correct course of action is to hope they’ll be smarter the next time around, not to regulate who banks can and can’t give loans to.
…except that when hundreds of major banks around the world go broke simultaneously, that affects a whole lot more people than just bank executives and their customers. It sends the world economy into recession, potentially creates runs on the more responsible banks, and negatively affects everyone in the world. I’ve never taken out bad loans nor condoned anyone who did so, and you probably haven’t either, but we’re both getting hurt in this recession, and we both had to pay the taxes that bailed out the banks so the economic system didn’t collapse.
I think it’s reasonable that the government, not wanting innocent people to be hurt by stupid lending policies, bans the banks from having those stupid lending policies in the first place. It’s not to protect the banks from themselves, it’s to protect everyone else from the banks.
(If you have a complicated economic reason why you don’t think the current recession was caused by poor banking policies, think up another example at your leisure.)
Similarly, it’s often argued that government shouldn’t have any business regulating the terms of employment between companies and their workers; if some particular worker is willing to work 30-hour shifts on a regular basis, why should anyone be able to stop them? But again, although such arrangements might appear to just be between employer and employee, they’ll often turn out to have secondary effects that impinge upon others outside the company. In such cases, then, as Sowell explains, outside intervention may be justifiable to limit those externalities and keep everyone safe:
While safety is one aspect of working conditions, it is a special aspect because, in some cases, leaving its costs and benefits to be weighed by employers and employees leaves out the safety of the general public that may be affected by the actions of employers and employees. Obvious examples include pilots, truck drivers, and train crews, because their fatigue can endanger many others besides themselves when a plane crashes, a big rig goes out of control on a crowded highway, or a train derails, killing not only passengers on board but also spreading fire or toxic fumes to people living near where the derailment occurs. Laws have accordingly been passed, limiting how many consecutive hours individuals may work in these occupations, even if longer hours might be acceptable to both employers and employees in these occupations.
Economists speak of “externalities”—the costs (or benefits) incurred by third parties who did not agree to the transaction causing the cost (or benefit). For example, if a farmer begins using a new kind of fertilizer that increases his yield but causes more damaging runoff into nearby rivers, he keeps the profit but the costs of his decision are borne by others. If a factory farm finds a faster way to fatten up cattle but thereby causes the animals to suffer more digestive problems and broken bones, it keeps the profit and the animals pay the cost. Corporations are obligated to maximize profit for shareholders, and that means looking for any and all opportunities to lower costs, including passing costs on to others (when legal) in the form of externalities.
I am not anticorporate, I am simply [aware that the incentives faced by corporations differ depending on whether or not they are subject to public oversight]. When corporations operate in full view of the public, with a free press that is willing and able to report on the externalities being foisted on the public, they are likely to behave well, as most corporations do. But many corporations operate with a high degree of secrecy and public invisibility (for example, America’s giant food processors and factory farms). And many corporations have the ability to “capture” or otherwise influence the politicians and federal agencies whose job it is to regulate them (especially now that the U.S. Supreme Court has given corporations and unions the “right” to make unlimited donations to political causes). When corporations are given [total freedom from public oversight], we can expect catastrophic results (for the ecosystem, the banking system, public health, etc.).
I think liberals are right that a major function of government is to stand up for the public interest against corporations and their tendency to distort markets and impose externalities on others, particularly on those least able to stand up for themselves in court (such as the poor, or immigrants, or farm animals). Efficient markets require government regulation. Liberals go too far sometimes— indeed, they are often reflexively antibusiness, which is a huge mistake from a utilitarian point of view. But it is healthy for a nation to have a constant tug-of-war, a constant debate between yin and yang over how and when to limit and regulate corporate behavior.
Whether it’s a matter of keeping the water supply safe from toxic contaminants, keeping the roads safe from impaired drivers, or keeping the financial markets safe from reckless bankers, the argument for implementing preventative safety measures on the public’s behalf is the same – it’s not about creating distortions in the market, it’s about correcting distortions that have already been created by private actors, and restoring the market to a state in which everyone can make properly informed choices and all costs are accounted for. That’s not anti-market; it’s pro-market – so if we claim to care about strong markets ourselves, it’s something we should be in favor of. Granted, we should always want to keep such regulation within reasonable limits so it doesn’t end up overstepping its bounds and doing more harm than good; that should hopefully go without saying. But we shouldn’t make the mistake of pushing things so far in the other direction that we try to do away with the whole idea of government regulation altogether – because as Alexander attests, there are places all around the world where such regulation is absent, and we can see for ourselves that the results are not good:
You know all those picky rules that you hate because they’re just arrogant government bureaucrats trying to control your life? There are lots of places that don’t have those rules. They’re called hellholes. [After having personally done some traveling in the Third World,] I have developed a profound respect for everything from zoning ordinances to noise pollution laws to environmental regulations to licensing schemes for professionals to whatever law it is that says you can’t block the sidewalk (there should be a statue to whichever politician came up with that one; it is far less obvious than you’d think).
Some of the most extreme examples of what he’s talking about can be found in countries like Haiti, where the lack of well-maintained standards hasn’t just resulted in a lower quality of life – it has destroyed millions of its citizens’ lives and livelihoods outright. When the country was struck by a 7.0-magnitude earthquake in 2010, its effective lack of building codes resulted in the deaths of an estimated 100,000-316,000 people, with 300,000 more injured and 1,000,000 rendered homeless by the collapse of approximately 250,000 homes and 30,000 commercial buildings. By contrast, when the San Francisco Bay Area was struck by an earthquake of nearly equal magnitude in 1989, just 63 people were killed – still a tragedy, to be sure, but one that was far more limited in scale.
This isn’t to say, mind you, that Third World countries like Haiti are the only ones that can experience tragedies as a result of under-regulation. Another of the most famous examples is the thalidomide scandal of the 1950s-60s, in which “the use of [the sedative drug] thalidomide in 46 countries by women who were pregnant or who subsequently became pregnant resulted in the ‘biggest man-made medical disaster ever,’ with more than 10,000 children born with a range of severe deformities, such as phocomelia, as well as thousands of miscarriages.” Countries like Canada, the UK, and Australia were all affected due to their relatively lax regulations – but meanwhile in the US, approval of the drug was blocked by the FDA, so Americans were almost entirely spared. The FDA often gets a bad rap for being overly rigid with its regulatory oversight – and sometimes those complaints are justified – but in this case (and in many others), Americans were incredibly fortunate to have it looking out for them.
It’s undeniably true that government regulation can have its downsides – any kind of intervention in the market will always come with costs of some kind or another. But the crucial point here, as Joseph Aldy explains, is that it doesn’t just come with costs – it can also provide immense benefits – so our judgment of whether a particular regulation is good or bad should involve balancing all the relevant tradeoffs, not just looking at the downsides alone and rejecting it on that basis:
President Trump jettisoned more than 30 years of bipartisan regulatory policy on January 30 when he issued an executive order on “Reducing Regulation and Controlling Regulatory Costs.” The order requires that whenever a new regulation is enacted by any federal agency, regulators must eliminate two rules, so that the cost of complying with the new rule is offset by the costs associated with the two existing rules. But Trump misses a crucial point about government regulations: They impose costs on society, but they also produce benefits.
The executive order refers to regulatory costs 18 times, but never mentions regulatory benefits. By focusing only on costs, the president’s order focuses on corporate bottom lines and ignores society’s bottom line. If an industry is profitable but releases pollution that makes people sick, then the best outcome for society may be to pass a regulation that lowers corporate profits slightly, but also reduces expensive health problems for thousands of Americans.
[…]
Are regulations costly for business? Yes. If they weren’t, then businesses wouldn’t need government rules requiring them to eliminate lead paint and other toxins from children’s toys, make workplaces safer and disclose their financial risks. Most companies would not take these steps on their own. The question is not whether regulations represent good business investments, but whether they yield a good return for society.
When government regulators write rules, they use benefit-cost analysis to compare the benefits and costs that the rules produce for society, much as corporate leaders weigh the costs of new business ventures against their expected returns. This approach was introduced under President Ronald Reagan in 1981 and continued under Presidents George H.W. Bush, Clinton, George W. Bush and Obama.
As an example, the Environmental Protection Agency’s Acid Rain Program, enacted in 1990, has reduced sulfur dioxide emissions from U.S. power plants by more than 50 percent, at a cost of up to US$2 billion per year. It also has delivered up to $100 billion in annual benefits to society – mainly by avoiding about 18,000 premature mortalities and 24,000 nonfatal heart attacks. Electric utilities would not have reduced this pollution voluntarily, but the regulation that required them to do it has produced benefits that are worth at least 50 times its costs.
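To make the arithmetic behind that last example explicit – a rough benefit-cost calculation using only the round figures Aldy cites, not a precise accounting – the comparison regulators are asked to perform looks like this:

\[
\text{benefit-cost ratio} \;\approx\; \frac{\$100\text{ billion per year in avoided health damages}}{\$2\text{ billion per year in compliance costs}} \;=\; 50
\]

That is the test benefit-cost analysis applies to any proposed rule: estimate what society gains, estimate what compliance costs, and ask whether the ratio comfortably exceeds one. It’s also why, as Aldy argues, an approach that counts only the costs – the denominator – can’t tell you whether a regulation is worth having.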
The example of pollution is especially noteworthy here, not only because it’s one of the biggest problems in the world right now, but also because in some ways, it’s oddly under-recognized. The climate change side of it gets a ton of attention, of course – but when it comes to things like ordinary air pollution, most people don’t realize just how dramatic its effects actually are. Not only is air pollution responsible for millions of premature deaths each year – it’s currently estimated to contribute to roughly 11.65% of human deaths worldwide (which is about twenty times more than the death toll from war, murder, and terrorism combined) – it also reduces quality of life for millions more due to disability and other health complications, with an estimated 213.28 million healthy life years in total being lost every year as a result of its effects (more than smoking or obesity or high cholesterol or drug use or practically any other risk factor you can think of). As Alex Tabarrok notes, its effects on everything from health to labor productivity to cognitive functioning are far more significant than is commonly understood. And David Wallace-Wells really drives the point home:
Here is just a partial list of the things, short of death rates, we know are affected by air pollution. GDP, with a 10 per cent increase in pollution reducing output by almost a full percentage point, according to an OECD report last year. Cognitive performance, with a study showing that cutting Chinese pollution to the standards required in the US would improve the average student’s ranking in verbal tests by 26 per cent and in maths by 13 per cent. In Los Angeles, after $700 air purifiers were installed in schools, student performance improved almost as much as it would if class sizes were reduced by a third. Heart disease is more common in polluted air, as are many types of cancer, and acute and chronic respiratory diseases like asthma, and strokes. The incidence of Alzheimer’s can triple: in Choked, Beth Gardiner cites a study which found early markers of Alzheimer’s in 40 per cent of autopsies conducted on those in high-pollution areas and in none of those outside them. Rates of other sorts of dementia increase too, as does Parkinson’s. Air pollution has also been linked to mental illness of all kinds – with a recent paper in the British Journal of Psychiatry showing that even small increases in local pollution raise the need for treatment by a third and for hospitalisation by a fifth – and to worse memory, attention and vocabulary, as well as ADHD and autism spectrum disorders. Pollution has been shown to damage the development of neurons in the brain, and proximity to a coal plant can deform a baby’s DNA in the womb. It even accelerates the degeneration of the eyesight.
A high pollution level in the year a baby is born has been shown to result in reduced earnings and labour force participation at the age of thirty. The relationship of pollution to premature births and low birth weight is so strong that the introduction of the automatic toll system E-ZPass in American cities reduced both problems in areas close to toll plazas (by 10.8 per cent and 11.8 per cent respectively), by cutting down on the exhaust expelled when cars have to queue. Extremely premature births, another study found, were 80 per cent more likely when mothers lived in areas of heavy traffic. Women breathing exhaust fumes during pregnancy gave birth to children with higher rates of paediatric leukaemia, kidney cancer, eye tumours and malignancies in the ovaries and testes. Infant death rates increased in line with pollution levels, as did heart malformations. And those breathing dirtier air in childhood exhibited significantly higher rates of self-harm in adulthood, with an increase of just five micrograms of small particulates a day associated, in 1.4 million people in Denmark, with a 42 per cent rise in violence towards oneself. Depression in teenagers quadruples; suicide becomes more common too.
Stock market returns are lower on days with higher air pollution, a study found this year. Surgical outcomes are worse. Crime goes up with increased particulate concentrations, especially violent crime: a 10 per cent reduction in pollution, researchers at Colorado State University found, could reduce the cost of crime in the US by $1.4 billion a year. When there’s more smog in the air, chess players make more mistakes, and bigger ones. Politicians speak more simplistically, and baseball umpires make more bad calls.
As a report from Michael Greenstone and Christa Hasenkopf puts it, air pollution might very well be the single greatest external threat to human health on the planet right now. And to say that the market alone has failed to adequately curb the problem would be an understatement, to say the least. Luckily, this is an area where we know definitively that government regulation can and does work. The US, despite all its environmentalist shortcomings, has repeatedly shown itself capable of effectively fighting pollution via government regulation, and has achieved a genuinely superior level of air quality compared to countries like India and China, which are currently bearing the brunt of air pollution’s adverse effects. (As Wallace-Wells notes, “the average inhabitant of Delhi would live 9.7 years longer were it not for air pollution.”) Wallace-Wells continues:
That everything is worse in the presence of pollution means that everything should be better in its absence. And, as best we can tell, it is. According to the Natural Resources Defense Council, the US Clean Air Act of 1970 is still saving 370,000 American lives every year – more than would have been saved [in 2020] had the [COVID-19] pandemic never arrived. According to the NRDC, a single piece of legislation delivers annual economic benefits of more than $3 trillion, 32 times the cost of enacting it – benefits distributed disproportionately to the poor and marginalised.
And the effects of the US’s regulatory success in this area are especially evident when it comes to things like lead pollution, as Haidt explains:
As automobile ownership skyrocketed in the 1950s and 1960s, so did the tonnage of lead being blown out of American tailpipes and into the atmosphere—200,000 tons of lead a year by 1973. (Gasoline refiners had been adding lead since the 1930s to increase the efficiency of the refining process.) Despite evidence that the rising tonnage of lead was making its way into the lungs, bloodstreams, and brains of Americans and was retarding the neural development of millions of children, the chemical industry had been able to block all efforts to ban lead additives from gasoline for decades. It was a classic case of corporate superorganisms using all methods of leverage to preserve their ability to pass a deadly externality on to the public.
The Carter administration began a partial phaseout of leaded gasoline, but it was nearly reversed when Ronald Reagan crippled the Environmental Protection Agency’s ability to draft new regulations or enforce old ones. A bipartisan group of congressmen stood up for children and against the chemical industry, and by the 1990s lead had been completely removed from gasoline. This simple public health intervention worked miracles: lead levels in children’s blood dropped in lockstep with declining levels of lead in gasoline, and the decline has been credited with some of the rise in IQ that has been measured in recent decades.
Even more amazingly, several studies have demonstrated that the phaseout, which began in the late 1970s, may have been responsible for up to half of the extraordinary and otherwise unexplained drop in crime that occurred in the 1990s. Tens of millions of children, particularly poor children in big cities, had grown up with high levels of lead, which interfered with their neural development from the 1950s until the late 1970s. The boys in this group went on to cause the giant surge of criminality that terrified America—and drove it to the right—from the 1960s until the early 1990s. These young men were eventually replaced by a new generation of young men with unleaded brains (and therefore better impulse control), which seems to be part of the reason the crime rate plummeted.
From a Durkheimian utilitarian perspective, it is hard to imagine a better case for government intervention to solve a national health problem. This one regulation saved vast quantities of lives, IQ points, money, and moral capital all at the same time. [Footnote: “It is true that producing gasoline without lead raises its cost. But Reyes 2007 calculated that the cost of removing lead from gasoline is ‘approximately twenty times smaller than the full value including quality of life of the crime reductions.’ That calculation does not include lives saved and other direct health benefits of lead reductions.”] And lead is far from the only environmental hazard that disrupts neural development. When young children are exposed to PCBs (polychlorinated biphenyls), organophosphates (used in some pesticides), and methyl mercury (a by-product of burning coal), it lowers their IQ and raises their risk of ADHD (attention deficit hyperactivity disorder). Given these brain disruptions, future studies are likely to find a link to violence and crime as well. Rather than building more prisons, the cheapest (and most humane) way to fight crime may be to give more money and authority to the Environmental Protection Agency.
When conservatives object that liberal efforts to intervene in markets or engage in “social engineering” always have unintended consequences, they should note that sometimes those consequences are positive. When conservatives say that markets offer better solutions than do regulations, let them step forward and explain their plan to eliminate the dangerous and unfair externalities generated by many markets.
Pollution is just one of many areas in which government regulation can make a real positive difference. In the same way that we can point to the successes of government regulation in the environmental realm, so too can we point to equivalent successes in the realms of consumer product safety, financial markets, and more. Granted, the government can’t always be expected to do a perfect job of determining exactly what the perfect amount of regulation should be for any given situation. For better or worse, there will always be some degree of trial and error involved, just as in every other area of human activity. And in the cases where it turns out that the government has overreached and is doing more harm than good, it’s important to be able to identify the issue quickly so the counterproductive regulations can be rolled back. That being said, it’s equally important to be able to identify instances in which, as it turns out, government regulation hasn’t gone far enough – because otherwise, if things like negative externalities and information asymmetries are simply allowed to proliferate unabated, it can lead to disasters that impose massive costs, both in terms of dollars and in terms of lives and well-being.
XV.
Of course, sometimes it will turn out that the regulations put in place to keep the market safe are too weak, and some kind of worst-case disaster scenario will actually occur; things will spiral out of control to such an extent that instead of being able to keep market failures in check just by using basic preventative measures (as in the case of air pollution, where it’s fairly straightforward to simply notice when too much pollution is being emitted and introduce regulation as needed to cut it back down again), mere preventative measures alone will no longer be enough to resolve the issue, because there will be some kind of acute mass-scale crisis that demands a more immediate response. One of the best examples of this was the 2008 financial crisis, in which a collapse in the banking sector threatened to bring down the entire economy. The seeds of the crisis were first planted when some of the big banks figured out how to take a bunch of the subprime mortgages they had on their books – i.e. mortgages that were more likely than normal to default – and bundle them together into financial products, which they then inaccurately labeled as extremely low-risk and sold off to oblivious buyers (including ordinary investors as well as other big financial institutions). When those supposedly safe investments turned out to be worthless, everyone who’d bought them suddenly found themselves suffering major losses they hadn’t planned for – and as a result, they had to tighten their belts in other areas to compensate for those losses. Lenders were no longer able to lend as much, and it became more difficult for individuals and firms to buy things on credit, so business began to slow down – and with business slowing down, firms were forced to cut labor costs by laying off employees or reducing their hours. This reduction in income for employees meant a reduction in consumer spending, as ordinary workers now had less spending money – and this reduction in consumer spending, naturally, meant even less revenue for businesses, which led to further layoffs, and so on in a self-reinforcing cycle, until ultimately the entire economy had ground to a halt.
So what can be done in a situation like this, in which the private sector has gotten itself stuck in a rut with no immediate way of pulling itself out? This is yet another area where the public sector can help offset the private sector’s failings by taking countermeasures of its own to balance them out. And it can do this in a few different ways. Among other things, it can provide backstops to prevent the failure of one bank from triggering a contagion of bank runs that brings down the whole banking system (not just with bailouts, but with basic guardrails like public deposit insurance and the like). It can use its tools of monetary policy to adjust interest rates as needed to loosen up credit markets and make it easier for private-sector actors to make big purchases and investments. And if even rock-bottom interest rates aren’t enough to get the market moving again, it can take an even more direct approach and increase its own spending and/or decrease taxation so as to put money back in people’s pockets and reverse the feedback loop of economic contraction.
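To see in stylized terms why this kind of downward spiral ends up so much bigger than the initial shock – and why a public injection of spending can do more than just replace the lost dollars one for one – it helps to sketch the textbook multiplier arithmetic (a deliberately simplified illustration, not an attempt to model the actual 2008 figures). Suppose every dollar of income people lose causes them to cut their own spending by some fraction \(c\). Then an initial shortfall of \(\Delta X\) dollars ripples outward as

\[
\Delta X + c\,\Delta X + c^{2}\Delta X + \cdots \;=\; \frac{\Delta X}{1-c},
\]

so with, say, \(c = 0.8\), a $100 billion initial loss ultimately shrinks total spending by something like $500 billion. Run in reverse, the same geometric series is what gives the fiscal approach its leverage: a dollar of government spending (or of taxes left in people’s pockets) gets partly re-spent down the chain, so the boost to overall demand can be a multiple of the initial outlay.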
Let’s discuss each of these approaches briefly, starting with the first one – providing backstops to prevent bank failures from spreading. As Heath explains, back in the days before federal deposit insurance, the threat of contagious bank runs was a major problem, and it led to frequent recessions that were both incredibly disruptive and wholly unnecessary – but once the FDIC was introduced, it essentially resolved the issue (at least for traditional banks) in one fell swoop:
During the Golden Age of laissez-faire capitalism, recessions were often preceded by bank failures. Because banks lend out most of the money that they receive, they cannot actually repay more than a small fraction of depositors at a time. […] This has the potential to spark a bank run—a collective action problem in which depositors, convinced that the bank is going to fail, try to get their money out before the bank does, yet, in so doing, essentially guarantee that the bank will fail.
While this may be tough luck for the bank’s customers, it is not necessarily a problem for the economy as a whole. Furthermore, it is not clear why it should cause a recession. Keynes’s analysis, however, provides a very simple explanation. It is the way the other banks responded that created general problems. As soon as one bank failed, all the other banks immediately tried to increase their cash reserves in order to protect themselves against the impact of copycat withdrawals and the possibility of a generalized bank panic. Furthermore, customers would get antsy about making deposits, and so would begin holding on to their money. The result was often a huge overnight shift in liquidity preference, with everyone suddenly wanting to hold as much physical currency as possible. This led to a systemic slackening of economic activity in all other sectors, as people became loath to engage in transactions, preferring to hold on to their money.
[…]
Public deposit insurance, one of the most important social programs for capitalists, essentially eliminated the problem of runs on ordinary commercial banks. (This is one of the reasons why the financial crisis of 2008 occurred in what some called the “parallel” banking sector—financial institutions that were operating outside the scope of government regulatory programs, including federal deposit insurance.) Stabilizing the banks had significant stabilizing effects on the business cycle (a fact that Marxian “crisis” theory is unable to explain). Recessions became less common and less severe, simply because one of the primary sources of volatility in the demand for money had been eliminated.
As he rightly points out, despite the stabilizing effect of public deposit insurance on the traditional banking sector, it’s still possible nowadays to have recessions that are caused by things other than classic bank runs. So what can be done when such recessions do occur? That’s when the government can turn to the next tool in its toolbox, monetary policy – lowering interest rates, boosting market liquidity, and expanding the supply of money circulating throughout the economy. This kind of intervention isn’t always necessary, since under normal circumstances interest rates will tend to naturally adjust to market conditions and counteract routine slumps on their own, as Wheelan explains:
In a weak economy, interest rates typically fall because there is less demand for credit; struggling businesses and households are less inclined to borrow for an expansion or a bigger house. However, falling interest rates create a natural antidote for modest economic downturns. As credit gets cheaper, households are induced to buy more big-ticket items—cars and washing machines and even homes. Meanwhile, businesses find it cheaper to expand and invest. These new investments and purchases help restore the economy to health.
Occasionally, though, lenders might not be able to provide enough cheap credit to fully reverse the slump because they’ll have too much of their money wrapped up in non-loanable assets like bonds and not enough in the form of loanable funds. In such cases, then, the Federal Reserve (America’s central bank) will be able to help out by buying up those bonds so the banks have enough loanable cash to lower their interest rates still further. It’s not really necessary for our purposes here to understand all the logistical details of how this works – just that this is something the Fed can do – but if you’re interested, Wheelan provides a basic overview of the process:
Where does the Fed derive this extraordinary power over interest rates? After all, commercial banks are private entities. The Federal Reserve cannot force Citibank to raise or lower the rates it charges consumers for auto loans and home mortgages. Rather, the process is indirect. [See,] the interest rate is really just a rental rate for capital, or the “price of money.” The Fed controls America’s money supply. We’ll get to the mechanics of that process in a moment. For now, recognize that capital is no different from apartments: The greater the supply, the cheaper the rent. The Fed moves interest rates by making changes in the quantity of funds available to commercial banks. If banks are awash with money, then interest rates must be relatively low to attract borrowers for all the available funds. When capital is scarce, the opposite will be true: Banks can charge higher interest rates and still attract enough borrowers for all available funds. It’s supply and demand, with the Fed controlling the supply.
These monetary decisions—the determination whether interest rates need to go up, down, or stay the same—are made by a committee within the Fed called the Federal Open Market Committee (FOMC), which consists of the board of governors, the president of the Federal Reserve Bank of New York, and the presidents of four other Federal Reserve Banks on a rotating basis. The Fed chairman is also the chairman of the FOMC. Ben Bernanke [the Fed chairman during the post-2008 recession, when this was written] derives his power from the fact that he is sitting at the head of the table when the FOMC makes interest rate decisions.
If the FOMC wants to stimulate the economy by lowering the cost of borrowing, the committee has two primary tools at its disposal. The first is the discount rate, which is the interest rate at which commercial banks can borrow funds directly from the Federal Reserve. The relationship between the discount rate and the cost of borrowing at Citibank is straightforward; when the discount rate falls, banks can borrow more cheaply from the Fed and therefore lend more cheaply to their customers. There is one complication. Borrowing directly from the Fed carries a certain stigma; it implies that a bank was not able to raise funds privately. Thus, turning to the Fed for a loan is similar to borrowing from your parents after about age twenty-five: You’ll get the money, but it’s better to look somewhere else first.
Instead, banks generally borrow from other banks. The second important tool in the Fed’s money supply kit is the federal funds rate, the rate that banks charge other banks for short-term loans. The Fed cannot stipulate the rate at which Wells Fargo lends money to Citigroup. Rather, the FOMC sets a target for the federal funds rate, say 4.5 percent, and then manipulates the money supply to accomplish its objective. If the supply of funds goes up, then banks will have to drop their prices—lower interest rates—to find borrowers for the new funds. One can think of the money supply as a furnace with the federal funds rate as its thermostat. If the FOMC cuts the target fed funds rate from 4.5 percent to 4.25 percent, then the Federal Reserve will pump money into the banking system until the rate Wells Fargo charges Citigroup for an overnight loan falls to something very close to 4.25 percent.
All of which brings us to our final conundrum: How does the Federal Reserve inject money into a private banking system? Does Ben Bernanke print $100 million of new money, load it into a heavily armored truck, and drive it to a Citibank branch? Not exactly—though that image is not a bad way to understand what does happen.
Ben Bernanke and the FOMC do create new money. In the United States, they alone have that power. (The Treasury merely mints new currency and coins to replace money that already exists.) The Federal Reserve does deliver new money to banks like Citibank. But the Fed does not give funds to the bank; it trades the new money for government bonds that the banks currently own. In our metaphorical example, the Citibank branch manager meets Ben Bernanke’s armored truck outside the bank, loads $100 million of new money into the bank’s vault, and then hands the Fed chairman $100 million in government bonds from the bank’s portfolio in return. Note that Citibank has not been made richer by the transaction. The bank has merely swapped $100 million of one kind of asset (bonds) for $100 million of a different kind of asset (cash, or, more accurately, its electronic equivalent).
Banks hold bonds for the same reason individual investors do; bonds are a safe place to park funds that aren’t needed for something else. Specifically, banks buy bonds with depositors’ funds that are not being loaned out. To the economy, the fact that Citibank has swapped bonds for cash makes all the difference. When a bank has $100 million of deposits parked in bonds, those funds are not being loaned out. They are not financing houses, or businesses, or new plants. But after Ben Bernanke’s metaphorical armored truck pulls away, Citibank is left holding funds that can be loaned out. That means new loans for all the kinds of things that generate economic activity. Indeed, money injected into the banking system has a cascading effect. A bank that swaps bonds for money from the Fed keeps some fraction of the funds in reserves, as required by law, and then loans out the rest. Whoever receives those loans will spend them somewhere, perhaps at a car dealership or a department store. That money eventually ends up in other banks, which will keep some funds in reserve and then make loans of their own. A move by the Fed to inject $100 million of new funds into the banking system may ultimately increase the money supply by 10 times as much.
Of course, the Fed chairman does not actually drive a truck to a Citibank branch to swap cash for bonds. The FOMC can accomplish the same thing using the bond market (which works just like the stock market, except that bonds are bought and sold). Bond traders working on behalf of the Fed buy bonds from commercial banks and pay for them with newly created money—funds that simply did not exist twenty minutes earlier. (Presumably the banks selling their bonds will be those with the most opportunities to make new loans.) The Fed will continue to buy bonds with new money, a process called open market operations, until the target federal funds rate has been reached.
Obviously what the Fed giveth, the Fed can take away. The Federal Reserve can raise interest rates by doing the opposite of everything we’ve just discussed. The FOMC would vote to raise the discount rate and/or the target fed funds rate and issue an order to sell bonds from its portfolio to commercial banks. As banks give up lendable funds in exchange for bonds, the money supply shrinks. Money that might have been loaned out to consumers and businesses is parked in bonds instead. Interest rates go up, and anything purchased with borrowed capital becomes more expensive. The cumulative effect is slower economic growth.
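Before moving on, it’s worth pausing on the “cascading effect” Wheelan describes, because the arithmetic behind his “10 times as much” figure is easy to make concrete. Here’s a minimal sketch (the reserve ratio is an assumed round number, and the model ignores real-world leakages like cash holdings and excess reserves):

```python
# Illustrative sketch of the deposit-and-relend cascade Wheelan describes.
# Assumes every bank holds a fixed reserve ratio and lends out the remainder,
# and that every loan eventually comes back as a deposit at some bank.

def money_created(injection: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Total new money generated by an initial injection of lendable funds."""
    total = 0.0
    lendable = injection
    for _ in range(rounds):
        total += lendable
        lendable *= (1 - reserve_ratio)  # the next bank keeps reserves, lends the rest
    return total

if __name__ == "__main__":
    injection = 100_000_000  # the Fed swaps $100M of new money for bonds
    reserve_ratio = 0.10     # assumed 10% reserve requirement
    print(f"Money supply increase: ${money_created(injection, reserve_ratio):,.0f}")
    # Approaches injection / reserve_ratio = $1 billion -- the "10 times as much"
    # upper bound in Wheelan's example (in practice leakages make it smaller).
```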
Wheelan’s closing point is important to note: in addition to lowering interest rates when the economy is slumping, the Fed can also raise them if the economy seems to be overheating. If demand is starting to significantly outpace supply, and it’s driving up prices to such a degree that the threat of out-of-control inflation starts to become a real danger, the Fed can take some of the air out of the system by tightening the credit market back up again. Wheelan continues:
The [Federal Reserve’s] job would not appear to be that complicated. If the Fed can make the economy grow faster by lowering interest rates, then presumably lower interest rates are always better. Indeed, why should there be any limit to the rate at which the economy can grow? If we begin to spend more freely when rates are cut from 7 percent to 5 percent, why stop there? If there are still people without jobs and others without new cars, then let’s press on to 3 percent, or even 1 percent. New money for everyone! Sadly, there are limits to how fast any economy can grow. If low interest rates, or “easy money,” causes consumers to demand 5 percent more new PT Cruisers than they purchased last year, then Chrysler must expand production by 5 percent. That means hiring more workers and buying more steel, glass, electrical components, etc. At some point, it becomes difficult or impossible for Chrysler to find these new inputs, particularly qualified workers. At that point, the company simply cannot make enough PT Cruisers to satisfy consumer demand; instead, the company begins to raise prices. Meanwhile, autoworkers recognize that Chrysler is desperate for labor, and the union demands higher wages.
The story does not stop there. The same thing would be happening throughout the economy, not just at Chrysler. If interest rates are exceptionally low, firms will borrow to invest in new computer systems and software; consumers will break out their VISA cards for big-screen televisions and Caribbean cruises—all up to a point. When the cruise ships are full and Dell is selling every computer it can produce, then those firms will raise their prices, too. (When demand exceeds supply, firms can charge more and still fill every boat or sell every computer.) In short, an “easy money” policy at the Fed can cause consumers to demand more than the economy can produce. The only way to ration that excess demand is with higher prices. The result is inflation.
The sticker price on the PT Cruiser goes up, and no one is better off for it. True, Chrysler is taking in more money, but it is also paying more to its suppliers and workers. Those workers are seeing higher wages, but they are also paying higher prices for their basic needs. Numbers are changing everywhere, but the productive capacity of our economy and the measure of our well-being, real GDP, has hit the wall. Once started, the inflationary cycle is hard to break. Firms and workers everywhere begin to expect continually rising prices (which, in turn, causes continually rising prices). Welcome to the 1970s.
The pace at which the economy can grow without causing inflation might reasonably be considered a “speed limit.” After all, there are only a handful of ways to increase the amount that we as a nation can produce. We can work longer hours. We can add new workers, through falling unemployment or immigration (recognizing that the workers available may not have the skills in demand). We can add machines and other kinds of capital that help us to produce things. Or we can become more productive—produce more with what we already have, perhaps because of an innovation or a technological change. Each of these sources of growth has natural constraints. Workers are scarce; capital is scarce; technological change proceeds at a finite and unpredictable pace. In the late 1990s, American autoworkers threatened to go on strike because they were being forced to work too much overtime. (Don’t we wish we had that problem now…) Meanwhile, fast-food restaurants were offering signing bonuses to new employees. We were at the wall. Economists reckon that the speed limit of the American economy is somewhere in the range of 3 percent growth per year.
The phrase “somewhere in the range” gives you the first inkling of how hard the Fed’s job is. The Federal Reserve must strike a delicate balance. If the economy grows more slowly than it is capable of, then we are wasting economic potential. Plants that make PT Cruisers sit idle; the workers who might have jobs there are unemployed instead. An economy that has the capacity to grow at 3 percent instead limps along at 1.5 percent, or even slips into recession. Thus, the Fed must feed enough credit to the economy to create jobs and prosperity but not so much that the economy begins to overheat. William McChesney Martin, Jr., Federal Reserve chairman during the 1950s and 1960s, once noted that the Fed’s job is to take away the punch bowl just as the party gets going.
Or sometimes the Fed must rein in the party long after it has gone out of control. The Federal Reserve has deliberately engineered a number of recessions in order to squeeze inflation out of the system. Most notably, Fed chairman Paul Volcker was the ogre who ended the inflationary party of the 1970s. At that point, naked people were dancing wildly on the tables. Inflation had climbed from 3 percent in 1972 to 13.5 percent in 1980. Mr. Volcker hit the monetary brakes, meaning that he cranked up interest rates to slow the economy down. Short-term interest rates peaked at over 16 percent in 1981. The result was a painful unwinding of the inflationary cycle. With interest rates in double digits, there were plenty of unsold Chrysler K cars sitting on the lot. Dealers were forced to cut prices (or stop raising them). The auto companies idled plants and laid off workers. The autoworkers who still had jobs decided that it would be a bad time to ask for more money.
The same thing, of course, was going on in every other sector of the economy. Slowly, and at great human cost, the expectation that prices would steadily rise was purged from the system. The result was the recession of 1981–1982, during which GDP shrank by 3 percent and unemployment climbed to nearly 10 percent. In the end, Mr. Volcker did clear the dancers off the tables. By 1983, inflation had fallen to 3 percent. Obviously it would have been easier and less painful if the party had never gone out of control in the first place.
Fortunately (even though the process can be painful), inflation is something that generally responds to monetary policy in a straightforward way, so central bankers can be fairly confident in their ability to rein it in if it starts getting too extreme. If inflation doesn’t respond to their first interest rate hikes, they can keep raising rates until it does. The flip side of this, though, is that while there’s no practical limit to how much interest rates can be raised, there is a limit to how much they can be lowered – nominal rates can’t be pushed meaningfully below zero – so if the problem the Fed is facing isn’t inflation, but the opposite problem of recession and unemployment, the solution might be a bit more complicated than just adjusting interest rates and letting the market do the rest. To be sure, that kind of solution can work perfectly well for relatively minor slumps, in which lenders can still find new lending opportunities if they’re just given more loanable funds and their interest rates are lowered enough to attract new borrowers. But if there’s an especially severe recession, simply lowering interest rates – even lowering them to near-zero – might not be enough, because those kinds of safe lending opportunities won’t be as abundant anymore; so even if lenders have plenty of loanable cash on hand, they might not want to loan any of it out – and as a consequence, the economy will remain stuck. As Taylor writes:
Monetary policy might be better at contracting an economy than at stimulating it. Just as you can lead a horse to water but can’t make it drink, a central bank can buy bonds from banks so banks have more money to lend, but it can’t force banks to lend that additional money. If banks are unwilling to lend because they’re afraid of too many defaults, monetary policy won’t be much help in fighting a recession. In the aftermath of the 2007–2009 recession, banks and many nonfinancial firms were holding substantial amounts of cash, but given the level of economic uncertainty, they were reluctant to make loans. Central bankers have an old saying to capture this problem: Monetary policy can be like pulling and pushing on a string. When you pull a string, it moves toward you, but when you push on a string, it folds up and doesn’t move. When a central bank pulls on the string through contractionary policy, it can definitely raise interest rates and reduce aggregate demand. But when a central bank tries to push on the string through expansionary policy, it won’t have any effect if banks decide not to lend. It’s not that expansionary monetary policy never works, but it’s not always reliable.
Some people seem to infer […] that output and income can be raised by increasing the quantity of money. But this is like trying to get fat by buying a larger belt. In the United States to-day your belt is plenty big enough for your belly. It is a most misleading thing to stress the quantity of money, which is only a limiting factor, rather than the volume of expenditure, which is the operative factor.
If monetary policy alone isn’t sufficient to restore the amount of spending in the economy to normal levels, then, a more direct form of stimulus may be needed. As Krugman (writing in 2009) explains:
During a normal recession, the Fed responds by buying Treasury bills — short-term government debt — from banks. This drives interest rates on government debt down; investors seeking a higher rate of return move into other assets, driving other interest rates down as well; and normally these lower interest rates eventually lead to an economic bounceback. The Fed dealt with the recession that began in 1990 by driving short-term interest rates from 9 percent down to 3 percent. It dealt with the recession that began in 2001 by driving rates from 6.5 percent to 1 percent. And it tried to deal with the current recession by driving rates down from 5.25 percent to zero.
But zero, it turned out, isn’t low enough to end this recession. And the Fed can’t push rates below zero, since at near-zero rates investors simply hoard cash rather than lending it out. So by late 2008, with interest rates basically at what macroeconomists call the “zero lower bound” even as the recession continued to deepen, conventional monetary policy had lost all traction.
Now what? This is the second time America has been up against the zero lower bound, the previous occasion being the Great Depression. And it was precisely the observation that there’s a lower bound to interest rates that led Keynes to advocate higher government spending: when monetary policy is ineffective and the private sector can’t be persuaded to spend more, the public sector must take its place in supporting the economy. Fiscal stimulus is the Keynesian answer to the kind of depression-type economic situation we’re currently in.
This is the final way in which government can intervene to rescue the private sector from recession. Unlike monetary policy, which involves the central bank adjusting the money supply and changing interest rates in order to affect the private sector’s spending levels, fiscal policy is about the spending the government does directly itself. And in the case of an extreme recession that has hamstrung the private sector’s ability to spend enough money to restore the economy to full health, the public sector can be justified in picking up the slack by spending more than usual itself. As Krugman writes:
When almost everyone in the world is trying to spend less than their income, the result is a vicious contraction—because my spending is your income, and your spending is my income. What you need to limit the damage is for somebody to be willing to spend more than their income. And governments [can play] that crucial role.
Recessions occur when there is a sudden change in liquidity preference, leading to a shortage of money in circulation. One way to address this is to increase the money supply […] or to lower interest rates. Keynes, however, thought that there were limits on the effectiveness of either strategy. As a result, if people weren’t willing to spend their own money, the next-best solution was for the government to spend it for them. Keynes recommended that the state engage in “countercyclical spending”—running a surplus [or at least running a relatively tighter budget] in times of economic expansion, then [being more willing to engage in] deficit-spending during recessions.
Taylor goes into a bit more detail about how exactly this strategy works, and how to some extent it can be set up to kick in automatically whenever it’s needed rather than requiring government to respond to every market fluctuation on a case-by-case basis:
A policy of increasing aggregate demand or buying power in the economy is called expansionary macroeconomic policy. It’s also sometimes called a “loose” fiscal policy. Expansionary policy includes tax cuts and spending increases, for example, both of which put more money into the economy. Policies used to reduce aggregate demand, in contrast, are called contractionary policies, or “tight” fiscal policies. A policy of tax increases or spending cuts is considered contractionary, reducing the amount of buying power in the economy. Such a policy can also be used to lean against the cycles of the economy; such countercyclical fiscal policy aims at counterbalancing the underlying economic cycle of recession and recovery.
The decision about whether to use changes in spending or in taxes to affect demand will depend on specific conditions of that time and place and on political priorities. The emphasis is to get the economy moving, one way or another, by pumping up aggregate demand. In his General Theory of Employment, Interest, and Money, John Maynard Keynes (1936) ruminates at one point that the government could “fill old bottles with bank notes and bury them at a suitable depth in disused coal mines, which were then filled up to the surface with town rubbish, and leave it to private enterprise on the well-tried principles of laissez-faire to dig the notes up again.” He also points out that it would be more sensible to stimulate the economy in a way that provided actual economic benefits—his example is building houses—but his point is that when you’re trying to pump up aggregate demand, exactly how you do it is a secondary concern from a macroeconomic point of view.
If, instead of fighting unemployment, we’re trying to fight inflation, we need to think about a tight fiscal policy to reduce aggregate demand. This means lowering spending and/or raising taxes, and the theory doesn’t dictate which choice or which mix of the two is best for any given situation or in general.
[…]
Countercyclical fiscal policy can be implemented in two ways: automatic and discretionary. Automatic stabilizers are government fiscal policies that, without any need for legislation, automatically stimulate aggregate demand when the economy is declining and automatically hold down aggregate demand when the economy is expanding.
To understand how this happens automatically, imagine first that the economy is growing rapidly. Aggregate demand is very high; it’s at or above potential GDP, and we’re worried about inflation. What would be the appropriate countercyclical fiscal policy in this situation? One option is to increase taxes to take some of the buying power out of the economy. But this happens automatically to some extent because taxes are, more or less, a percentage of what people earn. As income rises, taxes therefore automatically rise. Indeed, the U.S. individual income tax is structured around tax brackets so as people earn more income, the taxes paid out of each additional dollar gradually rise. The same process works in reverse, of course. In a shrinking economy, the taxes that people owe automatically decline because taxes are a share of income. This helps prevent aggregate demand from shrinking as much as it otherwise would. Thus, taxes are an automatic countercyclical fiscal policy, or an automatic stabilizer.
On the spending side, when the economy grows, what countercyclical policy do we want to apply, and what actually happens? As a booming economy approaches potential GDP, the goal of countercyclical fiscal policy is to prevent demand from growing too fast and tipping the economy into inflation. But when the economy is doing well, fewer people need government support programs such as welfare, Medicaid, and unemployment benefits. As a result, in good economic times, spending from the government in these kinds of categories automatically declines, which acts as the desired automatic stabilizer. The same works in reverse. In a shrinking economy or a recession, more people are unemployed and need government assistance. At such times, government spending on programs that help the unemployed and the poor tends to rise, boosting aggregate demand (or at least keeping it from shrinking too much), which is exactly the countercyclical fiscal policy one would want.
Recent economic experience offers several examples of these patterns. During the dot-com economic boom of the late 1990s, there was an unexpected surge in federal tax revenues. President Bill Clinton’s proposed budget for fiscal year 1998 predicted a deficit of $120 billion, but when the booming economy produced $200 billion more in tax revenue than expected, it led to a budget surplus of $69 billion. Similarly, Clinton’s proposed budget for 1999 predicted a balanced budget in 2000; the actual result was a surplus of $236 billion. The surpluses that existed between 1998 and 2001 were largely the product of these higher-than-expected federal tax revenues; federal taxes in 2000 collected 20.9 percent of GDP. These unexpectedly high tax revenues weren’t the result of new legislation. They were automatic stabilizers, helping to prevent the economy from expanding too quickly and triggering inflation.
As a counterexample, consider the extremely large budget deficits of 2009 and 2010. President George W. Bush’s last proposed budget, which applied to fiscal year 2009, projected that the tax revenues for 2009 would be 18 percent of GDP. But when the recession hit, tax revenues for 2009 turned out to be just 14.8 percent of GDP. A portion of this drop was due to tax cuts passed in 2009 under the incoming administration of President Obama, but most of it was due to the recession turning out to be far harsher than expected. This unexpected drop in tax collections was an automatic stabilizer that helped to cushion the blow of the recession.
More broadly, systematic evidence shows the impact of countercyclical fiscal policy over time. John Taylor looked at evidence from the 1960s up through 2000 and found that, on average, a 2 percent fall in GDP led to an automatic offsetting increase in fiscal policy of 1 percent of GDP.
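To make the tax side of this mechanism concrete, here’s a minimal sketch of how a progressive income tax plus an unemployment benefit cushions a drop in earnings without any new legislation being passed. (The brackets, rates, and benefit amount are hypothetical round numbers of my own, not figures from Taylor.)

```python
# Toy illustration of automatic stabilizers: a simple progressive tax schedule
# plus an unemployment benefit. All brackets and amounts are hypothetical.

def income_tax(income: float) -> float:
    """Progressive tax: 10% on the first 50k, 25% on the next 50k, 40% above 100k."""
    brackets = [(50_000, 0.10), (50_000, 0.25), (float("inf"), 0.40)]
    tax, remaining = 0.0, income
    for width, rate in brackets:
        taxed = min(remaining, width)
        tax += taxed * rate
        remaining -= taxed
        if remaining <= 0:
            break
    return tax

def disposable_income(earnings: float, unemployed: bool = False) -> float:
    benefit = 15_000 if unemployed else 0  # hypothetical unemployment benefit
    return earnings - income_tax(earnings) + benefit

if __name__ == "__main__":
    boom = disposable_income(120_000)                # worker in a boom year
    slump = disposable_income(60_000)                # same worker, hours cut in a slump
    laid_off = disposable_income(0, unemployed=True) # same worker, laid off entirely
    print(f"Boom:     earn 120k -> keep {boom:,.0f}")
    print(f"Slump:    earn  60k -> keep {slump:,.0f}")
    print(f"Laid off: earn   0  -> keep {laid_off:,.0f}")
    # Earnings fall by half (boom -> slump) and then to zero, but disposable income
    # falls by proportionally less at each step: taxes shrink automatically and
    # benefits kick in, which props up spending power with no new legislation.
```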
Of course, as Taylor’s numbers suggest, automatic stabilizers aren’t designed to perfectly offset every swing in the economy all by themselves – so if there’s a really extreme recession, they won’t typically be enough to fully counteract it on their own. This, then, is where discretionary fiscal policy can come into the picture; if private individuals and firms can’t or won’t start spending until they have a little more money in their pockets, government can make that happen by giving them tax breaks, buying more goods and services from them, or hiring them directly to work on big public works projects – building roads and bridges, rehabilitating dilapidated schools and libraries, and so on – or some combination of these. By boosting people’s incomes in this way, government stimulus can enable them to return to more normal levels of spending – and with this increase in consumer demand, private businesses will naturally respond by expanding their operations and hiring more workers, thereby putting even more money in people’s pockets, and causing consumer spending to rise further still, and so on in a virtuous cycle, until ultimately the downward spiral of recession is completely reversed. With the economy back on track, the government can then let the private sector take the lead again, and allow its own spending to return to more normal levels. This was how the US finally overcame the Great Depression; the government hired millions of unemployed workers and put their unused labor to good use, first with the public works projects of Roosevelt’s New Deal, and then with the ultimate public works project known as World War II. And it has used the same kind of fiscal stimulus strategy, albeit to a much less dramatic degree, to pull out of later recessions as well, including the 2008 one. You might argue that in each of these cases, the economy could have eventually recovered on its own without government intervention – but as Taylor writes:
Keynesians are concerned that the macroeconomy can become stuck below potential GDP for a long time. Even if the economy gradually returns to full employment in the long run without government action, as Keynes famously wrote, “In the long run, we’re all dead.” Waiting for the long run has large costs; if the economy takes years to readjust, that’s a huge chunk out of people’s lives and careers. Thus, Keynesian economists tend to support active government macroeconomic policies with the goals of fighting unemployment, stimulating the economy, and shortening recessions and depressions as much as possible.
If an active fiscal policy can get the economy back on track more quickly and painlessly than a policy of non-intervention, there’s no reason to prolong people’s hardship by forcing them to endure years of needless financial devastation and all the damage that comes with it. Far better to have a government that can swiftly identify economic problems as they arise and can accordingly adjust its own spending so as to neutralize them before they ever have a chance to cause long-term issues in the first place. There are, after all, always public needs that can be fulfilled – and if the government is able to rescue a foundering economy by employing otherwise idle labor to fulfill them, that’s a win-win for everyone involved. In fact, the benefits of this strategy are so evident that some commentators have proposed making it a more systematic part of national governments’ standard operating procedure – something that, much like the automatic stabilizers discussed above, can simply kick in without any fuss whenever there’s a big enough recession. Wheelan, for instance, proposes that the federal government maintain an ongoing list of “deferred maintenance” projects that need doing (but haven’t made it into any federal budgets yet), which it can just pull up and start working on whenever the need for a bit of extra government spending arises:
A successful society needs to move people, goods, and information. Individuals cannot build their own air traffic control systems. Private firms do not have the power of eminent domain to create new corridors for moving freight and information.
Our nation needs an infrastructure strategy that lays out the most important federal infrastructure goals (e.g., reducing traffic congestion, improving air quality, promoting Internet connectivity, etc.) and then creates a mechanism for evaluating all federal projects against those goals. The most cost-effective projects should be included in the infrastructure budget; the projects that do not meet some threshold for cost-effectiveness should be rejected.
Earmarks, the process by which individual legislators tuck their pet projects into larger bills, will not disappear entirely. Congress has the power to pass legislation, including legislation that spends money on stupid things. However, an objective set of criteria for evaluating transportation projects would quantify the silliness of these pet projects. If modernizing the nation’s air traffic control system scores 91 out of 100 when it is evaluated against federal transportation goals, and expanding the parking lot at the Lawn Darts Hall of Fame scores a 3, then it becomes all the more shameful to spend money on the latter. And when shame does not work, a presidential veto just might.
This basic approach has two major advantages. First, it can help restore public trust in the government’s ability to make sensible infrastructure investments—rather than allowing “bridges to nowhere” to be built because an influential legislator was able to slip an earmark into the transportation bill. Second, it provides a less controversial and more efficient form of “stimulus” during economic downturns. Once the federal government has an infrastructure plan made up of approved projects, it is possible to reach deeper into that list of “shovel-ready projects” during slack economic times. There is no money wasted on stimulus spending; sensible spending is merely speeded up.
Other commentators have proposed taking this approach even further and simply making it an official policy that such government-sponsored jobs always be made available to workers whenever the private sector is unable to provide full employment – a so-called “federal job guarantee.” (This is yet another topic that merits an entire post of its own at some point.) Meanwhile, still others come at the issue from the opposite angle and argue that in the case of short-term economic shocks, the government may not need to create new jobs for people at all, and can instead just assume part of the cost of keeping them in their current private-sector jobs. The German Kurzarbeit program, for instance, is designed to do exactly this:
Kurzarbeit is the German name for a program of state wage subsidies in which private-sector employees agree to or are forced to accept a reduction in working hours and pay, with public subsidies making up for all or part of the lost wages.
Several Central European countries use such subsidies to limit the impact on the economy as a whole or a particular sector from short-term threats such as a recession, pandemic, or natural disaster. The idea is to temporarily subsidize companies to avoid layoffs or bankruptcies during a temporary external disruption. Most notably, such subsidy programs were used to offset the effects of the COVID-19 pandemic and recession starting in 2020.
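The arithmetic of such a program is simple enough to sketch. (The wage, the hours reduction, and the 60% subsidy rate below are placeholder values for illustration, not the actual statutory terms of the German program.)

```python
# Hypothetical short-time-work (Kurzarbeit-style) calculation.
# The subsidy rate below is a placeholder, not the actual German statutory rate.

def short_time_pay(full_wage: float, hours_fraction: float, subsidy_rate: float = 0.6) -> dict:
    """Worker keeps pay for hours actually worked, plus a public subsidy
    covering part of the wages lost to the reduced schedule."""
    worked_pay = full_wage * hours_fraction
    lost_pay = full_wage - worked_pay
    subsidy = lost_pay * subsidy_rate
    return {
        "employer pays": worked_pay,
        "state pays": subsidy,
        "worker receives": worked_pay + subsidy,
    }

if __name__ == "__main__":
    result = short_time_pay(full_wage=3_000, hours_fraction=0.5)  # hours cut in half
    for label, amount in result.items():
        print(f"{label}: {amount:,.0f}")
    # The worker keeps 2,400 of a 3,000 paycheck despite working half time,
    # and the employer keeps the worker on staff instead of laying them off.
```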
These different approaches all have points in their favor. One thing they all have in common, though, is that they all require an increase in government spending. For this reason, conservatives will often object to them on principle, insisting that the responsible thing to do in a recession is not to spend money like drunken sailors, but to “tighten our belts” and cut back on our spending – practicing a strategy of “fiscal austerity.” They’ll be especially opposed to the idea of the government taking out extra debt to finance its stimulus measures – because after all, if you were personally experiencing a major financial crisis in your own household, wouldn’t the last thing you’d want to do be to take out a bunch of debt and use it to go on a massive spending spree? And this is a perfectly good point – if you’re talking about a private household. But crucially, the government isn’t the same as a private household; its financial health isn’t based on its ability to save up wealth for itself by taking more money out of the economy than it spends, because it’s the one issuing the money and putting it out into the broader economy in the first place. It wouldn’t make sense for it to try and hoard as much money as possible in order to run a surplus in the middle of a recession, because the only way for it to do so would be to take that money out of the broader economy, thereby reducing people’s spending power and causing the recession to deepen. As Rex Nutting explained during the post-2008 slump:
Balancing the budget immediately would be a catastrophe. Even large budget cuts would make the very weak recovery stall. It’s simple arithmetic: What the government spends becomes someone’s income, which they in turn can spend. Cutting government spending (or raising taxes) means cutting disposable incomes, and that means cutting economic growth. That’s why we don’t want any of this budget balancing to take effect for at least a couple of years. Once growth is stronger, the government can reduce spending and raise taxes without hurting the economy.
Of course, it’s worth stressing that if and when the government does decide to boost spending and/or cut taxes, it can’t just assume that any old spending boost or tax cut will be as effective as any other. Merely cutting taxes for rich people who already have a ton of excess savings, for instance, probably won’t do all that much for the broader economy, since in a zero-lower-bound situation the money will most likely just end up sitting idle in a bank vault somewhere without being spent or lent out. Far more effective would be to direct those tax breaks and spending increases toward poorer citizens, who will be more likely to go out and spend their money (since poorer people always have to spend a larger proportion of their income just to meet their basic needs), and who will therefore provide a bigger boost to the economy at large. In fact, for this same reason (and despite everything we’ve been saying about how tax increases are generally counterproductive for stimulus purposes), it can even be possible under the right circumstances to create some degree of fiscal stimulus by just taxing rich people’s idle excess savings and then redistributing that money to everyone else to spend into circulation. Either way though, whether the stimulus is coming from redistributive taxation or deficit spending (or both), the point here is just that when private firms and individuals are unable or unwilling to spend their money, the government can step in and spend it for them, by taxing or borrowing those unused dollars and putting them to use.
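To put rough numbers on that intuition, here’s a minimal sketch comparing the spending generated by the same $100 billion of stimulus depending on who receives it. (The marginal propensities to consume are assumed values chosen purely for illustration, not estimates from any of the sources quoted here.)

```python
# Illustrative comparison of stimulus targeting. The marginal propensities
# to consume (MPCs) below are assumed values, chosen only for illustration.

def total_spending_boost(transfer: float, mpc: float) -> float:
    """Total new spending generated as the transfer is spent and re-spent,
    assuming each recipient spends the fraction `mpc` of what they receive."""
    # Geometric series: transfer * (mpc + mpc**2 + ...) = transfer * mpc / (1 - mpc)
    return transfer * mpc / (1 - mpc)

if __name__ == "__main__":
    transfer = 100.0  # $100 billion of tax cuts or transfers (hypothetical)
    for label, mpc in [("lower-income households", 0.9), ("high-saving households", 0.3)]:
        boost = total_spending_boost(transfer, mpc)
        print(f"${transfer:.0f}B to {label} (MPC {mpc}): ~${boost:.0f}B of new spending")
```

The same dollar of stimulus goes much further when it lands with people who will actually spend it rather than sit on it.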
Now, at this point conservatives may raise another objection: If the government is borrowing and/or taxing away so much of people’s money like this, doesn’t that mean that it’s eating up all the private sector’s investment capital, and thereby “crowding out” all other investment opportunities? By reducing the amount of investable funds available in the private sector, won’t it drive up interest rates and make it more difficult for firms and individuals to borrow money? Again though, this is one of those things that, while it can indeed happen when the economy is booming, ceases to be an issue when the economy is deeply depressed. This was demonstrated decisively in the post-2008 recovery, as Krugman notes:
[After the 2008 crash,] opponents of stimulus argued vociferously that deficit spending would send interest rates skyrocketing, “crowding out” private spending. Proponents responded, however, that crowding out — a real issue when the economy is near full employment — wouldn’t happen in a deeply depressed economy, awash in excess capacity and excess savings. And stimulus supporters were right: far from soaring, interest rates fell to historic lows.
Krugman himself was one of the biggest pro-stimulus voices during this period when the economy was struggling to recover; and years later, his arguments still hold up. As he wrote at the time:
Let’s start with what may be the most crucial thing to understand: the economy is not like an individual family.
Families earn what they can, and spend as much as they think prudent; spending and earning opportunities are two different things. In the economy as a whole, however, income and spending are interdependent: my spending is your income, and your spending is my income. If both of us slash spending at the same time, both of our incomes will fall too.
And that’s what happened after the financial crisis of 2008. Many people suddenly cut spending, either because they chose to or because their creditors forced them to; meanwhile, not many people were able or willing to spend more. The result was a plunge in incomes that also caused a plunge in employment, creating the depression that persists to this day.
Why did spending plunge? Mainly because of a burst housing bubble and an overhang of private-sector debt — but if you ask me, people talk too much about what went wrong during the boom years and not enough about what we should be doing now. For no matter how lurid the excesses of the past, there’s no good reason that we should pay for them with year after year of mass unemployment.
So what could we do to reduce unemployment? The answer is, this is a time for above-normal government spending, to sustain the economy until the private sector is willing to spend again. The crucial point is that under current conditions, the government is not, repeat not, in competition with the private sector. Government spending doesn’t divert resources away from private uses; it puts unemployed resources to work. Government borrowing doesn’t crowd out private investment; it mobilizes funds that would otherwise go unused.
Now, just to be clear, this is not a case for more government spending and larger budget deficits under all circumstances — and the claim that people like me always want bigger deficits is just false. For the economy isn’t always like this — in fact, situations like the one we’re in are fairly rare. By all means let’s try to reduce deficits and bring down government indebtedness once normal conditions return and the economy is no longer depressed. But right now we’re still dealing with the aftermath of a once-in-three-generations financial crisis. This is no time for austerity.
This view of our problems [as explained above] has made correct predictions over the past four years, while alternative views have gotten it all wrong. Budget deficits haven’t led to soaring interest rates (and the Fed’s “money-printing” hasn’t led to inflation); austerity policies have greatly deepened economic slumps almost everywhere they have been tried.
And it’s true; in the aftermath of the 2008 crisis (which was a truly global crisis, not just an American one), the countries that relied most heavily on austerity policies to save them from their economic troubles – like Portugal, Spain, and Ireland – were the ones whose slumps subsequently worsened most dramatically (to say nothing of Greece, which was a whole separate basket case of its own).
It might seem hard to believe that a national strategy of financial belt-tightening, which is so undeniably prudent and responsible in so many other circumstances, could be so disastrously wrong when economic conditions are at their most dire. But Michael J. Wilson sums up why this is in fact the case, and captures the core of the matter with a great analogy:
Deficits occur when the economy drags: think of the Reagan Recession of the early 1980s and [the post-2008] Great Recession. That’s because federal revenue falls at the same time that [the need for] vital safety net programs such as unemployment compensation and food stamps increases. Repairing our budget really starts by repairing our economy. And that repair begins with increased demand. But consumers are unable to spend if they are burdened by debt and millions are out of work — or worried they soon will be. Companies are unwilling to hire or invest in this uncertain economic environment. There’s only one entity left to prime the pump: the federal government. The seeming paradox of the government needing to spend more to balance the budget is not really all that strange. Short-term and long-term prescriptions are often at odds. Think about it: Exercise is generally a good idea, but not when you’re flat on your back with a broken leg. Once the economy is back on its feet again, spending can be addressed as part of the effort to control deficits — deficits which, not so incidentally, will be lower because of the tax revenue being generated by a robust economy.
His last point there is also worth noting; if your true long-term goal is to keep the national debt from growing too large, then that’s no reason to avoid heavy government spending in a recession – on the contrary, it’s all the more reason to want the government to spend big in the short term so as to restore the economy to full health as quickly as possible and minimize the fiscal damage over the long term. True, it’ll cost more up front, but ultimately the results will pay for themselves and then some – whereas by contrast, an austerity-based strategy will only prolong and worsen the economic losses, as Krugman points out:
Cutting spending in a deeply depressed economy is largely self-defeating even in purely fiscal terms: any savings achieved at the front end are partly offset by lower revenue, as the economy shrinks.
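A quick back-of-the-envelope calculation shows why. (The multiplier and the tax share of GDP below are assumed values for illustration; they aren’t Krugman’s estimates.)

```python
# Back-of-the-envelope illustration of why austerity can be partly self-defeating
# in a depressed economy. Multiplier and tax share are assumed values.

def net_deficit_reduction(spending_cut: float, multiplier: float, tax_share: float) -> float:
    """Deficit reduction after accounting for the revenue lost when the cut shrinks GDP."""
    gdp_loss = spending_cut * multiplier  # output falls by the cut times the multiplier
    revenue_loss = gdp_loss * tax_share   # tax receipts fall along with output
    return spending_cut - revenue_loss

if __name__ == "__main__":
    cut = 100.0        # a hypothetical $100B spending cut
    multiplier = 1.5   # assumed fiscal multiplier in a deep slump
    tax_share = 0.30   # assumed share of GDP collected as revenue
    saved = net_deficit_reduction(cut, multiplier, tax_share)
    print(f"Headline cut: ${cut:.0f}B; deficit actually falls by only ~${saved:.0f}B")
    # Under these assumptions, 45% of the headline savings evaporate as the economy
    # shrinks -- and if the slump does lasting damage, the arithmetic gets worse still.
```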
As Krugman sums it all up, then, the prescription is simple:
Slumps [can] be fought with appropriate government policies: low interest rates for relatively mild recessions, deficit spending for deeper downturns. And given these policies, much of the rest of the economy [can] be left up to markets.
Of course, I say “simple” – and even Keynes himself described this as a “moderately conservative” approach to macroeconomic issues (in contrast to, say, the Marxist view that economic crises must ultimately bring about the destruction of the entire system) – but in the eyes of many anti-statists, any amount of government intervention is too much, so not even small interventions can be accepted, much less large-scale ones that affect the entire economy. To them, government intervention does nothing but suck valuable money out of the economy and throw it away on useless endeavors – as if it were basically removing that wealth from the economy altogether. But this critique – or at least the crudest version of it – reflects a major misunderstanding of how public spending works. When the government taxes or borrows money from the private sector, it isn’t just piling all the money in a corner and burning it; it’s turning right back around and putting the money back into the pockets of the people whom it hires to perform government services. It’s no different from if those people were still working in the private sector; it’s just that instead of having their wages paid directly to them by other people in the private sector in exchange for their services, the money is first collected by the government, and then paid to them in exchange for their services. The end result is largely the same; the money simply passes through an intermediary first. What’s more, the argument that hiring people in this way diverts valuable money and labor away from productive and efficient private-sector uses toward unproductive and inefficient public ones is likewise flawed – because when the economy is depressed, those resources aren’t being used anyway; so it’s not a matter of government investment versus private investment – it’s a matter of government investment versus no investment at all.
Still though, what about the broader conservative critique of government spending in general, outside just the narrow context of economic depressions? Isn’t there some truth to the old stereotype that government spending tends to be horrendously wasteful compared to the private sector? Well, obviously the answer depends on the government, and on what kind of spending it’s choosing to pursue. Certainly it’s possible to have a government overextend itself and pursue projects that really are wasteful and inefficient – my whole last post was all about how governments can produce all kinds of abysmal results by trying to do things that are better left to the private sector. But on the other side of the coin, this whole post has just been one long extended discussion of all the areas where government spending can actually be more effective and efficient than the private sector, and in many cases can accomplish things that the private sector simply can’t (or won’t) accomplish at all. So clearly it’s a matter of context. To be sure, the market mechanism is tremendously effective at providing most of our everyday goods and services at the lowest possible cost. But government still has a valuable role to play in filling the gaps that the market isn’t quite capable of perfectly covering. And in fact, one of the biggest points in its favor is that when it does need to spend money, it can often do so in a way that itself takes advantage of the market mechanism, as Wheelan points out:
Even [in situations where] government has an important role to play in the economy, such as building roads and bridges, it does not follow that government must actually do the work. Government employees do not have to be the ones pouring cement. Rather, government can plan and finance a new highway and then solicit bids from private contractors to do the work. If the bidding process is honest and competitive (big “ifs” in many cases), then the project will go to the firm that can do the best work at the lowest cost. In short, a public good is delivered in a way that harnesses all the benefits of the market.
In areas where the government is able to do this, the criticism that its spending is less cost-efficient than the private sector’s falls flat, because the government funds in such cases actually are being spent on private-sector work – and not only that, the spending is being done in the most competitive and price-efficient manner possible. Wheelan continues:
This distinction is sometimes lost on American taxpayers, a point that Barack Obama made during one of his town hall meetings on health care reform. He said, “I got a letter the other day from a woman. She said, ‘I don’t want government-run healthcare. I don’t want socialized medicine. And don’t touch my Medicare.’” The irony, of course, is that Medicare is government-run health care; the program allows Americans over age 65 to seek care from their private doctors, who are then reimbursed by the federal government.
This example is particularly noteworthy because the government not only runs Medicare efficiently, but in many ways runs it more efficiently than private health insurance. Private insurers’ administrative costs (that is, their “bureaucracy” and “red tape”) amount to around 15% of their overall funds – as opposed to Medicare’s 2% – and for that reason (among others, such as marketing expenses and payments to shareholders), private insurance costs billions of dollars more than Medicare to pay for the same treatments (see here and here). In other words, the government-sponsored option is by far the less bureaucratic and wasteful of the two.
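For a rough sense of what that overhead gap means in dollar terms, here’s a quick calculation using the ~15% and ~2% figures above. (The volume of claims is a made-up round number, purely for illustration, and overhead is approximated as a simple share of benefits paid.)

```python
# Rough illustration of the administrative-overhead gap described above.
# Uses the ~15% vs ~2% overhead figures from the text; the benefits volume
# is a hypothetical round number, not an actual program budget.

def admin_cost(benefits_paid: float, overhead_rate: float) -> float:
    """Administrative spending, approximated as a share of benefits paid."""
    return benefits_paid * overhead_rate

if __name__ == "__main__":
    benefits = 500_000_000_000  # $500B of medical claims paid (hypothetical)
    private_overhead = admin_cost(benefits, 0.15)
    medicare_overhead = admin_cost(benefits, 0.02)
    print(f"Private-insurer-style overhead: ${private_overhead / 1e9:.0f}B")
    print(f"Medicare-style overhead:        ${medicare_overhead / 1e9:.0f}B")
    print(f"Difference:                     ${(private_overhead - medicare_overhead) / 1e9:.0f}B")
```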
Again, that’s not always the case; Wheelan is right to say that it’s often a big “if” whether having the government outsource its functions to private-sector contractors will really be the most cost-effective option for any given project. As this post illustrates, it will often be cheaper to simply have the government do the job itself, using employees it hires directly. But in either case, the idea that government-sponsored undertakings are always significantly more wasteful than private ones is, to put it mildly, considerably overstated by conservatives. I already quoted part of this excerpt from Alexander earlier, but I’ll just quote the fuller version here now:
[Q]: Large government projects are always late and over-budget.
The only study on the subject I could find, “What Causes Cost Overrun in Transport Infrastructure Projects?” (download study as .pdf) by Flyvbjerg, Holm, and Buhl, finds no difference in cost overruns between comparable government and private projects, and in fact find one of their two classes of government project (those not associated with a state-owned enterprise) to have a trend toward being more efficient than comparable private projects. They conclude that “…one conclusion is clear…the conventional wisdom, which holds that public ownership is problematic whereas private ownership is a main source of efficiency in curbing cost escalation, is dubious.”
Further, when government cost overruns occur, they are not usually because of corrupt bureaucrats wasting the public’s money. Rather, they’re because politicians don’t believe voters will approve their projects unless they spin them as being much cheaper and faster than the likely reality, leading a predictable and sometimes commendable execution to be condemned as “late and over budget” (download study as .pdf). While it is admittedly a problem that government provides an environment in which politicians have to lie to voters to get a project built, the facts provide little justification for a narrative in which government is incompetent at construction projects.
[Q]: State-run companies are always uncreative, unprofitable, and unpleasant to use.
Some of the greatest and most successful companies in the world are or have been state-run. Japan National Railways, which created the legendarily efficient bullet trains, and the BBC, which provides the most respected news coverage in the world as well as a host of popular shows like Doctor Who, both began as state-run corporations (JNR was later privatized).
In cases where state-run corporations are unprofitable, this is often not due to some negative effect of being state-run, but because the corporation was put under state control precisely because it was something so unprofitable no private company would touch it, but still important enough that it had to be done. For example, the US Post Office has a legal mandate to ship affordable mail in a timely fashion to every single god-forsaken town in the United States; obviously it will be out-competed by a private company that can focus on the easiest and most profitable routes, but this does not speak against it. Amtrak exists despite passenger rail travel in the United States being fundamentally unprofitable, but within its limitations it has done a relatively good job: on-time rates better than that of commercial airlines, 80% customer satisfaction rate, and double-digit year-on-year passenger growth every year for the past decade.
To reiterate, none of this is to suggest that it makes no difference whether big important projects are handled by the public sector or the private sector, and that therefore we might as well nationalize everything. Most of the time, the market mechanism really will be the cheapest and most effective way of getting things done in the economy, and we forget that at our own peril. The point here is just that it isn’t the only way of getting things done. Trying to run an entire economy solely through the market mechanism, without any kind of government intervention at all, would be dramatically less efficient and effective than running a mixed economy that takes full advantage of both the public and private sectors’ strengths. This is especially apparent in times of economic crisis, like during recessions or inflationary spirals – but it’s no less true even during normal times. Good government is what allows the market to reach its full potential, and to operate as smoothly and productively as possible.
XVI.
And this brings us to one of the biggest ways in which government can enhance the effectiveness of the private sector by doing the spending that the private sector can’t or won’t do for itself. We’ve been talking all about the importance of having a government that can come to the rescue when the private sector has overinvested in the wrong things and produced massive negative externalities for the rest of the economy. But in addition to these instances in which the government must come in after the fact to clean up the messes created by market failures, there are also cases in which the government can help from the opposite direction – by identifying areas in which the private sector is chronically underinvesting in crucial goods and services (and thereby causing itself to underperform its full potential), and making up the difference by investing in those areas itself. After all, in the same way that market transactions can sometimes produce negative externalities by imposing costs on outside parties without accounting for them, they can just as easily produce positive externalities by creating benefits for outside parties without accounting for them – and when this happens, the government may be justified in intervening to increase the production of those positive externalities, in the same way that it may be justified in intervening to decrease the production of negative externalities, so as to bring them in line with their most market-optimal levels.
So what are some instances of areas where this kind of thing can happen? Sowell starts us off with the most minor kind of everyday example just to illustrate the basic idea:
[In the same way that transactions may impose costs on uninvolved third parties], there may be transactions that would be beneficial to people who are not party to the decision-making, and whose interests are therefore not taken into account. The benefits of mud flaps on cars and trucks may be apparent to anyone who has ever driven in a rainstorm behind a car or truck that was throwing so much water or mud onto his windshield as to dangerously obscure vision [or for that matter, kicking up pieces of gravel or loose rocks]. Even if everyone agrees that the benefits of mud flaps greatly exceed their costs, there is no feasible way of buying these benefits in a free market, since you receive no benefits from the mud flaps that you buy and put on your own car, but only from mud flaps that other people buy and put on their cars and trucks.
These are “external benefits.” Here again, [as with external costs,] it is possible to obtain collectively through government what cannot be obtained individually through the marketplace, simply by having laws passed requiring all cars and trucks to have mud flaps on them.
It is worth noting that there can be positive externalities as well [as negative ones]; an individual’s behavior can have a positive impact on society for which he or she is not fully compensated. I once had an office window that looked out across the Chicago River at the Wrigley Building and the Tribune Tower, two of the most beautiful buildings in a city renowned for its architecture. On a clear day, the view of the skyline, and of these two buildings in particular, was positively inspiring. But I spent five years in that office without paying for the utility that I derived from this wonderful architecture. I didn’t mail a check to the Tribune Company every time I glanced out the window. Or, in the realm of economic development, a business may invest in a downtrodden neighborhood in a way that attracts other kinds of investment. Yet this business is not compensated for anchoring what may become an economic revitalization, which is why local governments often offer subsidies for such investment.
Things like making cities more physically attractive are often overlooked or dismissed as unimportant or superficial – and to be sure, they aren’t the most important things in the world. Even so, attractive living conditions are something to which people do ascribe real positive value; nobody wants to live in an area that’s all ugly gray architecture and trash in the streets. So a truly efficient market does need to take that positive value into account and make adjustments accordingly.
Beyond these kinds of concerns, though, the impact of positive externalities extends into much more serious areas as well – including areas like public health where they can sometimes be matters of life and death. Consider the example given by commenter LordeRoyale, for instance, of protecting the population from contagious diseases:
Take vaccines, for example. If one person vaccinates herself against a disease, she is less likely to catch it. But because she is less likely to catch it, she is less likely to become a carrier and infect other people. Thus, getting vaccinated conveys a positive externality. If getting vaccinated has some cost, either in money, time, or risk of adverse side effects, too few people will choose to get themselves vaccinated because they will likely ignore the positive externalities when weighing the costs and benefits. The government may remedy this problem by subsidizing the development, manufacture, and distribution of vaccines or by requiring vaccination.
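A tiny worked example may help pin down the logic here. All the dollar figures below are invented purely for illustration; only the structure of the decision (private benefit vs. external benefit vs. cost) comes from the passage above:

```python
# Invented numbers; only the structure follows the vaccine example above.
private_benefit = 40   # value to me of my own reduced risk of getting sick
external_benefit = 30  # value to everyone else of my not becoming a carrier
cost = 50              # money, time, and side-effect risk of getting the shot

# Left to my own devices, I weigh only my private benefit against the cost...
i_get_vaccinated = private_benefit > cost                           # False
# ...even though society as a whole comes out ahead if I do.
socially_worthwhile = (private_benefit + external_benefit) > cost   # True

# A subsidy covering at least the gap flips my decision without changing
# any of the underlying costs or benefits.
subsidy = cost - private_benefit  # 10 in this toy example
i_get_vaccinated_with_subsidy = (private_benefit + subsidy) >= cost  # True

print(i_get_vaccinated, socially_worthwhile, i_get_vaccinated_with_subsidy)
```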
Or consider the example of education, as raised by Sachs:
Many social sectors exhibit strong spillovers (or externalities) in their effects. I want you to sleep under an antimalarial bed net so that a mosquito does not bite you and then transmit the disease to me! For a similar reason, I want you to be well educated so that you do not easily fall under the sway of a demagogue who would be harmful for me as well as you. When such spillovers exist, private markets tend to undersupply the goods and services in question. For just this reason, Adam Smith called for the public provision of education: “An instructed and intelligent people . . . are more disposed to examine, and more capable of seeing through, the interested complaints of faction and sedition. . . .” Smith argued, therefore, that the whole society is at risk when any segment of society is poorly educated.
Even aside from this civic-minded kind of justification, having a society full of knowledgeable, educated citizens makes everyone better off, regardless of whether they’ve received any of the education themselves, simply because more highly educated people are less likely to engage in violent crime and other anti-social activities, and more likely to make positive contributions like producing valuable breakthroughs in science, technology, and other knowledge-intensive fields. As Sachs notes, though, these spillover effects are never actually included in private-sector prices, since the recipients of those ancillary benefits aren’t the ones paying for the education directly – so if left solely to private markets, education tends to be undersupplied and overpriced, just as negative externalities tend to be oversupplied and underpriced. The market alone produces inefficient results; so it’s possible for the government to improve things by getting involved.
On a related note, another area in which government support for positive externalities can make a major difference is in providing funding for scientific and technological research directly. Scientific advancements like (say) the discovery of some new medical treatment, or the invention of the internet, can have significantly bigger impacts on our quality of life than anything that might be going on in the political realm (despite the latter being where we typically prefer to devote the overwhelming share of our attention and energy). But again, the value of such advancements is often spread across the whole of society, and can’t be fully compensated by the market mechanism alone; the positive externalities at play are often significant. What’s more, this kind of scientific research can often require sizeable up-front investment, and may take years before it yields anything that could be profitable at all (if it ever does). For private firms that depend on short-term profitability in order to stay in business, it simply isn’t feasible to invest in such pursuits fully enough to generate the maximum possible social benefit. This is yet another area, then, where government investment can be indispensable. As Wheelan writes:
We [all know about] the powerful incentives that profits create for pharmaceutical companies and the like. But not all important scientific discoveries have immediate commercial applications. Exploring the universe or understanding how human cells divide or seeking subatomic particles may be many steps removed from launching a communications satellite or developing a drug that shrinks tumors or finding a cleaner source of energy. As important, this kind of research must be shared freely with other scientists in order to maximize its value. In other words, you won’t get rich—or even cover your costs in most cases—by generating knowledge that may someday significantly advance the human condition. Most of America’s basic research is [therefore] done either directly by the government at places like NASA and the National Institutes of Health or at research universities, which are nonprofit institutions that receive federal funding.
[Recall that a nonrival good is one where] the use of the [good] by one citizen does not diminish its availability for use by others. A scientific discovery is a classic nonrival good. Once the structure of DNA has been discovered, the use of that wonderful knowledge by any individual in society does not limit the use of the same knowledge by others in society. Economic efficiency requires that the knowledge should be available for all, to maximize the social benefits of the knowledge. There should not be a fee for scientists, businesses, households, researchers, and others who want to utilize the scientific knowledge of the structure of DNA! But if there is no fee, who will invest in the discoveries in the first place? The best answer is the public, through publicly financed institutions like the National Institutes of Health (NIH) in the United States. Even the free-market United States invests $27 billion in publicly financed knowledge capital through the NIH.
To be clear, this isn’t to say that private firms and individuals can never make scientific breakthroughs without government funding; obviously they do so all the time. But the breakthroughs that they do make tend to be geared more toward relatively short-term and narrowly-focused applications that can produce immediate profits, and not so much toward the kind of “big picture” research that might benefit the world in a broader sense despite not being immediately profitable. That latter kind of research more often relies on government funding.
In any case, even the shorter-term, more profit-focused efforts that private-sector innovators generally pursue wouldn’t themselves be possible in the first place without government involvement – specifically, the government’s guarantee that innovators will hold sufficient intellectual property rights over their breakthroughs to profit from them. As Taylor explains:
Thomas Edison’s first invention was a vote-counting machine. It worked just fine, but no one bought it and Edison vowed from then on to make only inventions people would actually buy. More recently, Gordon Gould put off patenting the laser, an idea he came up with in 1957. He had his working notebooks notarized to be sure he could prove when he had developed the idea, but he mistakenly believed he needed a working prototype before he could apply for a patent. By the time he did apply, other scientists were putting his ideas into action. It took twenty years and $100,000 in legal fees for him to earn some money from the invention.
These examples help to illustrate the reason why a free market may produce too little scientific research and innovation: there is no guarantee that an unfettered market will reward the inventor. Imagine a company that’s planning to invest a lot of money in research and development on a new invention. If the project fails, the company will have lower net profits than its competitors, and maybe it will even suffer losses and be driven out of business. The other possibility is the project succeeds; in that case, in a completely unregulated free market, competitors can just steal the idea. The innovating company will incur the development expenses but no special gain in revenues. It will still have lower net profits than all its competitors and may still be driven out of business. Heads, I lose; tails, you win.
In conceptual terms, new technology is the opposite of pollution. In the case of pollution, parties external to the transaction between producer and consumer suffered the environmental costs. With new inventions, parties external to the transaction between producer and consumer reap the benefits of these new innovations without needing to compensate the inventor. Thus, innovation is an example of a positive externality.
The key element driving innovation is the ability of an innovator to receive a substantial share of the economic gains from an investment in research and development. Economists call this “appropriability.” If inventors, and the firms that employ them, are not being sufficiently compensated for their efforts, they will provide less innovation. The appropriate policy response to negative externalities such as pollution is to find a way to make the producers face the social costs; conversely, the appropriate policy response to positive externalities such as innovation is to help compensate the producers for their firm’s costs of innovating. Granting and protecting intellectual property rights is one mechanism for accomplishing this goal. Such rights help firms avoid market competition for a set period of time, so that the firm can earn higher-than-normal profits for a while as a return on their investment in innovation.
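To see Taylor’s “heads I lose, tails you win” point in numbers, here’s a minimal expected-value sketch. Everything in it is invented for illustration; the only idea it encodes is that the share of the payoff the innovator can actually capture (the “appropriability” Taylor describes) determines whether the R&D gamble is worth taking at all:

```python
# Invented numbers; the structure follows the appropriability argument above.
rd_cost = 10.0               # up-front research and development spending
success_prob = 0.5           # chance the project yields a working invention
payoff_if_it_works = 40.0    # total profit the invention can generate

def expected_profit(appropriable_share):
    # The innovator only keeps `appropriable_share` of the payoff;
    # the rest leaks to competitors who copy the idea for free.
    return success_prob * payoff_if_it_works * appropriable_share - rd_cost

print(expected_profit(1.0))  # strong IP protection:  +10.0 -> worth funding
print(expected_profit(0.1))  # idea freely copied:     -8.0 -> nobody invests
```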
Again, though, while granting exclusive proprietary rights can be one good mechanism for incentivizing innovation, for many types of research it isn’t an entirely adequate solution on its own, either because the whole point of the research in question is to produce benefits that are shared by everyone, or because it’s not the kind of research that can be profited from at all. That’s why a more straightforward approach in many cases, as Harford notes, is simply to subsidize the research directly (along with whichever other positive externalities are currently being undersupplied by the market):
Just as negative externalities will tend to lead to too much pollution or congestion, positive externalities will leave us undervaccinated, with scruffy neighbors, and a dearth of pleasant cafés. And while negative externalities attract all the attention, positive externalities may be even more important: so many of the things that make life worth living are, in fact, subject to positive externalities and are underprovided: freedom from disease, honesty in public life, vibrant neighborhoods, and technological innovation.
Once we realize the importance of positive externalities, the obvious solution is the mirror image of the policies we considered to deal with negative externalities: instead of an externality charge, an externality subsidy. Vaccinations, for example, are often subsidized by governments or by aid agencies; scientific research, too, usually gets a big dose of government funding.
Taylor elaborates a bit more on what exactly this entails:
The U.S. government uses a range of policies to subsidize innovation. It directly funds scientific research through grants to universities, private research organizations, and businesses. According to the National Science Foundation, in 2008 some $397 billion was spent on research and development in the United States; 65 percent of that was spent by industry, 25 percent by the federal government, and the rest by the nonprofit and educational sector—including state universities. Most of the R&D in the United States is paid for by private industry, with the government’s share shrinking since the aerospace- and defense-related research boom of the 1960s and ’70s. One advantage of industry-funded R&D is that it is likely to focus on applied technology with near-term payoffs. Government-funded research, on the other hand, focuses on big-picture discoveries that might stretch across multiple industries with payoffs that might not appear for decades, such as breakthroughs in how we think about physics or biology. Government-funded research is more often released directly into the public domain, meaning the results are available to anyone who wants to build on them. Firm-financed research is typically subject to patent and trade secret law, so in many cases government-funded research disseminates more quickly through the economy.
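For concreteness, here’s what those percentages work out to in dollars. This is just the shares in the quote applied to the $397 billion total; nothing new is being added:

```python
# 2008 NSF figures as cited in the quote above; the dollar splits are plain arithmetic.
total = 397e9
industry = 0.65 * total                  # ~ $258B
federal = 0.25 * total                   # ~ $99B
other = total - industry - federal       # ~ $40B (nonprofits and universities)

print(f"industry: ${industry/1e9:.0f}B, federal: ${federal/1e9:.0f}B, other: ${other/1e9:.0f}B")
```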
If you were judging solely by the percentages, you might assume that government-funded research would take a clear backseat to privately funded research, and that the latter must be the real force driving scientific and technological progress. This can certainly be true in some cases – but it’s most definitely not true as a universal rule. As Rana Foroohar writes:
In the movie Steve Jobs, a character asks, “So how come 10 times in a day I read ‘Steve Jobs is a genius?’” The great man reputation that envelops Jobs is just part of a larger mythology of the role that Silicon Valley, and indeed the entire U.S. private sector, has played in technology innovation. We idolize tech entrepreneurs like Jobs, and credit them for most of the growth in our economy. But University of Sussex economist Mariana Mazzucato, who has just published a new U.S. edition of her book, The Entrepreneurial State: Debunking Public vs. Private Sector Myths, makes a timely argument that it is the government, not venture capitalists and tech visionaries, that have been heroic.
“Every major technological change in recent years traces most of its funding back to the state,” says Mazzucato. Even “early stage” private-sector VCs come in much later, after the big breakthroughs have been made. For example, she notes, “The National Institutes of Health have spent almost a trillion dollars since their founding on the research that created both the pharmaceutical and the biotech sectors—with venture capitalists only entering biotech once the red carpet was laid down in the 1980s. We pretend that the government was at best just in the background creating the basic conditions (skills, infrastructure, basic science). But the truth is that the involvement required massive risk taking along the entire innovation chain: basic research, applied research and early stage financing of companies themselves.” The Silicon Valley VC model, which has typically dictated that financiers exit within 5 years or so, simply isn’t patient enough to create game changing innovation.
Mazzucato’s book cites powerful data and anecdotes. The parts of the smart phone that make it smart—GPS, touch screens, the Internet—were advanced by the Defense Department. Tesla’s battery technologies and solar panels came out of a grant from the U.S. Department of Energy. Google’s search engine algorithm was boosted by a National Science Foundation innovation. Many innovative new drugs have come out of NIH research.
[Q]: State-run companies may be able to paper-push with the best of them, but the government can never be truly innovative. Only the free market can do that. Look at Silicon Valley!
Advances invented either solely or partly by government institutions include […] the computer, mouse, Internet, digital camera, and email. Not to mention radar, the jet engine, satellites, fiber optics, artificial limbs, and nuclear energy. And that doesn’t include the less recognizable inventions used mostly in industry, or the scores of other inventions from government-funded universities and hospitals.
Even those inventions that come from corporations often come not from startups exposed to the free market, but from de facto state-owned monopolies. For example, during its fifty years as a state-sanctioned monopoly, the infamous Ma Bell invented (via its Bell Labs division) transistors, modern cryptography, solar cells, the laser, the C programming language, and mobile phones; when the monopoly was broken up, Bell Labs was sold off to Alcatel-Lucent, which after a few years announced it was cutting all funding for basic research to focus on more immediately profitable applications.
Although the media celebrates private companies like Apple as centers of innovation, Apple’s expertise lies, at best, in consumer packaging. They did not invent the computer, the mp3 player, or the mobile phone, but they developed versions of these products that were attractive and easy to use. This is great and they deserve the acclaim and heaps of money they’ve gathered from their success, but let’s make sure to call a spade a spade: they are good at marketing and design, not at brilliant invention of totally new technologies.
That sort of de novo invention seems to come mostly from very large organizations that can afford basic research without an obsession on short-term profitability. Although sometimes large companies like Ma Bell, invention-rich IBM and Xerox can fulfill this role, such organizations are disproportionately governments and state-sponsored companies, explaining their impressive track record in this area.
One of the biggest examples of government producing a massive surge in technological progress was World War II, as Daniel P. Gross and Bhaven N. Sampat document:
During World War II, the US government’s Office of Scientific Research and Development (OSRD) supported one of the largest public investments in applied R&D in US history. Using data on all OSRD-funded invention, we show this shock had a formative impact on the US innovation system, catalyzing technology clusters across the country, with accompanying increases in high-tech entrepreneurship and employment. These effects persist until at least the 1970s and appear to be driven by agglomerative forces and endogenous growth. In addition to creating technology clusters, wartime R&D permanently changed the trajectory of overall US innovation in the direction of OSRD-funded technologies.
But this leap forward was just part of a larger trend that has proven itself repeatedly: When the US government really decides to put its mind to it and dedicate significant resources to scientific research – whether for its own sake or for the purposes of some big military effort – it consistently produces advances in technology that are nothing short of transformative. As Noam Chomsky points out:
[Military spending has] played a prominent role in technological and industrial development throughout the modern era. That includes major advances in metallurgy, electronics, machine tools, and manufacturing processes, including the American system of mass production that astounded nineteenth-century competitors and set the stage for the automotive industry and other manufacturing achievements, based on many years of investment, R&D, and experience in weapons production within US Army arsenals. There was a qualitative leap forward after World War II, this time primarily in the US, as the military provided a cover for the creation of the core of the modern high-tech economy: computers and electronics generally, telecommunications and the Internet, automation, lasers, and the commercial aviation industry, and much else, now extending to nanotechnology, biotechnology, neuroengineering, and other new frontiers.
Of course, as he also goes on to emphasize, achieving this kind of technological progress by way of military spending isn’t actually the most productive way of doing so; if you’re spending billions of dollars on new weapons systems and munitions whose only ultimate purpose is to be destroyed, it’s generally quite a bit more wasteful than using that same money to develop technologies that can be put toward more constructive purposes. Unfortunately though, because so many American conservatives share such a strong knee-jerk opposition to every kind of government spending except military spending, using the pretense of national defense is often the only way of getting such spending approved at all.
Nevertheless, the US government does continue to sponsor all kinds of scientific research for civilian applications as well as military ones, and as a result it continues to accelerate our progress as a species in all kinds of critical areas. One of the most important recent examples of this is how government support has helped advance renewable forms of energy like solar power; as Alexander notes, “government subsidies to solar seem to have been a very successful attempt to push solar out of an area where it wasn’t profitable to improve into an area where it is.” If at some point in the near future we’re all living in a world powered primarily by renewable energy, we’ll in no small part have government to thank for it. But while this is just one of the more obvious areas where investment in technology can make us all better off, the same is true for every sector of the economy – and in fact, it’s far more true than even most pro-technology advocates might realize. As Taylor explains, economists estimate that for an advanced economy like the US, fully half of all economic growth can be attributed to the ongoing development of new productivity-enhancing technologies:
The underlying cause of long-term economic growth is a rise in productivity growth—that is, higher output per hour worked or higher output per worker. The three big drivers of productivity growth are an increase in physical capital, that is, more capital equipment for workers to use on the job; more human capital, meaning workers who have more experience or better education; and better technology, that is, more efficient ways of producing things. In practice, these work together in the context of the incentives in a market-oriented economy. However, a standard approach is to calculate how much education and experience per worker have increased and how much physical capital equipment per worker has increased. Then, any remaining growth that cannot be explained by these factors is commonly attributed to improved technology—where “technology” is a broad term referring to all the large and small innovations that change what is produced.
When economists break down the determinants of economic growth for an economy such as the United States, a common finding is that about one-fourth of long-term economic growth can be explained by growth in human capital, such as more education and more experience. Another one-fourth of economic growth can be explained by physical capital: more machinery to work with, more places producing goods. But about one-half of all growth is new technology.
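In other words, the “technology” figure is a residual: whatever growth is left over after the measurable inputs have been accounted for. Here’s a deliberately oversimplified sketch of that bookkeeping (a real growth-accounting exercise weights each input’s raw growth by its share of national income, but the logic of the residual is the same; all the numbers below are made up, chosen only so the split comes out roughly one-quarter / one-quarter / one-half, as in the quote):

```python
# Made-up growth rates, already expressed as each input's contribution to
# output growth (a real exercise would weight raw input growth by factor shares).
output_growth = 2.0           # % per year
human_capital_contrib = 0.5   # education and experience
physical_cap_contrib = 0.5    # more machinery and buildings per worker

# Whatever can't be explained by the measured inputs is attributed to technology.
technology_residual = output_growth - human_capital_contrib - physical_cap_contrib

print(f"human capital:         {human_capital_contrib / output_growth:.0%}")
print(f"physical capital:      {physical_cap_contrib / output_growth:.0%}")
print(f"technology (residual): {technology_residual / output_growth:.0%}")
```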
In light of this (and also for other reasons I’ll get into in my next post), I’m inclined to think that scientific and technological research is one area where we should not only be doing more than we currently are, but vastly more. We seem to have an inclination as a society, whether consciously or unconsciously, to think of scientific breakthroughs not as a central driving force of our civilization, but almost as a kind of extra bonus on top of the regular output of the economy – in other words, not really something that we should feel like we have to maximize, but just an added bit of “house money” that we might get to enjoy if we’re lucky. But I think this gets it dead wrong; I think that in an ideal world, if we could ever somehow implement a perfect system of government that truly accounted for all potential positive externalities to their fullest extent, we’d find that the task of accelerating the advancement of science and technology wouldn’t just make up a large part of the budget; it would practically be the main thing the government was directing its energy and resources toward. After all, as much as our lives are generally improved by the various other things that governments do – passing new laws and regulations and tax reforms and so on – history has shown that even the greatest of these accomplishments often pale in comparison to the impact produced by scientific breakthroughs like curing smallpox or inventing computers or what have you. (Smallpox is a particularly good case in point; in the last century before its eradication, it was estimated to have killed around 500 million people, plus countless more in the centuries before that – but by 1980 it had been completely eradicated, for a mere cost of around $300 million.) What’s more, as momentous as past scientific breakthroughs have been, we may actually be at a point in history right now where the next breakthroughs on the horizon – the technologies that we’re just now on the verge of unlocking – could be exponentially more world-changing than anything that has come before. For the first time in history, it has become conceivable that within our lifetimes, we could have AIs that are advanced enough to cure every disease, molecular nanofabricators that are advanced enough to instantly give us any physical good we might desire, bioengineering techniques that are advanced enough to let us extend our lifespans for as long as we want, and even more dramatic breakthroughs still. All of these impossibly sci-fi-sounding technologies are not only genuinely attainable – they could be attainable within the very near future. (Again, this will all be discussed more in the next post, but in the meantime you can check out this post by Tim Urban for a glimpse of what I mean.) But if we want to get there – and get there the right way, without destroying ourselves in the meantime – it will require real investment in research; and the more urgency we exercise in doing so, the better.
In my opinion, there’s a very real sense in which we should consider government’s most important role right now to simply be serving as a tool for empowering scientists and engineers – ensuring that the background social conditions of stability and prosperity are maintained to a sufficient degree to allow them to pursue their research without impediment, and helping advance them in their quest in every way possible (with funding, education, etc.). As things currently stand, the number of people who are actually out there in the world doing real important scientific research is, once you crunch the numbers, startlingly low (see Josiah Zayner’s post on the subject here). And among the working scientists who do exist, competition for funding and support is often incredibly fierce; scientists are frequently forced to spend an inordinate amount of their time jumping through hoops just to secure some limited resources for their work, rather than actually doing the work itself. In an ideal world, though, we would recognize the harm of impeding the progress of science in this way, and would make it our highest priority to do just the opposite; any person who wanted to get a science or technology degree and pursue scientific research for a living would essentially be given a blank check to do so. No doubt, it would require a lot of spending – considerably more than what we’re devoting to it now – and it might even require issuing considerably more government debt. But if we had our priorities straight, this wouldn’t be a problem, because we’d recognize it as the investment that it was; we’d be fully willing to pay the up-front cost, because we’d understand how much more we stood to gain in the long run. After all, this is how it works for every kind of positive externality – whether it be investing in scientific research or investing in repairing deteriorating infrastructure; as Sowell explains:
If a nation’s highways and bridges are crumbling from a lack of maintenance and repair, that does not appear in national debt statistics, but neglected infrastructure is a burden being passed on to the next generation, just as surely as a national debt would be. If the costs of repairs are worth the benefits, then issuing government bonds to raise the money needed to restore this infrastructure makes sense—and the burden on future generations may be no greater than if the bonds had never been issued, though it takes the form of money owed rather than the form of crumbling and perhaps dangerous infrastructure that may become even more costly to repair in the next generation, due to continued neglect.
Either wartime or peacetime expenditures by the government can be paid for out of tax revenues or out of money received from selling government bonds. Which method makes more economic sense depends in part on whether the money is being spent for a current flow of goods and services, such as electricity or paper for government agencies or food for the military forces, or is instead being spent for adding to an accumulated stock of capital, such as hydroelectric dams or national highways to be used in future years for future generations.
Going into debt to create long-term investments makes as much sense for the government as a private individual’s borrowing more than his annual income to buy a house. By the same token, people who borrow more than their annual income to pay for lavish entertainment this year are simply living beyond their means and probably heading for big financial trouble. The same principle applies to government expenditures for current benefits, with the costs being passed on to future generations.
To be sure, investing in infrastructure repair isn’t exactly like investing in scientific research. In the case of infrastructure, it can often make perfect sense to want to limit spending exclusively to the projects that are sure to produce the greatest benefits, so as to avoid needlessly wasting valuable resources on worthless money pits. When it comes to science investing, on the other hand, it’s often impossible to definitively know in advance exactly which projects will turn out to be the most promising; so if anything, science investing might be better likened to venture capital, where the optimal strategy is to invest in a bunch of moonshot projects – even though you know that most of them will never amount to anything – simply because the payoff when one of them ultimately does succeed will be so enormous that it will outweigh all the losses and make it all worthwhile. (Adam Mastroianni’s post here makes this argument wonderfully.) Sure, it may cause conservatives to launch into fiery tirades about government waste when some of those ventures don’t end up bearing fruit – as some of them inevitably won’t – but it’s a well-worn maxim in venture capital that if none of your investments ever fail, it’s a sign that you’re being far too restrained in your investing and are therefore missing out on a ton of positive value. And the same is true of government investment in science. Yes, investing in science does mean that you’ll have to incur some costs in the short term – but in the long term, you’ll earn back your original outlays and then some; that, after all, is what investment is. And that’s one thing that science investment and infrastructure investment do have in common – along with education investment, public health investment, and all the other examples we’ve been discussing. Government spending on positive externalities like these, which might strike some as “optional” or “superfluous,” is in fact a crucial part of a well-functioning economy, because it’s the only scalable way of unlocking immense stores of value that private markets can’t effectively capture on their own.
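To put the venture-capital analogy in numbers, here’s a minimal sketch. Every figure in it is invented; the only point is that a portfolio can have a handsomely positive expected return even when the overwhelming majority of individual projects “fail”:

```python
# All numbers invented; the point is the portfolio logic, not realistic magnitudes.
n_projects = 1000
cost_per_project = 1.0           # normalized funding units
p_breakthrough = 0.02            # 98% of projects never pan out
payoff_per_breakthrough = 200.0  # but the rare success pays off enormously

total_cost = n_projects * cost_per_project
expected_breakthroughs = n_projects * p_breakthrough
expected_payoff = expected_breakthroughs * payoff_per_breakthrough

print(f"projects that fail outright: ~{n_projects * (1 - p_breakthrough):.0f} of {n_projects}")
print(f"expected breakthroughs:      {expected_breakthroughs:.0f}")
print(f"expected net return:         {expected_payoff - total_cost:+.0f}")
```

On these toy numbers, the portfolio returns four times its total cost in expectation even though 98% of the individual bets go nowhere; which is exactly why “this particular project failed” is not, by itself, evidence of waste.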
XVII.
Speaking of providing innovators with government support to unlock whole new levels of attainment, though, another important part of this whole dynamic is that it’s not just individual researchers whose productivity can be significantly boosted in this way. Under certain circumstances, government may be able to achieve similar results on an even larger scale by throwing its support behind entire industries. I talked a lot in my last post about how trying to have the government “steer” what the economy produces is usually a massive mistake, and how it’s almost always better to let private markets organically determine which goods and services get produced. But I also mentioned that one possible exception to this rule might be if a developing country is trying to transition from producing (say) nothing but basic foodstuffs and other primary goods – which puts a natural ceiling on how high its productivity levels can be – to producing more sophisticated goods and services for which there’s more room for increases in demand as societal wealth increases (see Wendover Productions’ video on the subject here). This is known as the “infant industries” argument, because the basic idea is that nascent industries in a particular country might theoretically be capable of operating efficiently enough to compete freely on the international market, but can’t actually do so until they’ve grown large enough to take advantage of economies of scale – so in order to help them get there, the government can provide them with some measure of short-term support (e.g. direct subsidies, temporary tariff protection from foreign competition, etc.) until they’ve grown efficient enough to compete. It’s admittedly a fairly contentious idea among economists, with strong arguments both for and against it – and in all honesty, I’m not entirely sure how fully I buy into it myself – but it does seem compelling enough to be worth mentioning at least, because if it really is a legitimate strategy for development, it might well be one of the most important things a developing country’s government can do for its citizens. I’ll just repeat here the relevant parts from the last post where all this is discussed; so if you’ve already read that post, you can just skip this bit, but if not, here’s the first quotation, from Stiglitz:
Real development requires exploring all possible linkages: training local workers, developing small and medium-size enterprises to provide inputs for mining operations and oil and gas companies, domestic processing, and integrating the natural resources into the country’s economic structure. Of course, today, [poorer] countries may not have a comparative advantage in many of these activities, and some will argue that countries should stick to their strengths. From this perspective, these countries’ comparative advantage is having other countries exploit their resources.
That is wrong. What matters is dynamic comparative advantage, or comparative advantage in the long run, which can be shaped. Forty years ago, South Korea had a comparative advantage in growing rice. Had it stuck to that strength, it would not be the industrial giant that it is today. It might be the world’s most efficient rice grower, but it would still be poor.
Naturally, [David] Ricardo [with his idea of comparative advantage] is not the last word on the subject of international trade. It is important, however, that he be given the first word. His analysis of comparative advantage is the bedrock of modern international trade theory. It is simply not possible to have a proper conversation on the subject unless everyone fully understands this theory, along with all the constraints that it imposes upon our thinking about the subject.
That having been said, there are all sorts of interactions in which the harmonious logic of comparative advantage does not prevail. For example, countries compete against one another fairly directly for foreign direct investment. When Toyota needs to decide whether to build a new manufacturing facility in Ontario, Kentucky, or Baja California, it is not misleading to describe the issue in terms of international competitiveness. It is also important to note that comparative advantage is a trickier concept than it sometimes appears to be. When Ricardo presented his original argument, he used the example of England buying wine from Portugal in exchange for cloth. In this case, the fact that it took less effort to grow grapes in Portugal would seem to be a natural consequence of a more favorable climate. This in turn led to the suggestion that comparative advantage arose from conditions that were largely outside of anyone’s control, such as natural resource endowment. While this is sometimes true, often it is not. Knowledge, productive technology, and even organizational forms are not nearly as portable across national borders as they are often made out to be. Portugal remains a major exporter of port wine to this day, not because of climatic advantages, but because of advantages stemming from the knowledge, experience, and tradition that have arisen through centuries of producing this product. There are also significant economies of scale in winemaking that confer an advantage upon the “first mover,” as well as firms that have a large domestic market for their products.
Many of the most important sources of comparative advantage are completely overlooked in public policy discussions. For example, the presence of a large number of native (or highly competent) English speakers is a source of enormous advantage in particular sectors, not just in media and publishing, but also in law, financial services, scientific research, software development, and so on. Local and national culture can create advantages in ways that are very poorly understood. (Silicon Valley, as people are fond of pointing out, does not contain any significant silicon deposits, but it does contain a lot of Californians.) Network effects are important—the presence and success of one industry can generate advantages in related fields, often in very indirect ways. The legal system also confers advantages upon particular industries. The production of so-called intellectual property, for example, thrives only in jurisdictions with a legal environment that offers reasonable protection of patents and copyrights.
As a result, comparative advantage is not just something that countries happen to possess; it is also something that they can actively cultivate. In particular, subsidies to a given industry may create an advantage that is purely artificial, but over time they can lead to the creation of genuine comparative advantage, as the appropriate support networks, training systems, and reservoir of local knowledge needed for the industry are formed. This is, for example, what the government of Brazil is counting on with its support for Embraer (backed by the desire to get a slice of the international market for commercial aircraft), and the United Kingdom with its subsidization of the video game industry. Of course, this sort of political interference is unwise in many respects, but it does not rest upon any sort of misunderstanding or fallacy. It is possible for a country to do very well for itself through a well-planned and well-executed industrial strategy (particularly in “winner-take-all markets,” where the world only needs one or two suppliers). In this respect, the vocabulary of international competitiveness is again not misleading. If a nation has a particular reason for wanting to be an exporter in a particular sector, it may find itself competing with other nations to build up the right sort of advantages for itself.
Of course, just because it’s possible for a government to provide the necessary “scaffolding,” so to speak, for an industry to successfully build up its economies of scale and become internationally competitive, doesn’t mean that such efforts will always be successful; there’s a long list of examples of countries attempting to build up particular industries and failing badly. So how can a country ensure that it’s tackling the task in the right way, and building up industries that actually are ultimately able to compete, instead of just pouring endless funds into infant industries that never grow up and just turn out to be massive wastes? Alexander summarizes Joe Studwell’s take on the whole issue:
East Asian countries got rich by manufacturing. First it was “Made in Japan”, then “Made in Taiwan”, then “Made in China”. At first each label was synonymous with low-quality knockoffs. Gradually they improved, until now “Made in Japan” has the same kind of prestige as Germany or Switzerland, and even China is losing some of its stigma.
Not every rich country gets rich by manufacturing. Studwell divides successful countries into three groups. First, small financial hubs, like Singapore, Dubai, or Switzerland. This is good work if you can get it, but it really only works for one small country per region; you can’t have all of China be “a financial hub”. In the 1980s, everyone was so impressed with Singapore and Hong Kong that they became the go-to models for development, and people incorrectly recommended liberal free market policies as the solution to everything. But the Singapore/Hong Kong model doesn’t necessarily work for bigger countries, and most of the good financial hub niches are already filled by now.
Second, “high-value agricultural producers”. Studwell gives Denmark and New Zealand as examples. Again, these countries are very nice. But they also tend to be small and sparsely populated, and they also don’t scale. New Zealand’s biggest export category is “dairy, eggs, and honey”. Imagine how much honey you would have to eat to lift China out of poverty that way. It would be absolutely delicious for a few years, and then we would all die of diabetes.
Third, manufacturing, eg everyone else. Every big developed country went through its manufacturing phase. Britain, Germany, and America all passed through an era of sweatshops, smokestacks, and steel. Most developed countries gradually leave that phase, switch to a services-based economy, and offshore some of the worse jobs to places with cheaper labor. But they can’t skip it entirely.
And every big developed country that passed through a manufacturing phase used tariffs (except Britain, which industrialized first and didn’t need to defend itself against anybody). Economic planners like Friedrich List in Germany and Alexander Hamilton in the United States realized early on that British competition would stifle the development of native industry without government protection. Once their industries were as good as Britain’s, they removed their tariffs, which was the right move – but they never would have been able to reach that level without protectionism.
Imagine having to start your own car company in Zimbabwe. Your past experience is “peasant farmer”. You have no idea how to make cars. The local financial system can muster up only a few million dollars in seed funding, and the local manufacturing expertise is limited to a handful of engineers who have just returned from foreign universities. Maybe if you’re very lucky you can eventually succeed at making cars that run at all. But there’s no way you’ll be able to outcompete Ford, Toyota, and Tesla. All these companies have billions of dollars and some of the smartest people in the world working for them, plus decades of practice and lots of proprietary technology. Your cars will inevitably be worse and more expensive than theirs. Every country that’s solved this problem and started a local car industry has done so by putting high tariffs on foreign cars. Locals will have to buy your cars, so even if you’re not exactly making a profit after a few years, at least you’re not completely useless either.
This will become a problem if it shelters companies from competition; they’ll have no incentive to improve. Successful East Asian countries avoided this outcome by having many local car companies. The most successful ones went a bit overboard with this:
In the Korea of 1973 – which at the time boasted a car market of just 30,000 vehicles per annum – government had offered protection and subsidies to not one but three putative makers of ‘citizens’ cars’: HMC, Shinjin, and Kia. Inasmuch as the market was too small for one producer, the licensing of three companies was ridiculous. HMC posted losses every year from 1972 to 1978, despite very high domestic car prices. However, the government sanctioned multiple car makers not to make short-term profits – which would have come much sooner to a monopoly manufacturer – but rather to force the pace of technological learning through competition.
In addition to domestic competition, these governments enforced “export discipline”. In order to keep their government perks (and sometimes in order to keep existing at all), companies needed to sell a certain amount of units abroad each year. At the beginning, they might have to sell for way below-cost to other equally poor countries. That was fine. The point wasn’t that any of this was a short-term economically reasonable thing to do. The point was to force companies to be constantly thinking about how to succeed in the “real world” outside the tariff wall. And the secondary point was to let the government know which companies were at least a little promising, vs. which ones were totally unable to survive except in a captive marketplace. If a company couldn’t export at least a few units, the government usually culled it off and gave its assets to other companies that could.
Aren’t there good free-market arguments against tariffs and government intervention in the economy? The key counterargument is that developing country industries aren’t just about profit. They’re about learning. The benefits of a developing-country industry go partly to the owners/investors, but mostly to the country itself, in the sense of gaining technology / expertise / capacity. It’s almost always more profitable in the short run for developing-world capitalists to start another banana plantation, or speculate on real estate, or open a casino. But a country that invests mostly in banana plantations will still be a banana republic fifty years later, whereas a country that invests mostly in car companies will become South Korea. The car company produces a big positive externality – in the sense of raising the country’s level of development – which isn’t naturally captured by the owners/investors. So development is a collective action problem. The country as a whole would be better off if everyone started car companies, but each individual capitalist would rather start banana plantations.
So the job of a developing country government is to try to get everyone to ignore profits in favor of the industrial learning process. “Ignore profits” doesn’t actually mean the companies shouldn’t be profitable. All else being equal, higher profits are a sign that the company is learning its industry better. But it means that there are many short-term profit opportunities that shouldn’t be taken because nobody will learn anything from them. And lots of things that will spend decades unprofitable should be done anyway, for educational value.
[…]
I think there are [strong] counterarguments to Studwell scattered throughout journals that I haven’t quite figured out how to navigate and collect. The infant industry argument seems to be a going controversy within economics and not at all settled science. The picture is complicated by studies showing that countries with lower tariffs have had higher GDP growth since 1945. Studwell could respond that tariffs only work as part of a coherent and well-designed industrial policy; if you just tariff random things to protect special interests, it will go badly in exactly the way free marketeers expect.
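The collective-action structure of that argument can be captured in a few lines. The payoffs below are invented; the only thing taken from the passage is the shape of the problem, namely that the privately more profitable choice (the banana plantation) generates no spillovers, while the less profitable one (the car company) raises the whole country’s level of development:

```python
# Toy payoffs; only the structure follows the passage above.
private_return = {"banana plantation": 12.0, "car company": 8.0}
spillover_to_country = {"banana plantation": 0.0, "car company": 10.0}

def individual_choice():
    # Each capitalist looks only at the private return.
    return max(private_return, key=private_return.get)

def social_value(sector):
    return private_return[sector] + spillover_to_country[sector]

chosen = individual_choice()
print(f"individually rational: {chosen} (social value {social_value(chosen)})")
print(f"socially better:       car company (social value {social_value('car company')})")

# Any subsidy, tariff, or "export discipline" perk worth more than the 4-unit
# gap in private returns is one crude way to tilt investors toward the
# higher-spillover sector, which is the gist of the argument summarized above.
```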
To be sure, there are good reasons why economists tend to be so adamant in their insistence that this kind of planned industrial policy shouldn’t be tried under ordinary circumstances. If history has shown anything, it’s that trying to steer the course of an entire economy via government policy is a very hard thing to get right, to say the least. Still, it does seem like one thing that can make it easier is if the countries in question aren’t trying to chart a whole new path to prosperity, but are simply aiming for known benchmarks in order to catch up to other countries that have already become rich. Here’s Alexander again, citing the work of Robert Allen this time:
By the early 20th century, a clear gap had emerged between Europe, North America, and Japan (on one side), and everyone else (on the other). After World War II, the former colonies declared independence from Europe, hoping to try the Standard Development Model at long last and get the same easy successes the West had. But this no longer worked; they had missed the boat entirely. [Allen] invokes the increasing gap between developed and less developed countries; when the gap was still small, the Standard Model prongs were enough to overcome it. By the 20th century, developed countries were so far ahead that the model made less sense. If you’re 1820s France trying to catch up to Britain, you can probably find some craftsmen somewhere in your economy who can make something like a textile mill, train them a bit, get them to make textile mills, use some clever investment policy to create whatever prerequisites to textile mills you don’t already have, and eventually end up with textile mills without too much trouble. If you’re 2000s Bangladesh trying to catch up to the West, you want semiconductor factories. Scrounging around a mostly-agrarian economy and eventually cobbling together enough expertise and capital to make a textile mill is one thing. Making a semiconductor factory is a lot harder. And if you decide to just make the textile mill instead, what if First World textile mills are some sort of amazing robotic wonderland now and nobody wants your crappy 1800s-technology textiles? Development needs a lot more slack now before it can become profitable.
Is it still possible to succeed? Allen points to South Korea, the USSR, and China as examples that it might be. He describes their strategy as “the Big Push” – a strong central government producing lots of (not immediately useful or profitable) industry, in the hopes that it will pay off later:
This is Big Push industrialization. It raises difficult problems since everything is built ahead of supply and demand. The steel mills are built before the auto factories that will use their rolled sheets. The auto plants are built before the steel they will fabricate is available, and indeed before there is effective demand for their product. Every investment depends on faith that the complementary investments will materialize. The success of the grand design requires a planning authority to coordinate the activities and ensure that they are carried out. The large economies that have broken out of poverty in the 20th century have managed to do this, although they varied considerably in their planning apparatus.
There follows some discussion of the Soviet Union and China. Both [took this approach], but the Soviet economy stagnated anyway in the 1970s. Allen seems kind of unsure about why this happened, and is willing to entertain both the possibility it was random and contingent (maybe the planners made a mistake in trying to pour so much investment into parts of Siberia that weren’t really habitable), and the possibility that planned economies are fundamentally better at catch-up growth than at the technological frontier (central planners can force people to make steel mills if you know steel mills are next up on your tech tree, but if you don’t know what’s next on the tech tree it’s hard to plan for it). […] The author is [more] impressed with China, which seems to have gotten this part right (maybe by accident): they communismed until they reached the technological frontier, then uncommunismed in time to get on the path to being a normal developed country. [Allen’s book] isn’t very big on prescriptions, but I think it would probably suggest having a pretty heavily planned economy while you’re playing catch-up, and then unwinding it once you’re close to where you want to be.
Like I said, I don’t personally claim to know how valid this idea is; even the professional economists, it seems, aren’t exactly unanimous on the matter. And at any rate, it’s something of a moot point for those of us living in the US, since so few of our industries are in their infancy compared to the rest of the world anyway; we tend to be at the forefront of most fields, so there aren’t really any industries that would theoretically need to be protected until they “caught up” at all. Having said that, though, for less developed countries the question isn’t such a moot point; in fact, getting it right might be the key to pulling themselves out of their economic rut and becoming prosperous – or, if they get it wrong, digging themselves even more deeply into it. Either way, it’s definitely an idea that merits attention – because if there’s really something to it, it’s yet another example of an area where government intervention in the market can mean the difference between fantastic wealth and abject poverty for entire populations of people.
XVIII.
Returning to the subject of our own economy, though, I should briefly clarify here that when I say the US government doesn’t really need to protect any of its infant industries to keep up with the rest of the world economy, that’s not necessarily the same thing as saying it should never support any of its domestic industries for any reason. That would pretty blatantly contradict all the arguments I was making a moment ago about the importance of supporting scientific research and incentivizing technological breakthroughs and so on; obviously, I’m strongly in favor of supporting all kinds of efforts toward those ends. I should also add, though, that those aren’t the only areas in which government support for certain domestic industries might be justifiable for the greater public good. To give another example (repeating again from last post), I think there’s a good case to be made that in order to keep the country safe from catastrophic emergencies, the government should be allowed to provide ongoing support to domestic producers of certain goods to keep them up and running on a stable basis, even if it’s not quite as efficient as entrusting the entire supply of those goods to foreign suppliers (I’m thinking of things like food supplies in particular). This is something I’ve changed my opinion on fairly recently; I used to think things like agriculture subsidies were a prime example of egregious government waste – why spend valuable taxpayer money to keep domestic farmers in business when it would be cheaper just to buy all our food from overseas? – but now I’ve started to see a bit more of the logic behind having a functioning food production infrastructure already in place in case of some sudden unexpected crisis. Sure, it might be a slight drag on our economy 99.99% of the time – but in that other 0.01% of cases, like (say) if we’re ever struck by some deadly super-pandemic that suddenly makes it impossible to import sufficient food from abroad, it seems like we’d be glad to have it. After all, food isn’t something you can just instantly start producing overnight; it takes months to grow and harvest. So although it’s true that you could prepare to some extent by having some necessities already stockpiled in advance – including some non-perishable food supplies being strategically held in reserve – it still doesn’t strike me as completely unreasonable to also want to have some operations already in place to produce more food as needed (including farmers who know how to run them), just in case.
Now, having said all this, I should stress again that I don’t think this should be the norm for every industry. For the vast majority of industries, I’m still strongly in favor of letting firms compete in the free market, and not trying to have the government control them or give them any kind of special protection. This has consistently proven to be the most efficient way to get the highest quality goods to the people who want them at the lowest cost. Nevertheless, there’s no denying that the market system has its downsides, and the examples discussed throughout this post aren’t even the most obvious of them. Probably the most basic downside of all when it comes to market competition is simply that (like any competition) it will always inevitably produce losers as well as winners; when some companies succeed and outcompete others, the less competitive ones will be forced out of business, and their workers will have to find new jobs. For those newly unemployed workers, finding new work can be a difficult and painful process – and the harder it is for them, the more the broader economy misses out on their productive potential. But is there anything that can or should be done to mitigate this, or is it just something we have to accept in all its harshness as the price of having free markets? I don’t think the answer necessarily has to just be the latter; it seems to me that this is one more area where government support can help fill in some of the gaps created by market competition and act as a kind of lubricant to keep the economic machine running more smoothly. This is yet another section I’ll just be copying over wholesale from the previous post, so again, if you’ve already read that one you can just skip the rest of this bit. To start things off, here’s Wheelan’s summary of the basic dilemma:
A market economy inspires hard work and progress not just because it rewards winners, but because it crushes losers. The 1990s were a great time to be involved in the Internet. They were bad years to be in the electric typewriter business. Implicit in Adam Smith’s invisible hand is the idea of “creative destruction,” a term coined by the Austrian economist Joseph Schumpeter. Markets do not suffer fools gladly. Take Wal-Mart, a remarkably efficient retailer that often leaves carnage in its wake. Americans flock to Wal-Mart because the store offers an amazing range of products cheaper than they can be purchased anywhere else. This is a good thing. Being able to buy goods cheaper is essentially the same thing as having more income. At the same time, Wal-Mart is the ultimate nightmare for Al’s Glass and Hardware in Pekin, Illinois—and for mom-and-pop shops everywhere else. The pattern is well established: Wal-Mart opens a giant store just outside of town; several years later, the small shops on Main Street are closed and boarded up.
Capitalism can be a brutal, cruel process. We look back and speak admiringly of technological breakthroughs like the steam engine, the spinning wheel, and the telephone. But those advances made it a bad time to be, respectively, a blacksmith, a seamstress, or a telegraph operator. Creative destruction is not just something that might happen in a market economy. It is something that must happen. At the beginning of the twentieth century, half of all Americans worked in farming or ranching. Now that figure is about one in a hundred and still falling. (Iowa is still losing roughly fifteen hundred farmers a year.) Note that two important things have not happened: (1) We have not starved to death; and (2) we do not have a 49 percent unemployment rate. Instead, American farmers have become so productive that we need far fewer of them to feed ourselves. The individuals who would have been farming ninety years ago are now fixing our cars, designing computer games, playing professional football, etc. Just imagine our collective loss of utility if Steve Jobs, Steven Spielberg, and Oprah Winfrey were corn farmers.
Creative destruction is a tremendous positive force in the long run. The bad news is that people don’t pay their bills in the long run. The folks at the mortgage company can be real sticklers about getting that check every month. When a plant closes or an industry is wiped out by competition, it can be years or even an entire generation before the affected workers and communities recover. Anyone who has ever driven through New England has seen the abandoned or underutilized mills that are monuments to the days when America still manufactured things like textiles and shoes. Or one can drive through Gary, Indiana, where miles of rusting steel plants are a reminder that the city was not always most famous for having more murders per capita than any other city in the United States.
Competition means losers, which goes a long way toward explaining why we embrace it heartily in theory and then often fight it bitterly in practice. A college classmate of mine worked for a congressman from Michigan shortly after our graduation. My friend was not allowed to drive his Japanese car to work, lest it be spotted in one of the Michigan congressman’s reserved parking spaces. That congressman will almost certainly tell you that he is a capitalist. Of course he believes in markets—unless a Japanese company happens to make a better, cheaper car, in which case the staff member who bought that vehicle should be forced to take the train to work. (I would argue that the American automakers would have been much stronger in the long run if they had faced this international competition head-on instead of looking for political protection from the first wave of Japanese imports in the 1970s and 1980s.) This is nothing new; competition is always best when it involves other people. During the Industrial Revolution, weavers in rural England demonstrated, petitioned Parliament, and even burned down textile mills in an effort to fend off mechanization. Would we be better off now if they had succeeded and we still made all of our clothes by hand?
If you make a better mousetrap, the world will beat a path to your door; if you make the old mousetrap, it is time to start firing people. This helps to explain our ambivalence to international trade and globalization, to ruthless retailers like Wal-Mart, and even to some kinds of technology and automation. Competition also creates some interesting policy trade-offs. Government inevitably faces pressure to help firms and industries under siege from competition and to protect the affected workers. Yet many of the things that minimize the pain inflicted by competition—bailing out firms or making it hard to lay off workers—slow down or stop the process of creative destruction. To quote my junior high school football coach: “No pain, no gain.”
It’s true that there’s always the temptation, whenever the market is working its creative destruction like this, to leap to the conclusion that the market competition itself is the problem, and to accordingly want to impose drastic restrictions that prevent it from being able to so easily force firms out of business and force workers out of their jobs. But as Wheelan rightly points out, any government that intervenes in the market in an effort to counteract its negative effects has to be careful about whether it’s doing so in a remedial kind of way that still allows the positive effects to occur, or whether it’s simply blocking all of the market’s effects, both positive and negative, from ever occurring in the first place – because in the latter case, it may end up doing more harm than good. He continues:
What’s the problem [with trying to preserve obsolete jobs instead of letting them be eliminated]? The problem is that we don’t get the benefits of the new economic structure if politicians decide to protect the old one. Roger Ferguson, Jr., former vice chairman of the board of governors of the Federal Reserve, explains, “Policymakers who fail to appreciate the relationship between the relentless churning of the competitive environment and wealth creation will end up focusing their efforts on methods and skills that are in decline. In so doing, they establish policies that are aimed at protecting weak, outdated technologies, and in the end, they slow the economy’s march forward.”
Both politics and compassion suggest that we ought to offer a hand to those mowed over by competition. If some kind of wrenching change generates progress, then the pie must get bigger. And if the pie gets bigger, then at least some of it ought to be offered to the losers—be it in the form of transition aid, job retraining, or whatever else will help those who have been knocked over to get back on their feet. One of the features that made the North American Free Trade Agreement more palatable was a provision that offered compensation to workers whose job losses could be tied to expanded trade with Mexico. Similarly, many states are using money from the massive legal settlement with the tobacco industry to compensate tobacco farmers whose livelihoods are threatened by declining tobacco use.
There is a crucial distinction, however, between using the political process to build a safety net for those harmed by creative destruction and using the political process to stop that creative destruction in the first place. Think about the telegraph and the Pony Express. It would have been one thing to help displaced Pony Express workers by retraining them as telegraph operators; it would have been quite another to help them by banning the telegraph.
Needless to say, the desire to protect people’s jobs comes from a place of good intentions. But trying to accomplish this by interfering with productive market competition tends to backfire; protecting unproductive firms keeps market efficiency artificially low and the cost of producing goods and services artificially high, which leads to persistently high prices for customers, and that means those customers’ purchasing power stagnates rather than improving over time – the functional equivalent of denying them a universal pay raise.
So what’s a better alternative? Wheelan gave the answer already; there are certain approaches governments can take which actually do allow for the market’s creative destruction to take place, but then also provide a safety net as a supplement to it, so the benefits of market competition can still be realized while keeping the economic collateral damage to a minimum. Instead of trying to keep workers locked into their old jobs, government can, for instance, provide them with resources like transition aid and/or job retraining to more smoothly and easily transfer into new ones. As Wheelan puts it:
We can do things to soften [the] blows [of creative destruction]. We can retrain or even relocate workers. We can provide development assistance to communities harmed by the loss of a major industry. We can ensure that our schools teach the kinds of skills that make workers adaptable to whatever the economy may throw at them. In short, we can make sure that the winners do write checks (if indirectly) to the losers, sharing at least part of their gains. It’s good politics and it’s the right thing to do.
The logic behind this kind of strategy is clear enough: By allowing creative destruction to take place and make production more efficient (as opposed to resisting it with legal restrictions and so on), we can decrease costs for customers, so that even if a portion of those customer savings are then taxed and redistributed to workers who’ve lost their jobs, everyone can still come out ahead overall. And if this redistribution program includes training those workers for new jobs, it can increase efficiency and customer savings further still, by making it quicker and easier for firms to find qualified workers and bring them into positions where they can be most productive.
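To make that arithmetic a bit more concrete, here’s a minimal toy sketch of the idea in Python. Every number in it is invented purely for illustration (none of this comes from Wheelan or anyone else); the point is only to show the shape of the trade:

```python
# Toy illustration only: every figure below is made up for the sake of the example.
consumers = 1_000_000        # people who benefit from cheaper goods
savings_per_consumer = 100   # dollars each consumer saves per year from the efficiency gain
displaced_workers = 1_000    # workers who lose their jobs in the transition
support_per_worker = 50_000  # dollars of transition aid / retraining per displaced worker

total_consumer_savings = consumers * savings_per_consumer    # $100,000,000
total_support_cost = displaced_workers * support_per_worker  # $50,000,000

# Fund the worker support out of the consumer savings (say, through a broad tax).
tax_per_consumer = total_support_cost / consumers            # $50 each

print(f"Each consumer still keeps ${savings_per_consumer - tax_per_consumer:.0f} of their savings")
print(f"Each displaced worker receives ${support_per_worker:,} in transition support")
```

The specific figures don’t matter; what matters is that as long as the efficiency gains are large relative to the cost of supporting displaced workers, both groups can come out ahead of where they’d be if the change had simply been blocked.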
That being said, some have made the counterargument that whenever we’ve tried job retraining programs at various levels in the past, they’ve never really been all that effective, so they might not necessarily be the best way of helping out displaced workers. In fairness, a lot of these criticisms seem to boil down to issues of execution that could plausibly be fixed if the programs were administered more competently, so they shouldn’t necessarily be considered knock-down arguments against these kinds of programs. (From what I’ve read, it seems like some other countries have had great success with retraining programs, so our issues here may just be US-specific ones – more on that later.) But regardless of whether all the various logistical difficulties can be overcome, a more fundamental challenge when it comes to retraining workers is that, for a whole variety of reasons, a lot of them simply might not take to retraining all that well. As Alexander puts it:
Although “Well, they should retrain” is a nice thought, not every 50 year old grizzled miner can learn how to program social networking software. [Many] of them just [become] destitute and miserable.
Granted, it’s not quite such a dramatically binary situation in most cases; newly-unemployed workers do typically have other choices besides just software programming on the one hand and abject destitution on the other. The service sector, in particular, is one area where new kinds of jobs are being created all the time. (And what’s more, it’ll usually be the case that as new service-sector jobs open up, it won’t necessarily be the old manufacturing workers who are expected to fill them anyway; workers in more closely-related fields will tend to be the ones who fill them, and then those workers’ old jobs will be the ones that are available for manufacturing workers to step into. So for instance, if a new job is created in a service-sector field like nursing or computer programming or whatever, it might be filled by someone who would otherwise have become a teacher or something – and then as they switch into this new position, it’ll mean that there will now be an extra teaching position free to be filled by someone who might otherwise have become, say, a salesperson, which will mean there will now be an extra salesperson position available… and so on down the line until eventually a job will open up in a field that’s close enough to manufacturing that a former manufacturing worker will be able to switch into it fairly easily (or at least, more easily than switching into nursing or computer programming). So the argument that “you can’t expect lifelong miners and factory workers to pick up nursing or computer programming just like that” isn’t necessarily as significant as it might seem, because usually they aren’t being asked to make such a drastic change. More often, they’ll just be switching into the profession that they’re next-best-suited-to after manufacturing.)
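If it helps, the chain described in that parenthetical can be sketched out very simply. The particular fields, their ordering, and the assumption that each vacancy gets filled from the next field over are all just illustrative simplifications of my own:

```python
# Toy sketch of a vacancy chain: one new service-sector opening cascades down
# through adjacent fields until a vacancy appears that a displaced manufacturing
# worker can realistically step into. Fields and ordering are invented for illustration.
fields = ["nursing", "teaching", "sales", "warehouse work"]
displaced_from = "manufacturing"

steps = []
for closer, further in zip(fields, fields[1:]):
    steps.append(f"new {closer} vacancy is filled by someone from {further}, "
                 f"which opens up a {further} vacancy")
steps.append(f"the final {fields[-1]} vacancy is filled by the displaced {displaced_from} worker")

for step in steps:
    print(step)
```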
Still, the point remains that sometimes workers really do have to switch into new fields that are completely foreign to them; and in those cases, there are all kinds of reasons why they might struggle to adjust to their new jobs. You might have some workers, for instance, who are great at working with their hands, but aren’t so great when it comes to extended interpersonal interactions, so they’re less naturally comfortable in jobs where they have to interact with customers a lot. Or you might have workers who function well in structured environments where their tasks are well-defined, but struggle to adjust to more abstract work where the objectives aren’t always as straightforward. Other workers might have a hard time working remotely if they’ve grown accustomed to having co-workers around them all the time. And the list goes on. Probably the biggest obstacle of all, though, is just the basic problem of having a skill set that no longer matches what the market is demanding. For workers who’ve been doing the same job for years, they’re likely to find that their extensive experience in their field isn’t worth all that much in other fields, so they basically have to start from square one all over again – which often means a significant pay cut. Of course, this is exactly why giving workers easy access to job retraining can be so valuable; the more quickly they’re able to get themselves up to speed in their new field, the more easily they’ll be able to maintain the quality of life they’re used to. But whether workers are able to succeed in retraining programs depends a lot on what kind of retraining is actually being offered and whether the workers are able to adapt and fit into the new jobs that they’re being trained for – and that’s not always an easy task if they don’t have the right background. So in light of the fact that every worker is different and every situation is different, another possible alternative to systematized job retraining is to simply adopt a policy of ensuring that every newly-unemployed worker will receive generous enough unemployment insurance payments that they’ll be able to make their own choices and pursue whatever path they think will be most rewarding for them in the end. In other words, rather than trying to put every unemployed worker through the same process and retrain them for jobs that may or may not actually suit them, it might be simpler and more efficient to take the funds that would have been spent on that retraining and just give them directly to the workers instead, so that they can either spend it on retraining if that’s what they prefer, or alternatively, put it toward something like getting a degree, or starting their own small business, or whatever else they think would be the best long-term investment for them given their specific circumstances. Similarly, another policy that might be helpful in this way would be to expand the Earned Income Tax Credit – rewarding workers with some extra financial cushion just for being able to secure any kind of job at all – so that once they do find new work, they’ll still be able to maintain a decent standard of living even if the new job doesn’t pay as much as their old one did. (This would also be useful as a kind of counterbalancing incentive to the unemployment benefits, since if those benefits were too open-ended, they might otherwise incentivize workers to remain unemployed for longer than necessary.) 
Likewise, making things like healthcare universally available – rather than having them be tied to people’s jobs – would go a long way toward making it easier and less stressful for people to change careers. And if all else fails, of course, there’s always the possibility of having the government step in and hire unemployed people directly, as a kind of last-resort stopgap employer for people who aren’t able to find any other way of making ends meet – so that even if they’re completely unable to find any other employer interested in making use of their labor, they can at least earn a living by contributing to public works projects that do some good in their communities. (This is the concept of countercyclical spending discussed earlier – or in its more ambitious form, a federal job guarantee.)
In short, there are any number of ways in which government can help take some of the sting out of the private sector’s creative destruction, by providing workers with enough provisional support between jobs to ensure that their period of unemployment is as short and painless as possible. Supporting workers in this way not only helps the market run more smoothly; by empowering workers to pick up the necessary skills for new jobs more quickly, it helps the market run more efficiently too. This in itself is reason enough to be in favor of it. But it’s not the only reason – and frankly, it’s not even the most important one. Up to this point, we’ve mostly been discussing all the things a government can do for its citizens in relatively pragmatic, utilitarian terms – all the economic benefits it can produce and so on – and to be sure, these are extremely important considerations. But in many cases, the biggest reason why a government should ensure that its people are taken care of is simply that they’re morally entitled to such treatment as human beings. The right to earn a decent living is a good example of this; if someone is willing and able to work, there’s no good reason why they should starve on the street because they can’t find a job. But it’s far from the only example – people’s basic human rights include plenty of others besides just the right to work – so it’s worth taking a moment to just give these rights their proper due and acknowledge their role in our broader discussion here.
XIX.
There’s often a tendency, in discussions like this, for economically-minded commentators to want to narrowly focus on the goal of maximizing economic efficiency, even to the exclusion of practically everything else. There’s not necessarily anything wrong with caring about efficiency, of course, provided it’s one consideration among many – especially if the word “efficient” is being used in the normal colloquial sense of “productive” and “not wasteful;” striving to keep productivity high and waste low is often one of the best ways of making society as a whole better off. But there’s also a technical economic definition of the word “efficiency” which often gets conflated with the more colloquial definition – namely, having a situation where there’s no way to make someone better off without making someone else worse off (what’s called “Pareto efficiency”). And if your only goal is to maximize that kind of efficiency, then that’s no guarantee at all that you’ll actually be doing the most you can to help society as a whole – because all it means to adhere to that definition of efficiency is to not have any net economic redistribution; and there are certainly plenty of circumstances under which some redistribution could make a society better off. To repeat one more example from before, if you had one person who was a billionaire, and a thousand others who were on the verge of starvation, it would be silly to say that a policy that made the billionaire $10M richer and affected no one else would automatically be better than a policy that made everyone else $10,000 richer but reduced the billionaire’s wealth by $1 simply because the first policy was technically more efficient in the economic sense. As Harford puts it, the level of efficiency just isn’t the most important factor in every case; sometimes things like equity and establishing a decent standard of living for everyone matter more:
Remember that when economists say the economy is inefficient, they mean that there’s a way to make somebody better off without harming anybody else. While the perfectly competitive market is perfectly efficient, efficiency is not enough to ensure a fair society, or even a society in which we would want to live. After all, it is efficient if Bill Gates has all the money and everybody else starves to death . . . because there is no way to make anybody better off without making Bill Gates worse off. We need something more than efficiency.
(See also Cosma Shalizi’s compelling post on this point here.)
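To put some rough numbers on the billionaire example from a moment ago, here’s a minimal sketch in Python. The log-utility assumption and the starting wealth figures are my own illustrative choices, not anything from Harford; they simply encode the intuition that an extra dollar is worth far more to someone on the verge of starvation than to a billionaire:

```python
import math

def utility(wealth):
    # Diminishing marginal utility: log is just one simple, standard choice.
    return math.log(wealth)

billionaire = 1_000_000_000   # assumed starting wealth of the billionaire
poor_person = 1_000           # assumed starting wealth of each near-starving person
num_poor = 1_000

# Policy A (a Pareto improvement): the billionaire gains $10M, no one else is affected.
welfare_a = utility(billionaire + 10_000_000) - utility(billionaire)

# Policy B (not a Pareto improvement): each poor person gains $10,000, the billionaire loses $1.
welfare_b = (num_poor * (utility(poor_person + 10_000) - utility(poor_person))
             + utility(billionaire - 1) - utility(billionaire))

print(f"Total welfare change under policy A: {welfare_a:.4f}")  # roughly 0.01
print(f"Total welfare change under policy B: {welfare_b:.1f}")  # roughly 2400
```

On this admittedly crude accounting, the policy that fails the Pareto test improves total well-being by several orders of magnitude more than the one that passes it, which is exactly why efficiency alone can’t be the whole story.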
If all you care about is simply maximizing efficiency alone, then economics can certainly tell you all the best ways of doing that. But what economics doesn’t tell you is whether that actually should be your only goal; it simply describes how the world works, not whether those workings are morally desirable or not. The latter question, as Wheelan points out, is one we have to answer for ourselves, based on our own judgments of what’s morally acceptable:
Government redistributes wealth. We collect taxes from some citizens and provide benefits to others. Contrary to popular opinion, most government benefits do not go to the poor; they go to the middle class in the form of Medicare and Social Security. Still, government has the legal authority to play Robin Hood; other governments around the world, such as the European countries, do so quite actively. What does economics have to say about this? Not much, unfortunately. The most important questions related to income distribution require philosophical or ideological answers, not economic ones. Consider the following question: Which would be a better state of the world, one in which every person in America earned $25,000—enough to cover the basic necessities—or the status quo, in which some Americans are wildly rich, some are desperately poor, and the average income is somewhere around $48,000? The latter describes a bigger economic pie; the former would be a smaller pie more evenly divided.
Economics does not provide the tools for answering philosophical questions related to income distribution. […] Will a tax increase that funds a better safety net for the poor but lowers overall economic growth make the country better off? That is a matter of opinion, not economic expertise. (Note that every presidential administration is able to find very qualified economists to support its ideological positions.) Liberals (in the American sense of the word) often ignore the fact that a growing pie, even if unequally divided, will almost always make even the small pieces larger. The developing world needs economic growth (to which international trade contributes heavily) to make the poor better off. Period. One historical reality is that government policies that ostensibly serve the poor can be ineffective or even counterproductive if they hobble the broader economy.
Meanwhile, conservatives often blithely assume that we should all rush out into the street and cheer for any policy that makes the economy grow faster, neglecting the fact that there are perfectly legitimate intellectual grounds for supporting other policies, such as protecting the environment or redistributing income, that may diminish the overall size of the pie. Indeed, some evidence suggests that our sense of well-being is determined at least as much by our relative wealth as it is by our absolute level of wealth. In other words, we derive utility not just from having a big television but from having a television that is as big as or bigger than the neighbors’.
There’s no objectively “correct” economic answer to the question of how we should balance growing the economy versus ensuring that economic well-being is widely shared. The question is a moral one; it’s a question of what we value as a society. So what kind of arrangement, then, should we prefer? In my opinion, Alexander’s answer here is a good one:
I care less about economic growth than about where the money goes. That includes caring less about distortionary taxation, deadweight loss, and all those other concepts.
Suppose Alice is an effective altruist who supports whatever charity you think is most important and does a really good job of it. Every dollar she spends saves multiple lives. She lives in a town of 1000 people where nobody else is an effective altruist and everyone else just lives a pretty decent life and spends their extra money on, I don’t know, breeding virtual cats or something.
A demon places a curse on Alice’s neighbor Bob. Every time Bob pays a dollar in taxes, it destroys a random two dollars’ worth of wealth somewhere in the town.
The town elders meet and decide that for some reason they have to lower taxes either on Alice or Bob. The economic case for Bob is overwhelming – taxes on him are especially inefficient because of the extra wealth they destroy.
Still, I would want a tax cut for Alice. It seems like the only important thing that happens at all in this town is Alice’s charitable donations. The amount I care about this town’s utility focuses pretty much entirely on that. We could give the break to Bob, and have a nominally better economy, but it would just lead to more people buying virtual cats. It could be that the extra two dollars’ of wealth destroyed by Bob’s taxes was some sort of useful machinery, and so taxing Bob harms economic growth. Again, it is hard to care, except insofar as that hurts Alice, the only person in town whose wealth matters much for anyone’s utility.
I can imagine a world in which Bob’s curse was stronger, and every dollar Bob was taxed destroyed a million dollars in value, and soon any tax on Bob meant the citizens of the town were starving to death and all of them including Alice went bankrupt. But right now the tax on Bob isn’t big enough to be worse for Alice than a tax on Alice, and since Alice is the only important person in this situation, I don’t care.
I can also imagine a world where a wise economist comes to town. She says “Alice’s work is the most important thing in this town, but taxing Bob destroys wealth for no reason. Some of the town elders support tax breaks for Bob, and others support tax breaks for Alice. But we can give the tax break to Bob, and then all the people who saved $2 each from the curse not being activated can give $1.50 to Alice. That way Bob is better off, Alice is better off, and potential curse victims are better off.”
This is the best argument in favor of wealth creation instead of redistribution. But right now we’re not doing that. We just create the wealth and then don’t redistribute it, except through charity, which is a rounding error, and taxes, which [conservatives generally oppose]. If we actually had Pareto-optimal wealth redistribution, then of course, create as much wealth as possible and redistribute it Pareto-optimally. Since we don’t, we’re kind of stuck.
My takeaway from this story is that in societies with a lot of marginal-value-of-money inequality, economic growth is potentially less useful than working to keep the money with people who can spend it on higher-marginal-value things.
The Alice-and-Bob story is, of course, just an exaggerated analogy for illustrative purposes – but here in the real world, our choices about how we should have the government tax and spend often aren’t all that different. In many cases, we’ll be faced with a decision of whether we should, say, impose higher taxes on rich people and then use that tax revenue to help the needy (even though that’s not technically a Pareto-efficient thing to do, since it makes the rich people worse off), or keep rich people’s taxes low and not do anything to help the needy (which is technically more Pareto-efficient since it doesn’t actively make anyone worse off – but has the downside of not making anyone better off either). In these kinds of situations, it may be useful to at least be aware of the efficiency implications of either choice; but I don’t think efficiency can or should be the be-all-end-all. I think that regardless of whether it might be slightly more or less efficient, we have a moral imperative to ensure that everyone in our society at least has a bare minimum standard of living, and no one is just left to starve in the streets. And the reason for this doesn’t have anything to do with economics at all; it’s just a matter of what people are entitled to in terms of their basic human rights.
Of course, as we’ve discussed previously, the concept of rights can be a tricky one to define concretely – especially in the context of political and economic systems where different interests often have to be weighed against each other. As Atul Gawande writes:
The Oxford political philosopher Henry Shue observed that our typical way of looking at rights is incomplete. People are used to thinking of rights as moral trump cards, near-absolute requirements that all of us can demand. But, Shue argued, rights are as much about our duties as about our freedoms. Even the basic right to physical security—to be free of threats or harm—has no meaning without a vast system of police departments, courts, and prisons, a system that requires extracting large amounts of money and effort from others. Once costs and mechanisms of implementation enter the picture, things get complicated. Trade-offs now have to be considered. And saying that something is a basic right starts to seem the equivalent of saying only, “It is very, very important.”
Shue held that what we really mean by “basic rights” are those which are necessary in order for us to enjoy any rights or privileges at all. In his analysis, basic rights include physical security, water, shelter, and health care. Meeting these basics is, he maintained, among government’s highest purposes and priorities. But how much aid and protection a society should provide, given the costs, is ultimately a complex choice for democracies. Debate often becomes focussed on the scale of the benefits conferred and the costs extracted. Yet the critical question may be how widely shared these benefits and costs are.
Despite the complexities at play here, though, I think Shue’s definition of basic rights is a valuable one. There are certain goods like basic safety and sustenance that are so essential for human survival that I don’t think practically anyone would dispute that they should qualify as fundamental rights. And on top of these, there are also certain goods which may not be strictly necessary for survival, but are still important enough that, again, most people would agree that everyone should have a basic right to them – things like the right to an education, the right to legal protection and representation in court, etc. The economic term for this whole category of goods – things that everyone should be entitled to regardless of their ability to pay – is “merit goods.” And although the private sector might still be the primary channel through which most people obtain these goods, there will still always be a role for the government in maintaining a network of programs (including food stamps, public housing, Medicare/Medicaid, FEMA, Social Security, etc.) to serve as the proverbial safety net in the cases where the market fails to make these necessary goods universally accessible. Here’s Sachs on the subject:
Societies around the world want to ensure that everybody has an adequate level of access to key goods and services (health care, education, safe drinking water) as a matter of right and justice. Goods that should be available to everybody because of their vital importance to human well-being are called merit goods. The rights to these merit goods are not only an informal commitment of the world’s governments, they are also enshrined in international law, most importantly in the Universal Declaration of Human Rights, as follows:
Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.
Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.
Moreover, according to Article 28 of the Universal Declaration, “Everyone is entitled to a social and international order in which the rights and freedoms set forth in this Declaration can be fully realized.”
Now, in response to this, a lot of anti-statists will agree that there are certain things everyone has a basic human right to, and will grant that it’s commendable to want to ensure that the disadvantaged are adequately taken care of, but will dispute the idea that the government should necessarily be the entity responsible for doing so. According to their view, if the government would just step aside and let private citizens keep their money (instead of collecting it as taxes), it would free up enough private wealth that the task of taking care of the disadvantaged could be handled entirely by private entities – churches, charities, wealthy philanthropists, and so on. There are a few possible counterarguments to this theory, but the simplest rebuttal is the plain fact that it’s just demonstrably not what happens in real life. There are plenty of places out there in the world right now where government services are lacking, and people are suffering from famine and poverty and deprivation as a result – and there are also plenty of incredibly rich people with trillions of dollars between them who could be using that money to alleviate the deprivation in those places – but they simply aren’t doing so; and consequently, the famine and poverty and deprivation continue. It’s not that the rich people can’t donate their money because it’s all being taxed away by government – they still have tons of disposable wealth even after taxes – and even if their tax rates were significantly higher, charitable donations are tax-deductible anyway, so any money they wanted to donate wouldn’t be subject to that taxation regardless. In other words, it’s not an issue of not having enough money to donate – the money’s already there – it’s just not being donated. What this means, then, is that the argument that just getting government out of the picture would free up the resources necessary for private philanthropy to completely take care of social welfare gets it precisely backward; if the private sector were willing and able to completely take care of social welfare, there wouldn’t be any need for government social programs to have been created in the first place. The whole reason those programs exist is because private efforts alone aren’t sufficient.
At any rate, despite the flaws in the anti-government argument here, it at least has the virtue of acknowledging that our ultimate goal should be to ensure that the disadvantaged are cared for, and simply disagrees as to the best method for accomplishing this. Unfortunately, the same can’t be said of every argument made against government social programs; in many cases, anti-statists will get so wrapped up in inveighing against these programs that they’ll slip into a more cynical mode of arguing against the very concept of providing aid to the needy itself. This will often involve painting the recipients of government aid as undeserving moochers who could perfectly well take care of themselves but are simply too lazy to do so, and insisting that the government taxing away richer people’s hard-earned wealth and redistributing it to the poor does nothing but reward this kind of laziness.
This kind of blanket generalization, though, misses the mark in some pretty crucial ways. For one thing, it (perhaps willfully) misunderstands what government social programs are actually designed to do. They aren’t designed to just hand over money to anyone who decides they don’t feel like working; if a person is fully capable of supporting themselves, they’ll be expected to do so, and they won’t be eligible for most forms of government support. The social programs we’re talking about here are almost entirely aimed at supporting those who can’t fully support themselves, either because they’re too young or too old or too disabled or some other such thing. Granted, there are some forms of government support that are aimed at people who are capable of supporting themselves but are just temporarily unable to do so for whatever reason – e.g. unemployment insurance for workers who’ve recently lost their jobs – but even in these cases, the people collecting the benefits aren’t generally doing so because they’re just lazy; sometimes there simply aren’t as many job openings as there are job seekers, so unemployed workers need some short-term support to tide them over until new job opportunities become available. A report from 2014, for instance, describes just such a situation:
The total number of job openings in April was 4.5 million, up from 4.2 million in March. In April, there were 9.8 million job seekers […] meaning that there were 2.2 times as many job seekers as job openings. Put another way: Job seekers so outnumbered job openings that more than half of job seekers were not going to find a job in April no matter what they did.
In the wake of the 2008 financial crash (prior to which the economy had been at full employment), millions of workers suddenly found themselves in situations like this, forced out of their jobs and unable to find new ones. Was this mass unemployment just due to a sudden outbreak of laziness? Did millions of workers who’d previously been working full-time jobs suddenly change their minds all at once and decide that they’d rather spend all day sitting on the couch watching TV? Of course not. The reason why they were unable to find jobs wasn’t anything to do with their own personal work ethic; it was because jobs simply weren’t available anymore. These workers were more than happy to return to work once the economy recovered and jobs actually became available again. But to accuse them of laziness just because they needed some short-term government support in the meantime wouldn’t just be false, it would be adding insult to injury by shaming them for something that was wholly out of their control.
Having said all this, it’s certainly true that not every worker who gets government aid is in exactly this kind of situation; occasionally people really do collect government benefits simply because they’re lazy or dishonest and somehow figure out a way of cheating the system. That kind of fraud is always a risk, whether we’re talking about a government program or a private charitable organization; and there’s nothing wrong with wanting to minimize it as much as possible, so as to ensure that there are enough resources available for the people who really do need them. But this is very different from saying that the whole concept of redistribution itself is just one giant scam, or that taxing the rich to help the poor is just the equivalent of taxing the deserving to help the undeserving. The latter is unfortunately often just an expression of contempt for the lower class. (Case in point: The amount of money that the government gives to rich homeowners in the form of their mortgage interest tax deductions is often considerably more than what it gives to poor people in the form of subsidized housing vouchers – and yet it’s only the latter that are typically criticized for receiving undeserved “handouts,” despite the fact that the former are getting the same kind of support for their housing.)
In reality, the distinction between the “deserving” rich and the “undeserving” poor isn’t nearly as clear-cut as conservatives so often purport it to be. As Alexander points out, it’s often just as much a result of sheer luck as anything else:
[Q]: Government is the recourse of “moochers”, who want to take the money of productive people and give it to the poor. But rich people earned their money, and poor people had the chance to earn money but did not. Therefore, the poor do not deserve rich people’s money.
The claim of many libertarians is that the wealthy earned their money by the sweat of their brow, and the poor are poor because they did not. The counterclaim of many liberals is that the wealthy gained their wealth by various unfair advantages, and that the poor never had a chance. These two conflicting worldviews have been the crux of many an Internet flamewar.
Luckily, this is an empirical question, and can be solved simply by collecting the relevant data. For example, we could examine whether the children of rich parents are more likely to be rich than poor parents, and, if so, how much more likely they are. This would give us a pretty good estimate of how much of rich people’s wealth comes from superior personal qualities, as opposed to starting with more advantages.
If we define “rich” as “income in the top 5%” and “poor” as “income in the bottom 5%” then children of rich parents are about twenty times more likely to become rich themselves than children of poor parents.
But maybe that’s an extreme case. Instead let’s talk about “upper class” (top 20%) and “lower class” (bottom 20%). A person born to a lower-class family only has a fifty-fifty chance of ever breaking out of the lower class (as opposed to 80% expected by chance), and only about a 3% chance of ending up in the upper class (as opposed to 20% expected by chance). The children of upper class parents are six times more likely to end up in the upper class than the lower class; the children of lower class families are four times more likely to end up in the lower class than the upper class.
The most precise way to measure this question is via a statistic called “intergenerational income mobility”, which studies have estimated at between .4 and .6. This means that around half the difference in people’s wealth, maybe more, can be explained solely by who their parents are.
Once you add in all the other factors besides how hard you work – like where you live (the average Delawarean earns $30000; the average Mississippian $15000) and the quality of your local school district, there doesn’t seem to be much room for hard work to determine more than about a third of the difference between income.
[Q]: The conventional wisdom among libertarians is completely different. I’ve heard of a study saying that people in the lower class are more likely to end up in the upper class than stay in the lower class, even over a period as short as ten years!
First of all, note that this is insane. Since the total must add up to 100%, this would mean that starting off poor actually makes you more likely to end up rich than someone who didn’t start off poor. If this were true, we should all send our children to school in the ghetto to maximize their life chances. This should be a red flag.
And, in fact, it is false. Most of the claims of this sort come from a single discredited study. The study focused on a cohort with a median age of twenty-two, then watched them for ten years, then compared the (thirty-two-year-old) origins with twenty-two-year-olds, then claimed that the fact that young professionals make more than college students was a fact about social mobility. It was kind of weird.
Why would someone do this? Far be it from me to point fingers, but Glenn Hubbard, the guy who conducted the study, worked for a conservative think tank called the “American Enterprise Institute”. You can see a more complete criticism of the study here.
[Q]: Okay, I acknowledge that at least half of the differences in wealth can be explained by parents. But that needn’t be rich parents leaving trust funds to their children. It could also be parents simply teaching their children better life habits. It could even be genes for intelligence and hard work.
This may explain a small part of the issue, but see [the next couple points], which show that under different socioeconomic conditions, this number markedly decreases. These socioeconomic changes would not be expected to affect things like genetics.
[Q]: So maybe children of the rich do have better opportunities, but that’s life. Some people just start with advantages not available to others. There’s no point in trying to use Big Government to regulate away something that’s part of the human condition.
This lack of social mobility isn’t part of the human condition, it’s a uniquely American problem. Of eleven developed countries investigated in a recent study on income mobility, America came out tenth out of eleven. Their calculation of US intergenerational income elasticity (the number previously cited as probably between .4 and .6) was .47. But other countries in the study had income elasticity as low as .15 (Denmark), .16 (Australia), .17 (Norway), and .19 (Canada). In each of those countries, the overwhelming majority of wealth is earned by hard work rather than inherited.
The United States is just particularly bad at this; the American Dream turns out to be the “nearly every developed country except America” Dream.
[Q]: That’s depressing, but don’t try to turn it into a political narrative. Given the government’s incompetence and wastefulness, there’s no reason to think more government regulation and spending could possibly improve social mobility at all.
Studies show that increasing government spending significantly improves social mobility. States with higher government spending have about 33% more social mobility than states with lower spending.
This also helps explain why other First World countries have better social mobility than we do. Poor American children have very few chances to go to Harvard or Yale; poor Canadian children have a much better chance to go to UToronto or McGill, where most of their tuition is government-subsidized.
[Q]: Then perhaps it is true that rich children start out with a major unfair advantage. But this advantage can be overcome. Poor children may have to work harder than rich children to become rich adults, but this is still possible, and so it is still true, in the important sense, that if you are not rich it’s mostly your own fault.
Several years ago, I had an interesting discussion with an evangelical Christian on the ethics of justification by faith. I promise you this will be relevant eventually.
I argued that it is unfair for God to restrict entry to Heaven to Christians alone. After all, 99% of native-born Ecuadorans are Christian, but less than 1% of native-born Saudis are the same. It follows that the chance of any native-born Ecuadorian becoming Christian is 99%, and that of any native-born Saudi, 1%. So if God judges people by their religion, then within 1% He’s basically just decided it’s free entry for Ecuadorians, but people born in Saudi Arabia can go to hell (literally).
My Christian friend argued that this is not so: that there is a great difference between 0% of Saudis and 1% of Saudis. I answered that no, there was a 1% difference. But he said this 1% proves that the Saudis had free will: that even though all the cards were stacked against them, a few rare Saudis could still choose Christianity.
But what does it mean to have free will, if external circumstances can make 99% of people with free will decide one way in Ecuador, and the opposite way in Saudi Arabia?
I do sort of believe in free will, or at least in “free will”. But where my friend’s free will was unidirectional, an arrow pointing from MIND to WORLD, my idea of free will is circular: MIND affects WORLD affects MIND affects WORLD and so on.
Yes, it is ultimately the mind and nothing else that decides whether to accept or reject Islam or Christianity. But it is the world that shapes the mind before it does its accepting or rejecting. A man raised in Saudi Arabia uses a mind forged by Saudi culture to make the decision, and chooses Islam. A woman raised in Ecuador uses a mind forged by Ecuador to make the decision, and chooses Christianity. And so there is no contradiction in the saying that the decision between Islam and Christianity is up entirely to the individual, yet that it is almost entirely culturally determined. For the mind is a box, filled with genes and ideas, and although it is a wonderful magical box that can take things and combine them and forge them into something quite different and unexpected, it is not infinitely magical, and it cannot create out of thin air.
Returning to the question at hand, every poor person has the opportunity to work hard and eventually become rich. Whether that poor person grasps the opportunity comes from that person’s own personality. And that person’s own personality derives eventually from factors outside that person’s control. A clear look at the matter proves it must be so, or else personality would be self-created, like the story of the young man who received a gift of a time machine from a mysterious aged stranger, spent his life exploring past and future, and, in his old age, went back and gave his time machine to his younger self.
[Q]: And why is this relevant to politics?
Earlier, I offered a number between .4 and .6 as the proportion of success attributable solely to one’s parents’ social class. This bears on, but does not wholly answer, a related question: what percentage of my success is my own, and what percentage is attributable to society? People have given answers to this question as diverse as (100%, 0%), (50%, 50%), (0%, 100%).
I boldly propose a different sort of answer: (80%, 100%). Most of my success comes from my own hard work, and all of my own hard work comes from external factors.
If all of our success comes from external factors, then it is reasonable to ask that we “pay it forward” by trying to improve the external factors of others, turning them into better people who will be better able to seize the opportunities to succeed. This is a good deal of the justification for the liberal program of redistribution of wealth and government aid to the poor.
[Q]: This is all very philosophical. Can you give some concrete examples?
Lead poisoning, for example. It's relatively common among children in poorer areas (about 7% US prevalence) and was even more common before lead paint and leaded gasoline were banned (prevalence is still >30% in many developing countries).
For every extra ten micrograms per deciliter (10 µg/dL) of lead in their blood, children permanently lose five IQ points; there's a difference of about ten IQ points between children who grew up in areas with no lead at all and those who grew up in areas with the highest level of lead currently considered "safe". Although no studies have been done on severely lead-poisoned children from the era of leaded gasoline, they may have lost twenty or more IQ points from chronic lead exposure.
Further, lead also decreases behavioral inhibition, attention, and self-control. For every 10 µg/dL increase in blood lead, children were 50% more likely to have recognized behavioral problems. People exposed to higher levels of blood lead as children were almost 50% more likely to be arrested for criminal behavior as adults (adjusting for confounders).
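To make those dose-response figures a bit more concrete, here's a minimal back-of-the-envelope sketch in Python. It simply extrapolates the rates quoted above; the exposure levels and the assumption that the effects scale smoothly with dose are mine, for illustration only.

```python
# Back-of-the-envelope sketch of the dose-response figures quoted above.
# Assumed (from the text): roughly 5 IQ points lost per 10 ug/dL of blood lead,
# and roughly 50% higher likelihood of behavioral problems per 10 ug/dL.
# Treating these as smooth linear/multiplicative extrapolations is my own
# simplification; real dose-response curves are messier.

IQ_POINTS_LOST_PER_10_UG_DL = 5
BEHAVIOR_RISK_MULTIPLIER_PER_10_UG_DL = 1.5

def estimated_iq_loss(blood_lead_ug_dl: float) -> float:
    """Linear extrapolation of the quoted IQ-loss rate."""
    return (blood_lead_ug_dl / 10) * IQ_POINTS_LOST_PER_10_UG_DL

def relative_behavior_risk(blood_lead_ug_dl: float) -> float:
    """Relative likelihood of recognized behavioral problems vs. an unexposed child."""
    return BEHAVIOR_RISK_MULTIPLIER_PER_10_UG_DL ** (blood_lead_ug_dl / 10)

for exposure in (5, 10, 30):  # hypothetical blood-lead levels in ug/dL
    print(f"{exposure} ug/dL: ~{estimated_iq_loss(exposure):.1f} IQ points lost, "
          f"~{relative_behavior_risk(exposure):.1f}x behavioral-problem risk")
```

Even at exposures well below the old "safe" thresholds, the quoted rates imply a meaningful hit to exactly the traits (intelligence, attention, self-control) that the next paragraph argues economic success depends on.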
Economic success requires self-control, intelligence, and attention. It is cruel to blame people for not seizing opportunities to rise above their background when that background has damaged the very organ responsible for seizing opportunities. And this is why the government, despite a chorus of complaints from libertarians, banned lead from most products, a decision which is (controversially) credited with the most significant global drop in crime rates in decades, but which has certainly contributed to social mobility and opportunity for children who would otherwise be too lead-poisoned to succeed.
Lead is an interesting case because it has obvious neurological effects preventing success. The ability of psychologically and socially toxic environments to prevent success is harder to measure but no less real.
If a poor person can’t keep a job solely because she was lead-poisoned from birth until age 16, is it still fair to blame her for her failure? And is it still so unthinkable to take a little bit of money from everyone who was lucky enough to grow up in an area without lead poisoning, and use it to help her and detoxify her neighborhood?
[Q]: What is the significance of whether success is personally or environmentally determined?
It provides justification for redistribution of wealth, and for engineering an environment in which more people are able to succeed.
There’s a line from the Game of Thrones TV series, from the character Meera Reed, that goes: “Some people will always need help. That doesn’t mean they’re not worth helping.” I think there’s a lot of wisdom in that line. For better or worse, we live in a world where resources – not just financial resources, but things like social advantages and genetic predispositions and basic physical capabilities – are unevenly distributed. Some people enjoy the good fortune of having a lot, while others suffer the misfortune of having very little. But just because luck and circumstance happen to have favored the former doesn’t mean that the latter are any less deserving. Admittedly, it might not be feasible (or desirable) to make everyone perfectly equal in every way – but at the very least, we should want to ensure that everyone, regardless of their natural endowments, has the bare minimum of merit goods to live a decent life. Again, that’s not just a matter of economics; it’s a simple matter of respecting their most basic human rights – and importantly, it’s also a matter of deciding what kind of people we want to be ourselves.
XX.
Of course, having just made the case that we should make sure everyone’s basic needs are met even if it makes the economy as a whole less productive, I’ll now reveal that I don’t actually consider this to be all that much of a dilemma – because in reality, I don’t think these two goals actually oppose each other at all (or at least, they don’t necessarily have to); if implemented in an effective way, I think that having a strong social safety net ultimately makes an economy more healthy and productive. Sure, it might mean that the richest people aren’t quite as rich (albeit not enough to make any perceptible difference in their quality of life) – but at the level of the broader society, it seems clear that the positives more than outweigh the negatives. As commenter Crapfter puts it:
When other people have a better life, you personally have a better life too. You don’t even have to have empathy to support this stuff, you just have to not want to actively support other people’s misery. All things being equal, people would support this stuff for selfish reasons.
Because even when you don’t personally know a single one of the people who benefit from all these things, when more people are happy and healthy and wealthy and educated, the crime rate drops. Suicide drops. There’s less contagious disease. The economy is healthier. The natural environment benefits. You’re less likely to be a victim of violent crime, property crime, or terrorist attacks. People who vote in the same elections as you are casting their votes with better educations and more awareness of the issues. Your government spends less money on policing and prisons and has more left over to repair pot holes and improve your library or community center. More people are working on cures for whatever diseases you’ll eventually develop. I could go on and on.
This goes back to our earlier discussion of positive externalities; when people are healthy and educated and generally doing well in life, they aren’t the only ones who benefit. The secondary effects are contagious, because when people are doing well themselves, they’re able to more easily make things better for others in turn. What this means, then, is that a strong social safety net isn’t just a drain on society – it’s an investment in it. And sure enough, the numbers bear this out; according to a study by Jorge Luis García, James J. Heckman, Duncan Ermini Leaf, and María José Prados, for instance (as summarized by commenter smurfyjenkins), “every dollar spent on high-quality, early-childhood programs for disadvantaged children returned $7.3 over the long term. The programs led to reductions in taxpayer costs associated with crime, unemployment and healthcare, as well as contributing to a better-prepared workforce.” This is an incredible return on investment, to say the least. But really, is it all that surprising? After all, we can imagine coming at the issue from the opposite angle and asking what kind of outcomes we might expect if we made children more deprived and disadvantaged; do we think this would somehow produce better results overall? As Krugman puts it:
The evidence suggests that welfare-state programs enhance social mobility, thanks to little things like children of the poor having adequate nutrition and medical care. And conversely, of course, when such programs are absent or inadequate, the poor find themselves in a trap they often can’t escape, not because they lack the incentive, but because they lack the resources.
I mean, think about it: Do you really believe that making conditions harsh enough that poor women must work while pregnant or while they still have young children actually makes it more likely that those children will succeed in life?
The same question applies to social programs for adults as well as those for children; as Holmes and Sunstein write, quoting Smith:
What about the commonplace that a “right to welfare” discourages productive labor? This sounds rather plausible at first hearing. But an argument on the other side, formulated by Adam Smith, also has a certain weight: “That men in general should work better when they are ill fed than when they are well fed, when they are disheartened than when they are in good spirits, when they are frequently sick than when they are generally in good health, seems not very probable.”
The truth is, if a government chooses not to invest in social programs to keep its people safe and healthy and well-educated and so on, it'll still end up having to spend money on them regardless – it's just that instead of spending it in ways that empower them to contribute positively, it'll have to spend it on cleaning up the messes that inevitably result from their decreased productivity and increased propensity for crime and so on – hiring more police to deal with the increased indigent population, building more prisons to contain the increased criminal population, etc. The government will still have to pay steep costs – it just won't get any benefits. And ironically, the more it denies people their most basic needs, the less of an incentive it'll give them to want to avoid prison in the first place – since as John Stuart Mill points out, prison will be the only place left where they actually can get guaranteed food and shelter:
Since the state must necessarily provide subsistence for the criminal poor while undergoing punishment, not to do the same for the poor who have not offended is to give a premium on crime.
In light of the fact that the government is going to have to spend money to take care of its people’s basic needs either way, then, why wait until after they’ve been driven to criminality and locked away in prison to do so? Why not ensure that their needs are met early on, and thereby prevent all the social deterioration and crime from ever materializing in the first place? This is one area where it seems clear that an ounce of prevention is worth a pound of cure.
Of course, despite this logic, anti-statists will still often insist that giving the government any authority at all to implement this kind of mass-scale redistribution is a bridge too far; they’ll say that by granting it the power to seize citizens’ resources and use them for whatever it decides is best, we’re opening the floodgates to the kind of government overreach that inevitably descends into tyranny. But Alexander points out the empirical failure of this argument:
[Q]: I’m on board with doing things that have the best consequences. And I’m on board with the idea that some government interventions may have good consequences. But allowing any power to government is a slippery slope. It will inevitably lead to tyranny, in which do-gooder government officials take away all of our most sacred rights in order to “protect us” from ourselves.
History has never shown a country sinking into dictatorship in the way libertarians assume is the “natural progression” of a big-government society. No one seriously expects Sweden, the United Kingdom, France, or Canada to become a totalitarian state, even though all four have gone much further down the big-government road than America ever will.
Those countries that have collapsed into tyranny have done so by having so weak a social safety net and so uncaring a government that the masses felt they had nothing to lose in instituting Communism or some similar ideology. Even Hitler gained his early successes by pretending to be a champion of the populace against the ineffective Weimar regime.
Czar Nicholas was not known for his support of free universal health care for the Russian peasantry, nor was it Chiang Kai-Shek’s attempts to raise minimum wage that inspired Mao Zedong. It has generally been among weak governments and a lack of protection for the poor where dictators have found the soil most fertile for tyranny.
The same is true when it comes to the specific issue of taxation as well. Although anti-statists will often decry taxation as something that can only ever be used as a tool of government oppression, the truth is that the worst governments nowadays are typically the ones that have the least need for taxation and can instead rely on other sources of revenue (like natural resources) that don’t require them to be accountable to taxpayers. As Fukuyama puts it:
Alternative sources of fiscal support [besides taxes], such as natural resource rents or foreign aid, […] permit governments to bypass their own citizens. The struggle between the king and parliament over taxation could not play out in an oil-rich country, which is perhaps why so few of them are democratic.
Wealth in natural resources hinders both political modernization and economic growth. Two Harvard economists, Jeffrey D. Sachs and Andrew M. Warner, looked at ninety-seven developing countries over two decades (1971–89) and found that natural endowments were strongly correlated with economic failure. On average the richer a country was in mineral, agricultural, and fuel deposits, the slower its economy grew—think of Saudi Arabia or Nigeria. Countries with almost no resources—such as those in East Asia—grew the fastest. Those with some resources—as in western Europe—grew at rates between these two extremes. There are a few exceptions: Chile, Malaysia, and the United States are all resource rich yet have developed economically and politically. But the basic rule holds up strikingly well.
Why are unearned riches such a curse? Because they impede the development of modern political institutions, laws, and bureaucracies. Let us cynically assume that any government’s chief goal is to give itself greater wealth and power. In a country with no resources, for the state to get rich, society has to get rich so that the government can then tax this wealth. In this sense East Asia was blessed in that it was dirt poor. Its regimes had to work hard to create effective government because that was the only way to enrich the country and thus the state. Governments with treasure in their soil have it too easy; they are “trust-fund” states. They get fat on revenues from mineral or oil sales and don’t have to tackle the far more difficult task of creating a framework of laws and institutions that generate national wealth (think of Nigeria, Venezuela, Saudi Arabia). A thirteenth-century Central Asian Turkish poet, Yusuf, put this theory simply and in one verse:
To keep the realm needs many soldiers, horse and foot;
To keep these soldiers needs much money.
To get this money, the people must be rich;
For the people to be rich, the laws must be just.
If one of these is left undone, all four are undone;
If these four are undone, kingship unravels.
A version of this theory holds that any state that has access to easy money—say, by taxing shipping through a key canal (as with Egypt) or even because of foreign aid (as with several African countries)—will remain underdeveloped politically. Easy money means a government does not need to tax its citizens. When a government taxes people it has to provide benefits in return, beginning with services, accountability, and good governance but ending up with liberty and representation. This reciprocal bargain—between taxation and representation—is what gives governments legitimacy in the modern world. If a government can get its revenues without forging any roots in society, it is a court, not a state, and its businessmen courtiers, not entrepreneurs. The Saudi royal family offers its subjects a different kind of bargain: “We don’t ask much of you economically and we don’t give much to you politically.” It is the inverse of the slogan of the American Revolution—no taxation, but no representation either.
This is not to say that countries should hope to be poor in natural resources. Many poor countries become neither democratic nor capitalist. Political institutions, leadership, and luck all matter. Similarly, some countries develop even though they are rich—just as some trust-fund kids turn out well. Most European countries began democratizing when they were better off than the rest of the world. But […] Europe had unique advantages. Its long history of battles between church and state, Catholics and Protestants, and kings and lords created liberal institutions and limited state power. Some non-European countries have had variations of these struggles. For example, the political diversity of India, with its dozens of regions, religions, and languages, might actually secure its democratic future rather than threaten it. Polish democracy has been strengthened by a strong and independent church. In general it is fair to conclude that although certain historical and institutional traits help, capitalist growth is the single best way to overturn the old feudal order and create an effective and limited state.
Regimes that get rich through natural resources tend never to develop, modernize, or gain legitimacy. […] Easy money means little economic or political modernization. The unearned income relieves the government of the need to tax its people—and in return provide something to them in the form of accountability, transparency, even representation. History shows that a government’s need to tax its people forces it to become more responsive and representative of its people.
The trouble is that once a state profits from mineral wealth, it is unlikely to democratize. The easiest way to incentivize the leader to liberalize policy is to force him to rely on tax revenue to generate funds. Once this happens, the incumbent can no longer suppress the population because the people won’t work if he does.
If we want to see which countries are the most equitable and prosperous and successful, we won’t find them among those that do the least taxation and redistribution; on the contrary, the most successful countries today are the ones that provide the strongest social safety nets for their populations and use high taxes to fund them. It has become a cliché in conversations like this for liberals to inevitably bring up the Nordic countries of Northern Europe (Finland, Sweden, Norway, Iceland, and Denmark) as exemplars of government done right – but frankly, they’re right to do so, because these countries really are doing it better than anyone else out there at the moment. As Bernie Sanders explains:
In Denmark, social policy in areas like health care, child care, education and protecting the unemployed are part of a “solidarity system” that makes sure that almost no one falls into economic despair. Danes pay very high taxes, but in return enjoy a quality of life that many Americans would find hard to believe. As the ambassador [Peter Taksoe-Jensen has] mentioned, while it is difficult to become very rich in Denmark no one is allowed to be poor. The minimum wage in Denmark [set through collective bargaining agreements] is about twice that of the United States and people who are totally out of the labor market or unable to care for themselves have a basic income guarantee of about $100 per day.
Health care in Denmark is universal, free of charge and high quality. Everybody is covered as a right of citizenship. The Danish health care system is popular, with patient satisfaction much higher than in our country. In Denmark, every citizen can choose a doctor in their area. Prescription drugs are inexpensive and free for those under 18 years of age. Interestingly, despite their universal coverage, the Danish health care system is far more cost-effective than ours. They spend about 11 percent of their GDP on health care. We spend almost 18 percent.
When it comes to raising families, Danes understand that the first few years of a person’s life are the most important in terms of intellectual and emotional development. In order to give strong support to expecting parents, mothers get four weeks of paid leave before giving birth. They get another 14 weeks afterward. Expecting fathers get two paid weeks off, and both parents have the right to 32 more weeks of leave during the first nine years of a child’s life. The state covers three-quarters of the cost of child care, more for lower-income workers.
At a time when college education in the United States is increasingly unaffordable and the average college graduate leaves school more than $25,000 in debt, virtually all higher education in Denmark is free. That includes not just college but graduate schools as well, including medical school.
In a volatile global economy, the Danish government recognizes that it must invest heavily in training programs so workers can learn new skills to meet changing workforce demands. It also understands that when people lose their jobs they must have adequate income while they search for new jobs. If a worker loses his or her job in Denmark, unemployment insurance covers up to 90 percent of earnings for as long as two years. Here benefits can be cut off after as few as 26 weeks.
In Denmark, adequate leisure and family time are considered an important part of having a good life. Every worker in Denmark is entitled to five weeks of paid vacation plus 11 paid holidays. The United States is the only major country that does not guarantee its workers paid vacation time. The result is that fewer than half of lower-paid hourly wage workers in our country receive any paid vacation days.
Recently the Organization for Economic Cooperation and Development (OECD) found that the Danish people rank among the happiest in the world among some 40 countries that were studied. America did not crack the top 10.
As Ambassador Taksoe-Jensen explained, the Danish social model did not develop overnight. It has evolved over many decades and, in general, has the political support of all parties across the political spectrum.
Naturally, as Sanders points out, all these social programs require considerably higher levels of taxation than we have here in the US – so you might wonder if they can really be that popular among the population. After all, according to many conservatives, if US taxes were any higher than they are now, it would at the very least make the highest earners not want to live here anymore, and would prompt them to flee the country en masse, taking their wealth with them and leaving the country worse off. For this reason and others, these conservatives say, such high tax rates would be unsustainable. But if they really believe this, they must not be familiar with how things work in the Nordic countries, where taxes actually are considerably higher than in the US, and yet people are still perfectly happy to stay and continue paying them. In these countries, people are willing to accept their tax bills even if the taxes are higher than they’d be in other countries, because the quality of life they enjoy as a result is also higher than what it’d otherwise be. Sure, their after-tax income may be lower; but they don’t actually need as much after-tax income in the first place, since so many of their biggest would-be expenses (healthcare, education, etc.) are covered by the state – and generally at a much lower cost than if they were having to pay for them themselves out of their own pockets. In other words, they’re getting more for their money by paying more of it to the government – not just because it allows them to live in communities that are safe and clean and healthy, but because it allows them to enjoy the simple peace of mind that comes from knowing they’ll never have to worry about not being able to afford their children’s preschool, or not being able to pay for healthcare, or not having enough money to go to college, or not having enough savings to retire, or any of the countless other such things that produce a constant baseline of stress and anxiety for so many Americans. In fact, not to put too fine a point on it, it’s not uncommon to hear Nordic political commentators use the phrase “American conditions” as a scare term for the kind of dysfunction and economic insecurity that would result from changing to a more “everyone for themselves” type of system – severe inequality, high crime, haphazard provision of healthcare, and so on. Pretty ironic, considering how much conservative American commentators like to warn about the dangers of turning “socialist” and ending up like Europe.
The fact of the matter is that thanks to their robust social programs, Nordic countries routinely outperform the US in practically all areas – education and literacy, healthcare, crime prevention, employment, overall quality of life, and more. But surely there must be a major downside to all this social stability in terms of their overall economic performance, right? Surely these countries must be taxing themselves so much that they’re suffering from a severe lack of dynamism and entrepreneurship and all those sorts of things?
Actually, the opposite is true; Nordic countries maintain elite levels of economic competitiveness, and they routinely place near the top of global competitiveness rankings (often ahead of the US). They’re widely considered some of the best places in the world to create wealth – with their most famous companies including household names like Volvo, IKEA, Spotify, Maersk, Ericsson, Skype, H&M, Lego, Saab, and Nokia – and in fact, they even produce more billionaires per capita than the US does. (See Harald Eia’s TED Talk on the subject here.)
How can it be that these countries with their high taxes and robust social safety nets are also so good for business? Well, it’s really not all that unbelievable if you think about it; after all, there’s no reason why having high taxes and robust safety nets should mean that a country can’t also have lots of business freedom. In fact, in a very real sense, these countries’ generous social programs are what enable them to have such pro-growth business environments in the first place; as commenter solomute notes:
The most prosperous and peaceful societies in the world, “socialist utopias” like Denmark and Sweden, are actually regarded as the most capitalist societies on the planet. The problem with capitalism in America is that there are too few capitalists. The folks running around burning shit don’t have an ownership stake in society, no investment in its success, and that’s why they don’t care if it burns. A robust social safety net, universal health care, government-funded paternity and maternity leave, debt-free postsecondary education, and other “socialist” policies free people to take risks, make investments, and start their own businesses.
Even something as simple as workers having healthcare coverage that’s not needlessly tied to their private-sector jobs can make a big difference in terms of the dynamism of the labor market, as commenters ConstantinesRevenge and 2pactopus point out in their discussion of the American system:
Employers shouldn’t be responsible for individual employee healthcare contributions. Decoupling this would encourage more free movement of labor.
[…]
And [it would help get] people who are actually fit for the job into these positions.
I don’t know how many times I’ve heard people say something along the lines of “Yeah, I don’t like the job all that much but hell, it pays the bills and the health benefits are nice while my kids are still in school.”
Creativity is brewed from passion, not from people grasping for a pay raise they desperately need.
Unfortunately for Americans, the US still hasn’t quite figured this out yet – so as strong as its economic performance generally is, it’s still much less efficient than it could be. Nordic countries, on the other hand, give their citizens a far greater ability to pursue the careers where they can have the greatest impact – and when you combine this with the fact that they also give their citizens access to free higher education and quality job training (as opposed to charging them tens or hundreds of thousands of dollars like the US does), it’s no surprise at all that these countries are thriving as much as they are in terms of innovation and entrepreneurship. As I wrote previously (and again, you can skim past this part if you already read the last post – you know the drill by now), these countries thrive because their strong markets and their strong social safety nets don’t conflict with each other, but actually reinforce each other and help each other to function even more effectively. The safety nets give firms and individuals the freedom to take risks and pursue their market advantages without worrying that they’ll be utterly ruined if they fail, and the wealth that those people subsequently produce as a result of that risk-taking (along with a relative lack of regulatory interference) ensures a healthy enough tax base to keep the safety nets strong and well-funded. Businesses fail and people lose their jobs all the time – and the government allows this to happen without trying to impede it – but it’s okay, because the government also helps the people who’ve lost their jobs or businesses to get right back on their feet again, thereby making the economy as a whole that much more dynamic. As Kathleen Thelen and Cathie Jo Martin explain:
When people think of the “Danish model” they tend to think first about the country’s generous social policies, and assume that the point of all of this is to protect people from the market. This is wrong: Danish labor markets are very flexible. The difference with the United States is that [Danish] labor market policies are precisely designed to move the unemployed into training programs that enhance their marketable skills. This helps them reenter the labor market as soon as possible and is the core of the country’s famous “flexicurity” model — high flexibility in the labor market combined with extensive state support for skill development. Denmark spends more on active labor market policies than other OECD countries, far and away more than the United States, which is a laggard in this respect, as the graph below shows.
[…]
Denmark is the most egalitarian country in the world, but in December 2014, Forbes (once again) ranked Denmark as the best country in the world to do business. (The U.S. ranking was 18th.) The country’s formula for growth is a high level of workforce skills and extensive cooperation among employers and workers to support labor market flexibility.
[…]
The most important institutions underpinning this flexible approach are those that help both young people and adults develop skills. Denmark has an extremely well developed system for initial vocational education and training (for youth) – well supported both by employers and the state. This is one reason why Denmark’s “NEET” rate (the number of young people Not in Employment, Education or Training) is comparatively low. Beyond this, though, the government also supports ongoing skill development for adults, as well – and not just for the unemployed. Denmark is a leader in adult education – providing training courses that are easily accessed, generously supported by the state and widely available to anyone who wishes to enhance his or her own skills. This is why Denmark has one of the highest rates of participation in adult education and training in the world. Rapid technological change makes it important for all adults to be able to upgrade their skills flexibly and throughout their working lives. This is not big brother socialism. This is really smart capitalism.
[…]
Retraining and vocational training policies both support “flexicurity,” [retooling] workers whose skills are becoming outdated with changing economic conditions. Workers may be easily laid off from their jobs but the government will quickly move them into training programs and then back into the workforce. For example, in 2011 Denmark spent about five percent of its GDP on training, compared to the U.S., which spent less than one percent.
Most people assume the United States is the most [market-friendly] country. Not so.
[…]
The Wall Street Journal and Heritage Foundation produce an annual Index of Economic Freedom. They rate countries for their respect for property rights, freedom from corruption, business freedom, labor freedom, monetary freedom, trade freedom, investment freedom, financial freedom, fiscal freedom, and government spending. Hong Kong, Singapore, Australia, New Zealand, Switzerland, Canada, Chile, Mauritius, and Ireland have higher overall scores than the United States.
… Australia, New Zealand, the United Kingdom, Canada, and Switzerland have higher levels of economic freedom. Many of the Scandinavian countries—which Americans often call “socialist”—beat the US on many central aspects of economic freedom.
[…]
We should regard Denmark in particular as economically freer than the United States. Yes, Denmark has high tax rates, but on almost every measure of economic freedom, it trounces the US.
Denmark ranks much higher than the United States on property rights, freedom from corruption, business freedom, monetary freedom, trade freedom, investment freedom, and financial freedom. Luxembourg, the Netherlands, the United Kingdom, and many other countries beat the US on these measures as well. Thus, many other European countries might reasonably be considered more economically libertarian than the US.
[…]
Denmark also rates 99.1 in business freedom, 90.0 in investment freedom, and 90.0 in financial freedom. In comparison, the US scores 91.1, 70.0, and 70.0 on these measures, respectively.
Denmark and Switzerland have remarkably effective welfare states, but that doesn’t make them [socialist]. Rather, think of them as free market countries with strong, well-functioning social insurance programs.
The above quotations point to Denmark as the archetypal example of how strong markets and strong safety nets can bolster each other, and for good reason. But this isn’t just a Danish thing; the same is true across all the Nordic countries. Of all those famous Nordic companies I listed a moment ago, for instance, most of them are Swedish. And Max Chafkin wrote a whole big piece about how it works in Norway; here’s an excerpt:
Norway, population five million, is a very small, very rich country. It is a cold country and, for half the year, a dark country. (The sun sets in late November in Mo i Rana. It doesn’t rise again until the end of January.) This is a place where entire cities smell of drying fish—an odor not unlike the smell of rotting fish—and where, in the most remote parts, one must be careful to avoid polar bears. The food isn’t great.
Bear strikes, darkness, and whale meat notwithstanding, Norway is also an exceedingly pleasant place to make a home. It ranked third in Gallup’s latest global happiness survey. The unemployment rate, just 3.5 percent, is the lowest in Europe and one of the lowest in the world. Thanks to a generous social welfare system, poverty is almost nonexistent.
Norway is also full of entrepreneurs like Wiggo Dalmo. Rates of start-up creation here are among the highest in the developed world, and Norway has more entrepreneurs per capita than the United States, according to the latest report by the Global Entrepreneurship Monitor, a Boston-based research consortium. A 2010 study released by the U.S. Small Business Administration reported a similar result: Although America remains near the top of the world in terms of entrepreneurial aspirations — that is, the percentage of people who want to start new things—in terms of actual start-up activity, our country has fallen behind not just Norway but also Canada, Denmark, and Switzerland.
If you care about the long-term health of the American economy, this should seem strange—maybe even troubling. After all, we have been told for decades that higher taxes are without-a-doubt, no-question-about-it Bad for Business. President Obama recently bragged that his administration had passed “16 different tax cuts for America’s small businesses over the last couple years. These are tax cuts that can help America—help businesses…making new investments right now.”
Since the Reagan Revolution, which drastically cut tax rates for wealthy individuals and corporations, we have gotten used to hearing these sorts of announcements from our leaders. Few have dared to argue against tax cuts for businesses and business owners. Questioning whether entrepreneurs really need tax cuts has been like asking if soldiers really need weapons or whether teachers really need textbooks—a possible position, sure, but one that would likely get you laughed out of the room if you suggested it. Or thrown out of elected office.
Taxes in the U.S. have fallen dramatically over the past 30 years. In 1978, the top federal tax rates were as follows: 70 percent for individuals, 48 percent for corporations, and almost 40 percent on capital gains. Americans as a whole paid the ninth-lowest taxes among countries in the Organization for Economic Cooperation and Development, a group of 34 of the largest democratic, market economies. Today, the top marginal tax rates are 35 percent, 35 percent, and 15 percent, respectively. (Even these rates overstate the level of taxation in America. Few large corporations pay anywhere near the 35 percent corporate tax; Warren Buffett has famously said that he pays 18 percent in income tax.) Only two countries in the OECD—Chile and Mexico—pay a lower percentage of their gross domestic product in taxes than we Americans do.
But there is precious little evidence to suggest that our low taxes have done much for entrepreneurs—or even for the economy as a whole. “It’s actually quite hard to say how tax policy affects the economy,” says Joel Slemrod, a University of Michigan professor who served on the Council of Economic Advisers under Ronald Reagan. Slemrod says there is no statistical evidence to prove that low taxes result in economic prosperity. Some of the most prosperous countries—for instance, Denmark, Sweden, Belgium, and, yes, Norway—also have some of the highest taxes. Norway, which in 2009 had the world’s highest per-capita income, avoided the brunt of the financial crisis: From 2006 to 2009, its economy grew nearly 3 percent. The American economy grew less than one-tenth of a percent during the same period. Meanwhile, countries with some of the lowest taxes in Europe, like Ireland, Iceland, and Estonia, have suffered profoundly. The first two nearly went bankrupt; Estonia, the darling of antitax groups like the Cato Institute, currently has an unemployment rate of 16 percent. Its economy shrank 14 percent in 2009.
Moreover, the typical arguments peddled by business groups and in the editorial pages of The Wall Street Journal—the idea, for instance, that George W. Bush’s tax cuts in 2001 and 2003 created economic growth—are problematic. The unemployment rate rose following the passage of both tax-cut packages, and economic growth during Bush’s eight years in office badly lagged growth during the Clinton presidency, before the tax cuts were passed.
And so the case of Norway—one of the most entrepreneurial, most heavily taxed countries in the world—should give us pause. What if we have been wrong about taxes? What if tax cuts are nothing like weapons or textbooks? What if they don’t matter as much as we think they do?
[…]
Norwegians don’t think about taxes the way we do. Whereas most Americans see taxes as a burden, Norwegian entrepreneurs tend to see them as a purchase, an exchange of cash for services. “I look at it as a lifelong investment,” says Davor Sutija, CEO of Thinfilm, a Norwegian start-up that is developing a low-cost version of the electronic tags retailers use to track merchandise.
Sutija has a unique perspective on this matter: He is an American who grew up in Miami and, 20 years ago, married a Norwegian woman and moved to Oslo. In 2009, as an employee of Thinfilm’s former parent company, he earned about $500,000, half of which he took home and half of which went to the Kingdom of Norway. (The country’s tax system is progressive, and the highest tax rates kick in at $124,000. From there, the income tax rate, including a national insurance tax, is 47.8 percent.) If he had stayed in the U.S., he would have paid at least $50,000 less in taxes, but he has no regrets. (For a detailed comparison, see “How High Is Up?”) “There are no private schools in Norway,” he says. “All schooling is public and free. By being in Norway and paying these taxes, I’m making an investment in my family.”
For a modestly wealthy entrepreneur like Sutija, the value of living in this socialist country outweighs the cost. Every Norwegian worker gets free health insurance in a system that produces longer life expectancy and lower infant mortality rates than our own. At age 67, workers get a government pension of up to 66 percent of their working income, and everyone gets free education, from nursery school through graduate school. (Amazingly, this includes colleges outside the country. Want to send your kid to Harvard? The Norwegian government will pick up most of the tab.) Disability insurance and parental leave are also extremely generous. A new mother can take 46 weeks of maternity leave at full pay—the government, not the company, picks up the tab—or 56 weeks off at 80 percent of her normal wage. A father gets 10 weeks off at full pay.
These are benefits afforded to every Norwegian, regardless of income level. But it should be said that most Norwegians make about the same amount of money. In Norway, the typical starting salary for a worker with no college education is a very generous $45,000, while the starting salary for a Ph.D. is about $70,000 a year. (This makes certain kinds of industries, such as textile manufacturing, impossible; on the other hand, technology businesses are very cheap to run.) Between workers who do the same job at a given company, salaries vary little, if at all. At Wiggo Dalmo’s company, everyone doing the same job makes the same salary.
The result is that successful companies find other ways to motivate and retain their employees. Dalmo’s staff may consist mostly of mechanics and machinists, but he treats them like Google engineers. Momek employs a chef who prepares lunch for the staff every day. The company throws a blowout annual party—the tab last year was more than $100,000. Dalmo supplements the standard government health plan with a $330-per-employee-per-year private insurance plan that buys employees treatment in private hospitals if a doctor isn’t immediately available in a public one. These benefits have kept turnover rates at Momek below 2 percent, compared with 7 percent in the industry.
But it takes more than perks to keep a worker motivated in Norway. In a country with low unemployment and generous unemployment benefits, a worker’s threat to quit is more credible than it is in the United States, giving workers more leverage over employers. And though Norway makes it easy to lay off workers in cases of economic hardship, firing an employee for cause typically takes months, and employers generally end up paying at least three months’ severance. “You have to be a much more democratic manager,” says Bjørn Holte, founder and CEO of bMenu, an Oslo-based start-up that makes mobile versions of websites. Holte pays himself $125,000 a year. His lowest-paid employee makes more than $60,000. “You can’t just treat them like machines,” he says. “If you do, they’ll be gone.”
If the Norwegian system forces CEOs to be more conciliatory to their employees, it also changes the calculus of entrepreneurship for employees who hope to start their own companies. “The problem for entrepreneurship in Norway is it’s so lucrative to be an employee,” says Lars Kolvereid, the lead researcher for the Global Entrepreneurship Monitor in Norway. Whereas in the U.S., about one-quarter of start-ups are founded by so-called necessity entrepreneurs—that is, people who start companies because they feel they have no good alternative—in Norway, the number is only 9 percent, the third lowest in the world after Switzerland and Denmark, according to the Global Entrepreneurship Monitor.
This may help explain why entrepreneurship in Norway has thrived, even as it stagnates in the U.S. “The three things we as Americans worry about—education, retirement, and medical expenses—are things that Norwegians don’t worry about,” says Zoltan J. Acs, a professor at George Mason University and the chief economist for the Small Business Administration’s Office of Advocacy. Acs thinks the recession in the U.S. has intensified this disparity and is part of the reason America has slipped in the past few years. When the U.S. economy is booming, the absence of guaranteed health care isn’t a big concern for aspiring founders, but with unemployment near double digits, would-be entrepreneurs are more cautious. “When the middle class is shrinking, the pool of entrepreneurs is shrinking,” says Acs.
We have a tendency here in the US to get so swept up in our own domestic issues that we forget that it’s possible to live in any way other than how we’re currently living. When liberals argue that it should be possible for us to live in a country with widespread prosperity and a strong social safety net and universal healthcare and top-tier education and so on, conservatives will often dismiss the idea out of hand as an absurd utopian fantasy, insisting that there would simply be no way to pay for it, and adding that liberals’ inability to recognize this obvious fact only demonstrates their lack of intellectual seriousness. That kind of conservative dismissiveness might be more defensible if the US really were the only country in the world and we had no way of knowing what other kinds of systems were possible. But it’s a lot harder to defend when liberals can refute it by simply pointing to the Nordic countries and saying, “No, actually, we know that the kind of system we want is possible, because it literally already exists and is being successfully implemented in these other countries; all we’re proposing is for the US to simply copy what they’re doing.” At that point, all conservatives can do is fall back on the rationalization that there must just be some unique deficiency in America’s culture or demographics or whatever that would prevent it from being able to successfully copy the Nordic system – but needless to say, I don’t think that argument has met its burden of proof yet. Certainly, I can see how there might be some unique parts of American culture that have up to this point prevented us from wanting to try a more Nordic kind of system – rugged individualism and all that – but that’s not the same thing as saying that if we did ever give it a try, it would be impossible for us to implement successfully. That latter point seems like it’s usually just flatly asserted without any actual empirical support behind it.
Now, there is one other strategy that conservatives might employ here, which is to accept the premise that it’s possible to find examples of foreign countries that are better off than the US because they’ve adopted superior policies, but to then contend that liberals are wrong about which countries those are – that the countries that are actually best off are those that have adopted conservative policies, like having extremely low taxes and being extremely business-friendly. In actuality, of course, the conservatives who make this argument will usually just have one country that they want to name as their example – specifically Singapore (or maybe Hong Kong if they’re reaching). But the irony of this is that Singapore actually is one of those countries whose wealth is largely due to a combination of factors that are specific to it and wouldn’t be generalizable as a universal model for all countries (hence why conservatives tend to have such a hard time naming other countries where it’s working). As Alexander explains:
Reactionaries are never slow to bring up Singapore, a country with some unusually old-fashioned ideas and some unusually good outcomes. But as I have pointed out in a previous post, Singapore does little better than similar control countries, and the lion’s share of its success is most likely due to it being a single city inhabited by hyper-capitalist Chinese and British people on a beautiful natural harbor in the middle of the biggest chokepoint in the world’s most important trade route.
In particular, he notes how Singapore’s role as a regional financial hub puts it in a unique position that can’t just be straightforwardly copied by every other country – recall this point from earlier:
[Being a] small financial [hub] like Singapore, Dubai, or Switzerland […] is good work if you can get it, but it really only works for one small country per region; you can’t have all of China be “a financial hub”. In the 1980s, everyone was so impressed with Singapore and Hong Kong that they became the go-to models for development, and people incorrectly recommended liberal free market policies as the solution to everything. But the Singapore/Hong Kong model doesn’t necessarily work for bigger countries, and most of the good financial hub niches are already filled by now.
Singapore and Hong Kong are especially unique cases because, as Alexander mentions, they’re basically just single cities, with functionally 0% of their populations employed in sectors like agriculture and mining, and 100% of their land being urban. Only a handful of city-states like this exist in the world, and all of them are able to organize their economies in ways that other countries simply can’t, precisely because their small size allows them to so effectively hyper-specialize. Commenter idio3 provides a tongue-in-cheek example of an even more pronounced instance of this to illustrate the point:
There’s an even better example! Monaco. 0% income tax. 0% unemployment. Maybe we should all move to a casino and Formula One Grand Prix based economy? Seems legit to me.
In other words, while city-states can produce genuinely impressive results (Monaco is in fact the richest country in the world on a per-capita basis), they don’t provide scalable models that the rest of the world can copy; their success fundamentally depends on the fact that they are so small and specialized.
At any rate, Singapore isn't really a great example for conservatives regardless, because a lot of the things they like so much about it – the business freedom, the low taxes, etc. – aren't actually as unambiguously conservative as they want to claim they are. Having lots of business freedom, for instance, is something that countries with strong social safety nets can achieve just as readily as countries without them, if not more so – as we just saw with the Nordic countries (and as we can also see with other countries across the world, from Europe to East Asia) – so it's not really an argument against them. As for low taxes, it's true that Singapore's personal income tax rates (along with its levels of overall government spending) are relatively low. But as Vanessa Snodgrass-Chong points out, part of the reason is that its taxpaying workforce, considerably inflated by a massive population of migrant workers, is unusually large relative to its non-working population (i.e. children, retirees, and others who are more likely to need government benefits):
[Singapore has a] relatively small population of retirees, [so compared to other countries,] the dependency ratio is much more favorable for Singapore. Singapore has a workforce of about 3.9 million workers supporting 678,100 people over 65. This works out to about 5.7 workers to each person over 65 in Singapore. The figure for most other countries is around 3. [See also the global rankings here.] Singapore can do this because it has a very large foreign workforce. About 45 percent of the work force comprises non-residents. So, the tax burden is spread over a much larger base of workers in Singapore.
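The arithmetic behind that dependency-ratio claim is easy to check; here's a minimal sketch using the figures from the quote (the "typical" ratio of about 3 is the rough benchmark the commenter gives for other countries):

```python
# Quick check of the dependency-ratio arithmetic quoted above.
workers = 3_900_000        # Singapore's workforce (figure from the quote)
people_over_65 = 678_100   # residents over 65 (figure from the quote)

singapore_ratio = workers / people_over_65
typical_ratio = 3          # rough figure the quote gives for most other countries

print(f"Singapore: ~{singapore_ratio:.2f} workers per person over 65")   # ~5.75
print(f"Typical country: ~{typical_ratio} workers per person over 65")
# With nearly twice as many workers per retiree, the same per-retiree spending
# can be funded with a much smaller tax bill per worker.
```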
What’s more, while all those migrant workers contribute to the tax base and help reduce the burden of paying for government services for native-born Singaporeans, they’re excluded from receiving many of those government services and social programs themselves, which reduces government expenditures further still (and therefore reduces the need for taxes to fund them). Native-born Singaporeans, in other words, enjoy their low taxes at least in part because foreign workers are helping foot the bill.
But even when it comes to the taxes that native-born Singaporeans do pay, there are still other reasons why their individual income tax rates are so low – probably the most obvious of these being the simple fact that the government has other means of collecting revenue from the population aside from just that one tax. As commenter user18 explains, the Singaporean state has all kinds of other taxes and fees that it uses to make up for its low income taxes, including unconventional ones like extreme taxes on car ownership:
One unusual revenue source deserves commentary: car taxes, including the so-called Certificate of Entitlement (COE). The COE is simply a piece of paper that entitles you to own a car. This recent news story [from 2016] mentions how the COE price hit a 5-year low, at “only” S$45,002 (≈ US$31,600) for small cars. [By late 2023 it had risen back up to around US$80,000.]
These COEs (“Vehicle Quota Premiums”) make up 6.0% of government revenue.
Besides this very expensive piece of paper, a car-owner still has to pay all the other usual taxes (GST, road taxes) and insurance, plus a lot of unusual fees (e.g. Electronic Road Pricing – an idea which London borrowed). A lot of these flow to government coffers. Together, these make Singapore easily the most expensive place in the world to own cars.
On top of this, the Singaporean government gets a major share of its revenue from the fact that it flat-out owns and controls massive portions of the economy, including nearly all the land, the vast majority of housing, and a sizeable fraction of the country’s corporations. As Bruenig explains:
In the Heritage Foundation’s Index of Economic Freedom, Singapore ranks as the second most “economically free” country in the world just behind Hong Kong. Since many use this index as a shorthand for “most capitalist” countries, a lot of prominent people end up saying some really weird things about Singapore. For instance, in his Liberty Con remarks, Bryan Caplan claimed Singapore was one of the closest countries to the capitalist ideal.
It is true of course that Singapore has a market economy. But it’s also true that, in Singapore, the state owns a huge amount of the means of production. In fact, depending on how you count it, the Singaporean government probably owns more capital than any other developed country in the world after Norway.
The Singaporean state owns 90 percent of the country’s land. Remarkably, this level of ownership was not present from the beginning. In 1949, the state owned just 31 percent of the country’s land. It got up to 90 percent land ownership through decades of forced sales, or what people in the US call eminent domain.
The Singaporean state does not merely own the land. They directly develop it, especially for residential purposes. Over 80 percent of Singapore’s population lives in housing constructed by the country’s public housing agency HDB. The Singaporean government claims that around 90 percent of people living in HDB units “own” their home. But the way it really works is that, when a new HDB unit is built, the government sells a transferable 99-year lease for it. The value of that lease slowly declines as it approaches the 99-year mark, after which point the lease expires and possession of the HDB unit reverts back to the state. Thus, Singapore is a land where almost everyone is a long-term public housing tenant.
Then there are the state-owned enterprises, which they euphemistically call Government-linked Companies (GLCs). Through its sovereign wealth fund Temasek, the Singaporean government owns a large share (20% or more) of 20 companies (2012 figure). Together these companies make up 37% of the market capitalization of the Singaporean stock market. The state also owns a large share of 8 real estate investment trust (REIT) companies (2012 figure), which they call GLREITs. The value of the GLREITs make up 54% of the country’s total REIT market.
The sovereign wealth fund Temasek doesn’t just own domestic assets. It also is invested broadly throughout the world, especially in other Asian countries. In March of last year, Temasek had a net portfolio value of S$275 billion, which is equal to around 62% of the country’s annual GDP. To put this figure in more familiar terms, Temasek’s total holdings are equivalent to if the US government built a $12.4 trillion wealth fund.
Call me old-fashioned, but I don’t generally associate state ownership of the means of production with capitalism. One way to see whether libertarians or conservatives actually think Singapore’s system is uber-capitalistic is to imagine how they would respond to someone who ran a campaign in the US aimed at bringing the country up to the Singaporean ideal.
In this campaign, the candidate would say that the state should expropriate nearly all of the land in the country, build virtually all of the housing in the country, move almost everyone into public housing leaseholds, become the largest shareholder of more than a third of the country’s publicly-traded companies (weighted by market capitalization), and build out a sovereign wealth fund that holds tens of trillions of dollars of corporate assets. Would this campaign meet with a warm libertarian embrace or perhaps be derided as a bit socialistic?
When a government has this much ownership of the economy, it’s no surprise that it doesn’t need as much extra tax revenue on top of all this. Conservatives may like to point to Singapore as an example of limited government – and in some ways, it certainly is – but in other ways, it’s exactly the opposite. And this brings us to one last tax-related point. It’s true that despite all the caveats, Singapore is a country that keeps its taxing and spending relatively low; and it’s also true that aside from all the other factors I’ve listed here, one of the biggest reasons for this is that it really doesn’t provide all that many “government handouts” to its citizens. Compared to other wealthy countries with their robust state-funded social safety nets, Singapore doesn’t offer nearly as much in terms of things like healthcare, unemployment benefits, pensions, etc. Instead, it relies on a compulsory savings program known as the Central Provident Fund (CPF), which requires citizens to set aside a portion of their earnings in specially designated accounts, then pay for their own expenses using those funds. (These are more akin to individual personal accounts than to a universal fund that the government collects from everyone, pools together, and then redistributes according to need.) It’s ostensibly a less government-heavy approach, since the government isn’t directly controlling how and when each individual’s funds are spent – hence why American conservatives tend to think so highly of it. But this is far from saying it’s a system that has “gotten government out of the picture,” as those conservatives might want to claim, and we should be under no illusion that this is the case. The government is still very much intervening in the private sector to dictate how much of people’s earnings must be set aside for social welfare purposes – it’s just that it does so by mandate rather than by directly collecting and spending those funds itself; instead of pooling the funds together to build a universal social safety net, it compels people to build their own individual safety nets (while still providing more traditional safety nets for its poorest citizens on a strict means-tested basis). In a sense, the Singaporean system is simultaneously “further to the left and further to the right” than the American system, as Matt Miller puts it. But in any event, it’s still using government coercion to achieve its ends – and in fact, in many ways its use of government power is actually considerably less constrained than ours is.
To be clear, the fact that Singapore (or any country) has a strong and capable government can be a very good thing when it comes to providing for its citizens’ welfare; the whole point of this post, after all, has been to argue that government can be good, and historians will attest that in Singapore’s case in particular, its government has been a major factor in its economic success. With that being said, though, we should be wary of American conservatives’ arguments that, because of its economic achievements, we should consider the Singaporean government a role model for our own government – because the flip side of Singapore’s economic success story is that its government also uses its power in ways that aren’t just economically conservative, but are right-wing in more extreme and disturbing ways that often infringe on basic human rights. Here we’re moving beyond purely economic matters, so this is somewhat of a separate conversation from the one we’ve been having so far – but we’d be remiss not to at least acknowledge the fact that, for instance, Singapore’s ruling party controls the press, curbs political opposition, and doesn’t allow free assembly, among other things. As a recent State Department report summarizes:
Significant human rights issues [in Singapore have] included credible reports of: preventive detention by the government under various laws that dispense with regular judicial due process; monitoring private electronic or telephone conversations without a warrant; serious restrictions on freedom of expression and media, including the enforcement of criminal libel laws to limit expression; serious restrictions on internet freedom; and substantial legal and regulatory limitations on the rights of peaceful assembly and freedom of association.
In addition to this, the Singaporean government is infamous for imposing extreme punishments like caning, imprisonment, and crippling fines for minor infractions like littering and graffiti. If you smuggle chewing gum into the country, you may be arrested, jailed, and fined thousands of dollars. And if you bring drugs into the country, you may be flat-out executed; capital punishment (specifically death by hanging) is mandatory for many crimes, including some drug offenses. As Connor Kilpatrick sardonically points out, it isn’t exactly what we’d consider a “limited government” kind of regime:
Ah, Singapore: a city-state near the very top in the world when it comes to “number of police” and “execution rate” per capita. It’s a charming little one-party state where soft-core pornography is outlawed, labor rights are almost nonexistent and gay sex [was only decriminalized in 2022; gay marriage and adoption remain illegal]. Expect a caning if you break a window. And death for a baggie of cocaine.
But hey: no capital gains tax! (Freedom!)
Most relevant to our economic discussion, this kind of authoritarianism even extends into areas like poverty and homelessness. Conservatives will often approvingly cite Singapore’s low homelessness rate as a sign of its economic health – but a major part of the reason why there’s so little homelessness there is that the Singaporean government has functionally made it illegal to be homeless; anyone found sleeping rough can be forcibly institutionalized, and anyone caught begging can be jailed and fined thousands of dollars. In other words, it’s not that Singapore has solved the problem of homelessness; it’s just that it has forbidden anyone from living there unless they can afford a home. Anyone who can’t is forced to either live outside its borders (remember, Singapore is basically just a single city) or risk being locked away. And while that’s certainly one way to reduce homelessness rates, it’s not exactly a humane one – and it’s definitely not one that could be generalized to every country in the world. If we actually want to solve problems in a way that’s humane and constructive, we can’t just have an authoritarian government declare those problems illegal; we actually have to do the work of building a universal system of social support that makes sure everyone’s most basic human rights are respected.
XXI.
Of course, even that argument – that liberal democracy is preferable to authoritarianism – has its occasional detractors. When it comes to solving difficult social problems, some people will argue that it’s better to have an autocratic government that can straightforwardly “get things done” in a unilateral way, without any institutional checks on its power that might hinder its aims, than to have a whole big bureaucracy that has to go to all the trouble of getting popular approval for its actions any time it has to do anything, and is therefore constantly gridlocked and unable to act effectively. Is there anything to this argument? Leaving aside the specific example of Singapore (which has characteristics of both democracy and authoritarianism) and just considering the matter in general terms, is it better to have a single ruling authority, like a monarch or a small core of party leaders, that makes decisions for everybody – the proverbial “benevolent dictator” – or to have decision-making power dispersed across the entire voting population, as in a democracy? We’ve spent a lot of time up to this point talking about all the things government can do and all the reasons it can be a valuable complement to the private sector – and hopefully we’ve adequately established that this is the case – but with that groundwork having been laid, it’s now worth turning to this question of how best to implement those functions and what form an ideal government should actually take. How can a government most effectively accomplish what’s truly in everyone’s best interest?
Well, probably the first thing we’ll have to grapple with here is that phrase “everyone’s best interest.” It seems safe to say that whatever form of government we choose, we won’t be able to make everyone happy all the time, just as a matter of sheer logistics. As Wheelan explains:
Making Public Policy Is Hard
Assume that some twist of fate has made you president of your homeowners’ association. Your primary responsibility is planning the annual block party, which involves arranging for a catered meal and a film that will be projected on a screen outdoors in a common area. The whole event will be paid for out of the annual budget, which comes from mandatory dues. The majority of your neighbors are genial, reasonable people, but there are a handful of folks apt to complain vociferously when they do not get exactly what they want, even when most other homeowners want something entirely different.
You can watch only one film at the block party—because there is just one outdoor screen and the movie is meant to be a communal experience. This is the first test of your leadership. The families with children are pushing for a Disney movie to keep their kids occupied during the party; the families without kids do not want to be subjected to a film about princesses or talking donkeys. The weird guy at the end of the street has generously offered to share his “world class adult movie collection.”
You sensibly opt against a porn film, as well as an anodyne children’s movie, and select a PG-13 adventure flick with decent appeal across the age spectrum (but arguably inappropriate for the youngest of the kids).
On the food front, your dinner committee has chosen grilled chicken, burgers, and hot dogs, and a pasta dish as a vegetarian option, though the two vegetarian families are still peeved that their annual dues are being used to support a banquet built primarily around grilling animal flesh.
Meanwhile, a sizeable minority of residents do not think you should have had the party at all. Yes, 70 percent of the homeowners voted to allocate the funds for the party at your last meeting, so you are delivering what most people want, but that still means 30 percent prefer not to spend money this way. And within the majority who supported the party, a minority was pushing for a lobster bake with a live swing band and ice sculptures.
In the end you will have one outdoor film, one meal, and one budget. Even if you offer superb leadership, adhere to the will of the majority, and make defensible decisions at every step in the process, the only possible outcome is that many people will be watching a film they don’t want to see, eating food they would not have chosen, and paying for a party they did not want in the first place. And if you had opted not to have the party a different and larger group of “constituents” would have been inflamed. That’s public policy.
I use this example when I am speaking in public, and the “That’s public policy” line always gets a laugh. But the homeowners’ block party, albeit somewhat silly, is public policy in the sense that it represents a set of shared decisions that are binding on the entire group, even those who vehemently disagree with the decisions. When there is only one possible shared course of action, we somehow have to come to agreement on what that course of action is going to be.
Most communal policy decisions are a heck of a lot more difficult to reach than choosing a dinner menu and a movie. Think about military engagements, such as the ones in Iraq and Afghanistan. We have one shared military, and we must agree as a country on how to use it.
Or consider social issues like abortion. Abortion is either legal or illegal. (Those who are in favor of legalized abortion would argue that “choice” represents a policy of agreeing to disagree; their opponents would point out that the aborted fetuses do not have much choice in the matter.)
Tax policy has the same basic challenge. It is not possible to have an income tax in which everyone pays what they believe to be fair.
We have hard collective decisions to make on these kinds of issues. The notion that politicians should stop bickering and do the “right thing” makes for fine cocktail-party banter, but it is, in fact, idiotic in a public policy context. Even in the block party example, there is nothing particularly helpful about the homeowner who stands up at the planning meeting and demands, “Let’s just do the right thing here!” Is that the steak or the vegetarian buffet? Is it the Disney movie or the porn film?
Different factions have profoundly different ideas about that “right thing,” and there is no scientific or empirical process for choosing one over another.
All of this stands in stark contrast to the private sector, which is driven entirely by mutually beneficial voluntary market transactions. No one has to agree on anything. I like coffee; I don’t need to get forty-nine other people to agree on the merits of coffee for me to walk into a Starbucks and order a cup. Similarly, Starbucks can make plenty of money by ignoring those who don’t like coffee, can’t afford coffee, live in countries where dictators won’t let them drink coffee, have disabling diseases that prevent them from drinking coffee, and so on.
I buy coffee on any given occasion because I believe it is the very best use of my money; Starbucks makes money for its shareholders by anticipating and meeting my needs. That is all that needs to happen.
The entire private sector operates around these self-organizing transactions, and that is a great thing. But the private sector—no matter how efficient—is not going to defeat the Taliban, keep a “dirty bomb” out of the country, feed or clothe the indigent, reduce carbon emissions significantly, or do anything else to address most of our serious communal challenges.
The notion that we can or should run government like a business is another goofy platitude that often passes for wisdom. The whole point of government is to do things that the private sector cannot or will not do. Businesses have the benefit of a bottom line. Success or failure can be measured with a single metric: profits. If the aforementioned block party were run like a business, all of the complexity of the decision-making would be solved with one simple analytical exercise: Which option makes us the most money? If that is a steak dinner followed by an adult-film marathon, then that is what the block party will look like. Too bad for the vegetarian families with kids.
The profit-maximizing approach is fine for Starbucks; people who do not like coffee can spend their money elsewhere. It does not work for any shared endeavor in which those who disapprove of some course of action cannot simply go elsewhere.
Public policy involves two inexorable realities: 1) We have no objective measure of the “best” course of action in many situations. 2) In such situations, many stakeholders have significantly different opinions on what the best course of action ought to be. If there is a “bottom line,” it’s that we can’t always get what we want, particularly when most other people want something else.
It is ironic that the Tea Party has idolized the self-indulgent (and relatively easy) act of hurling tea into Boston Harbor. The truly extraordinary and unique contribution of that generation of patriots was the Constitutional Convention, which was a long, grinding series of compromises, most of which were profoundly objectionable to some faction or another.
It’s common for certain anarchist types to underscore the necessity of unanimity in decision-making – to say that if a government makes any decision without the full unanimous consent of the people, it’s illegitimate. But the idea that it would be possible to have literally everyone in a community agree on every collective action decision is, to put it lightly, not very realistic. It might be one thing if we were just talking about a tiny group of friends who only had to decide on one or two issues together, like picking a meal or a movie for the evening. But as soon as we start talking about larger groups that have to continually make collective decisions about vital issues (even if the group is just the size of a single neighborhood, as in Wheelan’s example), it quickly becomes clear that maintaining total unanimity at all times simply isn’t a workable long-term strategy. Public policy involves thousands of collective action decisions, not just one or two – and they aren’t the kind of decisions that can just be dropped if no unanimous consensus can be reached (at least not without making the population worse off). So unless we want to constantly be running into situations like, say, a community agreeing on the need for a fire department, but failing to come to a unanimous consensus on where the fire station should be built, so that ultimately no fire station ever gets built at all, it’s simply inevitable that at least some of the time, some people will have to get their way while others don’t. Some anarchists might argue that unanimity could be achieved by splitting up society into millions of tiny micro-communities and having everyone move to whichever micro-community perfectly matched their political beliefs, so they’d never have to compromise with anyone who disagreed with them; but even if we imagined that it would somehow be logistically feasible to relocate everyone like this (which it definitely wouldn’t be), and even if we ignored the fact that people’s political beliefs are constantly evolving and changing over time (so they’d constantly be having to pack up and move any time they changed their opinion on anything, and communities would constantly be fracturing as new political issues emerged and divided public opinion), we’d still run up against the fact that this unanimity-only solution is itself something that not everyone would want to agree to in the first place. People have all kinds of reasons for wanting to live in particular communities aside from just agreeing with their neighbors politically; they typically prefer to live near their extended family members, for instance (whose political ideologies almost never match up perfectly with their own), as well as their friends, their employers, and so on (who likewise invariably have a diversity of beliefs themselves). So even if it were somehow possible to have as many political jurisdictions as there were combinations of political beliefs, that still wouldn’t be enough to get everyone to sort themselves into completely like-minded groups that always agreed internally on everything, because there would always be other considerations at play. For better or worse, there will always be reasons for people to live alongside others who disagree with them on at least some issues; and that’s something that the vast majority of people (at least the non-anarchist ones) willingly accept. 
In a sense, it’s the biggest thing they do collectively agree on – that they won’t always agree on everything, and they won’t always get their way, and that’s okay.
So all right then, if unanimous consensus isn’t a viable strategy for running a government, then what’s the best alternative for at least keeping the dissatisfaction to a minimum – having everyone vote democratically on how they want their government to be run, or just putting one person (or group of people) in charge and having them make all the decisions? For most of us, our instant answer will be that it’s obviously the former; if we all had to submit to the will of one person without ever having a chance to express our own preferences, that would strike most of us as a major injustice. Having said that though, we can’t necessarily justify this choice by just saying that democracy is always the best choice in every circumstance, and that it’s therefore the right answer by default; after all, there are plenty of situations in which giving a single person all the decision-making power really is the more reasonable choice. When you decide what kind of hairstyle you want to have, for instance, or what color socks you want to wear, it’s not a matter of putting the question up for a public vote and then doing whatever the majority says; you’re the only one whose vote counts, and rightly so. So then what’s the difference between this kind of situation and the broader context of public policy? As Michael Albert explains, it’s actually pretty simple:
Suppose that you have a desk in a workplace. You are deciding whether to place a picture of your child on that desk. How much say should you have? Or suppose that instead of a picture of your child, you want to place a stereo there and play it loudly in the vicinity of your workmates. How much say should you have about that?
There is probably no one who wouldn’t answer that as to the picture you should have full and complete say, but as to the stereo you ought to have limited say, depending on who else would hear the music and therefore be affected by your choice. And suppose we then ask how much say other folks should have? The answer, obviously, depends on the extent to which the decision would affect them.
The norm we favor is thus that to the extent that we can arrange it, each actor in the economy should influence economic outcomes in proportion to how those outcomes affect him or her. Our say in decisions should reflect how much they affect us.
In other words, when it comes to things like personal choices and private market transactions that don’t involve any outside parties, it makes perfect sense that individuals should have the power to make their own decisions unilaterally, without any kind of democratic process. They’re the only ones affected, so they’re the only ones whose preferences are really relevant. But by the same logic, when it comes to public policy – the kind of stuff that affects all of society – all of society should be involved in making the decisions, because everyone’s preferences are relevant. Their preferences might not always be what we’d consider ideal – and the policies they vote for might not always even turn out to be what’s in their own best interest – but as George Mitchell puts it:
Democracy doesn’t guarantee a good result, [but] it guarantees a fair process.
Of course, we can’t just gloss over the fact that voting does sometimes produce bad results, because that’s one of the biggest arguments that critics level against democracy as a concept. According to these critics, the fact that people don’t always make the most accurate judgments about what will best satisfy their preferences – the fact that they sometimes vote for candidates and policies that ultimately make them worse off – means that they could be made much better off if decision-making power were taken out of their hands entirely, and all the decisions were left to someone who actually knew what they were doing – i.e. a benevolent dictator. And in theory, this isn’t necessarily a false argument; in theory, everyone probably would be better off if all the decisions were made by someone who always knew exactly what was best for them and always acted in accordance with their best interests. But to say that this is a very big “if” is an understatement. When we look at all the real-world examples of autocracy in action, it very quickly becomes apparent why its apologists always have to add the qualifier “benevolent” when they use the word “dictator” – because benevolence is most definitely not an innate characteristic of dictatorship that can just naturally be assumed by default (nor is superior judgment). On the contrary, the word “dictatorship” has become practically synonymous with “oppressive regime,” not “enlightened leadership” – and that’s not a coincidence. As Bueno de Mesquita and Smith explain, it’s not just a random accident of history that autocratic regimes almost always end up serving those at the top, at the expense of the broader population, rather than the other way around; it’s a product of the fundamental structure of autocracy itself:
Money, it is said, is the root of all evil. That can be true, but in some cases, money can serve as the root of all that is good about governance. It depends on what leaders do with the money they generate. They may use it to benefit everyone, as is largely true for expenditures directed toward protecting the personal well-being of all citizens and their property. Much public policy can be thought of as an effort to invest in the welfare of the people. But government revenue can also be spent on buying the loyalty of a few key cronies at the expense of general welfare. It can also be used to promote corruption, black marketeering, and a host of even less pleasant policies.
The first step in understanding how politics really works is to ask what kinds of policies leaders spend money on. Do they spend it on public goods that benefit everyone? Or do they spend mostly on private goods that benefit only a few? The answer, for any savvy politician, depends on how many people the leader needs to keep loyal—that is, the number of essentials in the coalition.
In a democracy, or any other system where a leader’s critical coalition is excessively large, it becomes too costly to buy loyalty through private rewards. The money has to be spread too thinly. So more democratic types of governments, dependent as they are on large coalitions, tend to emphasize spending to create effective public policies that improve general welfare pretty much as suggested by James Madison.
By contrast, dictators, monarchs, military junta leaders, and most CEOs all rely on a smaller set of essentials. As intimated by Machiavelli, it is more efficient for them to govern by spending a chunk of revenue to buy the loyalty of their coalition through private benefits, even though these benefits come at the expense of the larger taxpaying public or millions of small shareholders. Thus small coalitions encourage stable, corrupt, private-goods-oriented regimes. The choice between enhancing social welfare or enriching a privileged few is not a question of how benevolent a leader is. Honorable motives might seem important, but they are overwhelmed by the need to keep supporters happy, and the means of keeping them happy depends on how many need rewarding.
Democracy is not the greatest system of government because it “takes into consideration what the population wants”. That’s like a nice bonus. The core mechanism that makes democracy valuable is that democracy diffuses power effectively.
Power corrupts. And democracy works by diffusing the corrupting influence across many millions in order to retard the inherent corrosion of a society’s institutions. Democratization of a system isn’t the act of putting things to a vote; rather it is the diffusion of power. Voting is just a means to an end.
[…]
Think about alternatives to a “democracy”. In any alternative system, to varying degrees power is concentrated to either a smaller group within the population or to a limited group or individual. But what is power and why can’t we have a “benevolent dictator”?
There’s a reason you don’t actually see the “benevolent dictator” system in the real world. Political power is essentially the quality of having other powerful people aligned to your interest. And those other powerful people get their power in turn from people further down the chain being aligned to them.
In order to keep those chains of alignment of interest, you have to benefit the people who make you powerful. But you have no need to benefit anyone else. In fact, benefitting anyone else comes at the cost of benefitting those who make you powerful. It’s a weak spot that can be exploited by a usurper. Right?
If you’re going to be a “benevolent dictator”, whose selfish interest do you need to prioritize in what order?
tax collectors?
military generals?
educators?
farmers?
engineers?
doctors?
Well without the military, you’re not really in charge and you can’t defend your borders or your crown from other potential rulers. And without the tax collectors you can’t pay the military or anyone else for that matter. But you can probably get away without educators for decades. So your priorities are forced to look something like this:
Military
Tax collection
Farming
Infrastructure projects
Medicine?
Education??
And in fact, any programs that benefit the common person above the socially powerful will always come last in your priorities or your powerful supporters will overthrow you and replace you with someone who puts them first. So it turns out as dictator, you don’t have much choice.
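To make the logic here a bit more concrete, the argument above (and Bueno de Mesquita and Smith’s point about coalition size) boils down to some simple arithmetic. Below is a minimal toy sketch – my own illustration, not anything taken from those authors – assuming a ruler with a fixed budget who can either fund a public good worth a fixed amount to each person, or split that same budget into private rewards for the members of their coalition:

```python
# A toy illustration (mine, not from the sources quoted above) of the coalition-size logic:
# a ruler with a fixed budget can either fund a public good that benefits everyone,
# or split the budget into private rewards for the "essentials" in their coalition.
# The larger the coalition, the thinner those private rewards get spread, until the
# public good becomes the cheaper way to keep each supporter satisfied.

def strategy_that_keeps_supporters_loyal(budget: float, coalition_size: int,
                                          public_good_value_per_person: float = 1.0) -> str:
    """Return which spending strategy leaves an individual coalition member better off."""
    private_reward_each = budget / coalition_size
    if private_reward_each > public_good_value_per_person:
        return "private rewards"
    return "public goods"

if __name__ == "__main__":
    BUDGET = 1_000.0  # arbitrary units of government revenue
    for size in (10, 100, 1_000, 100_000):
        print(f"coalition of {size:>7,}: {strategy_that_keeps_supporters_loyal(BUDGET, size)}")
```

With these (entirely arbitrary) numbers, a ruler who only has to keep ten or a hundred essentials happy is better off handing out private rewards, while one who has to keep thousands or millions of people happy has no realistic option but to spend on things that benefit everyone – which is exactly the structural point being made above.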
In short, even in the best-case scenario where an utterly selfless autocrat with the noblest of intentions somehow manages to come to power, the very structure of that power will make it nearly impossible for them to act in a way that puts the general population first, as opposed to serving the most powerful interests (or else being replaced by someone who will). By contrast, in a democracy, the general population is itself the powerful interest that can replace any leader who fails to meet its needs, so leaders must serve those needs first and foremost. As William Easterly writes:
Government institutions such as courts, judges, and police […] help make the market work in rich countries by enforcing contracts, protecting property rights, providing security against predators, and punishing lawbreakers.
The Achilles’ heel is that any government that is powerful enough to protect citizens against predators is also powerful enough to be a predator itself. There is an old Latin saying that goes, “Quis custodiet ipsos custodes?”—which translates freely as “Why would you trust a government official any more than you would a shoplifting serial killer?”
Democracy’s answer to “Who will watch the watchers?” (the more conventional translation of the above) is everyone. The other great invention of human society besides free markets is political freedom. According to the simplest view of democracy, an open society with a free press, free speech, freedom of assembly, and political rights for dissidents is a way to ensure good government. Free individuals will expose any predatory behavior by bad governments, and vote them out of office. Voters will reward with longer terms of office those politicians who find ways to deliver more honest courts, judges, and police. Political parties will compete to please the voters, just as firms compete to please their customers. The next generation of politicians will do better at delivering these services. Of course, no democracy works anywhere close to this ideal, but there are some that come close enough to make [the improvement of their society] possible.
Mind you, none of this is to say that it’s literally impossible for any political leader to ever overcome all the adverse structural incentives and become something like a benevolent dictator; such aberrations have happened on rare occasions throughout history. Once again, people will often cite Singapore here as a modern example of this, with its founding leader Lee Kuan Yew having used his unusual degree of unilateral power to significantly improve his country when it was first getting off the ground. But as with our earlier discussion, the people who bring up Singapore in this context will typically have a hard time coming up with a whole lot of other modern examples aside from just that one – and there’s a good reason for that. Government power that’s not accountable to the people tends to result, unsurprisingly, in government actions that don’t primarily serve the people.
(People will also sometimes cite China’s ruling party as a modern example of an undemocratic government using its unilateral power to make the country significantly better off – but this is more of a stretch, considering that the best thing the party has done for China over the last half-century has been to ease up on its authoritarianism and let its people engage in more free market activities. In reality, it’d be more accurate to describe China as a country that has improved despite the authoritarianism of its government than as one that has improved because of it.)
Leaving aside this whole problem of adverse incentive structures, though, there’s an even more basic issue when it comes to autocracy, which is that even if a benevolent dictator has both noble intentions and the ability to act on them, that still does nothing to guarantee that their good intentions will actually translate to good policy judgments, much less ones that are superior to all other alternatives. Putting a crown on someone’s head, after all, does nothing to change the fact that they’re still flawed humans capable of getting things wrong. And while it’s of course true that the typical voter also gets things wrong all the time, the key difference is that in a majoritarian democracy, the only way for such a mistake to become a national policy failure is if over half the voting population is simultaneously wrong about it – while in an autocracy, all it takes is for the one person at the top to be wrong. There’s just a single point of potential failure around which the entire system hinges – so if and when that one person does make a mistake, it can throw the whole country into turmoil. And because there are no democratic checks on their power, there’s nothing that the people who ultimately suffer the damage can do about it.
But how likely is it really that the kind of benevolent dictator we’re talking about will be more prone to such mistakes than the median voter? If we’re really talking about a leader who’s well-educated and experienced in statecraft, whose intentions are good, and who knows how to get things done, then won’t they be much more likely to get things right overall? Well, sure, if we’re going to rig the question by stipulating that we can only ever have this kind of perfectly ideal autocrat, then of course it’s easy to imagine how they might produce better outcomes. Again though, here in the real world there’s simply no such mechanism for ensuring that autocrats will be anything close to that kind of ideal – and in fact, the general rule has historically been the exact opposite. As it turns out, the kinds of rulers who want to have the power to control entire populations without having to account for their preferences tend not to be the kinds of leaders who have the most extraordinarily superior judgment; more often, they’re just the kind of leaders who think their judgment is always perfect, and who are therefore more likely to stubbornly persist in their misguided beliefs even when they produce abysmal real-world outcomes. Alexander provides a list of examples here to illustrate the point, from Ivan the Terrible – whose paranoia and megalomania led him to destroy his own economy, burn and pillage his own cities, and torture and massacre thousands of his own people – to Henry VIII – whose decision to convert his whole country to a newly-invented religious denomination just so he could marry Anne Boleyn (whom he would later have beheaded) led to thousands of deaths and a series of wars that would last for generations. Alexander concludes:
This is exactly the sort of problem non-monarchies don’t have to worry about. If [an American president] said the entire country had to convert to Mormonism at gunpoint as part of a complicated plot for him to bone Natalie Portman, we’d just tell him no.
There’s another important aspect here too. Reactionaries – ending up more culpable of a stereotype about economists than economists themselves, who are usually pretty good at avoiding it – talk as if a self-interested monarch would be a rational money-maximizer [suggesting that such a monarch would presumably want to maximize their country’s wealth as well so they’d have a bigger tax base]. But a monarch may have desires much more complicated than cash. They might, like Henry, want to marry a particular woman. They might have religious preferences. They might have moral preferences. They might be sadists. They might really like the color blue. In an ordinary citizen, those preferences are barely even interesting enough for small talk. In a monarch, they might mean everyone’s forced to wear blue clothing all the time.
You think that’s a joke, but in 1987 the dictator of Burma made all existing bank notes illegitimate so he could print new ones that were multiples of nine. Because, you see, he liked that number. As Wikipedia helpfully points out, “The many Burmese who had saved money in the old large denominations lost their life savings.” For every perfectly rational economic agent out there, there’s another guy who’s really into nines.
XXII.
Again, none of this is to say that the whole reason democracy is preferable to autocracy is that voters are all perfectly enlightened decision-makers themselves. In fact, if there’s one thing critics of democracy are right about, it’s that voters don’t typically tend to be very well-informed about policy at all. As Brennan writes:
Sixty-five years ago, researchers began studying what voters know and how they think. The results are depressing. The median voter knows who the president is, but not much else. Voters don’t know which party controls Congress, who their representatives are, what new laws were passed, what the unemployment rate is or what’s happening to the economy. In the 2000 U.S. presidential election, while slightly more than half of voters knew that Al Gore was more liberal than George W. Bush, they did not seem to know what the word “liberal” means. Significantly less than half knew that Gore was more supportive of abortion rights, was more supportive of welfare-state programs, favored a higher degree of aid to blacks or was more supportive of environmental regulation.
Of course, I suspect people nowadays would have a much clearer understanding of the difference between liberalism and conservatism, so this specific argument might not be as strong today as it was 25 years ago. But still, the general point remains: When it comes to the finer points of policy, most voters aren’t very knowledgeable. Sure, they may have some policy preferences, but those preferences will tend to be very general broad-stroke kinds of desires without much theoretical grounding – or if they do have strong opinions on specific policies, their attention will typically be limited to just a few pet issues and not much else. And really, this shouldn’t surprise us; after all, most people are working full-time jobs and raising families and so on, so we shouldn’t expect them to have the time, the resources, or the inclination to learn all the nuances of every political issue and become elite policy experts on top of everything else. They simply don’t have the means to develop the kind of fine-grained expertise that they’d need to be able to make informed decisions on every single detail of every government policy themselves – which is why, whenever a particular jurisdiction does attempt to govern itself by putting everything directly in its citizens’ hands and having every issue decided by voter referendums (the so-called “direct democracy” approach), it tends to produce the kind of political dysfunction that has become all too familiar in places like, for instance, California. As Zakaria explains:
In many ways California is the poster child for direct democracy, having experimented most comprehensively with referendums on scores of issues, big and small. [But] if California truly is the wave of tomorrow, then we have seen the future, and it does not work.
[…]
No one disputes the facts. In the 1950s and early 1960s California had an enviable reputation as one of the best-run states in the union. “No. 1 State,” said a Newsweek cover from 1962, “Booming, Beautiful California.” Time agreed; its title read “California: A State of Excitement.” There was much to be excited about. The state’s economy was booming, and with moderate tax rates it had built extraordinary and expanding public resources, from advanced highways and irrigation systems to well-run police forces to breathtaking parks and zoos. The state’s crowning achievement was its world-class public-education system, which started in kindergarten and ended with the prestigious University of California campuses. Californians seemed blissfully content, annoying intellectuals in the cold, damp Northeast to no end. (“All those dumb, happy people,” said Woody Allen.) But for the rest of the world, sunny, prosperous, well-managed California symbolized the dazzling promise of the United States. California was the American dream.
California today is another story. In the spring of 2001 California was plunged into blackouts and electricity shortages that reminded me of India. (In fact they were worse than anything I experienced growing up.) Sure, California is home to Silicon Valley and Hollywood, two of the greatest centers of American industry and creativity. But that is its private sector. Its public sector—indeed its public life—is an unmitigated mess. The state and local governments struggle each year to avoid fiscal crises. Highways that were once models for the world are literally falling apart, and traffic has become both a nightmare and an expensive drag on productivity. In the 1950s California spent 22 percent of its budget on infrastructure; today it spends barely 5 percent. Public parks now survive only by charging hefty admission fees. The state’s education system has collapsed; its schools now rank toward the bottom in the nation when measured by spending or test scores or student skills.
The University of California system has not built a new campus in three decades, despite the fact that the state’s population has doubled. And yet, as the veteran journalist Peter Schrag points out in his penetrating book Paradise Lost, it has had to build twenty new prisons in the last two decades. In 1993 the Economist concluded that the state’s “whole system of government was a shambles.” Three years later the bipartisan Business-Higher Education Forum, which includes corporate executives and education leaders, reported that without major change “the quality of life will continue to decline in California with increasing transportation problems, rising crime and social unrest, and continued out-migration of business.” And this was written at a time when the U.S. economy was at a thirty-year high. The best evidence of California’s dismal condition is that on this one issue, both right and left agree. Echoing Schrag, who is a liberal, conservative commentator Fred Barnes explained in a cover story for the Weekly Standard that the state’s government had stopped working: “California has lost its lofty position as the state universally envied for its effective government, top-notch schools, and auto-friendly transportation system.”
Not all California’s problems can be traced to its experiment with referendums and initiatives. But much of the state’s mess is a result of its extreme form of open, non-hierarchical, non-party based, initiative-friendly democracy. California has produced a political system that is as close to anarchy as any civilized society has seen. Consider the effects of the recent spate of initiatives. After Proposition 13 passed [in 1978, limiting the state’s ability to raise taxes], the state passed dozens of other initiatives, among them Proposition 4 (which limited the growth of state spending to a certain percentage), Proposition 62 (which requires supermajorities in order to raise taxes), Proposition 98 (which requires that 40 percent of the state budget be spent on education), and Proposition 218 (which applied to local fees and taxes the restrictions of Proposition 13). Yet the state legislature has no power over funds either, since it is mandated to spend them as referendums and federal law require. Today 85 percent of the California state budget is outside of the legislature’s or the governor’s control—a situation unique in the United States and probably the world. The vast majority of the state’s budget is “pre-assigned.” The legislature squabbles over the remaining 15 percent. In California today real power resides nowhere. It has dissipated into the atmosphere, since most government is made via abstract laws and formulas. The hope appears to be that government can be run, in Schrag’s words, like “a Newtonian machine immune to any significant control or judgment by elected representatives.” This not only turns democracy into some fun-house contortion of the ideal but makes it nearly impossible to govern at all.
Even with referendums dictating what they do, politicians are still required to translate these airy mandates into reality. The initiatives have simply made this process dysfunctional by giving politicians responsibility but no power. Experiences with referendums in places far from California confirm that this is not a problem unique to the golden state. The Connecticut Conference of Municipalities (CCM) determined that 72 of the state’s 169 municipalities held referendums to approve the budgets proposed by their city executive. Of those 72, 52 had to hold additional referendums, often more than once, because the proposed budgets were rejected. Most of these referendums required local officials to cut taxes and yet improve services. “You have to be a magician to accomplish those dual purposes,” complained James J. Finley, the CCM’s legislative services director.
The flurry of ever-increasing commandments from the people has created a jumble of laws, often contradictory, without any of the debate, deliberation, and compromise that characterize legislation. The stark “up or down” nature of initiatives does not allow for much nuance or accommodation with reality. If in a given year it would be more sensible to spend 36 percent of the California budget on schools instead of the mandated 40 percent, too bad.
As another unintended consequence, the initiative movement has also broken the logic of accountability that once existed between politicians and public policy. By creating a Byzantine assortment of restrictions over the process of taxing and spending, the voters of California have obscured their own ability to judge the performance of their politicians. When funds run out for a particular program, was it that the legislature allocated too little money, or that local communities spent too much, or that statewide initiatives tied their hands? You can imagine the orgy of buck-passing that must ensue, given California’s 58 counties, 447 cities, and over 5,000 special districts. Lack of power and responsibility inevitably produces a lack of respect. California’s state government and its legislature have among the lowest public-approval ratings among American states. Having thoroughly emasculated their elected leaders, Californians are shocked that they do so little about the state’s problems.
The moral of the California story, in short, is that simply handing the reins of government directly to the voters and saying “Good luck, you all figure it out” is not a recipe for success. Ordinary citizens just aren’t equipped to fully understand all the planning and budgeting implications of every policy proposal and how all the various moving parts of government fit together; the best they can do is express their general preferences for which direction they want government policies to go in (or at least, that’s the most that can reasonably be asked of them).
So does this mean that democracy itself is a fatally flawed idea? Not at all. True, direct democracy may not be workable on a large scale – but direct democracy isn’t the only kind of democracy that exists. The fact that it has so many difficulties is exactly why we also have the concept of representative democracy, in which voters don’t vote directly on every individual issue themselves, but instead vote for people who can in fact make it their full-time job to gain a reasonable understanding of all the issues and then act accordingly on behalf of the people they represent. By delegating political decision-making responsibility to elected representatives in this way, voters can get some of the same benefits that having benevolent autocrats in charge would supposedly bring – i.e. having leaders who know all the ins and outs of government, have teams of budget experts and policy wonks who can help them make more informed decisions, and so on – but with the key difference being that these elected leaders will still ultimately be accountable to the voters themselves instead of just having the power to do whatever they want. Their role can simply be to implement whatever policies they think will produce the kind of outcomes their constituents will approve of (or at least won’t disapprove of) – and then, depending on whether or not they’re successful in that task, voters can either re-elect them or replace them with someone else. As for the voters themselves, they won’t be expected to understand all the fine-grained details of every single political issue – but the benefit of this system is that they won’t have to understand all those details; their most important job will simply be to notice whether or not they’re satisfied with the way things are going (relative to the alternatives), and then register those sentiments by voting. And that’s one area where they really will have more fine-grained knowledge than their political leaders will; they might not be able to understand all the minutiae of the political process better than their representatives do, but they will have a more detailed understanding of how satisfied they are personally with the current state of things. As Brennan writes (citing Thomas Christiano):
Christiano [describes representative democracy as] a sort of halfway point between [direct] democracy and [rule by enlightened elites]. He begins by noting that it’s unrealistic to expect voters to have sufficient social scientific knowledge to make good choices at the polls:
It is hard to see how citizens can satisfy any even moderate standards for beliefs about how best to achieve their political aims. Knowledge of means requires an immense amount of social science and knowledge of particular facts. For citizens to have this kind of knowledge generally would require that we abandon the division of labor in society.
Christiano believes the typical citizen is competent to deliberate about and choose the appropriate aims of government. For citizens to know the best means for achieving those aims, however, they would have to become experts in sociology, economics, and political science. They are not competent to make such determinations. Christiano’s proposed solution is to create a division of political labor: “Citizens are charged with the task of defining the aims the society is to pursue while legislators are charged with the tasks of implementing and devising the means to those aims through the making of legislation.”
Christiano argues, and I agree, that this regime qualifies as a type of democracy. Fundamental political power is still spread evenly among citizens. Under Christiano’s proposal, the legislators have only instrumental authority. They are administrators more than leaders.
As an analogy, consider the relationship of a yacht owner to the yacht’s captain. The owner tells the captain where to go, but the captain does the actual sailing. While the captain knows how to steer the boat and the owner does not, the owner is in charge. The owner can fire the captain, and as such the captain serves the owner. Christiano might contend that in the same way, under his proposal the legislators serve the democratic electorate. While the legislators set laws that the democratic body must follow, the democratic body told the legislator what direction these laws must go in.
And this kind of division of political labor makes sense, for the same reason that it makes sense for, say, a corporate CEO to delegate certain tasks to their subordinates. A CEO obviously can’t do every task that their company is responsible for all by themselves – developing new products, crunching payroll numbers, responding to customer service calls, etc. – so the most sensible thing for them to do is to delegate those responsibilities to people who can specialize in those particular tasks. Then the CEO can observe the results, and if they’re unsatisfactory, can replace those specialists with more effective ones. It’s an efficient system precisely because it doesn’t put every little responsibility in the CEO’s hands. And the same is true for democratic governance. If we tried to make voters responsible for directly handling every single aspect of governance themselves, they wouldn’t have time for anything else – jobs, families, etc. – that society requires in order to function. By delegating matters of governance to designated specialists (i.e. elected representatives), voters can free themselves up to specialize in different tasks of their own in the private sector – and together, everyone can achieve much better results than they ever could have without this division of labor. Astra Taylor puts it this way:
Most of what government does is a mystery to the average person and no one, not even the most astute legal experts, comprehends all the innumerable and intricate laws that bind us. What’s more, the principles of free choice and citizen consent could quickly become unwieldy if taken to an extreme. It’s unclear, for example, what percentage of decisions affecting our communities we should be expected to participate in. (I, for one, don’t mind letting people more knowledgeable than I determine how best to provide electricity or decide which potholes to fill.)
This is another way of articulating the difference between direct and representative democracy: by choosing representatives, we are, in a sense, choosing not to choose (which is something we do increasingly these days, outsourcing more and more of our daily decision making to, for example, recommendation engines or GPS).
Just as a quick side note, another noteworthy implication of this principle is that in some cases, a particular task or project might be so complicated or technical that even the representatives whose whole job is to work on policy might not fully understand all the relevant details – so in such cases, it may be justifiable for those representatives to themselves appoint people who can specialize even more narrowly in some specific area (i.e. bureaucrats, judges, etc.), and delegate those esoteric tasks to them. These appointees, while not directly accountable to voters, will be accountable to elected officials in much the same way that those officials are themselves accountable to voters; the elected officials will provide them with a particular set of aims to be pursued, then leave it to them to figure out how exactly to implement those aims – and depending on whether they’re able to adequately do so, they may then be allowed to either retain their position, or else be replaced by someone more capable. They’ll still be accountable to voters in the ultimate sense, but in practical terms this accountability will be one step removed; instead of having their performance evaluated and judged directly by voters, they’ll be evaluated and judged by elected representatives, who will then themselves be judged by voters based on their performance. The bureaucrats, in other words, will be acting as sort of meta-representatives – and if they hire staffers of their own to handle the finer details of their tasks, then those staffers will be sort of meta-meta-representatives, and so on. By delegating and sub-delegating tasks based on how technical and/or esoteric they are, the whole system will ultimately function more effectively – again, much like how any large company or organization works better when there are different levels of employees handling different tasks. But won’t this kind of arrangement mean that voters will have a harder time evaluating the actions of the bureaucrats furthest down the chain and holding them accountable? Yes, it will – but as Garett Jones points out, that’s not necessarily a bad thing, because the further down the chain the bureaucrats are, the more likely it is that the government officials overseeing them will be in a better position to hold them accountable than ordinary voters will be:
One of the folk arguments for electing government officials is “accountability.” Citizens, the story goes, need to be able to hold elected officials accountable, and one way to hold them accountable is to retain the right to fire and replace them. But in the case of [for instance] city treasurers, it’s easy to measure (much of) the job the treasurer is doing: just look at the interest rate on the city debt. But even when such a key quality index is so easy to measure, voting citizens do an awful job of keeping the city treasurer accountable. The better option is to let other city officials—the elected mayor, the city council, or maybe the appointed city manager—pick a treasurer and then keep an eye on the job she’s doing. Those city officials will surely notice if the treasurer is saving the city over $200,000 a year, even if voters are too preoccupied watching cat videos to do the job.
Again, this is one of those cases where it can make more sense for voters to delegate a particular task to their representatives than to try and handle it themselves. Overseeing bureaucrats like city treasurers isn’t something that most voters are especially well-equipped for, so they can simply elect someone who is better equipped to do so on their behalf. So then how can we tell which tasks require less of this kind of direct accountability to voters, and which require more? Jones answers this by citing Eric Maskin and Jean Tirole:
Nonaccountability [or rather, accountability that’s a step or two removed from the electorate] is most desirable when (a) the electorate is poorly informed about the optimal action, (b) acquiring decision-relevant information is costly, and (c) feedback about the quality of decisions is slow. Therefore, technical decisions, in particular, may be best allocated to judges or appointed bureaucrats.
–Eric Maskin and Jean Tirole
[In other words,] when it’s crucial to get the technical details right and when the policy debate is less about values and more about facts and competent execution, that’s likely a good opportunity to delegate power to unelected bureaucrats.
The distinction he makes here between policy debates based on values and policy debates based on technical details brings us back to one of our key points, which is that when we talk about voters knowing enough to be able to judge whether their representatives are adequately representing them, that’s not necessarily the same thing as saying that every voter will always know exactly what they want on every issue. After all, as Brennan pointed out earlier, a lot of the time voters won’t even have certain issues on their radar at all, so they’ll be functionally indifferent to competing candidates’ stances on them. If there are two candidates running for office, for instance, and the main difference between them is just their position on some minor regulatory question or obscure tax issue, most voters probably won’t put all that much effort into developing an informed opinion and picking a side. That being said, the flip side of this is that if there’s a situation in which one of the candidates wants to, say, triple everyone’s taxes, or criminalize homosexuality, or ban all guns (or, for that matter, forcibly convert the entire country to their new religion, or destroy everyone’s savings so they can print new bank notes that are all multiples of nine), then that’s the kind of thing that voters absolutely will take notice of and respond to. Voters might not always know exactly which set of policies they’d like best, but they don’t have to know a whole lot to know what they don’t like and won’t put up with – and fundamentally, that’s what representative democracy is really all about. As David Frum explains in a podcast interview, it’s not so much a matter of trying to ascertain voters’ exact preferences on every issue and then do exactly what they want (since that will be effectively impossible in many cases); it’s more about offering them different sets of competing policies, seeing which ones they object to, and then sticking with the ones they seem to be okay with:
DAVID FRUM: I don’t think democracy is about finding out what the people want, because there are too many of them; it’s meaningless. But what you can find out is what the people will consent to. The parties are competing for consent – not for direction, but for consent.
[…]
EZRA KLEIN: Say a word here on the distinction between what they want and consent. So consent is what they will refrain from vetoing?
DAVID FRUM: Yes. Yes. One of the things that populists say is, “I’m going to go out on the hustings, and I’m going to say to the people, ‘Do you want this? Do you want that?’ The people want this.” I think that’s just an illusion. What happens is, politically active groups compete to offer solutions – which they sometimes believe in for objective reasons, and sometimes… I don’t think anybody ever reasoned themself into supporting ethanol. But you have your ideas about what should happen, and then you go out and you try to mobilize people to give you permission to do it. But that’s all you get; you’re not actually executing some will of the masses, because you can’t aggregate the preferences of tens of millions of people in a meaningful way. What you can do is aggregate their permission – that they give you a mandate, go ahead and try it; if it seems to work, you’ll get another four years; if it doesn’t seem to work, you don’t get the four years.
In other words, voters might not always be willing or able to formulate entire political platforms all by themselves – but they don’t have to; that’s not their role. In a representative democracy, their job is just to judge the results of their elected representatives’ policies, and if they don’t like them, to weed them out in a kind of Darwinian selection process. Critics of this process might argue that relying on voters’ judgments in this way can never work because their ignorance will largely prevent them from being able to know whether they’re really getting good results from their political leaders or not – but if anything, this is more of a problem with non-democratic systems than with democratic ones. After all, under more autocratic forms of government, there’s typically little to no reason for ordinary citizens to pay much attention to politics in the first place (since they can’t have any effect on it regardless), so autocratic leaders can more easily take advantage of their citizens’ relative ignorance of public affairs and act in ways that harm their interests without them ever realizing it. By contrast, the handy thing about having a democratic system, in which candidates actually have to compete with each other to win people’s votes, is that this competition itself serves as a mechanism for bringing to light unpopular policies that voters might otherwise be unaware of. If a candidate running for office happens to have some kind of misalignment with voters’ preferences – i.e. if they have some secret fault (ideological or otherwise) that voters are largely unaware of, but which the opposing party knows voters would hate if they were more aware of it – then it’s all but guaranteed that that opposing party will do everything in its power to bring it to voters’ attention, making it a central campaign issue and hammering on it until voters who might otherwise have been ignorant of it have had it firmly ingrained in their consciousness.
Of course, none of this guarantees that no one with bad ideas will ever be able to win political power, or that the will of the people will never be violated; that’s one thing democracy has in common with autocracy and every other form of government. But what democracy has that sets it apart from more autocratic systems is a clear-cut, well-defined process for ensuring that if the will of the people ever is violated, the offending parties can actually be removed from office and swiftly replaced with others who can implement better policies. In other words, democracy offers a way for populations to correct mistakes of governance, via immediate feedback, rather than just having to live with them indefinitely until the autocrat responsible for them dies of old age. And while defenders of autocracy may argue that dissatisfied citizens under an autocratic system can easily do the same thing if their own all-powerful leader turns out to be incompetent or malevolent, by simply overthrowing the bad leader and installing a good one in their place, this argument relies on a pretty loose interpretation of words like “easily” and “simply.” Judging from the way things have actually worked in autocracies throughout history, this kind of method for replacing flawed leaders is, to say the least, far more unstable – and usually far bloodier – than the democratic alternative. And not only that, it’s not even designed to actually select for good leaders whose policies are aligned with the population’s preferences; all it’s really designed to select for are those who are most powerful and best at overthrowing governments. Maybe the whole process could theoretically go more smoothly if it could somehow be streamlined into a kind of universally-recognized formal mechanism through which the population could simply inform bad rulers that they were no longer wanted and then those leaders could peacefully step aside to be replaced – but as we discussed earlier, that’s exactly what elections are; whenever an elected leader is failing to adequately represent their constituents’ interests, those constituents can decree in a systematic, coordinated manner that they no longer wish to have that leader representing them, and can oust them in favor of someone who better reflects their preferences. Elections are essentially just a way of overthrowing political leaders in a way that’s orderly, efficient, and nonviolent.
Earlier I quoted the first half of a comment from fox-mcleod talking about how autocrats’ incentives are structured in such a way that they’re almost always forced to favor the interests of their powerful cronies over those of the broader populace. Here’s the second half of that comment:
If you’re going to be a “benevolent dictator” […] any programs that benefit the common person above the socially powerful will always come last in your priorities or your powerful supporters will overthrow you and replace you with someone who puts them first. So it turns out as dictator, you don’t have much choice.
But what if we expect our rulers to get overthrown and instead write it into the rules of the government that every 4-8 years it happens automatically and the everyday people are the ones who peacefully overthrow the rulers?
Well, that’s called democracy. It’s totally unnecessary for the people to make the best choice. What’s necessary is that in general, the power to decide who stays in power be diffused over a large number of people. Why? Because it totally rewrites the order of priorities.
Now you have a ruler who prioritizes education, building roads that everyday people use, keeping people productive and happy.
Furthermore, nations who prioritize those things tend to be richer and stronger in the long term. Why? Because it turns out education is good and science is important and culture is powerful. It turns out what’s good for the population is better for the country as a whole even though it’s bad for a dictator.
For more on the basic principles behind why democracies are so much more successful than other forms of governance, see CGP Grey’s rules for rulers.
Again, democracy doesn’t always guarantee perfect outcomes; no system of government can. What it does provide, though, is a systematic mechanism through which we can try out different solutions for our various societal problems, explore different approaches, see which ones work, and then select for those while rejecting the ones that don’t. In this sense, as Michael Shermer explains, democracy is essentially government-by-scientific-method – and that’s its greatest strength:
Democratic elections […] are scientific experiments: every couple of years you carefully alter the variables with an election and observe the results. Many of the founding fathers of the United States, in fact, were scientists who deliberately adapted the method of data gathering, hypothesis testing, and theory formation to their nation building. Their understanding of the provisional nature of findings led them to form a social system wherein empiricism was the centerpiece of a functional polity. The new government was like a scientific laboratory conducting a series of experiments year by year, state by state. The point was not to promote this or that political system, but to set up a system whereby people could experiment to see what works. That is the principle of empiricism applied to the social world.
As Thomas Jefferson wrote in 1804: “No experiment can be more interesting than that we are now trying, and which we trust will end in establishing the fact, that man may be governed by reason and truth.”
XXIII.
Now, in light of all this, defenders of autocracy will sometimes concede that okay, yes, maybe democracy does allow for more political churn than autocracy, and maybe it is more difficult for people living under autocratic rule to reject their leaders’ decisions or oust them if they disapprove of the job they’re doing; but then they’ll argue that this is actually a point in autocracy’s favor – that the fact that autocrats can never be questioned prevents disruptive upheaval and creates social stability. (In fact, in many cases they’ll argue that this is the main benefit of autocracy.) And in certain short-term situations, this argument may even have some validity to it; after all, if there’s one thing autocracies excel at, it’s clamping down on any internal dissidence that might threaten to disrupt the status quo. But while these kinds of efforts to constrain political volatility can sometimes succeed in creating some measure of short-term stability, over the long term they tend to backfire and end up producing the opposite of stability; as Nassim Nicholas Taleb and Mark Blyth explain:
When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
Consider that Italy, with its much-maligned “cabinet instability,” is economically and politically stable despite having had more than 60 governments since World War II (indeed, one may say Italy’s stability is because of these switches of government).
[…]
In contrast, consider Iran and Iraq. Mohammad Reza Shah Pahlavi and Saddam Hussein both constrained volatility by any means necessary. In Iran, when the shah was toppled, the shift of power to Ayatollah Ruhollah Khomeini was a huge, unforeseeable jump. After the fact, analysts could construct convincing accounts about how killing Iranian Communists, driving the left into exile, demobilizing the democratic opposition, and driving all dissent into the mosque had made Khomeini’s rise inevitable. In Iraq, the United States removed the lid and was actually surprised to find that the regime did not jump from hyperconstraint to something like France. But this was impossible to predict ahead of time due to the nature of the system itself. What can be said, however, is that the more constrained the volatility, the bigger the regime jump is likely to be. From the French Revolution to the triumph of the Bolsheviks, history is replete with such examples, and yet somehow humans remain unable to process what they mean.
[…]
Rather than subsidize and praise as a “force for stability” every tin-pot dictator on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be out in the open, as Italy and [other democracies] show in different ways, creates the stability of small jumps.
Alexander makes a related point in his critique of the neoreactionary case for monarchy, which rests on the premise that a sufficiently secure ruler faces none of these pressures in the first place:

Much of the Reactionary argument for traditional monarchy hinges on monarchs being secure. In non-monarchies, leaders must optimize for maintaining their position against challengers. In democracies, this means winning elections by pandering to the people; in dictatorships, it means avoiding revolutions and coups by oppressing the people. In monarchies, elections don’t happen and revolts are unthinkable. A monarch can ignore their own position and optimize for improving the country. See the entries on demotism and monarchy here for further Reactionary development of these arguments.
Such a formulation need not depend on the monarch’s altruism: witness the parable of Fnargl. A truly self-interested monarch, if sufficiently secure, would funnel off a small portion of taxes to himself, but otherwise do everything possible to make his country rich and peaceful.
As [Mencius] Moldbug puts it:
Hitler and Stalin are abortions of the democratic era – cases of what Jacob Talmon called totalitarian democracy. This is easily seen in their unprecedented efforts to control public opinion, through both propaganda and violence. Elizabeth’s legitimacy was a function of her identity – it could be removed only by killing her. Her regime was certainly not the stablest government in history, and nor was it entirely free from propaganda, but she had no need to terrorize her subjects into supporting her.
But some of my smarter readers may notice that “your power can only be removed by killing you” does not actually make you more secure. It just makes security a lot more important than if insecurity meant you’d be voted out and forced to retire to your country villa.
Let’s review how Elizabeth I came to the throne. Her grandfather, Henry VII, had won the 15th century Wars of the Roses, killing all other contenders and seizing the English throne. He survived several rebellions, including the Cornish Rebellion of 1497, and lived to pass the throne to Elizabeth’s father Henry VIII, who passed the throne to his son Edward VI, who after surviving the Prayer Book Rebellion and Kett’s Rebellion, named Elizabeth’s cousin Lady Jane Grey as heir to the throne. Elizabeth’s half-sister, Mary, raised an army, captured Lady Jane, and eventually executed her, seizing the throne for herself. An influential nobleman, Thomas Wyatt, raised another army trying to depose Mary and put Elizabeth on the throne. He was defeated and executed, and Elizabeth was thrown in the Tower of London as a traitor. Eventually Mary changed her mind and restored Elizabeth’s place on the line of succession before dying, but Elizabeth’s somethingth cousin, Mary Queen of Scots, also made a bid for the throne, got the support of the French, but was executed before she could do further damage.
Actual monarchies are less like the Reactionaries’ idealized view in which revolt is unthinkable, and more like the Greek story of Damocles – in which a courtier remarks how nice it must be to be the king, and the king forces him to sit on the throne with a sword suspended above his head by a single thread. The king’s lesson – that monarchs are well aware of how tenuous their survival is – is one Reactionaries would do well to learn.
This is true not just of England and Greece, but of monarchies the world over. China’s monarchs claimed “the mandate of Heaven”, but Wikipedia’s List of Rebellions in China serves as instructive (albeit voluminous) reading. Not for nothing does the Romance of the Three Kingdoms begin by saying:
An empire long united, must divide; an empire long divided, must unite. This has been so since antiquity.
Brewitt-Taylor’s translation is even more succinct:
Empires wax and wane; states cleave asunder and coalesce.
Alexander also adds that this instability isn’t just an issue for the autocratic leaders themselves; under these kinds of regimes, it’s typically the entire country that has to reckon with instability and insecurity:
Reactionaries often claim that traditional monarchies are stable and secure, compared to the chaos and constant danger of life in a democracy. Michael Anissimov quotes approvingly a passage by Stefan Zweig:
In his autobiography The World of Yesterday (1942), the writer Stefan Zweig described the Habsburg Empire in which he grew up as ‘a world of security’:
Everything in our almost thousand-year-old Austrian monarchy seemed based on permanency, and the State itself was the chief guarantor of this stability . . . Our currency the Austrian crown, circulated in bright gold pieces, an assurance of its immutability. Everyone knew how much he possessed or what he was entitled to, what was permitted and what was forbidden . . . In this vast empire everything stood firmly and immovably in its appointed place, and at its head was the aged emperor; and were he to die, one knew (or believed) another would come to take his place, and nothing would change in the well-regulated order. No one thought of wars, of revolutions, or revolts. All that was radical, all violence, seemed impossible in an age of reason.
Michael’s comment: “[This] does a good job capturing the flavor and stability of the Austrian monarchy…it’s very interesting to read this in a world where America and Europe are characterized by political and economic instability and ethnic strife.”
I am glad Mr. Zweig (Professor Zweig? Baron Zweig?) found his life in Austria to be very secure. But we can’t just take him at his word.
Let’s consider the most recent period of Habsburg Austrian history – 1800 to 1918 – the period that Zweig and the elders he talked to in his youth might have experienced.
Habsburg Holy Roman Austria was conquered by Napoleon in 1805, forced to dissolve as a political entity in 1806, replaced with the Austrian Empire, itself conquered again by Napoleon in 1809, refounded in 1815 as a repressive police state under the gratifyingly evil-sounding Klemens von Metternich, suffered 11 simultaneous revolutions and was almost destroyed in 1848, had its constitution thrown out and replaced with a totally different version in 1860, dissolved entirely into the fledgling Austro-Hungarian Empire in 1867, lost control of Italy and parts of Germany to revolts in the 1860s-1880s, started a World War in 1914, and was completely dissolved in 1918, by which period the reigning emperor’s wife, brother, son, and nephew/heir had all been assassinated.
Meanwhile, in Progressive Britain during the same period, people were mostly sitting around drinking tea.
This is not a historical accident. As discussed above, monarchies have traditionally been rife with dynastic disputes, succession squabbles, pretenders to the throne, popular rebellions, noble rebellions, impulsive reorganizations of the machinery of state, and bloody foreign wars of conquest.
[Q]: And democracies are more stable?
Yes, yes, oh God yes.
Imagine the US presidency as a dynasty, the Line of Washington. The Line of Washington has currently undergone forty-three dynastic successions without a single violent dispute. As far as I know, this is unprecedented among dynasties – unless it be the dynasty of Japanese Emperors, who managed the feat only after their power was made strictly ceremonial. The closest we’ve ever come to any kind of squabble over who should be President was Bush vs. Gore, which was decided within a month in a court case, which both sides accepted amicably.
To an observer from the medieval or Renaissance world of monarchies and empires, the stability of democracies would seem utterly supernatural. Imagine telling Queen Elizabeth I – who, as we saw above, suffered six rebellions just in her family’s two generations of rule up to that point – that Britain has been three hundred years without a non-colonial-related civil war. She would think either that you were putting her on, or that God Himself had sent a host of angels to personally maintain order.
Democracies are vulnerable to one kind of conflict – the regional secession. This is responsible for the only (!) major rebellion in the United States’ 250 year (!) history, and might be a good category to place Britain’s various Irish troubles. But the long-time scourge of every single large nation up to about 1800, the power struggle? Totally gone. I don’t think moderns are sufficiently able to appreciate how big a deal this is. It would be like learning that in the year 2075, no one even remembers that politicians used to sometimes lie or make false promises.
How do democracies manage this feat? It seems to involve three things:
First, there is a simple, unambiguous, and repeatable decision procedure for determining who the leader is – hold an election. This removes the possibility of competing claims of legitimacy.
Second, would-be rebels have an outlet for their dissatisfaction: organize a campaign and try to throw out the ruling party. This is both more likely to succeed and less likely to leave the country a smoking wasteland than the old-fashioned method of raising an army and trying to kill the king and everyone who supports him.
Third, it ensures that the leadership always has popular support, and so popular revolts would be superfluous.
If you remember nothing else about the superiority of democracies to other forms of government, remember the fact that in three years, we will have a change of leadership and almost no one is stocking up on canned goods to prepare for the inevitable civil war.
The benefits of democracy aren’t just limited to internal stability and security, either. One big advantage to being able to change political leaders every few years is that it can help make foreign relations more stable as well. It might not seem immediately obvious why this should be the case; after all, wouldn’t it seem like having the same person in charge for decades would make foreign relations more predictable (and therefore more stable) since other countries would always know exactly who they were dealing with? In reality, though, that kind of constancy can be a double-edged sword; if the autocrat in charge happens to be a belligerent jingoist, for example, and there’s no immediate prospect of them leaving office, other countries might feel compelled to take preemptive action against them in order to protect their own long-term interests – whereas if the same leader had been democratically elected, those rival countries might be content to just sit tight and wait for them to be ousted in the next election. Similarly, if there’s an autocratic political leader whom rival countries perceive to be weak or ineffectual, they might be tempted to seize the opportunity to try and expand their own power by making an aggressive move – whereas if they knew that the leader was going to be replaced in the next election, they might think better of pressing their luck in this way. Foreign policy analysts will often talk about the need for countries to “maintain credibility” in matters of national defense – meaning that if a country’s interests are threatened in some way, it has to be able to demonstrate its willingness to defend those interests, or else other countries will walk all over it. But a useful feature of democracy is that even if a particular country has seemingly lost some of its credibility in this sense, by (say) declining to respond to an act of aggression by some rival country, it might be able to re-establish it in short order simply by electing a new commander-in-chief who, as far as anyone knows, might have a more sensitive trigger (or conversely, reverse course from the reckless actions of an overly aggressive commander-in-chief by replacing them with someone more judicious). And because democratic countries do have this particular ace up their sleeve, it gives them more freedom to do things like (say) strategically decline certain foreign policy entanglements if they think that getting involved will result in some kind of Vietnam- or Afghanistan-style quagmire – whereas autocratic rulers will often feel more pressure to commit to ill-advised foreign policy actions and then never reverse course on them lest they lose some measure of their supposed credibility (see, for instance, Vladimir Putin’s recent invasion of Ukraine).
The aforementioned Sword of Damocles analogy is a good one for illustrating how much pressure autocrats constantly feel to maintain their hold on power, both from challengers at home and abroad. Without the tool of elections to serve as a pressure-release valve for defusing discontent and resolving crises with a simple change of leadership, autocratic governments must instead rely entirely on a strategy of trying to heavy-handedly suppress any hint of dissent or defiance before it can ever begin to challenge their rule in the first place. This is why autocracies have earned such a reputation for paranoia and overreaction; and it’s why they can be such stressful and unpleasant (and often dangerous) regimes for their citizens to live under. I probably don’t need to give a whole big historical overview here of the endless examples of autocracies perpetrating horrendous acts of aggression both against foreign rivals and against their own populations, because such behavior is so well known that it has become practically synonymous with the terms “autocracy” and “authoritarianism” (although I’ll once again recommend Alexander’s post if you’re looking for a comprehensive rundown). Suffice it to say, though, that the reputation for tyranny is well-deserved; autocracies are more prone to violence and oppression by practically every measure. As Pinker writes:
Are democracies less likely to get into militarized disputes, all else held constant? [A study by Bruce Russett and John Oneal found that] the answer was a clear yes. When the less democratic member of a pair [of rival countries] was a full autocracy, it doubled the chance that they would have a quarrel compared to an average pair of at-risk countries. When both countries were fully democratic, the chance of a dispute fell by more than half.
[In a similar vein,] the political scientist Bethany Lacina has found that civil wars in democracies have fewer than half the battle deaths of civil wars in nondemocracies, holding the usual variables constant.
[And when it comes to incidents of governments deliberately killing their own people – so-called “democides” – it’s the same story. Rudolph] Rummel was one of the first advocates of the Democratic Peace theory, which he argues applied to democides even more than to wars. “At the extremes of Power,” Rummel writes, “totalitarian communist governments slaughter their people by the tens of millions; in contrast, many democracies can barely bring themselves to execute even serial murderers.” Democracies commit fewer democides because their form of governance, by definition, is committed to inclusive and nonviolent means of resolving conflicts. More important, the power of a democratic government is restricted by a tangle of institutional restraints, so a leader can’t just mobilize armies and militias on a whim to fan out over the country and start killing massive numbers of citizens. By performing a set of regressions on his dataset of 20th-century regimes, Rummel showed that the occurrence of democide correlates with a lack of democracy, even holding constant the countries’ ethnic diversity, wealth, level of development, population density, and culture (African, Asian, Latin American, Muslim, Anglo, and so on). The lessons, he writes, are clear: “The problem is Power. The solution is democracy. The course of action is to foster freedom.”
[Research from Barbara] Harff vindicated Rummel’s insistence that democracy is a key factor in preventing genocides. From 1955 to 2008 autocracies were three and a half times more likely to commit genocides than were full or partial democracies, holding everything else constant. This represents a hat trick for democracy: democracies are less likely to wage interstate wars, to have large-scale civil wars, and to commit genocides.
Here’s one great thing about democracy: […] democracies don’t engage in widespread slaughter of their own voting citizens. Government-led massacres are exceptionally rare within democracies. Economist William Easterly of New York University oversaw the creation of a new database on this topic with data from around the world, spanning 1820 to 1998. A key finding was that “in general, high democracy appears to be the single most important factor in avoiding large magnitudes of mass killings, as the highest quartile of the sample in democracy accounts for only 0.1% percent of all the killings.”
That’s a strong argument for democracy, and it’s one that I believe in. It has a twin argument, also based on over a century of real evidence: […] democracies don’t let their citizens die in famines. Every country in the twentieth century that you can think of that experienced massive, rapid increases in death from hunger and starvation was something other than a functioning democracy. Maybe it was a dictatorship, or quite likely it was colonized by some other country—and in a few cases, it may have had a democratic government on paper, but it lacked a capable government, one up to the task of providing rapid services for its citizens. But for more than a century, widespread, rapid death from hunger has never happened in a country with a functioning government where the citizens had the right to choose their government’s leaders.
It was the Nobel-winning economist Amartya Sen who made this bold claim, most famously and quite sweepingly in his excellent 1999 book, Development as Freedom: “No famine has ever taken place in the history of the world in a functioning democracy.” Other researchers have tried to beat up on Sen’s finding, but they have failed. Of course, one wonders whether this is just a correlation, a repeated pattern that might be caused by some other factor like prosperity or low corruption. Here’s one test, one that Sen himself used: compare what happens in a nation just before versus just after that nation becomes a democracy. India’s last famine—the Bengal famine—occurred in 1943. India became an independent nation in 1947. After the end of British rule that year, India was still poor and its government was riddled with corruption. But Indians never again experienced widespread death from famine. Holding the country constant and changing the type of government from a colonial outpost to a new democracy was, it appears, all it took to save lives.
[…]
Of course, [with both of these claims], there are caveats and provisos in the underlying research. In the case of famines, here’s one recurring question: How many deaths from hunger in how short a time does it take to count as a famine? But the overall message is strong: democracies substantially reduce the risk of widespread, short-run death of a nation’s citizens and overwhelmingly reduce the risk of government-backed massacres compared to other forms of real-world government.
Does making the trains run on time matter more to the economic growth of poor countries than niceties like freedom of expression and political representation? No; the opposite is true. Democracy is a check against the most egregious economic policies, such as outright expropriation of wealth and property. Amartya Sen, a professor of economics and philosophy at Harvard, was awarded the Nobel Prize in Economics in 1998 for several strands of work related to poverty and welfare, one of which is his study of famines. Mr. Sen’s major finding is striking: The world’s worst famines are not caused by crop failure; they are caused by faulty political systems that prevent the market from correcting itself. Relatively minor agricultural disturbances become catastrophes because imports are not allowed, or prices are not allowed to rise, or farmers are not allowed to grow alternative crops, or politics in some other way interferes with the market’s normal ability to correct itself. He writes, “[Famines] have never materialized in any country that is independent, that goes to elections regularly, that has opposition parties to voice criticisms and that permits newspapers to report freely and question the wisdom of government policies without extensive censorship.” China had the largest recorded famine in history; thirty million people died as the result of the failed Great Leap Forward in 1958–1961. India has not had a famine since independence in 1947.
As Alexander concludes, it’s simply undeniable that as a general rule, the quality of life for citizens in democracies is much better than for those in autocracies:
If you want to know why countries are becoming more democratic and less monarchist, it’s hard to get a more direct answer than this graph (although its attempt at a linear fit was a bad idea):
[…]
The most progressive countries today tend to be very wealthy, very peaceful, and comparatively urbanized. The least progressive countries tend to be poor, insecure, and comparatively rural.
If our goal is to “do what works,” then, we have to acknowledge that all the best empirical evidence points in the same direction – and that that direction is toward liberal democracy.
XXIV.
Now, in response to this, some critics will insist that despite whatever benefits democracy might produce at the societal level, it still poses a fundamental threat to liberty at the level of individual citizens which, while perhaps not as overtly oppressive as a typical autocracy, nonetheless constitutes its own kind of oppression. They’ll argue that because democracy relies on a system of majoritarian voting for its decision-making, in which everyone must obey the will of the majority regardless of whether they actually agree with it themselves, it still allows for the powerful to subjugate the weak, by mere virtue of outnumbering them – the so-called “tyranny of the majority.” If (say) there are an overwhelming number of voters of one particular demographic who want to enslave or exploit a smaller minority of another demographic, there’s nothing that smaller group can do about it; all anyone can do is hope they aren’t in that minority. A popular way of expressing this is to say that “democracy is two wolves and a sheep voting on what to have for dinner” – with the point being that it’s unjust to leave matters up to a majority vote when what’s at stake are people’s most fundamental rights (or their very existence).
Of course, this slogan doesn’t really work when it’s being used to try and discredit democracy in favor of a more autocratic system; after all, if the goal is to protect people’s rights, it doesn’t exactly help to switch from a system of majority rule to a system of minority rule (especially if it’s a minority of one). That would be like going from a system in which the wolves can vote to eat the sheep if they outnumber them to a system in which the wolves can vote to eat the sheep even if they don’t outnumber them – or more accurately, a system in which a single wolf makes the decision for everyone. Tyranny of the minority isn’t exactly an upgrade from tyranny of the majority.
Still, that doesn’t mean that the whole idea of the tyranny of the majority is completely baseless, or that we should just dismiss it out of hand – because protecting the rights of the individual against the collective really is an important concern. A powerful majority voting to repress a less powerful minority really would be an unjust outcome, and we should obviously want to avoid it.
Luckily, though, this is something that (here in our relatively liberal society) practically everyone realizes and understands at some level, which is why practically no one favors the kind of crude majoritarianism that would just allow for the majority to do whatever it wanted at all times without any protections for individual rights. Most voters understand that while they might sometimes be part of the dominant majority on certain issues, they won’t always be in the majority on every issue, so it’s important to have explicit rights carved out for people who are in the minority to protect them from unjust subjugation. That’s why the system that nearly every voter favors is “majority rule with minority rights” – a system that operates by majority vote as a general rule, but makes exceptions for government actions that would unduly violate individuals’ most basic rights (e.g. their freedom of speech, their freedom from unwarranted searches and seizures, etc.). That’s the kind of system that people are actually talking about when they argue in favor of democracy – not pure 100% majoritarianism – and accordingly, it’s what our whole legal system here in the US, with the Bill of Rights and everything else, is built around. (Other democratic countries use slightly different systems – the UK, for instance, doesn’t have a single codified constitution or an entrenched bill of rights like ours – but they still have traditions of common law and constitutional convention that largely serve the same function and accomplish the same things.) So when critics try to argue that democracy is nothing but sheer tyranny of the majority, they’re really just attacking a straw-man caricature. Commenter ThatLanguageGuy provides a voice of reason against these kinds of arguments:
I get it, direct one-for-one vote democracy without any built-in protections for the individual are dangerous and, get this, don’t protect the individual. But surely it’s recognized that there aren’t any major institutions that operate in that way, right? Surely it’s acknowledged that a system by which individuals have some say in the direction of their governance is better than a system by which a handful of appointed-by-birth or conquering individuals rule over the population with no feedback whatsoever?
And it’s true; despite the critics’ assertion that democracy can only ever result in competing blocs of voters trying to oppress each other all the time, the reality when you actually look at real-life democratic institutions around the world is that they’re in fact set up with the specific function of helping people amiably coexist in a way that’s just and fair for everyone. And this principle can even be seen at the level of individual voting patterns; when you look at how citizens vote on various issues, what you find is that they mostly aren’t just selfishly voting for whichever policy would personally benefit them the most – they’re instead basing their votes more on their actual principles about what they think would be most fair and just for society as a whole. (Certainly if you consider your own reasons for voting the way you do, you’ll no doubt find that this is the case.) As Kevin Simler and Robin Hanson put it, quoting Haidt:
The literature on voting makes it clear that people mostly don’t vote for their material self-interest, that is, for the candidates and policies that would make them personally better off. Jonathan Haidt provides some examples in The Righteous Mind:
Parents of children in public school are not more supportive of government aid to schools than other citizens; young men subject to the draft are not more opposed to military escalation than men too old to be drafted; and people who lack health insurance are not more likely to support government-issued health insurance than people covered by insurance.
Obviously, there are sometimes exceptions to this – and occasionally, they may even be major exceptions. One of the biggest objections critics will raise against democracy, for instance, is “But we still had slavery in this country even though it was democratic!” And that’s true; prior to America’s founding, slavery had been the norm for centuries, so when the country was founded, it had slavery as well – for a bit. But what’s noteworthy here is that despite the fact that slavery had been the norm for centuries up to that point, once American democracy was established in 1776 slavery was abolished within a single human lifetime – first with northern states abolishing it in the late 1700s, then with a national ban on the importation of slaves in 1807, and finally with total abolition in 1865. A practice that had persisted for thousands of years under autocracy ended in a matter of decades only once democracy entered the picture; and that’s not a coincidence. Oppression of that magnitude simply can’t be sustained in a liberal democracy the way it can under autocracy.
In a similar vein, another argument critics will sometimes use against democracy is “But the Nazis were democratically elected!” And again, this is totally true; before Hitler seized absolute power and became a dictator, he and his Nazi cronies were ordinary politicians working under a democratic system (although he himself was technically appointed, not elected). What’s worth noting, though, is that during this time, they weren’t invading other countries and committing genocides and so on; it was only after they did away with Germany’s democracy and began ruling autocratically that they could commence with those atrocities. So if the argument being made here is just that it’s possible for democracy to be overthrown in favor of some other form of government, then yes, of course that’s true, just as it is for every other political system. But if the contention is that democracy is therefore the problem and ought to be done away with, then that just seems like a backward argument; if Hitler’s example demonstrates anything, it’s the importance of not allowing political leaders to turn democracy into autocracy – which is exactly the point I’ve been arguing for here.
XXV.
Okay then, fine, maybe democracy is generally better than autocracy at avoiding the worst possible outcomes like tyranny and slavery and genocide and so on. But that’s a pretty low bar. On the more basic level of day-to-day public administration, aren’t anti-statists still right to distrust its ability to always do the right thing? Considering how often politicians screw things up, can we really trust them at the most fundamental level?
Well, in one sense, democracy’s answer to this question is no, of course we shouldn’t trust them; that’s the whole reason why elections exist – so we can actually hold them accountable instead of just handing them the reins of power and leaving them to their own devices. But still, even with elections, isn’t the whole system just so inundated with incompetency and corruption as to not even be worth the effort? Isn’t the entire government just one big dysfunctional train wreck? Isn’t our democracy, as is so often said, a fundamentally “broken” institution?
This is a pretty popular attitude nowadays – and I’ll admit, it’s one that I used to repeat fairly often myself back when I was first getting interested in political issues. I suppose that to some extent, anyone who wants to reform politics in any way will find it ideologically expedient to claim that the current system is completely irredeemable, so as to strengthen the case for their proposed reforms. But these days, I no longer feel such an urge to write off our entire system as utterly “broken” or “unsalvageable” – and in fact, I’m now at a point where it frankly strikes me as a bit odd when people do use such terms to describe it. I mean, obviously our democracy isn’t perfect; no form of government is. But when people talk about our democracy being totally broken, the question nowadays that immediately comes to my mind is, “Compared to what?” After all, when you compare modern-day First World democracy to pretty much every other form of government that has ever existed, it’s one of the least corrupt and inefficient and dysfunctional systems in the history of the world. That’s not to say it can’t still be improved, of course (and we should always want to improve it where we can) – but to say that no government can even be considered so much as “pretty good” unless it’s completely faultless is unreasonable and unrealistic. If our only point of comparison is some perfect utopia that only exists in our heads, then every system will fall short of that idealization. A much fairer standard is to compare our democracy to all the other systems that have actually been tried in the real world – and compared to all those other nepotistic kleptocracies and corrupt dictatorships, our system really does look pretty darn good. There’s a reason why that classic Churchill line is always brought up in discussions like this, and that’s because it’s true: Democracy is the worst form of government, except for all the others.
To be fair to the anti-statist critics, it’s not hard to understand why they’re so distrustful of government in general, considering how many examples of bad governments do exist. As we’ve discussed previously, giving the government too much power – particularly unaccountable power – has proven time and time again to be a recipe for disaster. But when the critics point to such examples of bad governments as proof that all governments are therefore bad, or that the very idea of government is innately flawed, it seems to me that they’re making a mistake on par with (say) pointing to a few bad movies as proof that all movies are bad, or pointing to a few poorly-run businesses as proof that it’s impossible to run a business well. The mere fact that bad versions of a thing exist doesn’t mean that it’s therefore impossible for good versions of it to exist; and just saying that the good versions could potentially turn bad doesn’t invalidate their goodness either. As commenter less-effort-than-my-real-blog points out, despite anti-statists’ protestations that we can’t trust politicians to decide what’s right and what isn’t (lest they abuse this power to infringe on our rights), the reality is that under our democratic system, we do in fact trust our elected representatives to do exactly that on all kinds of issues, and it hasn’t sent us down a slippery slope toward totalitarianism – it’s just enabled us to have a basic functional society:
[If you’re an anti-statist, you might insist that] you don’t trust the government to decide [what’s OK and what] isn’t, so you don’t want the government [involved at all. But how far do you go with this principle? Imagine, for instance,] a world in which most people are anti-murder, but there is a small but vocal pro-murder psychopath faction which argues that by murdering the weak we make our species stronger. Someone attempts to make a law banning murder, and someone else says “But you can’t ban murder. I don’t trust the government to decide which acts are OK and which aren’t; first they’ll ban non-consensual violence and murder, which I’m fine with, but what if they ban BDSM or euthanasia or martial arts or owning kitchen knives?”
Or “That’s an unstable situation; first we’ll be telling the pro-murder faction that they’re not allowed to act on their values, which is fine, but I’m part of the cat-owning faction. Most of the population owns dogs. I don’t want to set a precedent of banning things that small factions like and the rest of society doesn’t, because although I’m OK with oppressing the pro-murder faction I’m kind of worried that the next faction they oppress will be the cat-owning faction and I’ll be forced to have a dog instead.”
Er, we don’t care. Banning murder is important, and as a matter of fact we actually do trust the government not to make it illegal to own a kitchen knife, because making that illegal would be fucking stupid and we’d vote for someone else. I’m pretty sure I could write at length about why these anti-banning-murder arguments are dumb but I’m also pretty sure I don’t need to because we all agree that banning murder is an Excellent Thing To Do.
And yeah, maybe the government will overstep its bounds and ban not just murder but also owning kitchen knives. In some cases, the government has overstepped its bounds. I think murder should be illegal, but abortion and euthanasia shouldn’t be. The government has nevertheless banned abortion after a certain number of weeks and banned euthanasia, and it has utterly failed to make cryonics mandatory. But this doesn’t seem like an argument for making murder legal. This really doesn’t seem like an argument for making murder legal.
For all the talk about the untrustworthiness of elected officials, the truth is that most of the time, they actually do a pretty good job of keeping their policy platforms aligned with voters’ preferences. There are occasional counterexamples, naturally, and they inevitably tend to get all the attention – but in the grand scheme of things, these really are relatively rare exceptions to the general rule. After all, just consider how many issues there are where a particular politician’s stance could potentially deviate from their constituents’ preferences; you could have a politician who believed that murder should be legal, for instance, or that owning kitchen knives should be illegal, or that having pet cats should be banned, or any of a million other things. But what we see instead under our democratic system (in contrast with autocratic countries, where leaders with such outlandish views actually do sometimes manage to come into power) is that on practically every major issue, elected officials’ stances tend to align pretty closely with the views and preferences of the median voter. They might lean slightly more to the left or right on certain issues depending on their party affiliation and so on – but they can’t deviate too much, or else it’ll cost them their ability to get elected in the first place. So for instance, Republicans might be able to successfully run on a platform of fortifying the southern border and cracking down on illegal immigration, but they can’t go too much further than that – e.g. declaring that they want to permanently ban all immigration (even if that’s what many of them do want) – because such an extreme stance would render them unelectable and their party would promptly go extinct. Likewise, Democrats might be able to successfully run on a platform of reducing penalties for certain drug crimes, but they can’t go so far as to say that they want to make all drugs 100% legal and available in stores (even if, again, that’s what many of them do want), because that stance would lose them so many votes that they’d never be able to win another election. The two parties, in other words, are forced to position themselves just slightly to the left and right of the median voter on every major issue, and no further – because if they tried to run on more extreme positions, the voting equivalent of natural selection would wipe them out of existence.
This whole phenomenon, which I’ve talked about briefly on here before, is known as the Median Voter Theorem, and I think it’s one of the more under-recognized forces in modern politics. Activists and commentators (of both parties) will often become fixated on the idea that if they can just fight hard enough for their favorite far-left or far-right candidates, then all that energy will be enough to propel those candidates into office, and they’ll finally be able to push through a real liberal agenda or conservative agenda – only for those activists to be frustrated when their favored candidates either lose their elections or, if they occasionally manage to win, turn out to have less of a radical impact than advertised. This will then typically trigger a bunch of complaints about the system being broken or corrupt or rigged, along with bitter comments about how “both parties are the same” and our so-called “choice” between them is a sham. But the fact that elected officials generally tend to be more centrist than what radical activists might prefer isn’t a symptom of the system being rigged; it’s a natural result of the Median Voter Theorem working exactly as you’d expect. And to the extent that this results in competing candidates having relatively similar policy positions, that’s not a bug; it’s a feature. As Alexander writes:
IDEALISM: My candidate stands for real values, unlike that awful other guy. And that’s great!
CYNICISM: Both candidates are pretty much the same; elections never offer any real choice. And that’s an outrage!
POSTCYNICISM: Both candidates are pretty much the same. This is exactly what the median voter theorem would predict if both candidates are trying to optimize to match the prevailing opinions of the American people as closely as possible. This proves that the system has done its work even before the final choice between the last two people, and that whoever is elected will agree with the values of the majority of Americans. And that’s great!
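To make the median-voter logic a bit more concrete, here’s a quick toy simulation. To be clear, this is my own illustrative sketch, not anything from Alexander or the other sources quoted here, and all the specifics (a bell curve of voter positions on a 0-to-100 left-right axis, two candidates, the step sizes) are arbitrary assumptions. Each simulated voter simply backs whichever of the two candidates sits closer to their own position, and each candidate keeps nudging toward whatever nearby position wins more votes:

```python
# Toy illustration of the Median Voter Theorem (illustrative sketch only).
# Voters sit on a one-dimensional left-right axis; each one votes for
# whichever of two candidates is closer to their own ideal point.
import random

random.seed(0)

# Hypothetical electorate: 1,001 voters with ideal points on a 0-100 scale.
voters = sorted(random.gauss(50, 15) for _ in range(1_001))
median = voters[len(voters) // 2]  # the median voter's ideal point

def vote_share(a, b):
    """Fraction of voters strictly closer to candidate a than to candidate b."""
    return sum(1 for v in voters if abs(v - a) < abs(v - b)) / len(voters)

# A candidate parked at the median beats one parked well off to one side.
print(f"median voter is at {median:.1f}")
print(f"median candidate vs. candidate at 80: "
      f"{vote_share(median, 80):.1%} of votes go to the median candidate")

# Now let two candidates who start out far apart repeatedly nudge toward
# whichever adjacent position wins more votes against their opponent.
a, b = 20.0, 80.0
for _ in range(200):
    for step in (1.0, -1.0):
        if vote_share(a + step, b) > vote_share(a, b):
            a += step
        if vote_share(b + step, a) > vote_share(b, a):
            b += step
print(f"after adjusting, the candidates sit at {a:.1f} and {b:.1f} "
      f"(median voter: {median:.1f})")
```

Run it and the two candidates, despite starting way out at 20 and 80, end up bunched together right around the median voter’s position. That convergence is exactly what the theorem predicts, and it’s the same pressure that keeps real-world parties hovering just to either side of the political center rather than offering genuinely radical platforms.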
Of course, if you’re someone whose political views are more on the radical side, you might not consider it so great that the two parties consistently triangulate their platforms around the median voter’s positions in this way. But the fact that the system works like this doesn’t mean it’s corrupt or broken; all it means is that policies generally have to have a good degree of popular support before they can pass into law. If the kinds of policies you prefer aren’t able to pass into law, that doesn’t mean there’s some sinister cabal of puppet-master elites standing in the way; more likely, it just means that the policies you favor aren’t quite so favored among the rest of the electorate. For that reason, then, if you’re trying to effect real political change, your focus should probably be on trying to change some of those minds and budge the median voter’s position slightly more toward your own, rather than just continuing to operate on the extreme ends of the ideological spectrum, well outside what most people would consider the current range of viable policies (AKA the Overton Window). Changing minds is never easy, of course – but even so, if the reforms you’re pushing for are truly transformative ones, you’re likely to have more luck trying to shift popular opinion on them than just taking that popular opinion for granted and trying to gain electoral advantages within the Overton Window that already exists. In other words, instead of obsessing over elections and party politics and perpetually trying to figure out how to enable the candidates who agree with your preferred policies to defeat the candidates who don’t, you’ll probably be better off putting that same energy into shifting the Overton Window until (ideally) there’s enough of an ideological consensus around your position that the Median Voter Theorem forces both parties to come around and fall into alignment with it, and no candidate can viably contend for office in the first place unless they subscribe to it. That might sound like an impossibly tall order – and to be sure, it is extremely difficult, which is why big transformative policy changes are so rare. But on the occasions where such transformative policy changes actually do break through, it’s not typically because one of the parties has managed to get a supermajority of their most radical candidates elected despite the mass of voters not sharing those radical positions; it’s because the voters themselves have shifted in their positions, and therefore the range of candidates they’re willing to support has shifted as well. That is, the political shifts follow from the ideological shifts; the former is a downstream product of the latter. If the majority of voters oppose a particular policy, then usually not even an elected supermajority will be enough to push it through; back in the 1960s, for instance, not even a fully Democrat-controlled government would have been enough to get gay marriage legalized (even though many Democrats supported gay rights at the time), because pushing for that particular policy in that particular era would have drawn so much voter backlash that it would have cost those Democrats their seats. But once an issue gains enough popular support, passing it into law becomes not only possible but inevitable; in today’s political environment, any national candidate who tried to run on a platform opposing gay marriage would be fighting an uphill battle, if not completely sabotaging their political career.
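For readers who like to see the mechanism spelled out, here’s a minimal simulation sketch of the median-voter dynamic. Everything in it is made up for illustration – the normally-distributed electorate, the one-dimensional left-right axis, the step size – and candidates are simplified to pure vote-maximizers who only ever consider moving toward the center:

```python
import random

# Hypothetical electorate: each voter has a position on a 0-100 left-right axis.
random.seed(0)
voters = [random.gauss(50, 15) for _ in range(10001)]
median_voter = sorted(voters)[len(voters) // 2]

def vote_share(a, b):
    """Fraction of voters strictly closer to candidate a than to candidate b."""
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

# Two candidates start at the ideological extremes.
left, right = 10.0, 90.0
step = 1.0

for _ in range(200):
    # Each candidate moves toward the center only if doing so wins more votes.
    if vote_share(left + step, right) > vote_share(left, right):
        left += step
    if vote_share(right - step, left) > vote_share(right, left):
        right -= step

print(f"median voter: {median_voter:.1f}")
print(f"candidates settle at: {left:.1f} and {right:.1f}")
# Both platforms end up parked right next to the median voter's position.
```

The particular numbers don’t matter much: as long as the candidates care only about winning, any platform noticeably far from the median voter loses to one slightly closer, so both get dragged toward the center – which is the convergence the theorem describes.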
At the end of the day, the median voters’ positions are what define the range of viable policies – and in turn, the representatives they elect actually do tend to approximate those positions fairly closely (hence why incumbents in the House of Representatives have a reelection rate of over 90%). That doesn’t mean that voters’ preferences are the only thing that matters, of course, or that politicians will always be entirely immune to other influences like special interest groups and lobbyists and so on – such groups can certainly still bend things their way here and there – but as Caplan points out, they can only really do so at the margins where voters don’t care all that much; when it comes to the biggest and most important issues, the will of the voters still holds ultimate sway:
Politicians’ wiggle room creates opportunities for special interest groups—private and public, lobbyists and bureaucrats—to get their way. On my account, though, interest groups are unlikely to directly “subvert” the democratic process. Politicians rarely stick their necks out for unpopular policies because an interest group begs them—or pays them—to do so. Their careers are on the line; it is not worth the risk. Instead, interest groups push along the margins of public indifference. If the public has no strong feelings about how to reduce dependence on foreign oil, ethanol producers might finagle a tax credit for themselves. No matter how hard they lobbied, though, they would fail to ban gasoline.
XXVI.
So all right then, maybe it’s true that politicians mostly keep themselves aligned with voters as far as their on-paper positions go. But that’s a different matter from how well the programs they administer actually function in practice, right? In terms of actual execution, aren’t government programs still grossly inefficient and wasteful and bureaucratic? Aren’t anti-statist critics still right to distrust government on that basis, at least?
Well, it’s certainly true that there’s a lot of bureaucracy and red tape involved in government – I don’t think anybody would deny that. Bureaucracy, after all, is an unavoidable part of any large organization, whether public or private. As much as critics like to portray government as unique in this regard, big companies and other private organizations often have to deal with it just as much themselves, if not more so; there’s a reason the idea of corporate bureaucracy has become just as much of a cliché these days as the idea of government bureaucracy (hence the popularity of satires like Office Space, Dilbert, etc.). For better or worse, once an organization reaches a certain size, some degree of bureaucracy will simply come with the territory.
It’s also true that government programs do sometimes underperform, and even fail entirely in some cases. Again, though, that’s hardly something that’s unique to government; private-sector ventures also fail all the time, and we don’t consider this a strike against them – we recognize that that’s just how these things go. No venture capitalist ever expects 100% of their investments to succeed; they don’t even expect most of them to succeed. They fully accept that the majority of their investments will fail, because they know that the gains from the few investments that do succeed will more than make up for it. In this sense, what they’re doing is more like baseball, where a batter who gets a hit in even a third of their at-bats (i.e. who fails two-thirds of the time) is considered outstanding – because hitting major-league pitching is hard, and it would be ridiculous to expect a perfect rate of success. And the same is true of many government functions. Governments are often charged with performing tasks that have to be done, but are so difficult to do efficiently that the private sector has been unable to do them itself. In these kinds of situations, it might well be the case that simply operating at a moderate loss, or having programs that succeed half the time and fail the other half, would actually be a tremendous accomplishment. To criticize them as wasteful and inefficient, then, would once again raise the question of “compared to what?” By what standard, exactly, are they being judged as “underperforming”?
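To put some toy numbers on that portfolio logic (every figure here is invented purely for illustration – the point is only that a high failure rate and a healthy overall return are perfectly compatible):

```python
# Toy portfolio arithmetic (made-up numbers): ten $1M bets, most of which fail.
bets = 10
stake = 1_000_000

# Suppose 7 bets go to zero, 2 roughly return the stake, and 1 returns 15x.
payouts = [0] * 7 + [1_000_000] * 2 + [15_000_000]

invested = bets * stake
returned = sum(payouts)

print(f"invested: ${invested:,}")    # $10,000,000
print(f"returned: ${returned:,}")    # $17,000,000
print(f"failure rate: {payouts.count(0) / bets:.0%}")  # 70% of bets failed
# Judged bet-by-bet, this looks like a 70% failure rate; judged as a
# portfolio, it's a 70% overall return.
```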
To illustrate this point with an example, consider the case of Solyndra. This is probably the most commonly-cited modern example of government misspending; when the government backed the solar manufacturer with a $535 million loan guarantee only for it to declare bankruptcy in 2011, it instantly became the go-to shorthand for government waste. But as Mark Paul and Nina Eichacker explain, this wasn’t actually a case of the government just irresponsibly shoveling taxpayer dollars into some random money pit; Solyndra was in fact part of a larger portfolio of investments that turned out to be quite successful overall:
[As part of] the 2009 American Recovery and Reinvestment Act, [the government introduced] a renewable-energy loan guarantee program [which, in addition to financing 23 other companies,] financed the high-profile “failure” of Solyndra.
While Solyndra’s downfall received a lot of spilled ink in the media, Solyndra was actually one of only two failures. The other 22 companies repaid their loans, resulting in a profitable program overall that helped accelerate multiple green industries in the US. And one recipient is now a wildly successful electric automaker: Tesla.
[…]
The failure of one firm, due largely to changes outside of its control, while more than 20 others succeeded under the same program, is precisely the mark of a successful industrial policy. The federal program that supported Solyndra took chances and funded projects at scales that the finance industry and venture capitalists were simply unable or unwilling to. In the end, these bets overwhelmingly paid off, providing a vital boost to the domestic solar, wind, and EV industries.
[Ultimately,] Solyndra was part of a successful program. If no government-backed firms failed, it’d be a clear sign that the government was being too conservative. These investments include risk and benefits that don’t necessarily align perfectly with industry titans. That’s precisely why it’s the government’s job to step in and correct these market failures.
Of course, making these kinds of investments isn’t the only thing government does, so it’s not the only area where its spending habits have come in for criticism. Another popular category of complaints includes various versions of the famous “$600 hammer” – referring to an incident in 1983 in which a federal spending report revealed that the Pentagon had (supposedly) spent hundreds of dollars on a hammer, among other such expenses. This is probably the second-most commonly cited example of the government wasting taxpayer dollars for no good reason (if not the first, even ahead of Solyndra). But again, it’s a misleading one; as Sydney J. Freedberg Jr. explains, the exorbitant price tag for that hammer was actually just a result of the Pentagon’s old procedures for simplifying its bookkeeping:
Ever since the Defense Department procurement scandals of the 1980s, the $600 hammer has been held up as an icon of Pentagon incompetence. Immortalized in the “Hammer Awards” that Vice President Al Gore’s program to reinvent government gives out to waste-cutters, this absurdly overpriced piece of hardware has come to symbolize all that’s wrong with the government’s financial management.
One problem: “There never was a $600 hammer,” said Steven Kelman, public policy professor at Harvard University’s John F. Kennedy School of Government and a former administrator of the Office of Federal Procurement Policy. It was, he said, “an accounting artifact.”
The military bought the hammer, Kelman explained, bundled into one bulk purchase of many different spare parts. But when the contractors allocated their engineering expenses among the individual spare parts on the list—a bookkeeping exercise that had no effect on the price the Pentagon paid overall—they simply treated every item the same. So the hammer, originally $15, picked up the same amount of research and development overhead—$420—as each of the highly technical components, recalled retired procurement official LeRoy Haugh. (Later news stories inflated the $435 figure to $600.)
“The hammer got as much overhead as an engine,” Kelman continued, despite the fact that the hammer cost much less than $420 to develop, and the engine cost much more—“but nobody ever said, ‘What a great deal the government got on the engine!’”
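Here’s a toy version of that bookkeeping, just to make the arithmetic concrete. The parts list and the size of the overhead pool are hypothetical; only the $15 hammer and the flat $420 overhead share come from the account above:

```python
# Toy version of the bookkeeping Kelman describes (hypothetical parts list and
# overhead pool): engineering overhead for the whole bundle is spread evenly
# across line items, regardless of what each item actually cost to develop.
parts = {"hammer": 15, "engine": 250_000, "avionics unit": 80_000, "valve": 1_200}
engineering_overhead = 1_680  # total overhead for the bundle, split evenly

share = engineering_overhead / len(parts)  # $420 lands on every line item
for name, price in parts.items():
    print(f"{name}: ${price:,} + ${share:,.0f} overhead = ${price + share:,.0f}")

# The total billed for the bundle is unchanged; the hammer just shows up as a
# $435 line item because it absorbed the same flat overhead share as the engine.
```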
In this light, then, as Freedberg concludes, the legend of the $600 hammer is actually a different kind of cautionary tale from what it’s typically portrayed as. Instead of being about simple, obvious government waste, the real lesson of the story is how selective reporting can distort the facts to create false impressions. This is true of the Solyndra case as well; and it’s also true of many more such stories purporting to reveal the waste and inefficiency of government. And again, to be clear, none of this is to say that it’s impossible for government to ever underperform, or that we shouldn’t want to hold it to a high standard. Obviously, governments can and do screw things up sometimes, just like private companies and organizations do; if you want to find instances of government spending that really are needlessly wasteful, you can certainly do so. This story about San Francisco spending $1.7M on a public restroom is one recent example that comes to mind. And the Pentagon, in particular, is one department that seems especially prone to wasteful spending; despite the $600 hammer turning out to be a myth, you need only search the web for “wasteful Pentagon spending” to see plenty of similarly egregious examples that actually happen to be true. So the point here isn’t to deny that there are areas where the government’s spending habits really could be significantly improved. Rather, the point is that it would be equally misguided to imagine that these kinds of failings are representative of everything the government does, or that they’re the standard for government action as a whole. Anti-statists might want to insist that this is in fact the case – either because they genuinely believe it, or because they know that exaggerating the dysfunction of government can be a cheap way of making their position seem more credible – but the truth is that the “government can literally never do anything right” narrative really is just that: an exaggeration. In actual practice, our modern-day First World form of democratic government does pretty well most of the time, just carrying out its unglamorous business without any muss or fuss, and handling the kinds of tasks that the private sector is unable to fully address on its own. This might not be very exciting or scandalous as narratives go, but as Alexander explains, it does have the advantage of actually being true:
[Q]: Most government programs are expensive failures.
I think this may be a form of media bias – not in the sense that some sinister figure in the media is going through and censoring all the stories that support one side, but in the sense that “Government Program Goes More Or Less As Planned” doesn’t make headlines and so you never hear about it.
Let’s say the government wants to spend $1 million to give food to poor children. If there are bureaucratic squabbles over where the money’s supposed to come from, that’s a headline. If they buy the food at above-market prices, that’s a headline. If some corrupt official manages to give the contract to provide the food to a campaign donor along the way, that’s a big headline.
But what if none of these things happen, and poor children get a million dollars’ worth of food, and eat it, and it makes them healthier? I don’t know about you, but I’ve never seen a headline about this. “Remember that time last year when Congress voted to give food to poor children. Well, they got it.” What newspaper would ever publish something like that?
This is in addition to newspapers’ desire to outrage people, their desire to sound “edgy” by pointing out the failures of the status quo rather than sounding like they’re “pandering”, and honestly that they’re caught up in the same “government can never do anything right” narrative as everyone else.
Since every single time you ever hear about a government project it is always because that government project is going wrong, of course you feel like all government projects go wrong.
[Q]: But a specific initiative to get money to the poor is one thing. What about a whole federal agency? Surely we would know if it were failing – and we’d be able to appreciate it when it succeeds, too.
Federal agencies that are successful sink into background noise, so that we don’t think to thank them or celebrate them any more than we would celebrate that we have clean water (four billion people worldwide don’t; thank the EPA and your local water board).
For example, the Federal Aviation Administration helps keep plane crashes at less than one per 21,000 years of flight time; you never think about this when you get on a plane. The National Crime Information Center collects and processes information about criminals from every police department in the country; you never think about this when you go out without being mugged. Zoning regulations, building codes, and the fire department all help prevent fires from starting and keep them limited when they do; you never think of this when you go through the day without your house burning down.
One of government’s major jobs is preventing things, and it’s very hard to notice how many bad things aren’t happening, until someone comes out with a report showing that E. coli poisoning has dropped by half in the past fifteen years. Even if you do hear the statistics, you may never think to connect them to the stricter food safety laws you wrote a letter to the editor opposing fifteen years ago.
He reiterates the point elsewhere with one more example:
[Q]: All government programs, even the ones that seem good, have unintended consequences that end up hurting more people than they help.
You’d think that from reading the news, wouldn’t you? Every day, you read another story about how the government tries some well-intentioned plan to help workers or save the poor or something, and then it backfires and workers and the poor end up worse off than ever.
There’s a bit of a selection bias and a recall bias here though, isn’t there? “Government Program Works More Or Less As Planned” doesn’t make good headlines and isn’t very memorable. Take government regulation of formaldehyde. You’ve never heard about government regulation of formaldehyde? It’s a chemical used in various industrial applications that was found to cause cancer. So the government banned it. Now industry uses other chemicals at more or less the same cost and there’s less cancer. As far as I know, there were no unintended consequences or horrible screwups or anything.
[Q]: If government hadn’t banned formaldehyde, consumer outcry would have caused the free market to eliminate it. Once again, the plodding government messes things up with a premature use of force where the free market would have performed so much more elegantly.
Tricked you. The government has banned formaldehyde: the EU government. If you’re in America, it’s perfectly legal and probably in a whole host of products that you use every day. Why hasn’t the free market eliminated it? Well, did you know formaldehyde caused cancer? And do you know exactly which products at your local store do or don’t contain formaldehyde? Are you even sure I’m not making this whole example up?
[Q]: Are you?
No.
Here in the US, we tend to have a harder time than other advanced countries overcoming our mistrust of government, so we’re often unwilling or unable to implement sensible regulations like these. And in fairness, this kind of reluctance is perfectly reasonable and healthy to an extent; after all, as we’ve discussed, giving the government too much regulatory power can interfere with private markets’ ability to operate productively, and in the most extreme cases (e.g. the Soviet Union, Maoist China, North Korea, etc.) can even drag down the entire economy. Having said that, though, it seems equally clear that when we get so carried away with not wanting to become the Soviet Union that we lose the ability to even accept basic common-sense regulations like keeping formaldehyde out of our everyday consumer goods, that’s the kind of overcorrection that takes things too far in the other direction. Not wanting to become a totalitarian command economy is all well and good; but we’re nowhere near that point in the US right now, and to pretend otherwise is counterproductive at best. In truth, our current level of regulation generally allows private markets to operate just fine. And in fact, because we’ve made such a point of obsessively weeding out any regulations that could unduly interfere with the market, it’s hard to find any indication that recently-added federal regulations have appreciably slowed down business growth at all, even when specifically looking for evidence to that effect. Here’s Rachel Cohen, for instance, recounting Tabarrok’s attempt (with Nathan Goldschlag) to do exactly that:
Alex Tabarrok is no one’s idea of a big-government liberal. A libertarian economist at George Mason University, he’s best known for cofounding Marginal Revolution, one of the most popular economics blogs on the internet. A deep skeptic of government bureaucracies, he has written favorably of private prisons, private airports, and even private cities.
That’s why a study he co-published earlier this year is so noteworthy. When Tabarrok and his former grad student Nathan Goldschlag set out to measure how federal regulations impact business growth, they were sure they’d find proof that regulations were dragging down the economy. But they didn’t. No matter how they sliced the data, they could find no evidence that federal regulation was bad for business.
[…]
Armed with RegData, Tabarrok and Goldschlag set out to show that regulations were at least partly to blame. But they couldn’t. There was simply no correlation, they found, between the degree of federal regulation and the decline of business dynamism. The decline was seen across many different industries, including those that are heavily regulated and those that are not. They tried two other independent tests that didn’t rely on RegData, and came to the same conclusion: an increase in federal regulation just could not explain what was going on.
[…]
If federal regulation isn’t behind the dynamism die-off, then what is? Tabarrok’s paper suggests that economists need to look elsewhere. Eli Lehrer, head of the pro-deregulation think tank R Street Institute, argues that some of the most burdensome regulations are state and local—zoning, building codes, occupational licensing, and the like. Tabarrok and Goldschlag agree that more attention should be paid to the potential effects of non-federal regulations.
But a more likely explanation—one that has been gaining purchase among both think tanks and elected Democrats—is rising corporate concentration. (See Gilad Edelman, “The Democrats Confront Monopoly,” November/December 2017.) The trend of declining dynamism since 1980—along with wage stagnation, rising inequality, and a host of other ills—has tracked a parallel rise in monopolization, as the economy becomes increasingly consolidated in the hands of a few giant businesses. As New York Times columnist Eduardo Porter put it recently, “By allowing an ecosystem of gargantuan companies to develop, all but dominating the markets they served, the American economy shut out disruption. And thus it shut out change.”
This hasn’t happened by accident, but is, rather, the result of deliberate decisionmaking, beginning under Reagan, to dial down the enforcement of antitrust law. In other words, it is a consequence of deregulation, not overregulation.
Now, obviously Cohen’s conclusion here isn’t the end of the discussion; even if we grant that overregulation isn’t as much of a problem as is commonly portrayed, there are still plenty of other arguments that anti-statists can turn to aside from just the anti-regulation ones. For instance, they’ll often argue that it’s actually government’s spending habits, even more so than its regulatory habits, that discourage productivity and thereby hurt the economy most. Specifically, they’ll claim that government spends so much money on things like welfare – taxing the rich and redistributing that money to the poor – that it incentivizes millions of people to stop working altogether and just sit at home collecting their government checks instead, dragging down the broader economy with their indolence. Many conservatives will in fact believe this to be such a massive problem that they’ll estimate its scale to be orders of magnitude larger than it actually is – imagining that something like 90% of the federal budget (or some other wild figure like that) goes into the pockets of undeserving “welfare queens.” But contrary to what these conservatives think, the actual proportions are roughly the reverse of what they imagine – anti-poverty programs make up a small slice of federal spending, not the bulk of it. As explained by Caplan (who is himself about as famously libertarian as they come):
Noneconomists imagine that welfare disincentives are an implausibly large burden.
[…]
Poverty programs, even broadly interpreted, add up to only 10% of federal spending. This is many times larger than foreign aid [which is itself only about 1% of the federal budget], but still too small to be a “major reason” for subpar economic performance. Furthermore, welfare recipients come from the least skilled segment of the population. This tightly caps the economic damage of their absence from the workforce.
Of course, you could argue that the number is actually bigger than 10% if you expand the definition of “welfare” to include entitlements like Social Security (which people pay into over their entire working lives). But even this expanded definition doesn’t really work as an explanation for reduced productivity, since the vast majority of the benefits from those entitlement programs go to people who are either already working or wouldn’t be able to productively work anyway. As Arloc Sherman, Robert Greenstein, and Kathy Ruffing explain:
Some conservative critics of federal social programs, including leading presidential candidates, are sounding an alarm that the United States is rapidly becoming an “entitlement society” in which social programs are undermining the work ethic and creating a large class of Americans who prefer to depend on government benefits rather than work. A new CBPP analysis of budget and Census data, however, shows that […] 91 percent of the benefit dollars from entitlement and other mandatory programs went to the elderly (people 65 and over), the seriously disabled, and members of working households. People who are neither elderly nor disabled — and do not live in a working household — received only 9 percent of the benefits.
Moreover, the vast bulk of that 9 percent goes for medical care, unemployment insurance benefits (which individuals must have a significant work history to receive), Social Security survivor benefits for the children and spouses of deceased workers, and Social Security benefits for retirees between ages 62 and 64. Seven out of the 9 percentage points go for one of these four purposes.
[…]
In short, both the current reality and the trends of recent decades contrast sharply with the critics’ assumption that social programs increasingly are supporting people who can work but choose not to do so. In the 1980s and 1990s, the United States substantially reduced assistance to the jobless poor (through legislation such as the 1996 welfare law) while increasing assistance to low-income working families (such as through expansions of the Earned Income Tax Credit). The safety net became much more “work-based.” In addition, the U.S. population is aging, which raises the share of benefits going to seniors and people with disabilities.
Given these facts, then, it seems fair to say that the idea that the nation’s productivity is being crippled by freeloading welfare recipients simply refusing to pull their weight – much like the idea that it’s being strangled by unbearable levels of regulation – is generally overstated by conservatives, to say the least. Still, we have to acknowledge that however reasonable all these regulations and anti-poverty programs might be, they do still cost money, and that money has to come from somewhere – which means that people have to give up some of their wealth in the form of taxes in order to fund them. So isn’t that where the real drain on the economy is coming from – from the revenue side of the equation? Aren’t conservatives right about that point, at least, that government is systematically decimating the private sector’s productivity by taxing it?
Well, again, this is one more area where the critics are indeed pointing at something that can happen, but are considerably exaggerating the degree to which it actually is happening. No doubt, it’s true that many kinds of taxes can discourage production if they’re too high, since (for example) work that might be worth doing for $20/hr might no longer be worth doing if the actual take-home pay after taxes only comes out to $5/hr. On the other hand, we can see from real-world examples that if the kinds of taxes being imposed aren’t quite so ham-handed, and are actually targeted in the right way, it’s perfectly possible to have an economy that’s still healthy and productive even if tax revenues are quite high. As usual, the Nordic countries are a good example here, as Bruenig notes:
What the Nordics have shown over the past three decades in particular is that you can transfer (and provide universal welfare benefits, which are transfers in net) at a far higher level than the US currently does without growing slower than the US. One of the interesting things about the last few decades is that cross-country data among rich, developed countries has not shown any consistent relationship between tax levels and growth levels within the range of tax levels out there at present.
So how does this work, exactly? Why don’t higher taxes always necessarily correlate with lower growth levels? To answer this question, we might flip it around and ask it from the opposite direction – why don’t tax cuts always necessarily lead to higher growth levels? Krugman breaks things down by examining the effects (or lack thereof) of a big tax cut passed by the Trump administration in 2017 (Krugman is writing this a year later, in late 2018):
[In his first two years in office, Trump had] only one major legislative achievement: a big tax cut for corporations and the wealthy. Still, that tax cut was supposed to accomplish big things. Republicans thought it would give them a big electoral boost, and they predicted dramatic economic gains. What they got instead, however, was a big fizzle.
The political payoff, of course, never arrived. And the economic results have been disappointing. True, we’ve had two quarters of fairly fast economic growth, but such growth spurts are fairly common — there was a substantially bigger spurt in 2014, and hardly anyone noticed. And this growth was driven largely by consumer spending and, surprise, government spending, which wasn’t what the tax cutters promised.
Meanwhile, there’s no sign of the vast investment boom the law’s backers promised. Corporations have used the tax cut’s proceeds largely to buy back their own stock rather than to add jobs and expand capacity.
But why have the tax cut’s impacts been so minimal? Leave aside the glitch-filled changes in individual taxes, which will keep accountants busy for years; the core of the bill was a huge cut in corporate taxes. Why hasn’t this done more to increase investment?
The answer, I’d argue, is that business decisions are a lot less sensitive to financial incentives — including tax rates — than conservatives claim. And appreciating that reality doesn’t just undermine the case for the Trump tax cut. It undermines Republican economic doctrine as a whole.
About business decisions: It’s a dirty little secret of monetary analysis that changes in interest rates affect the economy mainly through their effect on the housing market and the international value of the dollar (which in turn affects the competitiveness of U.S. goods on world markets). Any direct effect on business investment is so small that it’s hard even to see it in the data. What drives such investment is, instead, perceptions about market demand.
Why is this the case? One main reason is that business investments have relatively short working lives. If you’re considering whether to take out a mortgage to buy a house that will stand for many decades, the interest rate matters a lot. But if you’re thinking about taking out a loan to buy, say, a work computer that will either break down or become obsolescent in a few years, the interest rate on the loan will be a minor consideration in deciding whether to make the purchase.
And the same logic applies to tax rates: There aren’t many potential business investments that will be worth doing with a 21 percent profits tax, the current rate, but weren’t worth doing at 35 percent, the rate before the Trump tax cut.
Also, a substantial fraction of corporate profits really represents rewards to monopoly power, not returns on investment — and cutting taxes on monopoly profits is a pure giveaway, offering no reason to invest or hire.
Now, proponents of the tax cut, including Trump’s own economists, made a big deal about how we now have a global capital market, in which money flows to wherever it gets the highest after-tax return. And they pointed to countries with low corporate taxes, like Ireland, which appear to attract lots of foreign investment.
The key word here is, however, “appear.” Corporations do have a strong incentive to cook their books — I’m sorry, manage their internal pricing — in such a way that reported profits pop up in low-tax jurisdictions, and this in turn leads on paper to large overseas investments.
But there’s much less to these investments than meets the eye. For example, the vast sums corporations have supposedly invested in Ireland have yielded remarkably few jobs and remarkably little income for the Irish themselves — because most of that huge investment in Ireland is nothing more than an accounting fiction.
Now you know why the money U.S. companies reported moving home after taxes were cut hasn’t shown up in jobs, wages and investment: Nothing really moved. Overseas subsidiaries transferred some assets back to their parent companies, but this was just an accounting maneuver, with almost no impact on anything real.
So the basic result of lower taxes on corporations is that corporations pay less in taxes — full stop. Which brings me to the problem with conservative economic doctrine.
That doctrine is all about the supposed need to give the already privileged incentives to do nice things for the rest of us. We must, the right says, cut taxes on the wealthy to induce them to work hard, and cut taxes on corporations to induce them to invest in America.
But this doctrine keeps failing in practice. President George W. Bush’s tax cuts didn’t produce a boom; President Barack Obama’s tax hike didn’t cause a depression. Tax cuts in Kansas didn’t jump-start the state’s economy; tax hikes in California didn’t slow growth.
And with the Trump tax cut, the doctrine has failed again.
In addition to this explanation for why businesses aren’t more responsive to changing tax rates, Krugman also offers an account of how the same kind of thing can occur at the level of individual taxpayers, explaining why wealthy individuals in particular don’t always respond as much to changing tax rates as you might expect:
I [previously] talked a bit about the consistent failure of conservative predictions that say raising taxes on high incomes will lead to economic disaster and introducing tax cuts will lead to nirvana. However, I didn’t talk about why tax rates on the rich don’t seem to have major economic consequences. So I thought I’d devote today’s newsletter to some speculations on that question.
It’s not because incentives don’t matter. Clearly, they do. France’s high taxes haven’t led to low employment of prime-age adults, but generous benefits for those who retire early have led to low employment among near-seniors:
[Chart: The French are a retiring people. Credit: OECD]
How, then, can we explain the lack of clear responses (other than tax avoidance) to changes in the tax rate on top incomes?
One answer, which I suspect is relevant in the uppermost strata of the income distribution, is that at that level people don’t seek more money so they can afford more things, since they’re already able to afford far more luxury than anyone can enjoy. Instead, it’s about keeping score; that is, their goal is to make as much or more than the people they compare themselves with. And raising taxes on rich people in general doesn’t eliminate the race to out-earn one’s rivals.
Even to the extent that the rich seek income for what it can buy, however, it’s not clear that cutting their taxes will lead to greater effort. Indeed, it could lead to reduced effort, because it becomes easier for them to afford what they want.
Readers who took economics probably realize that I’m talking about income effects as opposed to substitution effects, a distinction that plays a crucial role in understanding how wages affect labor supply.
As most intro econ texts including the best one explain, higher wages have two effects on workers. They have an incentive to work more, because an extra hour gets them more stuff. But they’re also more affluent, which lets them consume more — and one of the things they might choose to consume is more leisure, i.e., they might choose to work less.
Historically, in fact, higher wages have generally led to reduced working hours. Wages have increased enormously over the past century and a half, but the workweek has gotten a lot shorter:
[Chart: Wages up, hours down. Credit: Our World in Data]
So if tax cuts for the rich are like a wage hike, they could lead to less rather than more effort.
But wait: the top tax rate is a marginal rate, not an average rate. Individuals making, say, $600,000 a year pay 37 percent on the last dollar they earn, but most of their income is taxed at substantially lower rates — and those rates won’t be affected if President Biden succeeds in raising the top rate back to 39.6 percent. So you might think that raising or lowering the top rate is not, in fact, much like changing affluent Americans’ wages.
But here’s the thing: most of the earned income accruing to people in the top tax bracket is, in fact, taxed at the top rate. (Capital gains etc. are a different story.) Why? Because the distribution of income at the top is itself very unequal: there are huge disparities even within the economic elite. According to estimates by Thomas Piketty and Emmanuel Saez, almost half the income of the top 1 percent accrues to the top 0.1 percent, a category that begins at around three times as high a threshold.
Now, high incomes closely follow a Pareto distribution, indeed to an eerie extent. Here’s a plot of high incomes versus the percentage of taxpayers with incomes above that level, both expressed in natural logs:
[Chart: A weirdly exact relationship. Credit: Piketty and Saez]
In such a distribution, the top .05 percent is to the top 0.5 percent what the top 0.1 percent is to the top 1 percent, so what is true of the distribution of income within the 1 percent is also true of the distribution within the roughly 0.5 percent of Americans subject to the top tax rate. This means that, as I said, most of the income accruing to that group is taxed at the top rate. And this in turn means that cutting that top rate is more like an across-the-board wage rise for the elite than you might think — and wage rises don’t tend to increase work effort.
Or to put it a bit differently, while tax cuts for the rich may offer an incentive to work harder, they’re also a big giveaway that encourages the elite to work less.
Of course, the fact that tax cuts at the top are a big giveaway is precisely the reason that belief in the immense economic importance of low taxes is such an unkillable zombie. As Upton Sinclair famously said, it’s difficult to get a man to understand something when his salary depends on his not understanding it.
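Krugman’s claim that most of this group’s income is actually taxed at the top rate falls straight out of the Pareto shape he describes, and you can check it numerically. Here’s a rough sketch – the bracket threshold and the Pareto tail index are my own ballpark assumptions, not figures from Krugman’s piece:

```python
import random

# Rough numerical sketch (assumed numbers): incomes above the top-bracket
# threshold are drawn from a Pareto distribution, and we ask how much of that
# group's income falls above the threshold -- i.e., how much of it is actually
# taxed at the top marginal rate.
random.seed(0)

threshold = 540_000   # assumed top-bracket threshold, roughly 2021 levels
alpha = 1.5           # assumed Pareto tail index for top US incomes

# random.paretovariate(alpha) returns values >= 1 with a Pareto tail, so
# scaling by the threshold gives simulated incomes of top-bracket taxpayers.
incomes = [threshold * random.paretovariate(alpha) for _ in range(500_000)]

total_income = sum(incomes)
income_above_threshold = sum(x - threshold for x in incomes)

print(f"share of this group's income taxed at the top rate: "
      f"{income_above_threshold / total_income:.0%}")
# With these assumptions it comes out to roughly two-thirds; a heavier tail
# (more inequality within the top bracket) pushes the share even higher.
```

Which is the sense in which a change to the “marginal” top rate behaves more like an across-the-board change in this group’s take-home pay than the word marginal suggests.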
Krugman is right that resistance to taxation – whether motivated by ideology or pure self-interest – is a stubbornly persistent phenomenon here in the US. For all the reasons we’ve been discussing, a lot of people just don’t trust the government to make good use of their tax dollars, so they don’t want to give up any of their hard-earned wealth to support it. Why should they support an institution that’s just going to throw their money away on wasteful bureaucracy and inefficient regulations? The irony here, though, is twofold: First, of course, is the basic point that we’ve been coming back to throughout this post, which is that the existence of government regulation and bureaucracy is the very thing that enables markets to function well enough for taxpayers to earn their incomes in the first place – so although it might not always be perfectly efficient, putting a fraction of those earnings back into the government can’t exactly be called a waste of money. But the second irony, which we haven’t really discussed yet, is that to the extent that government actually is sometimes hampered by needless bureaucracy and inefficiency, this is often precisely because people treat it with so much mistrust, so it’s therefore forced to tie itself in knots making sure that all its i’s are dotted and t’s are crossed. Alexander explains, quoting commenter deiseach:
Also, the idea of the maverick coming in and tearing up the rule book and saving the day with his or her bold new thinking outside the box of the stuffy bureaucrats is great in the movies, but in real life – it doesn’t work like that.
I don’t know if it’s the same in America, but here in Ireland I can tell you one reason why the principle of “cover your ass” is implemented: because politicians love to make campaign promises of cutting public spending, reducing waste and inefficiency in the public service, and saving taxpayers’ money. And the public loves those promises, because who wants to pay more tax?
Which means that the civil service and local government gets entangled in rules and limitations about what they can spend, how they can spend it, when they can spend it, and on what they can spend it. Which means that if you can’t account down to the last fifty cents on the invoices to the auditors who come in every year to examine the accounts, you are in genuine trouble. I did an evening course in computerised accounts to upskill last year, and most of those on the course were in local government/town council work. One of the students told the course tutor that she had trouble reconciling her accounts – she was out something around €5,000.
The tutor, who came from private industry, told her not to worry, that this was within acceptable limits for business. The rest of the class laughed hollowly and explained to him that, if we were out €5 on accounts in the public service, the auditors would haul us over the coals. He was astonished that we had so little autonomy, but that’s what happens when the decisions have to be kicked upstairs to the Minister because every penny has to be accounted for, because it’s cheap and easy PR for a government representative to raise a question about public spending in parliament to make it sound to his constituents as if he’s a watchdog on the spending of the taxes.
This resonated with me because it was part of the reason I find libertarians actively dangerous. I didn’t view libertarianism as dangerous because it would create a small government and that would be bad – libertarians have never been remotely successful in creating a small government, and if they did maybe that would be good.
The problem with libertarians is that they don’t make government smaller, they just make it more defensive.
Accuse government officials of bias and corruption, point to their lack of documentation as proof of their guilt, then accuse them of being paper-pushing bureaucrats who prefer forms to efficiency when they start obsessively documenting everything they do. Make the police fill out endless forms before going to catch a criminal because the criminal’s rights might be violated; then when the police have to break the rules in order to keep order, say that their dishonest ways prove they need more limitations and surveillance. Take a research agency, cut the funding it needs to do good research because it’s a parasitic leech and the government just wastes all its money anyway, and then when it can’t do good research with the remainder, pat yourself on the back for predicting its inevitable failure. Criticize every single government decision for nepotism or racism or idiocy, then when the government switches to doing everything according to a single standard procedure in some manual, accuse them of being inhuman and unable to react to changes in the situation.
What Alexander’s saying here, in short, is that accusations of government waste and inefficiency can often become self-fulfilling prophecies. The more pressure government institutions are put under to provide maximum accountability while also cutting costs as much as possible, the less freedom and flexibility they have to do their jobs. And the ultimate result is often the kind of “compromise” that ends up pleasing no one. As commenter PostalPenguin observes:
Congress generates compromises that are worse than either party’s own idea.
It’s like building a house. Democrats want a complete house while Republicans block the installation of a roof and windows because of “cost”. So in a year the house is falling apart and requires more money for repairs just to keep from falling down. Even then the house doesn’t really accomplish what a house is supposed to do: keep you safe and out of the elements. Republicans sit back and say “Look at that POS house the Democrats are wasting your taxpayer money on!”
Of course, as undesirable as these kinds of results are at the object level, PostalPenguin is right to point out that on a more strategic level, they can actually be good for anti-statists – because the more wasteful and incompetent the government appears to be, the more it feeds into their narrative and undermines the credibility of government itself as an institution. Any time there’s some kind of big crisis over, say, debt ceiling negotiations, and it’s unclear if the government is even going to be able to continue to function at all, or if it’s all going to go to hell in a handbasket, this is actually the opposite of a crisis for anti-statists – because after all, keeping the government from being able to accomplish anything is exactly what they want; that’s the whole goal of their ideology. So they have every incentive to try and create such messes, not avoid them; as far as they’re concerned, the more they can weaken the potency of Big Government, the better. As commenter philasurfer puts it:
One of the reasons Republicans can always come across as in control is because they have no investment in government. What I mean by that is, the more government appears dysfunctional, the more it plays into Republican arguments about the failures of government. So Republicans always have the nuclear option in their pocket. They can shut down government, defund things, basically sabotage the government.
Democrats do not have that luxury. They need government to function well in order for their arguments to succeed. As a result, [they] are forced to concede a lot of ground to Republicans in order to keep government functioning and working. Democrats are basically always negotiating with terrorists who are willing to kill the hostage.
In other words, Republicans have the ironic advantage of being able to achieve their goals precisely by screwing everything up; to paraphrase P.J. O’Rourke, they can run for office on the platform that government doesn’t work, and then prove it once they’re elected.
Mike Lofgren, a former long-time Republican staffer in Washington, attests to the accuracy of this point:
A couple of years ago, a Republican committee staff director told me candidly (and proudly) what the method was to all this obstruction and disruption. Should Republicans succeed in obstructing the Senate from doing its job, it would further lower Congress’s generic favorability rating among the American people. By sabotaging the reputation of an institution of government, the party that is programmatically against government would come out the relative winner.
A deeply cynical tactic, to be sure, but a psychologically insightful one that plays on the weaknesses both of the voting public and the news media. There are tens of millions of low-information voters who hardly know which party controls which branch of government, let alone which party is pursuing a particular legislative tactic. These voters’ confusion over who did what allows them to form the conclusion that “they are all crooks,” and that “government is no good,” further leading them to think, “a plague on both your houses” and “the parties are like two kids in a school yard.” This ill-informed public cynicism, in its turn, further intensifies the long-term decline in public trust in government that has been taking place since the early 1960s – a distrust that has been stoked by Republican rhetoric at every turn (“Government is the problem,” declared Ronald Reagan in 1980).
The media are also complicit in this phenomenon. Ever since the bifurcation of electronic media into a more or less respectable “hard news” segment and a rabidly ideological talk radio and cable TV political propaganda arm, the “respectable” media have been terrified of any criticism for perceived bias. Hence, they hew to the practice of false evenhandedness. Paul Krugman has skewered this tactic as being the “centrist cop-out.” “I joked long ago,” he says, “that if one party declared that the earth was flat, the headlines would read ‘Views Differ on Shape of Planet.’”
Inside-the-Beltway wise guy Chris Cillizza merely proves Krugman right in his Washington Post analysis of “winners and losers” in the debt ceiling impasse. He wrote that the institution of Congress was a big loser in the fracas, which is, of course, correct, but then he opined: “Lawmakers – bless their hearts – seem entirely unaware of just how bad they looked during this fight and will almost certainly spend the next few weeks (or months) congratulating themselves on their tremendous magnanimity.” Note how the pundit’s ironic deprecation falls like the rain on the just and unjust alike, on those who precipitated the needless crisis and those who despaired of it. He seems oblivious that one side – or a sizable faction of one side – has deliberately attempted to damage the reputation of Congress to achieve its political objectives.
This constant drizzle of “there the two parties go again!” stories out of the news bureaus, combined with the hazy confusion of low-information voters, means that the long-term Republican strategy of undermining confidence in our democratic institutions has reaped electoral dividends. The United States has nearly the lowest voter participation among Western democracies; this, again, is a consequence of the decline of trust in government institutions – if government is a racket and both parties are the same, why vote? And if the uninvolved middle declines to vote, it increases the electoral clout of a minority that is constantly being whipped into a lather by three hours daily of Rush Limbaugh or Fox News. There were only 44 million Republican voters in the 2010 mid-term elections, but they effectively canceled the political results of the election of President Obama by 69 million voters.
I talked earlier about how one of the things that makes democracy work is voters having the ability to notice when they’re dissatisfied with their representatives’ performance and replace them with someone else. But when one party is able to create a bunch of dysfunction in government and then mislead voters into thinking it’s the other party’s fault (or that it’s just an inherent problem with government itself), this can short-circuit that whole process and undermine democracy’s ability to properly function. Thankfully, this tactic can only be pushed so far before voters catch on to what’s really happening – but in the meantime, all it accomplishes is to needlessly waste a bunch of time and energy and resources that could be better spent on something actually useful.
And creating a bunch of internal disruption to stir up anti-government sentiment isn’t the only way anti-statist politicians can thwart government’s ability to function, either. Another more subtle approach, as Krugman explains, is to cut its legs out from under it by depriving it of funding, and then using that lack of funding as a pretext to gut various programs in the name of “fiscal responsibility”:
Ever since Reagan, the G.O.P. has been run by people who want a much smaller government. In the famous words of the activist Grover Norquist, conservatives want to get the government “down to the size where we can drown it in the bathtub.”
But there has always been a political problem with this agenda. Voters may say that they oppose big government, but the programs that actually dominate federal spending — Medicare, Medicaid and Social Security — are very popular. So how can the public be persuaded to accept large spending cuts?
The conservative answer, which evolved in the late 1970s, would be dubbed “starving the beast” during the Reagan years. The idea — propounded by many members of the conservative intelligentsia, from Alan Greenspan to Irving Kristol — was basically that sympathetic politicians should engage in a game of bait and switch. Rather than proposing unpopular spending cuts, Republicans would push through popular tax cuts, with the deliberate intention of worsening the government’s fiscal position. Spending cuts could then be sold as a necessity rather than a choice, the only way to eliminate an unsustainable budget deficit.
By using this more roundabout technique to target popular government programs – passing huge unfunded tax cuts, then pointing to the resulting budget deficits as evidence that the country can’t afford to keep those programs – Republicans have at various points managed to successfully deprive many such programs of the resources they need to function. As Krugman goes on to point out, though, because those programs are still so popular among voters, merely depriving them of resources hasn’t actually been enough to eliminate them outright; all it’s done is make them worse and less effective. So again, all we’re left with is a government that’s still doing the same things as before, but just doing them worse – like the proverbial half-built house with no roof or windows that ends up being a bigger waste of money than a fully-built house would have been. Government hasn’t actually been stopped; it’s just been stopped from doing anything useful.
Needless to say, this isn’t exactly an optimal model for effective governance. Sure, it’s good and healthy to have some critics within government who can scrutinize its spending decisions and make sure it’s not wasting taxpayers’ money on unworthy causes. And it’s also good to allow room for disagreement and debate about which causes actually are the most appropriate ones for government to get involved in and which aren’t. That’s what democracy is all about. But crude attempts to just wreck the whole thing are no substitute for actually engaging in this democratic process and trying to address areas of disagreement in a straightforward and constructive way. Simply trying to “burn it all down,” while it may be ideologically cathartic for some anti-statists, is hardly ever an actual solution to anything. Commenter geerussell puts it this way:
[It’s] kind of like starving your dog because he had an accident on the carpet. Well, OK that does solve one problem but if you liked having a dog, if you needed a dog for any particular purpose… it’s probably not a smart solution.
If you want better government, however you happen to define “better” … fiscal handcuffs aren’t a shortcut to it. Starving the beast doesn’t make it smarter or better trained, just weaker in the performance of its duties. There’s no avoiding the long, detailed slog of improving policies and process. If we want fewer wars, we have to do the hard work of putting real legal constraints in place. If we want fewer bailouts, we have to do the hard work of building and regulating an improved financial system … and so on across the spectrum of policy.
It’s true that government, like any large institution, can sometimes be inefficient and/or ineffective. Waste and corruption are problems that really do exist and always have to be guarded against, just as they do in any big company or organization. That being said, though, the fact that public institutions don’t always work perfectly doesn’t justify their outright abolition, any more than the fact that private institutions don’t always work perfectly justifies their abolition. Sure, many of us have had frustrating experiences with the DMV or the IRS – but many of us have had equally frustrating experiences with our phone companies, cable companies, credit card companies, insurance companies, and so on, and nobody’s arguing that those sectors should therefore be wiped out of existence. Even after the collapse of the banking sector brought down the entire economy and cost us hundreds of billions of dollars as a country, nobody argued that banks should therefore no longer exist. We all recognize that sectors like banking and phone service, despite having their share of issues, are overall positives for our society. And the same is true of government. A genuinely democratic system of government, when implemented properly, can provide benefits that no private-sector institution can provide. So when we do run into problems within the government, we ought to try to improve it, not dispense with it altogether. To repeat Atcheson’s line from earlier, the solution to bad government is good government, not no government. Corruption and inefficiency obviously ought to be rooted out wherever they do exist – I don’t think any reasonable person would argue otherwise – but trying to resolve such issues by doing away with government entirely is like trying to solve a headache with a guillotine. Sure, it technically solves your immediate problem – but it doesn’t exactly do so in a way that leaves you better off in any kind of broader sense.
XXVII.
And again, it’s worth underscoring that most of the other advanced countries in the world understand this reality perfectly well, and have built healthy, thriving societies as a result. The kind of subculture of anti-statism that we have here in the US – which insists that government can never do anything right, that taxation is theft, etc. – just isn’t really as much of a thing in these other countries, as Paul Kienitz points out:
This received wisdom about the inevitable ineptitude of government is hardly a universally recognized law; rather, it is a peculiarly American cultural attitude, hardly found in most other industrialized nations that have roughly similar experience with problems of governance. It is more a product of anti-statist ideology than a cause of it.
And this relative lack of hostility toward government makes a real difference in those countries’ ability to successfully govern themselves. To return one last time to our recurring example of the Nordic countries, one of the biggest reasons why they’ve been so successful with their governments is that they don’t really have anything equivalent to our Republican Party. They do have right-wing political parties, of course, but they don’t have parties (at least not popular ones) that are so right-wing that their members routinely call for the outright dismantling of the social safety net, or actively try to undermine their government’s ability to even function in the first place. Pretty much everyone in the Nordic countries – even those on the more conservative end of the political spectrum – understands that government is necessary (and in fact preferable) for providing certain services, and agrees that it has a legitimate role to play in improving citizens’ lives. So while naturally, there are still plenty of differences of opinion regarding how that goal should best be pursued, and Nordic governments still have their share of disputes over all kinds of object-level issues, they nonetheless enjoy a pretty universal consensus on the more general question of whether government should even be significantly involved in the economy at all. For them, electing leaders whose primary goal was to “completely get government out of our lives” would make about as much sense as, say, hiring a die-hard Luddite to work as the CEO of Google or something. The Nordic approach might seem like Big Government run amok to a lot of Americans. But Nordics themselves simply regard it as a healthy balance between the public and private sectors. They don’t regard their government as an enemy of the market, but as a legitimate supplement to it; in the same way that the private market allows them to fulfill their needs on an individual basis, the government serves as a tool for them to fulfill their collective needs – simple as that. And accordingly, their societies have flourished; as Krugman writes:
[The Nordic system, often denounced by American conservatives as “socialism,” is in fact] what the rest of the world calls social democracy: A market economy, but with extreme hardship limited by a strong social safety net and extreme inequality limited by progressive taxation.
[And contrary to conservative attempts to liken social democracy to Soviet-style dystopia,] the Nordic countries are not, in fact, hellholes. They have somewhat lower G.D.P. per capita than we do, but that’s largely because they take more vacations. Compared with America, they have higher life expectancy, much less poverty and significantly higher overall life satisfaction. Oh, and they have high levels of entrepreneurship — because people are more willing to take the risk of starting a business when they know that they won’t lose their health care or plunge into abject poverty if they fail.
It seems pretty clear, then, that there’s something of value to be learned from the Nordics’ example. We don’t have to make a binary choice between strong markets and strong social safety nets; as mentioned before, we can have a system in which the strength of one reinforces the strength of the other.
Of course, striking the right balance between the public sector and the private sector isn’t an exact science; there’s not some precise paint-by-numbers formula that every country can follow to achieve perfect results in all circumstances. Nevertheless, there are certain general principles that do seem valuable – including the interesting idea from Ashwin Parameswaran that I mentioned in the last post, which I’ll just repeat again here. Parameswaran notes that here in the US, whenever conservatives complain about the balance tipping too far in the government’s direction, their most cogent complaints tend to be about the scope of government extending too far – i.e. the government needlessly extending its regulatory tentacles into too many private-sector activities – whereas when liberals complain about there being an imbalance, their most cogent complaints tend to be about the scale of government being insufficient – i.e. the government not doing enough in the key areas where it’s most necessary. What Parameswaran proposes, then, is that the best kind of government might be one that’s constrained in its scope while being generous in its scale – i.e. a government that limits itself to a narrow core domain of public goods and services but is very active within that narrow domain. In other words, the best approach might be to give the market as much free rein as possible, and to let the forces of creative destruction exert their full effects even if it means that jobs and businesses are constantly being created and destroyed – but, crucially, to also have a robust government-funded safety net ready to catch anyone whose job or business has fallen victim to this creative destruction, and to quickly re-equip them to bounce back again as smoothly as possible. In Parameswaran’s words:
A robust safety net is as important to maintaining an innovative free enterprise economy as the dismantling of entry barriers and free enterprise are to reducing inequality.
And sure enough, that seems to be exactly the kind of approach that has worked so well for the Nordics. They generally allow the free market to do its thing without trying to micromanage every little transaction with regulations and price controls and so on – but they also make sure that everyone is sufficiently well-protected from the worst downside risks (poverty, homelessness, etc.) that no one has to be afraid of participating fully in the market, taking chances on entrepreneurship, and making the most of their resources. The upshot of this is that Nordics know their government has their back, not that it’s just there to get in their way. And consequently, their attitude toward government isn’t typically one of hostility but one of willing participation; rather than regarding their government as some alien antagonist that can do nothing but destroy their freedom, they tend to regard it more as an extension of their collective democratic will, and therefore as a fundamental expression of their freedom. This isn’t necessarily to say that all of them feel this way, obviously – I don’t want to overgeneralize here – but compared to our attitudes here in the US, at least, there tends to be a much more positive kind of dynamic overall. In the US, we’re a lot more inclined to be suspicious of government, contemptuous of politicians, and loath to give up our tax dollars for any reason, even for ostensibly good reasons. But it’s unfortunate for us that this is the case, because as the Nordics’ example demonstrates, it’s possible to have a relationship with government that’s much more collaborative and productive, and to realize all kinds of positive benefits as a result.
In my opinion, then, it seems worth taking seriously the idea that maybe we can actually do better. If Kienitz really is right that our worst stereotypes about government are more a product of anti-statist ideology than a cause of it, then maybe our relationship with government doesn’t actually have to be as adversarial as it currently is. Maybe if we can start to get past the kind of knee-jerk anti-statism that so often causes us to reflexively dismiss everything the government does as terrible, we’ll find that we can actually build a positive relationship between ourselves and our government, just as other countries have done. And maybe if we can do that, we’ll realize that there’s actually a lot to appreciate about having a functional democratic government – because after all, despite our system’s flaws (and despite the fact that it’s not quite as good as the Nordics’ in many areas), it is still vastly better than practically every other system in world history, and we enjoy all kinds of benefits because of it that most people are never lucky enough to enjoy. We don’t always realize just how much it does for us, as Wheelan points out, but the benefits are real nonetheless, and we do ourselves no favors by believing otherwise:
Thousands of planes landed safely today. Children did not choke on plastic toys. Terrorists did not strike Los Angeles. Government deserves some credit for all of those things. One inherent challenge of public policy is obvious only once you think about it: success is often invisible, or even annoying, since averting harm is not necessarily recognized as success.
Suppose that around 2005, regulators had cracked down on irresponsible mortgage lending and all of the other attendant unsavory practices that led to the global financial crisis. In 2013, would politicians, business leaders, and pundits be heaping praise on this regulatory foresight?
No. Instead, left-leaning critics would be blasting regulators for constraining credit to low-income families and denying them a piece of the American dream (with no awareness that these subprime mortgages would have turned into the American nightmare). Right-leaning critics would be blasting the same regulators for constraining the private sector and hurting profitability (with no awareness that regulators had averted a complete meltdown of the global financial system).
Do you think the CEO of Lehman Brothers would give a speech praising regulators for saving the firm from itself? Of course not, because there would be no awareness that the firm needed saving. This is a stark contrast to the private sector, where success is obvious: innovation, profits, publicity. When government works, we often see nothing but the tax bill—so it should be no great surprise that Americans across the political spectrum are not keen to send bigger checks to the government.
Commenter randomnoise (with tongue firmly in cheek) illustrates the idea even more pointedly:
This morning I was awoken by my alarm clock powered by electricity generated by the public power monopoly regulated by the US Department of Energy. I then took a shower in the clean water provided by the municipal water utility.
After that, I turned on the TV to one of the FCC-regulated channels to see what the National Weather Service of the National Oceanographic and Atmospheric Administration determined the weather was going to be like using satellites designed, built, and launched by the National Aeronautics and Space Administration. I watched this while eating my breakfast of US Department of Agriculture-inspected food and taking the drugs which have been determined as safe by the Food and Drug Administration.
At the appropriate time, as regulated by the US Congress and kept accurate by the National Institute of Standards and Technology and the US Naval Observatory, I get into my National Highway Traffic Safety Administration-approved automobile and set out to work on the roads built by the local, state, and federal Departments of Transportation, possibly stopping to purchase additional fuel of a quality level determined by the Environmental Protection Agency, using legal tender issued by the Federal Reserve. On the way out the door I deposit any mail I have to be sent out via the US Postal Service and drop the kids off at the public school.
After spending another day not being maimed or killed at work thanks to the workplace regulations imposed by the Department of Labor and the Occupational Safety and Health Administration, enjoying another two meals which again do not kill me because of the USDA, I drive my NHTSA car back home on the DOT roads, to my house which has not burned down in my absence because of the state and local building codes and fire marshal’s inspection, and which has not been plundered of all its valuables thanks to the local police department.
I then log on to the Internet which was developed by the Defense Advanced Research Projects Agency and post on freerepublic.com and Fox News forums about how SOCIALISM in medicine is BAD because the government can’t do anything right.
[Q]: Okay, fine. But that’s a special case where, given an infinite budget, they were able to accomplish something that private industry had no incentive to try. And to their credit, they did pull it off, but do you have any examples of government succeeding at anything more practical?
Eradicating smallpox and polio globally, and cholera and malaria from their endemic areas in the US. Inventing the computer, mouse, digital camera, and email. Building the information superhighway and the regular superhighway. Delivering clean, practically-free water and cheap on-the-grid electricity across an entire continent. Forcing integration and leading the struggle for civil rights. Setting up the Global Positioning System. Ensuring accurate disaster forecasts for hurricanes, volcanoes, and tidal waves. Zero life-savings-destroying bank runs in eighty years. Inventing nuclear power and the game theory necessary to avoid destroying the world with it.
[Q]: All right… all right… but apart from better sanitation and medicine and education and irrigation and public health and roads and a freshwater system and baths and public order… what has the government done for us?
I think there’s a way in which government has been othered by 30 or 40 years of neoliberal ideology, so it’s been turned into some alien invading force. And yeah, it’s hugely imperfect, and we’ve all been to the DMV, and a lot of parts of the government are more like the DMV than not. But I also think, when government works, we don’t think about it. I mean, I have lived in other countries. I’ve lived in countries where every meal in a restaurant is a dangerous experience, right. I’m not sure I remember the last time I got sick eating in a restaurant in America. Do you understand, like, what a civilizational accomplishment that is? You understand how hard that is to achieve? If you travel the world you understand that’s a precarious fragile thing that is a goddamn miracle we’ve been able to pull off. And there’s a thousand little miracles like that. When I put a car seat in my car, it does not occur to me that it’s not properly tested, that it won’t work if there’s an accident. Like, when I buy a car, it does not occur to me that, like, it’s not safe – as safe as it could be in the case of an accident. And are there exceptions? Yeah. But in general, we forget that yeah, government’s frustrating and annoying and inefficient, but like, what it is doing in an unsung way every day, that allows us to not be in the situation that, frankly, most countries in the world are still in, is a fucking miracle. So the government is your dysfunctional family, but you don’t go to a restaurant and eat alone on Thanksgiving. You go to your dysfunctional family and you try to make it better.
(And just to reinforce his point, it’s worth mentioning that even the DMV, in spite of its bad reputation, actually seems to have improved quite a bit in recent years, and isn’t considered nearly as unpleasant these days as it used to be.)
To grow and prosper, a country needs laws, law enforcement, courts, basic infrastructure, a government capable of collecting taxes—and a healthy respect among the citizenry for each of these things. These kinds of institutions are the tracks on which capitalism runs. They must be reasonably honest. Corruption is not merely an inconvenience, as it is sometimes treated; it is a cancer that misallocates resources, stifles innovation, and discourages foreign investment. While American attitudes toward government range from indifference to hostility, most other countries would love to have it so good, as New York Times foreign affairs columnist Tom Friedman has pointed out:
I took part in a seminar two weeks ago at Nanjing University in China, and I can still hear a young Chinese graduate student pleading for an answer to her question: “How do we get rid of all our corruption?” Do you know what your average Chinese would give to have a capital like Washington today, with its reasonably honest and efficient bureaucracy? Do you know how unusual we are in the world that we don’t have to pay off bureaucrats to get the simplest permit issued?
The relationship between government institutions and economic growth prompted a clever and intriguing study. Economists Daron Acemoglu, Simon Johnson, and James Robinson hypothesized that the economic success of developing countries that were formerly colonized has been affected by the quality of the institutions that their colonizers left behind. The European powers adopted different colonization policies in different parts of the world, depending on how hospitable the area was to settlement. In places where Europeans could settle without serious hardship, such as the United States, the colonizers created institutions that have had a positive and long-lasting effect on economic growth. In places where Europeans could not easily settle because of a high mortality rate from disease, such as the Congo, the colonizers simply focused on taking as much wealth home as quickly as possible, creating what the authors refer to as “extractive states.”
The study examined sixty-four ex-colonies and found that as much as three-quarters of the difference in their current wealth can be explained by differences in the quality of their government institutions. In turn, the quality of those government institutions is explained, at least in part, by the original settlement pattern. The legal origin of the colonizers—British, French, Belgian—had little influence (though the British come out looking good because they tended to colonize places more hospitable to settlement).
Basically, good governance matters. The World Bank rated 150 countries on six broad measures of governance, such as accountability, regulatory burden, rule of law, graft (corruption), etc. There was a clear and causal relationship between better governance and better development outcomes, such as higher per capita incomes, lower infant mortality, and higher literacy. We don’t have to love the Internal Revenue Service, but we ought to at least offer it some grudging respect.
Granted, it’s not always easy to muster up a whole lot of enthusiasm for the IRS. Despite everything our government does for us – or more accurately, everything we do for ourselves through our government – most Americans’ response when it comes time to pay the tax bill is one of annoyance, if not outright resentment, as Chomsky notes:
In [an idealized] democracy, tax day would be a day of celebration – here, we’ve gotten together as a community, we’ve decided on certain policies, and now we’re moving to implement them by our own participation. But that’s not the way it’s viewed in the US. It’s a day of mourning; there’s this alien entity, which is stealing our hard-earned money from us, and we have to give it up, we have no choice. That reflects the undermining of even a conception of democracy.
No doubt, a lot of this is due to the popular conception of government as corrupt and inefficient; if we’re all convinced that the government is just going to throw away our tax dollars on needlessly wasteful projects, we probably aren’t going to be thrilled about it. But I think part of it is also due to a general reluctance to sacrifice anything of ours for the good of others at all; even if we believed our tax dollars were 100% going to those in need, I think part of us would still subconsciously resent having to give up our own hard-earned wealth, just because it’s ours and we want to keep it. Many of us seem (at least somewhat) to have internalized a model of human behavior which says that because we humans are naturally inclined to act in our own narrow self-interest, always caring solely about ourselves is appropriate and expected – that it’s somehow irrational to want to make sure that others are taken care of too. (This is the Homo economicus model of human behavior that I’ve discussed here previously.) Chomsky describes it acerbically:
[According to this attitude,] Social Security, public schools, and other deviations from [the narrowest conception of self-interest] are based on evil doctrines, among them the pernicious belief that we should care, as a community, whether the disabled widow on the other side of town can make it through the day, or the child next door should have a chance for a decent future. These evil doctrines derive from the principle of sympathy that was taken to be the core of human nature by Adam Smith and David Hume, a principle that must be driven from the mind.
He’s obviously being sarcastic here, but I do think he’s pointing at a kind of attitude that really does exist – not that it’s necessarily wrong to care about other people, of course, but that if nothing else, we don’t have to care about other people; we aren’t obligated to care for them. As Singer writes:
Libertarians resist the idea that we have a duty to help others. Canadian philosopher Jan Narveson articulates that point of view:
We are certainly responsible for evils we inflict on others, no matter where, and we owe those people compensation . . . Nevertheless, I have seen no plausible argument that we owe something, as a matter of general duty, to those to whom we have done nothing wrong.
There is, at first glance, something attractive about the political philosophy that says: “You leave me alone, and I’ll leave you alone, and we’ll get along just fine.” It appeals to the frontier mentality, to an ideal of life in the wide-open spaces where each of us can carve out our own territory and live undisturbed by the neighbors. Yet there is a callous side to a philosophy that denies that we have any responsibilities to those who, through no fault of their own, are in need. Taking libertarianism seriously would require us to abolish all state-supported welfare programs for those who can’t get a job or are ill or disabled, and all state-funded health care for the aged and for those who are too poor to pay for their own health insurance. Few people really support such extreme views.
And it’s true – most people, if you put it to them directly, wouldn’t be able to bring themselves to support something as blatant as the outright abolition of all public social programs. Still, a lot of people feel an underlying reluctance to actually pay the taxes to fund those programs, both because of the nagging unease that their money might be going to someone who doesn’t deserve it, and because of the more general annoyance that they have to give up their money for any reason at all. The result of this, unfortunately, is that even when we make genuine efforts to help needy people, these efforts are often suffused with a thinly-veiled tension; even after we’ve thoroughly means-tested recipients to reassure ourselves that they really do need government assistance, we’ll still gripe about having to pay the taxes to provide that assistance, making it clear how much we mistrust the recipients and begrudge them their inability to fully take care of themselves. As James Gilligan puts it:
The […] ethos of “rugged individualism,” and the social Darwinism that continues to dominate so much public discourse, make it almost impossible for us to take care of people without humiliating them first.
But punishing the needy isn’t exactly an ideal foundation upon which to build a successful civilization. It may be true that there are some people who receive government support who really shouldn’t – welfare cheats, corrupt corporations, etc. (although such cases are much less of an issue than most voters imagine them to be). But there are also a lot of people who genuinely are in need – people who’ve worked hard and done their best, but have simply fallen on hard times or caught some bad breaks. The question we have to answer, then, is whether we want to err on the side of giving more help than is strictly necessary, even if it means occasionally helping someone who doesn’t really need it, or giving too little help, and potentially leaving the genuinely needy out in the cold. For my money, I’d say the answer is obvious: It’s better to err on the side of decency and compassion than to refuse to help anyone out of fear that someone might not have done enough to deserve it.
In fact, speaking more generally, I think it’s quite clear that this whole adversarial attitude, of treating all human interaction as inherently zero-sum, is a fundamentally corrosive one, and one we should be trying our best to get past. When we have this kind of relationship with each other that’s antagonistic by default, it isn’t just bad for us as individuals; it undermines the cohesion of our society as a whole, in a way that makes us all worse off. The truth is, we don’t have to treat our relationships as zero-sum – it really is possible for us to collaborate in ways that make us all better off – and the moments when we realize that and take advantage of it are when our society flourishes most. As Gawande writes:
The Berkeley sociologist Arlie Russell Hochschild spent five years listening to Tea Party supporters in Louisiana, and in her masterly book “Strangers in Their Own Land” she identifies what she calls the deep story that they lived and felt. Visualize a long line of people snaking up a hill, she says. Just over the hill is the American Dream. You are somewhere in the middle of that line. But instead of moving forward you find that you are falling back. Ahead of you, people are cutting in line. You see immigrants and shirkers among them. It’s not hard to imagine how infuriating this could be to some, how it could fuel an America First ideal, aiming to give pride of place to “real” Americans and demoting those who would undermine that identity—foreigners, Muslims, Black Lives Matter supporters, feminists, “snowflakes.”
Our political debates seem to focus on what the rules should be for our place in line. Should the most highly educated get to move up to the front? The most talented? Does seniority matter? What about people whose ancestors were cheated and mistreated?
The mistake is accepting the line, and its dismal conception of life as a zero-sum proposition. It gives up on the more encompassing possibilities of shared belonging, mutual loyalty, and collective gains. America’s founders believed these possibilities to be fundamental. They held life, liberty, and the pursuit of happiness to be “unalienable rights” possessed equally by all members of their new nation. The terms of membership have had to be rewritten a few times since, sometimes in blood. But the aspiration has endured, even as what we need to fulfill it has changed.
When the new country embarked on its experiment in democracy, [something like] health care was too primitive to matter to life or liberty. The average citizen was a hardscrabble rural farmer who lived just forty years. People mainly needed government to insure physical security and the rule of law. Knowledge and technology, however, expanded the prospects of life and liberty, and, accordingly, the requirements of government. During the next two centuries, we relied on government to establish a system of compulsory public education, infrastructure for everything from running water to the electric grid, and old-age pensions, along with tax systems to pay for it all. As in other countries, these programs were designed to be universal. For the most part, we didn’t divide families between those who qualified and those who didn’t, between participants and patrons. This inclusiveness is likely a major reason that these policies have garnered such enduring support.
And this is the important thing to keep in mind when it comes to government; good government, even though it often involves things like taxation and regulation that can annoy us as individuals, is ultimately about making sure everyone can thrive. That’s the whole point of it – creating conditions in which every person, from the most privileged to the most disadvantaged, can have the opportunity for a decent life. This may mean that some people will need more help than others – in fact, it’s basically a certainty that there will always be some people who need more help than others – but to reiterate our point from earlier, that doesn’t mean they’re not worth helping. After all, as judgmental as we can often be about who truly “deserves” help and who doesn’t, the truth is that the people whose lives seem the most unsalvageable are often those who can benefit from help the most. So if we can move ourselves even just a little bit closer to the kind of society in which everyone – even the most miserable among us – can be happy and successful, why shouldn’t we want to do that?
Kelsey Piper has a post which, while not explicitly being about government per se, really gets at the heart of what I’m trying to convey here:
I think a big part of how I see the world is that –
In college I was sick. In particular I was anorexic, and I nearly starved myself to death. I never accomplished anything, made commitments I couldn’t keep, lost track of time, and struggled with the most basic life tasks. I was anxious (mostly because I correctly knew that everything was going horribly) and lazy (because I could not possibly do enough things to matter, and doing things was hard and hurt) and unreliable and terrible. I ended up owing people a lot of money (I have since paid them all back) and failing at things that were really important to me and to other people.
And now I am in a good environment for me. I live with people who I can be reasonably assured don’t hate me and will tell me when they need me to do things differently, and I am no longer anxious. My work has clear expectations and is bite-sized and doesn’t pile up on me, and I reliably deliver it and do a good job. I have enough money I don’t have to deal with the mental overhead of deciding whether to buy the food I want, and I spend that mental overhead on better things. I am still messy and I am still bad at getting places on time, but I’m never late on rent. I am mostly a productive, honest, trustworthy, reliable person and I’m getting better at those things. I have friends and kiss girls (and the occasional boy) and I make a positive difference in peoples’ lives.
Some of the difference was immaturity and lack of skills; much of the difference is that I had starved my brain until it stopped functioning; much of the difference was that I was in an environment that was not shaped to my strengths. But living through it gave me this powerful sense that the difference between a “lazy” person and a “successful” person, between a reliable person and an unreliable person, between a “good” person and a “bad” person, is a lot about whether they are in an environment shaped to their strengths. That almost everybody will be great in the right environment and really really struggle in a bad one. And some people have never ever encountered a bad one and think they’re just inherently great; and some people have never encountered a good one, and think they’re just inherently miserable and hard to get along with and unreliable and untrustworthy.
I absolutely think people are still accountable for the things they do in bad environments. I’ve worked really hard to fix the things I fucked up at when I was sick, and I don’t mean “it’s all the environment” to mean “it’s not you”. Just – the same you who was miserable and did bad things will be happy and do good things, in better circumstances, and lots of the human project is building those circumstances.
I don’t know how to give everyone an environment in which they’ll thrive. It’s probably absurdly hard, in lots of cases it is, in practical terms, impossible. But I basically always feel like it’s the point, and that anything else is missing the point. There are people whose brains are permanently-given-our-current-capabilities stuck functioning the way my brain functioned when I was very sick. And I encounter, sometimes, “individual responsibility” people who say “lazy, unproductive, unreliable people who choose not to work choose their circumstances; if they go to bed hungry then, yes, they deserve to be hungry; what else could ‘deserve’ possibly mean?” They don’t think they’re talking to me; I have a six-figure tech job and do it well and save for retirement and pay my bills, just like them. But I did not deserve to be hungry when I was sick, either, and I would not deserve to be hungry if I’d never gotten better.
What else could ‘deserve’ possibly mean? When I use it, I am pointing at the ‘give everyone an environment in which they’ll thrive’ thing. People with terminal cancer deserve a cure even though right now we don’t have one; deserving isn’t a claim about what we have, but about what we would want to give out if we had it. And so, to me, horrible people who abuse others all the time deserve an environment in which they would thrive and not be able to abuse others, even if we can’t provide one and don’t even have any idea what it would look like and sensibly are prioritizing other people who don’t abuse others. If you have experiences, you deserve good experiences; if you have feelings, you deserve happy feelings; if you want to be loved, you are worthy of love. You flourishing is a moral good; everybody flourishing is in fact the only moral good, the entire thing morality is for. Your actions should have consequences, sure, and we should figure out how to build a world where those consequences are ones that you can handle, and where you can amend the things that you do wrong. When you hurt people, that can change what “you thriving” looks like, because part of thriving is fixing, and growing from, things you have done wrong; but nothing you do can change that it is good for you to thrive.
I reject that I ever deserved to starve, and so I reject that anyone, ever, deserves to starve. I reject that I ever deserved to suffer, and so I reject that anyone, ever, deserves to suffer. Happiness is good. Your happiness is good. And without a single exception anywhere I want you to thrive.
When I talk about making sure everyone is adequately taken care of, this is what I mean. There will be times in all of our lives – even if it’s just when we’re helpless infants – when we’ll be unable to fully take care of ourselves and will need help from others. That help may come in large part from friends and family – or if it’s goods or services that we need, it may come from private-sector sellers or organizations. And ideally, if we’re lucky, we won’t require much more than that; if the market is working properly, it’ll be sufficient to provide us with most of what we need. In some cases, though, the market won’t be enough on its own. In some cases, we’ll need certain services that we either can’t afford to buy in the private sector, or that the private sector simply isn’t equipped to provide at all. And in those cases, government will have a legitimate role to play – not just in making sure that our most basic needs for survival are met, but in providing things like public infrastructure and education and so on that allow us to function productively in a broader society as well. All the things we’ve been discussing throughout this entire post – correcting for externalities, protecting property rights, enforcing regulations to maintain public health and safety, etc. – are things that help create the kind of social conditions in which we aren’t just able to survive, but can actually flourish and realize accomplishments beyond mere survival, like building safe neighborhoods, starting successful businesses, making new scientific discoveries, and so on. And they’re all things that can only be adequately provided by government.
So although it might sometimes feel grating to have to pay taxes, it’s worth bearing in mind that those tax dollars are precisely what have allowed us to elevate our civilization beyond mere subsistence-level existence. Granted, it’s always possible for those tax dollars to be misused – which is why it’s so important to continually maintain strong democratic checks on government power so it doesn’t overstep its bounds and end up doing more harm than good. Government is powerful, so if it’s corrupt or oppressive or flawed in some other serious way, it can choke the life out of a society. But for that very same reason, if it’s not too seriously flawed, it can use its immense power for good, and can nurture a society and help it flourish to an extent that wouldn’t otherwise be possible. To borrow an analogy from Neil Chilson, good government can be like a trellis, which doesn’t get in the way of the vines, but provides them with a support structure that enables them to climb higher than they’d ever be able to alone. And if we can embrace this way of thinking about government – not as something we have to grudgingly put up with, but as a powerful tool that can potentially serve as a multiplier of our productivity and well-being – then maybe we can stop getting so hung up on the idea that government has to be a particular size, and can instead focus on what really matters, which is how much good it’s doing. Our answer to the question of “How big should our government be?” can simply be “However big it needs to be to maximize our well-being.” And in turn, we can give ourselves the ability to accomplish more collectively than we ever could as separate, isolated individuals.
There’s a popular phrase, “The Law of the Jungle,” which is generally used interchangeably with phrases like “survival of the fittest” and “everyone for themselves” as a way of indicating that we live in a dog-eat-dog world in which no one can truly trust anyone except themselves. In other words, “The Law of the Jungle” is basically taken as a shorthand for a complete absence of laws. Ironically, though, the actual origin of the phrase is a poem in Rudyard Kipling’s The Second Jungle Book, which lays out a code of conduct specifically designed to prevent such disharmony and ensure that everyone will be able to live in cooperation with each other. And its most memorable line is a flat-out negation of the “everyone for themselves” mentality, which simply says:
The strength of the pack is the wolf, and the strength of the wolf is the pack.
I don’t think there’s any better way of encapsulating what it means to have a healthy, productive society. Our ability to succeed as a collective whole depends largely on how much we can nurture the talents of our individual members; and by that same token, our ability as individuals to make the most of our talents depends largely on how much opportunity is created for us by our society. In the interest of creating the greatest possible opportunity for everyone in our society, then, here’s to working together. ∎