And this brings us to one of the biggest ways in which government can enhance the effectiveness of the private sector: by doing the spending that the private sector can’t or won’t do for itself. We’ve been talking about the importance of having a government that can come to the rescue when the private sector has overinvested in the wrong things and produced massive negative externalities for the rest of the economy. But in addition to these instances in which the government must come in after the fact to clean up the messes created by market failures, there are also cases in which the government can help from the opposite direction – by identifying areas in which the private sector is chronically underinvesting in crucial goods and services (and thereby underperforming its full potential), and making up the difference by investing in those areas itself. After all, in the same way that market transactions can sometimes produce negative externalities by imposing costs on outside parties without accounting for them, they can just as easily produce positive externalities by creating benefits for outside parties without accounting for them – and when this happens, the government may be justified in intervening to increase the production of those positive externalities, just as it may be justified in intervening to decrease the production of negative externalities, so as to bring both in line with their socially optimal levels.
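To make that idea a bit more concrete, here is a minimal numeric sketch of the standard textbook treatment – every number below is invented purely for illustration – showing how a per-unit subsidy equal to the external benefit can move a market from its private equilibrium to the socially optimal quantity:

```python
# Minimal sketch of a positive externality and a corrective subsidy.
# All numbers are hypothetical and chosen only to keep the arithmetic simple.

a, b = 100.0, 1.0   # marginal private benefit curve: MPB(q) = a - b*q
c = 40.0            # constant marginal cost of supplying one unit
e = 20.0            # external benefit per unit enjoyed by third parties

# Left alone, buyers purchase until their own marginal benefit equals the price:
q_private = (a - c) / b            # 60 units

# A planner would also count the external benefit e:
q_social = (a + e - c) / b         # 80 units

# A per-unit subsidy equal to e lowers the effective price buyers face to c - e,
# so the market on its own now chooses the socially optimal quantity:
q_subsidized = (a - (c - e)) / b   # 80 units, matching q_social

print(q_private, q_social, q_subsidized)
```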
So what are some areas where this kind of thing can happen? Sowell starts us off with a minor, everyday example just to illustrate the basic idea:
[In the same way that transactions may impose costs on uninvolved third parties], there may be transactions that would be beneficial to people who are not party to the decision-making, and whose interests are therefore not taken into account. The benefits of mud flaps on cars and trucks may be apparent to anyone who has ever driven in a rainstorm behind a car or truck that was throwing so much water or mud onto his windshield as to dangerously obscure vision [or for that matter, kicking up pieces of gravel or loose rocks]. Even if everyone agrees that the benefits of mud flaps greatly exceed their costs, there is no feasible way of buying these benefits in a free market, since you receive no benefits from the mud flaps that you buy and put on your own car, but only from mud flaps that other people buy and put on their cars and trucks.
These are “external benefits.” Here again, [as with external costs,] it is possible to obtain collectively through government what cannot be obtained individually through the marketplace, simply by having laws passed requiring all cars and trucks to have mud flaps on them.
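The logic of the mud-flap example is the classic collective-action trap: each driver bears the full cost of the flaps while the benefit goes entirely to the drivers behind them. A toy payoff calculation – with numbers invented purely for illustration – makes the point explicit:

```python
# Toy payoff calculation for the mud-flap example; all figures are invented.

cost = 20      # what a set of mud flaps costs the driver who installs them
benefit = 50   # value of the protection, enjoyed entirely by *other* drivers

# Acting alone: you pay the cost and capture none of the benefit, so nobody buys.
payoff_buying_alone = -cost              # -20

# Acting collectively (e.g. a law requiring flaps on every vehicle): each driver
# still pays the cost once, but now enjoys the benefit of everyone else's flaps.
payoff_under_mandate = benefit - cost    # +30 per driver

print(payoff_buying_alone, payoff_under_mandate)
```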
Wheelan provides a couple more such examples:
It is worth noting that there can be positive externalities as well [as negative ones]; an individual’s behavior can have a positive impact on society for which he or she is not fully compensated. I once had an office window that looked out across the Chicago River at the Wrigley Building and the Tribune Tower, two of the most beautiful buildings in a city renowned for its architecture. On a clear day, the view of the skyline, and of these two buildings in particular, was positively inspiring. But I spent five years in that office without paying for the utility that I derived from this wonderful architecture. I didn’t mail a check to the Tribune Company every time I glanced out the window. Or, in the realm of economic development, a business may invest in a downtrodden neighborhood in a way that attracts other kinds of investment. Yet this business is not compensated for anchoring what may become an economic revitalization, which is why local governments often offer subsidies for such investment.
Things like making cities more physically attractive are often overlooked or dismissed as unimportant or superficial – and to be sure, they aren’t the most important things in the world – but attractive living conditions are something to which people ascribe real positive value (nobody wants to live in an area that’s all ugly gray architecture and trash in the streets), so a truly efficient market does need to take that positive value into account and make adjustments accordingly.
Beyond these kinds of concerns, though, the impact of positive externalities extends into much more serious domains as well – including areas like public health, where they can sometimes be matters of life and death. Consider, for instance, the example given by commenter LordeRoyale of protecting the population against contagious diseases:
Take vaccines, for example. If one person vaccinates herself against a disease, she is less likely to catch it. But because she is less likely to catch it, she is less likely to become a carrier and infect other people. Thus, getting vaccinated conveys a positive externality. If getting vaccinated has some cost, either in money, time, or risk of adverse side effects, too few people will choose to get themselves vaccinated because they will likely ignore the positive externalities when weighing the costs and benefits. The government may remedy this problem by subsidizing the development, manufacture, and distribution of vaccines or by requiring vaccination.
Or consider the example of education, as raised by Sachs:
Many social sectors exhibit strong spillovers (or externalities) in their effects. I want you to sleep under an antimalarial bed net so that a mosquito does not bite you and then transmit the disease to me! For a similar reason, I want you to be well educated so that you do not easily fall under the sway of a demagogue who would be harmful for me as well as you. When such spillovers exist, private markets tend to undersupply the goods and services in question. For just this reason, Adam Smith called for the public provision of education: “An instructed and intelligent people . . . are more disposed to examine, and more capable of seeing through, the interested complaints of faction and sedition. . . .” Smith argued, therefore, that the whole society is at risk when any segment of society is poorly educated.
Even aside from this civic-minded justification, having a society full of knowledgeable, educated citizens makes everyone in that society better off, regardless of whether they’ve received any of the education themselves, simply because more highly educated people are less likely to engage in violent crime and other anti-social activities, and are more likely to make positive contributions like producing valuable breakthroughs in science, technology, and other knowledge-intensive fields. As Sachs notes, though, these spillover effects are never actually included in private-sector prices, since the recipients of those ancillary benefits aren’t the ones paying for the education directly – so if left solely to private markets, education tends to be undersupplied and overpriced, just as negative externalities tend to be oversupplied and underpriced. The market alone produces inefficient results; it’s therefore possible for the government to improve things by getting involved.
On a related note, another area in which government support for positive externalities can make a major difference is in providing funding for scientific and technological research directly. Scientific advancements like (say) the discovery of some new medical treatment, or the invention of the internet, can have significantly bigger impacts on our quality of life than anything that might be going on in the political realm (despite the latter being where we typically prefer to devote the overwhelming share of our attention and energy). But again, the value of such advancements is often spread across the whole of society, and can’t be fully compensated by the market mechanism alone; the positive externalities at play are often significant. What’s more, this kind of scientific research can often require sizeable up-front investment, and may take years before it yields anything that could be profitable at all (if it ever does). For private firms that depend on short-term profitability in order to stay in business, it simply isn’t feasible to invest in such pursuits fully enough to generate the maximum possible social benefit. This is yet another area, then, where government investment can be indispensable. As Wheelan writes:
We [all know about] the powerful incentives that profits create for pharmaceutical companies and the like. But not all important scientific discoveries have immediate commercial applications. Exploring the universe or understanding how human cells divide or seeking subatomic particles may be many steps removed from launching a communications satellite or developing a drug that shrinks tumors or finding a cleaner source of energy. As important, this kind of research must be shared freely with other scientists in order to maximize its value. In other words, you won’t get rich—or even cover your costs in most cases—by generating knowledge that may someday significantly advance the human condition. Most of America’s basic research is [therefore] done either directly by the government at places like NASA and the National Institutes of Health or at research universities, which are nonprofit institutions that receive federal funding.
Sachs emphasizes the same point:
[Recall that a nonrival good is one where] the use of the [good] by one citizen does not diminish its availability for use by others. A scientific discovery is a classic nonrival good. Once the structure of DNA has been discovered, the use of that wonderful knowledge by any individual in society does not limit the use of the same knowledge by others in society. Economic efficiency requires that the knowledge should be available for all, to maximize the social benefits of the knowledge. There should not be a fee for scientists, businesses, households, researchers, and others who want to utilize the scientific knowledge of the structure of DNA! But if there is no fee, who will invest in the discoveries in the first place? The best answer is the public, through publicly financed institutions like the National Institutes of Health (NIH) in the United States. Even the free-market United States invests $27 billion in publicly financed knowledge capital through the NIH.
To be clear, this isn’t to say that private firms and individuals can never make scientific breakthroughs without government funding; obviously they do so all the time. But the breakthroughs that they do make tend to be geared more toward relatively short-term and narrowly-focused applications that can produce immediate profits, and not so much toward the kind of “big picture” research that might benefit the world in a broader sense despite not being immediately profitable. That latter kind of research more often relies on government funding.
In any case, even the shorter-term, more profit-focused efforts that private-sector innovators generally pursue wouldn’t be possible in the first place without government involvement – specifically, without the government guaranteeing innovators sufficient intellectual property rights over their breakthroughs to be able to profit from them. As Taylor explains:
Thomas Edison’s first invention was a vote-counting machine. It worked just fine, but no one bought it and Edison vowed from then on to make only inventions people would actually buy. More recently, Gordon Gould put off patenting the laser, an idea he came up with in 1957. He had his working notebooks notarized to be sure he could prove when he had developed the idea, but he mistakenly believed he needed a working prototype before he could apply for a patent. By the time he did apply, other scientists were putting his ideas into action. It took twenty years and $100,000 in legal fees for him to earn some money from the invention.
These examples help to illustrate the reason why a free market may produce too little scientific research and innovation: there is no guarantee that an unfettered market will reward the inventor. Imagine a company that’s planning to invest a lot of money in research and development on a new invention. If the project fails, the company will have lower net profits than its competitors, and maybe it will even suffer losses and be driven out of business. The other possibility is the project succeeds; in that case, in a completely unregulated free market, competitors can just steal the idea. The innovating company will incur the development expenses but no special gain in revenues. It will still have lower net profits than all its competitors and may still be driven out of business. Heads, I lose; tails, you win.
In conceptual terms, new technology is the opposite of pollution. In the case of pollution, parties external to the transaction between producer and consumer suffered the environmental costs. With new inventions, parties external to the transaction between producer and consumer reap the benefits of these new innovations without needing to compensate the inventor. Thus, innovation is an example of a positive externality.
The key element driving innovation is the ability of an innovator to receive a substantial share of the economic gains from an investment in research and development. Economists call this “appropriability.” If inventors, and the firms that employ them, are not being sufficiently compensated for their efforts, they will provide less innovation. The appropriate policy response to negative externalities such as pollution is to find a way to make the producers face the social costs; conversely, the appropriate policy response to positive externalities such as innovation is to help compensate the producers for their firm’s costs of innovating. Granting and protecting intellectual property rights is one mechanism for accomplishing this goal. Such rights help firms avoid market competition for a set period of time, so that the firm can earn higher-than-normal profits for a while as a return on their investment in innovation.
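Taylor’s “heads I lose, tails you win” point is easy to see with a back-of-the-envelope calculation. The figures below are entirely hypothetical, but they show how the expected return on the very same R&D project flips sign depending on how much of the resulting value the innovator can appropriate:

```python
# Back-of-the-envelope sketch of appropriability; all figures are hypothetical.

rd_cost = 10.0        # up-front R&D spending (say, $ millions)
p_success = 0.5       # chance the project pans out
value_created = 40.0  # total commercial value generated if it does

share_with_ip = 0.8     # share of that value the innovator keeps with enforceable IP
share_without_ip = 0.1  # share it keeps when imitators can freely copy the idea

profit_with_ip = p_success * share_with_ip * value_created - rd_cost        # +6.0
profit_without_ip = p_success * share_without_ip * value_created - rd_cost  # -8.0

print(profit_with_ip, profit_without_ip)
```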
Again, though, while granting exclusive proprietary rights can be one good mechanism for incentivizing innovation, for many types of research it isn’t an entirely adequate solution – either because the whole point of the research in question is to produce benefits that are shared by everyone, or because it’s not the kind of research that can be profited from at all. That’s why a more straightforward approach in many cases, as Harford notes, is simply to subsidize the research directly (along with whichever other positive externalities are currently being undersupplied by the market):
Just as negative externalities will tend to lead to too much pollution or congestion, positive externalities will leave us undervaccinated, with scruffy neighbors, and a dearth of pleasant cafés. And while negative externalities attract all the attention, positive externalities may be even more important: so many of the things that make life worth living are, in fact, subject to positive externalities and are underprovided: freedom from disease, honesty in public life, vibrant neighborhoods, and technological innovation.
Once we realize the importance of positive externalities, the obvious solution is the mirror image of the policies we considered to deal with negative externalities: instead of an externality charge, an externality subsidy. Vaccinations, for example, are often subsidized by governments or by aid agencies; scientific research, too, usually gets a big dose of government funding.
Taylor elaborates a bit more on what exactly this entails:
The U.S. government uses a range of policies to subsidize innovation. It directly funds scientific research through grants to universities, private research organizations, and businesses. According to the National Science Foundation, in 2008 some $397 billion was spent on research and development in the United States; 65 percent of that was spent by industry, 25 percent by the federal government, and the rest by the nonprofit and educational sector—including state universities. Most of the R&D in the United States is paid for by private industry, with the government’s share shrinking since the aerospace- and defense-related research boom of the 1960s and ’70s. One advantage of industry-funded R&D is that it is likely to focus on applied technology with near-term payoffs. Government-funded research, on the other hand, focuses on big-picture discoveries that might stretch across multiple industries with payoffs that might not appear for decades, such as breakthroughs in how we think about physics or biology. Government-funded research is more often released directly into the public domain, meaning the results are available to anyone who wants to build on them. Firm-financed research is typically subject to patent and trade secret law, so in many cases government-funded research disseminates more quickly through the economy.
If you were judging solely by the percentages, you might assume that government-funded research would take a clear backseat to privately-funded research, and that the latter must be the real force driving scientific and technological progress. This can certainly be true in some cases – but it’s most definitely not true as a universal rule. As Rana Foroohar writes:
In the movie Steve Jobs, a character asks, “So how come 10 times in a day I read ‘Steve Jobs is a genius?’” The great man reputation that envelops Jobs is just part of a larger mythology of the role that Silicon Valley, and indeed the entire U.S. private sector, has played in technology innovation. We idolize tech entrepreneurs like Jobs, and credit them for most of the growth in our economy. But University of Sussex economist Mariana Mazzucato, who has just published a new U.S. edition of her book, The Entrepreneurial State: Debunking Public vs. Private Sector Myths, makes a timely argument that it is the government, not venture capitalists and tech visionaries, that have been heroic.
“Every major technological change in recent years traces most of its funding back to the state,” says Mazzucato. Even “early stage” private-sector VCs come in much later, after the big breakthroughs have been made. For example, she notes, “The National Institutes of Health have spent almost a trillion dollars since their founding on the research that created both the pharmaceutical and the biotech sectors—with venture capitalists only entering biotech once the red carpet was laid down in the 1980s. We pretend that the government was at best just in the background creating the basic conditions (skills, infrastructure, basic science). But the truth is that the involvement required massive risk taking along the entire innovation chain: basic research, applied research and early stage financing of companies themselves.” The Silicon Valley VC model, which has typically dictated that financiers exit within 5 years or so, simply isn’t patient enough to create game changing innovation.
Mazzucato’s book cites powerful data and anecdotes. The parts of the smart phone that make it smart—GPS, touch screens, the Internet—were advanced by the Defense Department. Tesla’s battery technologies and solar panels came out of a grant from the U.S. Department of Energy. Google’s search engine algorithm was boosted by a National Science Foundation innovation. Many innovative new drugs have come out of NIH research.
And Alexander lists still more examples:
[Q]: State-run companies may be able to paper-push with the best of them, but the government can never be truly innovative. Only the free market can do that. Look at Silicon Valley!
Advances invented either solely or partly by government institutions include […] the computer, mouse, Internet, digital camera, and email. Not to mention radar, the jet engine, satellites, fiber optics, artificial limbs, and nuclear energy. And that doesn’t include the less recognizable inventions used mostly in industry, or the scores of other inventions from government-funded universities and hospitals.
Even those inventions that come from corporations often come not from startups exposed to the free market, but from de facto state-owned monopolies. For example, during its fifty years as a state-sanctioned monopoly, the infamous Ma Bell invented (via its Bell Labs division) transistors, modern cryptography, solar cells, the laser, the C programming language, and mobile phones; when the monopoly was broken up, Bell Labs was sold off to Alcatel-Lucent, which after a few years announced it was cutting all funding for basic research to focus on more immediately profitable applications.
Although the media celebrates private companies like Apple as centers of innovation, Apple’s expertise lies, at best, in consumer packaging. They did not invent the computer, the mp3 player, or the mobile phone, but they developed versions of these products that were attractive and easy to use. This is great and they deserve the acclaim and heaps of money they’ve gathered from their success, but let’s make sure to call a spade a spade: they are good at marketing and design, not at brilliant invention of totally new technologies.
That sort of de novo invention seems to come mostly from very large organizations that can afford basic research without an obsession on short-term profitability. Although sometimes large companies like Ma Bell, invention-rich IBM and Xerox can fulfill this role, such organizations are disproportionately governments and state-sponsored companies, explaining their impressive track record in this area.
One of the most dramatic examples of government spending producing a massive surge in technological progress came during World War II, as Daniel P. Gross and Bhaven N. Sampat document:
During World War II, the US government’s Office of Scientific Research and Development (OSRD) supported one of the largest public investments in applied R&D in US history. Using data on all OSRD-funded invention, we show this shock had a formative impact on the US innovation system, catalyzing technology clusters across the country, with accompanying increases in high-tech entrepreneurship and employment. These effects persist until at least the 1970s and appear to be driven by agglomerative forces and endogenous growth. In addition to creating technology clusters, wartime R&D permanently changed the trajectory of overall US innovation in the direction of OSRD-funded technologies.
But this leap forward was just part of a larger trend that has proven itself repeatedly: When the US government really decides to put its mind to it and dedicate significant resources to scientific research – whether for its own sake or for the purposes of some big military effort – it consistently produces advances in technology that are nothing short of transformative. As Noam Chomsky points out:
[Military spending has] played a prominent role in technological and industrial development throughout the modern era. That includes major advances in metallurgy, electronics, machine tools, and manufacturing processes, including the American system of mass production that astounded nineteenth-century competitors and set the stage for the automotive industry and other manufacturing achievements, based on many years of investment, R&D, and experience in weapons production within US Army arsenals. There was a qualitative leap forward after World War II, this time primarily in the US, as the military provided a cover for the creation of the core of the modern high-tech economy: computers and electronics generally, telecommunications and the Internet, automation, lasers, and the commercial aviation industry, and much else, now extending to nanotechnology, biotechnology, neuroengineering, and other new frontiers.
Of course, as he also goes on to emphasize, achieving this kind of technological progress by way of military spending isn’t actually the most productive way of doing so; if you’re spending billions of dollars on new weapons systems and munitions whose only ultimate purpose is to be destroyed, it’s generally quite a bit more wasteful than using that same money to develop technologies that can be put toward more constructive purposes. Unfortunately though, because so many American conservatives share such a strong knee-jerk opposition to every kind of government spending except military spending, using the pretense of national defense is often the only way of getting such spending approved at all.
Nevertheless, the US government does continue to sponsor all kinds of scientific research for civilian applications as well as military ones, and as a result it continues to accelerate our progress as a species in all kinds of critical areas. One of the most important recent examples of this is how government support has helped advance renewable forms of energy like solar power; as Alexander notes, “government subsidies to solar seem to have been a very successful attempt to push solar out of an area where it wasn’t profitable to improve into an area where it is.” If at some point in the near future we’re all living in a world powered primarily by renewable energy, we’ll in no small part have government to thank for it. But while this is just one of the more obvious areas where investment in technology can make us all better off, the same is true for every sector of the economy – and in fact, it’s far more true than even most pro-technology advocates might realize. As Taylor explains, economists estimate that for an advanced economy like the US, fully half of all economic growth can be attributed to the ongoing development of new productivity-enhancing technologies:
The underlying cause of long-term economic growth is a rise in productivity growth—that is, higher output per hour worked or higher output per worker. The three big drivers of productivity growth are an increase in physical capital, that is, more capital equipment for workers to use on the job; more human capital, meaning workers who have more experience or better education; and better technology, that is, more efficient ways of producing things. In practice, these work together in the context of the incentives in a market-oriented economy. However, a standard approach is to calculate how much education and experience per worker have increased and how much physical capital equipment per worker has increased. Then, any remaining growth that cannot be explained by these factors is commonly attributed to improved technology—where “technology” is a broad term referring to all the large and small innovations that change what is produced.
When economists break down the determinants of economic growth for an economy such as the United States, a common finding is that about one-fourth of long-term economic growth can be explained by growth in human capital, such as more education and more experience. Another one-fourth of economic growth can be explained by physical capital: more machinery to work with, more places producing goods. But about one-half of all growth is new technology.
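What Taylor is describing is the standard growth-accounting exercise: measure the contributions you can attribute to physical and human capital, and treat whatever growth is left over as “technology.” Here is a stripped-down sketch, with hypothetical numbers chosen only to roughly match the one-quarter/one-quarter/one-half split he cites:

```python
# Stripped-down growth-accounting sketch; the numbers are hypothetical and
# chosen only to mirror the rough quarter/quarter/half split described above.

total_growth = 0.030             # observed long-run growth in output per worker (3%)
from_physical_capital = 0.0075   # growth attributed to more equipment per worker
from_human_capital = 0.0075      # growth attributed to more education and experience

# Whatever can't be explained by the measured inputs is attributed to technology:
technology_residual = total_growth - from_physical_capital - from_human_capital

print(technology_residual / total_growth)   # 0.5: about half of all growth
```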
In light of this (and also for other reasons I’ll get into in my next post), I’m inclined to think that scientific and technological research is one area where we should not only be doing more than we currently are, but vastly more. We seem to have an inclination as a society, whether consciously or unconsciously, to think of scientific breakthroughs not as a central driving force of our civilization, but almost as a kind of extra bonus on top of the regular output of the economy – in other words, not really something that we should feel like we have to maximize, but just an added bit of “house money” that we might get to enjoy if we’re lucky. But I think this gets it dead wrong; I think that in an ideal world, if we could ever somehow implement a perfect system of government that truly accounted for all potential positive externalities to their fullest extent, we’d find that the task of accelerating the advancement of science and technology wouldn’t just make up a large part of the budget; it would practically be the main thing the government was directing its energy and resources toward. After all, as much as our lives are generally improved by the various other things that governments do – passing new laws and regulations and tax reforms and so on – history has shown that even the greatest of these accomplishments often pale in comparison to the impact produced by scientific breakthroughs like curing smallpox or inventing computers or what have you. (Smallpox is a particularly good case in point; in the last century before its eradication, it was estimated to have killed around 500 million people, plus countless more in the centuries before that – but by 1980 it had been completely eradicated, for a mere cost of around $300 million.) What’s more, as momentous as past scientific breakthroughs have been, we may actually be at a point in history right now where the next breakthroughs on the horizon – the technologies that we’re just now on the verge of unlocking – could be exponentially more world-changing than anything that has come before. For the first time in history, it has become conceivable that within our lifetimes, we could have AIs that are advanced enough to cure every disease, molecular nanofabricators that are advanced enough to instantly give us any physical good we might desire, bioengineering techniques that are advanced enough to let us extend our lifespans for as long as we want, and even more dramatic breakthroughs still. All of these impossibly sci-fi-sounding technologies are not only genuinely attainable – they could be within reach in the very near future. (Again, this will all be discussed more in the next post, but in the meantime you can check out this post by Tim Urban for a glimpse of what I mean.) But if we want to get there – and get there the right way, without destroying ourselves in the meantime – it will require real investment in research; and the more urgency we bring to the task, the better.
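Just to make that smallpox comparison explicit, here’s the rough arithmetic using only the approximate figures cited above:

```python
# Rough arithmetic on the smallpox figures cited above; both inputs are the
# approximate numbers from the text, and everything else is simple division.

deaths_final_century = 500_000_000   # estimated smallpox deaths in its last century
eradication_cost = 300_000_000       # approximate cost of the eradication campaign ($)

avg_deaths_per_year = deaths_final_century / 100        # ~5 million per year

# Even if eradication had prevented only a single further year of deaths at
# that historical rate, the cost per life saved would have been on the order of:
cost_per_life_one_year = eradication_cost / avg_deaths_per_year   # ~$60

print(avg_deaths_per_year, cost_per_life_one_year)
```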
In my opinion, there’s a very real sense in which we should consider government’s most important role right now to simply be serving as a tool for empowering scientists and engineers – ensuring that the background social conditions of stability and prosperity are maintained to a sufficient degree to allow them to pursue their research without impediment, and helping advance them in their quest in every way possible (with funding, education, etc.). As things currently stand, the number of people who are actually out there in the world doing genuinely important scientific research is, once you crunch the numbers, startlingly low (see Josiah Zayner’s post on the subject here). And among the working scientists who do exist, competition for funding and support is often incredibly fierce; scientists are frequently forced to spend an inordinate amount of their time jumping through hoops just to secure some limited resources for their work, rather than actually doing the work itself. In an ideal world, though, we would recognize the harm of impeding the progress of science in this way, and would make it our highest priority to do just the opposite; any person who wanted to get a science or technology degree and pursue scientific research for a living would essentially be given a blank check to do so. No doubt, it would require a lot of spending – considerably more than what we’re devoting to it now – and it might even require issuing substantially more government debt. But if we had our priorities straight, this wouldn’t be a problem, because we’d recognize it as the investment that it is; we’d be fully willing to pay the up-front cost, because we’d understand how much more we stood to gain in the long run. After all, this is how it works for every kind of positive externality – whether it be investing in scientific research or investing in repairing deteriorating infrastructure; as Sowell explains:
If a nation’s highways and bridges are crumbling from a lack of maintenance and repair, that does not appear in national debt statistics, but neglected infrastructure is a burden being passed on to the next generation, just as surely as a national debt would be. If the costs of repairs are worth the benefits, then issuing government bonds to raise the money needed to restore this infrastructure makes sense—and the burden on future generations may be no greater than if the bonds had never been issued, though it takes the form of money owed rather than the form of crumbling and perhaps dangerous infrastructure that may become even more costly to repair in the next generation, due to continued neglect.
Either wartime or peacetime expenditures by the government can be paid for out of tax revenues or out of money received from selling government bonds. Which method makes more economic sense depends in part on whether the money is being spent for a current flow of goods and services, such as electricity or paper for government agencies or food for the military forces, or is instead being spent for adding to an accumulated stock of capital, such as hydroelectric dams or national highways to be used in future years for future generations.
Going into debt to create long-term investments makes as much sense for the government as a private individual’s borrowing more than his annual income to buy a house. By the same token, people who borrow more than their annual income to pay for lavish entertainment this year are simply living beyond their means and probably heading for big financial trouble. The same principle applies to government expenditures for current benefits, with the costs being passed on to future generations.
To be sure, investing in infrastructure repair isn’t exactly like investing in scientific research. In the case of infrastructure, it can often make perfect sense to want to limit spending exclusively to the projects that are sure to produce the greatest benefits, so as to avoid needlessly wasting valuable resources on worthless money pits. When it comes to science investing, on the other hand, it’s often impossible to definitively know in advance exactly which projects will turn out to be the most promising; so if anything, science investing might be better likened to venture capital, where the optimal strategy is to invest in a bunch of moonshot projects – even though you know that most of them will never amount to anything – simply because the payoff when one of them ultimately does succeed will be so enormous that it will outweigh all the losses and make it all worthwhile. (Adam Mastroianni’s post here makes this argument wonderfully.) Sure, it may cause conservatives to launch into fiery tirades about government waste when some of those ventures don’t end up bearing fruit – as some of them inevitably won’t – but it’s a well-worn maxim in venture capital that if none of your investments ever fail, it’s a sign that you’re being far too restrained in your investing and are therefore missing out on a ton of positive value. And the same is true of government investment in science. Yes, investing in science does mean that you’ll have to incur some costs in the short term – but in the long term, you’ll earn back your original outlays and then some; that, after all, is what investment is. And that’s one thing that science investment and infrastructure investment do have in common – along with education investment, public health investment, and all the other examples we’ve been discussing. Government spending on positive externalities like these, which might strike some as “optional” or “superfluous,” is in fact a crucial part of a well-functioning economy, because it’s the only scalable way of unlocking immense stores of value that private markets can’t effectively capture on their own.
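To close with a concrete illustration of that venture-capital logic – every number here is invented purely for the sake of the arithmetic – here’s what a “moonshot portfolio” can look like in expectation, even when the overwhelming majority of individual bets fail:

```python
# Toy "moonshot portfolio" calculation; all numbers are invented for illustration.

num_projects = 100
cost_per_project = 1.0     # in arbitrary units
hit_rate = 0.02            # only 2 projects in 100 produce a breakthrough
payoff_per_hit = 200.0     # value of each breakthrough when it does happen

total_cost = num_projects * cost_per_project                # 100
expected_payoff = num_projects * hit_rate * payoff_per_hit  # 400

# Despite a 98% failure rate, the portfolio comes out well ahead in expectation:
print(expected_payoff - total_cost)   # +300
```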