In recent years, the two names that have become most associated with AI risk are Bostrom and Yudkowsky. Both of them have made extremely compelling arguments about the seriousness of AI risk and the possibility that it could lead to our total extinction. But along the route to becoming famous for these arguments, they’ve also both argued, just as compellingly, that if we could successfully use technology to escape our own mortality – to make it safely to the other side of the canyon, or to safely jump off the conveyor belt, or whichever analogy you prefer – it would be the single most important thing we could do as a species. Twenty years ago, for instance, Bostrom published his “Fable of the Dragon-Tyrant,” which he concluded thus:
Searching for a cure for aging [and “natural” death] is not just a nice thing that we should perhaps one day get around to. It is an urgent, screaming moral imperative. The sooner we start a focused research program, the sooner we will get results. It matters if we get the cure in 25 years rather than in 24 years: a population greater than that of Canada would die as a result. In this matter, time equals life, at a rate of approximately 70 lives per minute. With the meter ticking at such a furious rate, we should stop faffing about.
And a few years earlier, Yudkowsky delivered an even more impassioned mission statement to the same effect:
I have had it. I have had it with crack houses, dictatorships, torture chambers, disease, old age, spinal paralysis, and world hunger. I have had it with a planetary death rate of 150,000 sentient [human] beings per day. I have had it with this planet. I have had it with mortality. None of this is necessary. The time has come to stop turning away from the mugging on the corner, the beggar on the street. It is no longer necessary to look nervously away, repeating the mantra: “I can’t solve all the problems of the world.” We can. We can end this.
And so I have lost, not my faith, but my suspension of disbelief. Strange as the Singularity may seem, there are times when it seems much more reasonable, far less arbitrary, than life as a human. There is a better way! Why rationalize this life? Why try to pretend that it makes sense? Why make it seem bright and happy? There is an alternative!
I’m not saying that there isn’t fun in this life. There is. But any amount of sorrow is unacceptable. The time has come to stop hypnotizing ourselves into believing that pain and unhappiness are desirable! Maybe perfection isn’t attainable, even on the other side of Singularity, but that doesn’t mean that the faults and flaws are okay. The time has come to stop pretending it doesn’t hurt!
Our fellow humans are screaming in pain, our planet will probably be scorched to a cinder or converted into goo, we don’t know what the hell is going on, and the Singularity will solve these problems. I declare reaching the Singularity as fast as possible to be the Interim Meaning of Life, the temporary definition of Good, and the foundation until further notice of my ethical system.
Of course, once again, it really can’t be stressed enough that the most critical part of “reaching the Singularity as fast as possible” – as Bostrom and Yudkowsky themselves will be the first to tell you – is ensuring that we don’t destroy ourselves before we can get there. If our development of AI capabilities is outpacing our development of AI safety, that isn’t helping us reach the Singularity more quickly; all it’s doing is helping us reach an early doom more quickly. And in the years since writing the passages above, as AI development has accelerated, Bostrom and Yudkowsky have shifted their emphasis to this latter point and become more adamant than anyone that we not recklessly plow forward on AI capabilities before we’re sure we’ve got a legitimately solid handle on the safety side of things (which, in their view, we currently don’t). Granted, we’ll never be able to reduce the risk to absolutely zero; at the end of the day, there’s simply no way to guarantee that we’ll be able to cover all our bases if the technology we’re aiming to create is more intelligent than any of us. And even though the risk level will have to be greater than zero, it’ll still be worth accepting that risk – even if it ultimately remains relatively high after all’s said and done – simply because the potential payoff will be so massive. Nevertheless, doing everything we can to minimize the level of risk that we do have to accept has to be priority number one in our broader mission to reach the Singularity as quickly as we can, precisely because the stakes are so high. As for whether we actually will be able to minimize the risk… that’s currently an open question. One thing’s for sure, though: These next few years leading up to the answer will be the biggest moment of truth we’ve ever faced as a species. As Yudkowsky puts it:
I would seriously argue that we are heading for the critical point of all human history. Modifying or improving the human brain, or building strong AI, is huge enough on its own. When you consider the intelligence explosion effect, the next few decades could determine the future of intelligent life.
So this is probably the single most important issue in the world. Right now, almost no one is paying serious attention. And the marginal impact of additional efforts could be huge.
And Bostrom agrees, summing up the whole issue in his TED talk before concluding with the line: “I can imagine that if things turn out okay, that people a million years from now look back at this century, and it might well be that they say that the one thing we did that really mattered was to get this thing right.”
I think this is spot on. For all intents and purposes, reaching the Singularity will either mean an end to all of our troubles, or an end to us. If it’s really true, then, that this critical turning point will actually happen within our lifetimes – maybe even within these next couple of decades – we should be devoting as much time and energy to it now as we possibly can. As Urban writes:
No matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.
It reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam [poised between extinction and immortality], squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.
And when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.
That’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.
So let’s talk about it.
What exactly would it mean to give this issue as much attention as it deserves? Well, aside from just trying to make it a bigger part of our collective conversation, as Urban suggests, another thing that would certainly help would be to immediately start pouring as much of our resources (financial and otherwise) into the relevant research areas as possible. AI research has already begun attracting a ton of private-sector funding in recent years as its potential has become increasingly apparent – which is a good start. But I’ll also just reiterate what I said in my last post, which is that the public sector should be throwing its entire weight behind such research as well – and not just in AI, but in all the other fields I’ve been discussing here (nanotechnology, brain-machine interfacing, etc. – including, of course, safety research for all of the above). As I wrote before, there’s a very real sense in which we should consider government’s most important role right now to be serving as a tool for empowering scientists and engineers and programmers – ensuring that the background social conditions of stability and prosperity are maintained to a sufficient degree to allow them to pursue their research without impediment, and helping advance them in their quest in every way possible (with funding, education, etc.). As things currently stand, the number of people who are actually out there in the world doing genuinely important scientific research (particularly the kind of research that would be directly applicable to the Singularity) is, once you crunch the numbers, startlingly low (see Josiah Zayner’s post on the subject here). And among the working researchers who do exist, competition for funding and support is often incredibly fierce; researchers are frequently forced to spend an inordinate amount of their time jumping through hoops just to secure some limited resources for their work, rather than actually doing the work itself. In an ideal world, though, we would recognize the harm of impeding progress in this way, and would make it our highest priority to do just the opposite; any person who wanted to get a science or technology degree and pursue important research for a living would essentially be given a blank check to do so – think “Manhattan Project on steroids.” To quote Yudkowsky again:
Probably a lot of researchers on paths to the Singularity are spending valuable time writing grant proposals, or doing things that could be done by lab assistants. It would be a fine thing if there were a Singularity Support Foundation to ensure that these people weren’t distracted. There is probably one researcher alive today – Hofstadter, Drexler, Lenat, Moravec, Goertzel, Chalmers, Quate, someone just graduating college, or even me – who is the person who gets to the Singularity. Every hour that person is delayed is another hour to the Singularity. Every hour, six thousand people die. Perhaps we should be doing something about this person’s spending a fourth of [their] time and energy writing grant proposals.
No doubt, this kind of all-out research support would require a lot of spending – considerably more than what we’re devoting to it now – and it might even require issuing considerably more government debt. But if we had our priorities straight, this wouldn’t be a problem, because we’d recognize that in a post-Singularity world, with nanotechnology and advanced AI and everything else, whatever government debt we’d accrued up to that point would simply no longer be an issue, one way or another. Either our technology will have made us so fabulously wealthy that we’ll have no trouble repaying the debt (or we won’t even have any need for money at all, since we’ll be living in a post-scarcity techno-utopia), or else we ourselves will no longer exist. In either case, the immediate financial cost of throwing all our weight behind these research efforts isn’t the thing we should be worried about today.
Now, having said all that, are there other economic considerations that we should be worried about in the short term? We’ve been talking all about what might happen once we actually reach the finish line and achieve ASI and everything else, but just to shift gears a bit, what about the transitional period between now and then? How will all of this technological upheaval affect the economy more broadly, especially regarding things like employment? Sure, the researchers working in Singularity-relevant fields might get all the support and funding they need, but if AIs are about to start taking over more and more human tasks going forward, where will that leave the rest of us who aren’t employed in technological research? Are AIs just going to steal all of the jobs and create mass unemployment? Are they just going to make their owners rich while leaving everyone else without any means of supporting themselves? This might seem like a comparatively trivial thing to talk about after having just spent so much time talking about the threat of total extinction and so on; but more and more people are raising it as a real concern nowadays, so I think it’s worth briefly addressing before we wrap things up here. I should mention that this is another one of those topics I’ve already covered in an earlier post, so I’ll basically just be copying here what I wrote in that post (and if you’ve already read that one, you can just skim over this part); but just to reiterate, the short answer is no, technology probably won’t create mass unemployment (at least not until we’ve fully reached the Singularity and no longer have any need for jobs in the first place).
It’s not hard to understand why technological unemployment might seem like a major threat, of course. AIs really have been improving exponentially, and have been overtaking human capabilities in more and more areas recently. If things keep going the way they are, it won’t be long before they’ve surpassed humans across the board, and are capable of doing every job better than humans can. So at that point, isn’t it obvious that humans won’t have any place left in the job market?
Well, if such a scenario actually did take place, let’s think about how it would have to happen. Let’s imagine that a dozen or so mega-conglomerates develop machines so advanced that they’re able to perform literally any task better and more cheaply than the best humans. These firms’ owners (let’s say each firm is owned by just one person) would have no reason at this point not to lay off their entire workforce and replace those workers with machines. And likewise, nobody inside or outside these firms would have any reason to buy anything from anyone other than them, since the fully-automated firms’ products would be better and cheaper than anyone else’s. But this would also mean that no other businesses would be able to compete with these firms, so they’d all go out of business, and everyone except the firms’ owners would be out of a job. And without any stream of income, that would mean that nobody would be able to buy the firms’ products, aside from the dozen or so rich owners of the firms themselves. So ultimately, we’d have a situation in which there were a dozen or so rich individuals using machines to create whatever products their hearts desired, which they then exchanged among themselves – and then the entire rest of the population would just be sitting around doing nothing, unable to engage in any kind of transactions at all.
But wait a minute – that can’t be right, can it? If that were the situation, then everyone outside the fully-automated firms could just as easily pretend that those firms didn’t exist at all, and could simply continue transacting with each other and conducting the same kind of normal economy that we have today, completely separate from anything the firms were doing. After all, the firms’ owners would already be completely ignoring them and not buying anything from them, so they’d already essentially be existing in their own separate bubble economy, with no money or products crossing the boundary in either direction. No one would be able to trade with the firms’ owners even if they wanted to (aside from the owners themselves); so the only way for regular people to obtain goods and services would be to produce them themselves and trade with each other, just as they’re currently doing. So does that mean that the ultimate effect of firms completely automating their workforce would be that nothing would change at all (aside from a dozen or so rich people breaking off into a whole separate second economy)? The story doesn’t quite seem to add up.
So what are we missing? Why wouldn’t the rich owners, with their technology allowing them to be more productive at everything than anyone else, simply secede into a state of absolute self-sufficiency and leave the rest of us behind? Well, when we put it that way, we can just as well ask the same question of people right now who are in the top percentile of capability and potential productivity. After all, there are people out there right now who are stronger and smarter and more capable in practically every way than practically everyone else (think NASA astronauts, for instance). So why do those people still engage in transactions with the rest of us regular people? The short answer is an economic concept called comparative advantage. The idea of comparative advantage is that even if a particular person is more efficient at everything than another person, the fact that they can only do one thing at a time means that it can still be worthwhile for both parties to trade with each other, since doing so would ultimately produce more overall output than each of them trying to do everything on their own.

So for instance, let’s say we had a dozen people – six highly productive ones who were each capable of either assembling 20 televisions or giving 10 haircuts per hour, and six less productive ones who were each only capable of assembling 2 televisions or giving 4 haircuts per hour. The more productive group, seeing that they’re more efficient at producing both televisions and haircuts, might decide that they don’t need the second group, and so might choose to produce everything on their own, with three of them assembling televisions and three of them giving haircuts. Meanwhile, the less productive group, forced to fend for themselves, would split up their labor the same way – three of them assembling televisions, and three giving haircuts. Altogether, then, this would result in the first group producing 60 televisions and 30 haircuts per hour for themselves, while the second group produced 6 televisions and 12 haircuts per hour, for a total of 66 televisions and 42 haircuts overall.

Another way that things might go, however, would be for both groups to realize that they could be even more productive if they each spent more of their time doing what they were best at (relatively speaking), and then traded with each other as needed. So let’s say one of the more efficient ones switched from giving haircuts to assembling televisions, and three of the less efficient ones switched from assembling televisions to giving haircuts. With this new division of labor, the first group would now be producing 80 televisions and 20 haircuts per hour, while the second group would be producing zero televisions but 24 haircuts per hour, for a total of 80 televisions and 44 haircuts overall. The first group could then sell 11 televisions to the second group in exchange for 11 haircuts, which would leave the first group with 69 televisions and 31 haircuts per hour (an improvement of 9 televisions and 1 haircut compared to before) and the second group with 11 televisions and 13 haircuts per hour (an improvement of 5 televisions and 1 haircut). Everyone would be made better off! That’s the magic of free exchange.

And the exact same dynamic can be applied to our aforementioned scenario in which one group of people was extremely productive because they owned a fleet of hyper-efficient machines, and another group of people was less productive because they were just regular workers.
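(For anyone who’d rather see that arithmetic spelled out explicitly, here’s a minimal sketch in Python that simply re-runs the numbers from the example above; every figure in it comes straight from the text, nothing new is assumed.)

```python
# Toy comparative-advantage model using the television/haircut numbers from the example above.
# Each group has six workers; rates are units of output per worker per hour.

def group_output(tv_rate, haircut_rate, tv_workers, haircut_workers):
    """Return (televisions, haircuts) produced per hour by one group."""
    return tv_rate * tv_workers, haircut_rate * haircut_workers

# No trade: each group splits its six workers evenly between the two tasks.
a_tv, a_cut = group_output(20, 10, tv_workers=3, haircut_workers=3)  # productive group: 60 TVs, 30 haircuts
b_tv, b_cut = group_output(2, 4, tv_workers=3, haircut_workers=3)    # regular group:     6 TVs, 12 haircuts
print("No trade:", a_tv + b_tv, "TVs,", a_cut + b_cut, "haircuts")   # 66 TVs, 42 haircuts

# Specialization: the productive group shifts one worker to TVs; the regular group goes all-in on haircuts.
a_tv, a_cut = group_output(20, 10, tv_workers=4, haircut_workers=2)  # 80 TVs, 20 haircuts
b_tv, b_cut = group_output(2, 4, tv_workers=0, haircut_workers=6)    #  0 TVs, 24 haircuts
print("Specialized:", a_tv + b_tv, "TVs,", a_cut + b_cut, "haircuts")  # 80 TVs, 44 haircuts

# Trade 11 televisions for 11 haircuts; both groups end up ahead of where they were without trade.
trade = 11
print("Productive group:", a_tv - trade, "TVs,", a_cut + trade, "haircuts")  # 69 and 31 (was 60 and 30)
print("Regular group:   ", b_tv + trade, "TVs,", b_cut - trade, "haircuts")  # 11 and 13 (was  6 and 12)
```

Running it just confirms what the paragraph above says: after specializing and trading, both groups end up with more televisions and more haircuts than they could have produced on their own.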
Even if the machines were superior to the human workers in literally every way, it would still be worthwhile for their owners to trade with the regular workers – because after all, the mere fact that a machine can do anything doesn’t mean it can do everything. It can still only do one thing at a time; and accordingly, all that matters in the end is what its relative advantages are, not what its absolute advantages are. As Lori G. Kletzer puts it:
Even in a world where robots have absolute advantage in everything — meaning robots can do everything more efficiently than humans can — robots will be deployed where they have the greatest relative productivity advantage. Humans, meanwhile, will work where they have the smallest disadvantage. If robots can produce 10 times as many automobiles per day as a team of humans, but only twice as many houses, it makes sense to have the robots specialize and focus full-time where they’re relatively most efficient, in order to maximize output. Therefore, even though people are a bit worse than robots at building houses, that job still falls to humans.
Tim Worstall gives some additional thoughts:
In the trade model we end up insisting that there is always a comparative advantage. Even if (as is quite likely true) the US is better at making absolutely everything than Eritrea is it is still to the benefit of both Eritrea and the US to trade between the two. For it allows both to concentrate on their comparative advantage.
When we switch this over to thinking about jobs and work I like to invert it. Not in meaning but in phrasing: if we all do what we’re least bad at and trade the resulting production then we’ll be better off overall. For example, I am not the best in the world at doing anything. I’m not even the best at being Tim Worstall, for I know there’s at least a couple of other people with the same name and it wouldn’t surprise me at all to find out that one or other of them is better at being Tim Worstall than I am. There are also people out there who are better at doing absolutely everything than I am. And yet the world still pays me a living as long as I do what I am least bad at and trade that for what others are least bad at.
The same will obviously be true when the robots are better than us at doing everything. It will still be true that we will be better off by doing whatever we are least bad at because that will be an addition to whatever it is that the robots are making. If what the robots make isn’t traded with us then obviously the economy will be much as it is now. We’ll be consuming what other humans make for us to consume in much the same manner we do now. If the robots do trade with us then we’re still made better off by working away at whatever it is that we do least badly. And the third possible outcome is that there is in fact some limit to human wants and desires and the robots make so much of everything that they manage to satiate us. At which point, well, who cares about a job as we’ve now, by definition because our desires are satiated, got everything we want? (I strongly suspect that there will still be shortages of course, the love of a good woman isn’t going to become in excess supply anytime soon I fear.)
The end state therefore cannot be something to fear. I agree that the transition could be a bit interesting (in that supposed Chinese sense of “interesting times”) but the actual destination of the robots being better than us at everything seems quite pleasant.
In short, then, as long as we’re willing and able to do work, work should be available to us; we won’t have to worry about robots making us all permanently unemployable. And what’s even better, as the robots become more and more productive, it will mean that we’ll be able to receive more and more from them in exchange for less and less labor on our part. Instead of having to do a week’s worth of labor just to be able to afford a new television or washing machine, it’ll eventually get to the point where we’re able to afford new televisions and washing machines with barely any effort at all – just like how we can now afford to buy food for a fraction of what it would have cost our ancestors in terms of labor expended. And ultimately, once we’ve gotten to the point where we’re nearly at the Singularity and the robots have gotten really efficient and productive – like, as efficient as it’s physically possible for them to get – the amount of labor we’ll have to expend in order to afford everything we could possibly want will basically be negligible. Once we have advanced AIs and nanofabricators and so on, the only “labor” we’ll have to perform at that point will just be dropping the occasional clump of dirt or garbage into the nanofabricators to be reassembled into sports cars or gourmet meals or cancer cures or whatever. As Worstall puts it, “jobs” as we currently understand them will no longer be considered necessary at all, because we’ll already have everything we could ever want. And when we look back on our current era, the notion that people might have ever been afraid of “robots taking all the jobs” will seem hopelessly confused.
Again though, all of this is assuming that we don’t destroy ourselves in the process of developing all these technologies – which, as the technologies grow more and more powerful, will become more and more of a legitimate threat with each passing year. We’ve already talked all about the existential risk we’ll be facing once we reach these technologies’ full potential – and naturally, that will be the biggest threat of all – but even before we get to that point, there will also be plenty of other serious dangers and complications along the way, aside from just the misplaced worry that AIs will take all our jobs. As Nathan J. Robinson writes, even just having technologies that are merely extremely powerful (as opposed to being totally all-powerful) will raise some major challenges in the immediate future:
The conceivable harms from AI are endless. If a computer can replicate the capacities of a human scientist, it will be easy for rogue actors to engineer viruses that could cause pandemics far worse than COVID. They could build bombs. They could execute massive cyberattacks. From deepfake porn to the empowerment of authoritarian governments to the possibility that badly-programmed AI will inflict some catastrophic new harm we haven’t even considered, the rapid advancement of these technologies is clearly hugely risky.
[…]
I don’t think anyone can say for certain how likely it is that AI will be used to, for example, engineer a virus that wipes out human civilization. Maybe it’s quite unlikely. But given the scale of the risk, I don’t want to settle for quite unlikely. We need the chance of that happening to be as close to zero as possible. [Likewise,] I’ve been on the record as a skeptic of the hypothesis that a rogue AI, sufficiently capable of improving its own intelligence, could turn on humanity and drive us to extinction. But you don’t need to think that scenario is especially likely to think that we should at least make sure that there’s always an “off switch” built in to intelligent machines. The costs of safety are so low when compared to the costs of the worst outcomes that it’s an absolute no-brainer.
Even if, like Robinson, you’re skeptical of the idea of a misaligned AI eventually going rogue and turning the world into paperclips, you have to admit that ill-intentioned humans using AI for malicious purposes will still be a very real danger in its own right, simply because the misuse of powerful new technology by ill-intentioned humans is always a danger. This won’t just be limited to AI, either; the more widespread technologies like bioengineering become, the easier it will become for (say) some ordinary run-of-the-mill extremist to create a bioweapon that wipes out all of humanity. And this danger will only be multiplied once we unlock the massive power of molecular nanotechnology; as we discussed earlier, if we somehow end up inventing self-replicating nanobots before we invent ASI, we won’t just have to worry about accidental gray goo scenarios, but also about ones intentionally triggered by malicious actors. In short, then, even if we completely exclude ASI from the equation, our odds of going extinct from the catastrophic misuse of technology in general are only going to increase in the near future. As Kurzweil writes:
We have a new existential threat today in the potential of a bioterrorist to engineer a new biological virus. We actually do have the knowledge to combat this problem (for example, new vaccine technologies and RNA interference which has been shown capable of destroying arbitrary biological viruses), but it will be a race. We will have similar issues with the feasibility of self-replicating nanotechnology [once it becomes potentially attainable]. Containing these perils while we harvest the promise is arguably the most important issue we face.
This is a point that has somehow become oddly neglected in modern debates over AI safety; these days, such debates usually just come down to disputes over how much risk (or lack thereof) we’d be creating for ourselves by inventing ASI, and don’t go any further than that. But as I’ve been stressing throughout this post, we can’t forget the dangers we’d be exposing ourselves to by not inventing ASI – because the way I see it, these are the greatest dangers of all. When it comes to nanotechnology in particular, I think there’s a good case to be made that the threat of uncontrolled nanotechnology without ASI would be even greater than the threat of ASI itself. And in fact, this was what originally made Yudkowsky himself want to push so hard for achieving ASI as quickly as possible, back when he was first writing on the subject:
Above all, I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse – misuse of a purely material (and thus, amoral) ultratechnology, one powerful enough to destroy the planet. We cannot just sit back and wait. To quote Michael Butler, “Waiting for the bus is a bad idea if you turn out to be the bus driver.”
[…]
Since [originally writing on this topic] in 1996, “nanotechnology” has gone public. I expect that everyone has now heard of the concept of attaining complete control over the molecular structure of matter. This would make it possible to create food from sewage, to heal broken spinal cords, to reverse old age, to make everyone healthy and wealthy, and to deliberately wipe out all life on the planet. Actually, the raw, destructive military uses would probably be a lot easier than the complex, creative uses. Anyone who’s ever read a history book gets one guess as to what happens next.
“Active shields” might suffice against accidental outbreaks of “grey goo”, but not against hardened military-grade nano, perfectly capable of using fusion weapons to break through active shields. And yet, despite this threat, we can’t even try to suppress nanotechnology; that simply increases the probability that the villains will get it first.
Mitchell Porter calls it “The race between superweapons and superintelligence.” Human civilization will continue to change until we either create superintelligence, or wipe ourselves out. Those are the two stable states, the two “attractors” in the system. It doesn’t matter how long it takes, or how many cycles of nanowar-and-regrowth occur before Transcendence or final extinction. If the system keeps changing, over a thousand years, or a million years, or a billion years, it will eventually wind up in one attractor or the other. But my best guess is that the issue will be settled now.
His mention of “active shields” there, by the way, is referring to a kind of global “immune system” that could be set up to protect against sudden outbreaks of all-consuming nanobot swarms – e.g. an invisible network of “police” nanobots distributed all around the world that would be able to intercept and neutralize malignant nanobots before they could do too much damage (in other words, a kind of “blue goo” to counteract the gray goo). This is one tool an ASI would be able to use to stop a nanobot attack from instantly wiping us all out. But as Yudkowsky mentions, it would just be one such tool designed to handle one specific threat; ASI would also have a whole array of tools for every conceivable threat and every conceivable problem, including ones that we can’t even imagine now.
And to be sure, our species will inevitably have to face major extinction-level threats in one form or another, whether we ever invent ASI or not. Whether it’s a deadly virus or a catastrophic war, or even something totally out of left field like (say) a swarm of all-consuming nanobots suddenly arriving without warning from some other star system where the Singularity has already been reached, we can imagine all kinds of ways in which humanity might be completely wiped out, unless we have some ultra-powerful way of protecting ourselves. And this might be the biggest reason of all to want to unlock ASI as soon as we safely can. Yes, there is a very real risk that it might destroy us; but there are also many other things that might destroy us, and the longer we go without ASI, the more of them there will be. Inventing ASI, even as risky as it would be in its own right, might very well be the best chance we’ll have at saving ourselves from extinction in overall terms. As commenter FosterKittenPurrs puts it:
I think that without ASI, there is a guarantee that something horrible will happen.
We may kill ourselves with nukes or other future weapons in a world war, maybe bio weapons or just accidental research escaping from a lab, nanobots, genetic engineering, climate change, through an asteroid, or maybe when we develop space infrastructure some drunk asteroid miner will fling that asteroid towards Earth etc.
With ASI, you only roll the dice once. You get it right, and it takes care of all this other crap for you. With everything else, you have to keep rolling the dice and hope you roll right every single time.
Needless to say, even having to roll the dice once on total annihilation feels like once too many. In an ideal world, we’d never have to make such a decision at all. But unfortunately, “never having to face the possibility of annihilation” doesn’t appear to be one of our options, and never will be – unless we can safely reach the Singularity. As things stand today, we aren’t just facing the possibility that all of us will die at some point – we’re facing a 100% guarantee of it; 60 million of us drop into the proverbial shredding machine every year, and the rest of us are steadily moving down the conveyor belt toward that same fate. Sure, if we keep reproducing, we can continually repopulate the conveyor belt with new people faster than the old ones drop off – and we’ve so far been able to keep the species going in that way. But there’s no guarantee that we’ll be able to keep that up forever – and even if there were, it wouldn’t actually matter in anything but the most abstract sense. “The human species,” after all, isn’t actually a moral end in itself; it’s not a conscious entity in its own right, with its own interests and its own inherent moral value (even though it’s often treated as such in discussions like this). Rather, “the human species” is just a shorthand term that we use to collectively refer to all the individual people who exist, and who do have innate moral value. Those individual people are what actually matter. And as of today, all of them – all of us – are doomed to die in just a few decades (or less) unless we can reach the Singularity first. That’s our only hope of ultimate survival. And yes, if we fail to do it right, there’s a very real risk that we’ll meet our doom a bit earlier than we otherwise would. But if we succeed, it’ll be nothing short of the greatest triumph ever accomplished; all those other threats and problems will simply disappear, and we really will be able to live happily ever after (or at least until the universe itself ends).
Of course, I can’t say for sure that any of this will happen exactly as I’ve been describing it. (That should hopefully go without saying.) Despite all the various confident-sounding predictions given by technology experts, nobody truly knows precisely how the future will play out. We might achieve full ASI by the end of this decade without ever encountering any significant problems with alignment at all, or it might turn out that there are major unanticipated obstacles that make it far more difficult than expected, so we don’t achieve ASI for another century or more; and neither outcome would completely shock me. What would shock me, though, is if we never reach the Singularity (or go extinct) at any point in the future. It seems inevitable to me that this is how the course of technology must eventually run, just based on everything we know about how machines work and what it’s physically possible for them to do. Whether we reach the critical turning point within our lifetimes is an open question – but that it will happen eventually seems beyond doubt.
To wrap things up, then, I just want to address one final concern: Even if you do accept all of this, and you can in fact believe that our species as a whole really might be capable of reaching the Singularity, what does all this mean for you as an individual who may or may not personally live long enough to make it to 2035 or 2045 (or whenever the Singularity happens)? What should you do if you’re starting to approach old age, or are in poor health, or are simply worried about accidents or whatever, and you fear that you might be one of the billion or so people who won’t make it another decade or two (or, if the Singularity takes much longer than that, one of the much larger number of people who won’t make it at all)? Well, obviously there aren’t really any great solutions here – if you get hit by a bus and die tomorrow, there’s not much that can be done about that. However, I will say that there might still be some small ray of hope to grab onto while you’re still alive today – specifically, cryonics. I’ve briefly discussed cryonics on here before (and I apologize again for repeating myself if you’ve already read that post), but just in case you missed it, cryonics is basically a way of having yourself preserved after you die (your body is vitrified and placed in super-cold liquid nitrogen) and then stored by a cryonics company for the next few decades (or centuries, or however long it takes) until technology has advanced enough for you to be revived and restored to full health. The idea is that while it might sound like pure sci-fi fantasy today, reviving a well-preserved body in this way would presumably be trivially easy in a post-Singularity world of ASI and nanotechnology and so on – so if your body can’t naturally survive until then, why not “put it on pause” until the Singularity does arrive? Obviously, there’s no guarantee that humanity actually will reach the Singularity safely – or even if it does, that the cryonics companies themselves will survive that long – so the whole venture is far from a sure thing. But still, the way I see it, even a small chance of surviving your own death is better than none at all. So if you can afford it (and it is expensive, though most people who sign up cover the bulk of the cost with a life insurance policy), it seems like a no-brainer to me. If you want to learn more (or if you’re still skeptical), Urban has an outstanding post here breaking down the whole process; I can’t recommend it enough. In fact, if you care about your continued survival as much as I do, I’d go so far as to say that it might be one of the most important things you ever read. (Clearer Thinking also has a great podcast episode on the subject here with Max Marty.) It might seem like a long shot from where we’re sitting here in the present day – in fact, all of this Singularity stuff might seem pretty unbelievable today, to put it mildly – but if it does all turn out to be for real, then as I said at the very beginning of this post, making it safely to the other side of the Singularity will be the most important thing that ever happens – not just for each of us as individuals, but for the entire universe as a whole. This is the thing that, if it goes right, will be the difference between oblivion and apotheosis for all sentient life everywhere. As far as I’m concerned, then, it’s worth taking very seriously, and doing whatever we have to do to make sure that it does go right. So cross your fingers – and hopefully I’ll see you on the other side. ∎