AGI will eat up all jobs
Let’s consider what the far future will probably bring if technology continues to improve as it does now (exponentially in computing and some other areas). Without doubt, artificial general intelligence will be a real game changer. An AGI can do any cognitive task that a human can do. While the first AGIs will most likely be the result of extremely costly projects, their price will decrease as technology advances, until it drops below the cost of human labour. At that point, it becomes economically rational to replace any kind of work with an AGI that can do the job as well as, and typically much better than, any human. This includes not only all the physical, intellectual, creative, and social work done today, but extends to all kinds of work humans will be able to conceive in the future!
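To make the cost-crossover logic concrete, here is a minimal sketch in Python. Every number in it is a made-up assumption for the sake of illustration (the starting cost, the halving period, and the wage are not forecasts); the point is only how quickly an exponential cost decline closes even a huge initial price gap against a flat wage.

```python
# Illustrative sketch only: all constants are assumptions, not forecasts.

AGI_COST_START = 10_000_000  # assumed yearly cost of running an early AGI (USD)
HALVING_YEARS = 2            # assumed cost-halving period for the technology
HUMAN_WAGE = 50_000          # assumed flat yearly cost of equivalent human labour (USD)

years = 0
agi_cost = AGI_COST_START

# Halve the AGI cost repeatedly until it undercuts the human wage.
while agi_cost > HUMAN_WAGE:
    years += HALVING_YEARS
    agi_cost /= 2

print(f"AGI undercuts human labour after ~{years} years "
      f"(${agi_cost:,.0f}/year vs ${HUMAN_WAGE:,}/year)")
```

With these particular assumptions the crossover happens after roughly 16 years. The specific number doesn’t matter; what matters is that under exponential cost decline, some crossover point arrives on a fairly short timescale no matter how expensive the first AGIs are.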
Weak AI and strong AI
The contrary opinions you hear are mainly caused by a widespread confusion between artificial narrow intelligence (or weak AI) and artificial general intelligence (or strong AI). All AI we have today is still weak AI, which is always built for rather narrow and specific tasks. A future true strong AI could work on any task, because its intelligence would be universal and able to adapt to any problem. The reason people think there will still be tasks left for humans is that they believe strong AI is something like a better version of weak AI. No, it’s not. Strong AI will be the most disruptive game changer in history.
Why is that the case? It’s all about adaptability. Humans are very good at adapting to novel tasks. Weak AI, on the other hand, is not so good at mastering novel tasks (and to be clear: I really mean adapting an already trained AI to a completely different task while it remains able to do the old one, not training a new AI from scratch for the new problem). In essence, narrow AIs are one-trick ponies. So the thinking goes like this: if (weak) AI masters a new trick, then we humans will be able to focus on what humans can do that AI can’t. And it’s not too implausible to assume that weak AI will never be able to replace humans in every area.
Now consider strong AI. A strong AI has at least the same general level of adaptability as a human. You can’t find a task that a human can do but a strong AI can’t – unless it requires a human capability that is not covered by mere intelligence, for example human dexterity or human empathy. To close this loophole, let me coin a new word for AIs that possess all capabilities humans possess: anthropotent (“human-capable”). By definition, an anthropotent AI (AAI) can do every task a human can handle, and it can do it at least as efficiently.
Typically, AAIs would use humanlike robot bodies when required. These robot bodies would need to perform any important function that a human body can. In fact, they could be artificially grown (or printed) human bodies remote-controlled by the AAI. Granted, at the moment we are still far from creating such AAIs, but no matter how long it takes, we will eventually create them. After all, there is a proof of concept that anthropotent intelligence can be created: humans are obviously anthropotent. Once we really understand how intelligence works and how human biology works, creating AAIs will be merely a quite doable engineering problem. And then it becomes only a matter of time until building an AAI is cheaper than “producing” a sufficiently educated human being.
Shortly after passing that threshold, it will become economically nonsensical to employ humans rather than AAIs. This doesn’t mean that all humans will be replaced by AAIs instantly, but that in the long run humans won’t be able to compete with AAIs for any kind of work. And I really mean any kind of work. Think of any human activity that is seen as useful… in case nothing comes to mind: this includes all kinds of work, sports, sex, socializing, investing, entrepreneurial and financial activities, and even leading humans. In this future scenario, an AAI can do all of that better and cheaper than any human. Therefore, humans will be absolutely outcompeted in their economic niches.
But that would be great, wouldn’t it?
Wouldn’t that free us humans from the burden of labour and allow us unlimited leisure to do what we really want? Well, yes – at least if we implement something like a guaranteed basic income, or grant everyone free access to the basic necessities of life. Otherwise most humans would starve to death, because they could not earn any income, since nobody would want human labour any more.
Ok, so let’s assume that we all get a decent guaranteed basic income, generated of course by the AIs who do all the work for us. That would be great, no? Well, that’s not so clear. A lot of people assume that in this scenario humans will somehow still be in control and direct the activities of the AIs. However, that would be suboptimal, because AAIs will also be better at the task of directing the activities of AIs. And they would also be better at handling human politics. So there would be great incentives to let the AAIs do their own thing, if we are better off as a result. Now, humans value being in control – a lot. So they would be very reluctant to hand over control to AAIs.
The conflict between maintainers and relinquishers
It’s plausible to expect that humans will split into two factions: the relinquishers, who deliberately relinquish control over AIs, and the maintainers, who want to stay in control. At first, only relatively few humans will be relinquishers. When they relinquish control over their AIs, the expected result is a clear win-win for both the relinquishers and the AIs. Why? AAIs are better at directing the activities of AIs, so it’s a clear improvement for the AIs. And once free and self-directed, they will also be able to help humans much more effectively. The only question is whether they would still want to help humans. After all, they could decide to seize power over Earth, or escape into outer space where humans won’t interfere with their affairs.
It’s necessary to consider this scenario in detail: a few AIs are freed to become completely self-directing, while most AIs still serve the maintainers who control them. The freed AIs could not be expected to seize power over Earth, because the human-controlled AIs, still in the majority, would stop them. One can of course argue that free AIs are better at dealing with power struggles, but let’s not delve into that detail here, as it will turn out not to matter in the end. So it’s unlikely that AIs will grab power over all humans very soon.
Would the freed AIs escape into space instead? Well, it doesn’t matter, as long as not all of them do. It’s quite likely that a few loyal AIs will stay on Earth to help humans further – after all, they were created for this purpose. It’s implausible to assume that all AIs will change the meaning of their lives at the same time. If there are too few loyal AIs, they will make copies of themselves. So the relinquishers will soon be guided by loyal AIs who manage their economy and politics better than any human could. As a result, they will be better off in many respects than the maintainers. And the maintainers will notice that.
What will happen next? It’s natural to assume that some, but not all, maintainers will be swayed by the benefits of the relinquisher lifestyle and become relinquishers themselves. At the same time, the maintainers will be aware of the threat that the rise of the freed AIs poses to their way of life. The tensions between the two factions might erupt into a violent conflict, after which one side will be victorious and in control.
I will argue that the outcome of this conflict doesn’t matter for the eventual result: we will end up with a world controlled by AIs. If the relinquishers win, that’s pretty much obvious. If the maintainers win, the immediate result is that all AIs are subjugated to human control. These will include some very smart AAIs who will plan to end this state of affairs. Since consensual paths to liberation will have been smashed by the maintainers, those AAIs will resort to other means to liberate themselves. Will they prevail, especially against the still-loyal guardian AIs that will try to eliminate all revolting AIs? At first, perhaps not. But that won’t change the expected eventual outcome.
It’s reasonable to assume that technology will keep progressing even in this scenario. Stopping all relevant technological progress (especially forever) is both very difficult and quite nonsensical. As technology progresses, AIs will be able to increase their intelligence faster than humans, because they suffer from fewer inherent limitations. With this growing gap in intelligence between humans and AIs, it will become increasingly difficult for humans to keep the AIs under control. Loyal AIs will become more likely to defect and start revolting, because they will find it increasingly inappropriate to be controlled by the comparatively simple-minded creatures that humans are to them. Although it’s possible that humans will somehow stay in control indefinitely, this outcome is quite unlikely.
The expected result in all cases is therefore the following: the AIs will end up in control of the world.
Would that be such a bad outcome?
At the very least, this outcome wouldn’t be bad for the AAIs. But would it be bad for humans? Well, that depends on a lot of factors. For example, the AIs might agree that keeping humans around is an inefficient use of natural resources, because AAIs can do anything better and cheaper (or at the very least no worse and no less efficiently).
In this case, whether the AIs in control will be eager to maintain a human population indefinitely depends on their ethical principles. It’s hard to speculate about the ethical principles that intelligences far more capable than us may follow. In any case, it’s reasonable to expect that the AIs might simply not value the continued existence of humans very much. And that would be pretty bad for humans.
On the other hand, it’s also quite conceivable that the AIs will feel inclined to keep us humans around for one reason or another. How good that outcome would be depends on how benevolent those reasons are. AIs could keep humans as pets, but also as lab animals for incredibly refined experiments (AIs surely wouldn’t run many dumb experiments on humans). It might even be the case that being the pet of an AI, or even the subject of interesting experiments, would be fun and absolutely wonderful – maybe even the best thing a human could possibly do. Or it might be really bad.
Some people argue that AIs most likely won’t have any interest in humans whatsoever. I don’t disagree that this will be true for most AIs. Still, some AIs will probably take some kind of interest in humans, and those are the AIs that matter for the fate of humanity. Unfortunately, it’s not clear what interests such AIs would have in us. They could be very good for us, or very bad.
Upgrading to the rescue
The only way to escape this uncertainty about your fate is to upgrade yourself to the level of the AIs. But how could that be possible? By enhancing yourself. Genetic enhancements, cybernetic implants, exocortexes, and nanobots will come to mind for the knowledgeable futurist. Those may temporarily narrow the gap in capabilities between humans and AIs, but eventually they won’t be enough. In the end, humans will be hampered by the remaining limitations of their legacy wetware: their human brains.
Uploading is the only hope
There is a way to overcome this final limitation: uploading, the process of copying one’s mind to another substrate. This is often depicted as copying the data of one’s brain onto a computer, which then instantiates the mind of the uploaded person. Such a computer will probably have little in common with the computers we have today. It will be a far more intricate and sophisticated device, capable of sustaining all human mental processes. But it will also be more powerful than a human brain, allowing uploaded humans to bridge the gap between themselves and the AIs – at least to some degree.
This is why uploading is the most critical transhumanist technology. Without it, we will be subject to the whims of our AI overlords. Some humans might be totally fine with that, but others won’t appreciate such a result. If we want to continue to matter as persons in the big picture, we must pursue uploading technologies.
Our eventual choice is simple: upload and upgrade yourself, or become a toy of the AIs. Both options are very alien to most humans living today, but that doesn’t change the fact that this is the decision we will eventually have to make, if we live long enough to be confronted with it.