AGI will lead to mutual human isolation

Let’s take a look at the social consequences of the introduction of cheap AGI. This line of reasoning is a kind of follow-up to the thread:

In that thread I coined the term “anthropotent artificial intelligence” (AAI), which is defined as follows:

By definition an anthropotent AI (AAI) can do every task a human can handle, and it can do it at least as efficiently.

In particular, that means an AAI is at least as good as a human at doing the work humans do, and also at socializing with humans. If AAIs are cheaper than humans, it should follow that there aren’t any rational reasons left for humans to deal with other humans, because they can get anything they want from an AAI that is cheaper than, and at least as capable as, a human at fulfilling the desired function. The conclusion is that every human will create a shell/cloud/sphere/bubble of AAIs around themselves, ceasing to interact with other humans directly, if not entirely.

Such an outcome might feel alien to us, particularly because we are used to the idea that humans naturally tend to surround themselves with other humans. But if AAIs are more available than humans, and better in every relevant respect, then humans would indeed have the best of reasons to hang out with those AAIs instead.

The conclusion “Cheap AAI => AAI bubble around every human” seems inescapable. If there were any quality that made human engagement with other humans superior, that quality would contradict the definition of an AAI as being at least as good as a human in every quality. Therefore, the conclusion seems rock solid. The only way to escape the implication of the “AAI bubble” is to question the validity of the premise. There are two basic ways to do that:

  1. Argue that AAI is not possible.
  2. Argue that AAIs won’t be affordable for every human.

There’s no obvious reason why AAI won’t be possible in principle. I’m open to that possibility, but as a transhumanist and techno-optimist, I find it rather unlikely. This leaves the second option: that not every human will be able to afford a bubble of AAIs. By extension, that kind of argument splits humanity into two distinct classes: those who can afford an AAI bubble around them, and those who can’t. For brevity’s sake, let’s call the first class the wealthy and the second class the poor. For the poor, relationships between human beings become a consequence of economic deficiency. Abundance, in contrast, breeds mutual human isolation. The mark of being poor will be having to deal with other human beings, which can be quite troublesome, as we all know from first-hand experience. Those who still deal with humans, even though they are wealthy, must have very particular reasons for doing so. Perhaps they are curious and interested in authentic encounters with humans. Perhaps they are masochists. Or perhaps they are too stupid or irrational to know any better. Those aren’t very strong reasons, however, so it’s safe to assume that most wealthy humans won’t be interested in contact with other humans most of the time.

Now, it might be important to examine the possible causes of this kind of poverty in the future. Why won’t all humans be able to afford an AAI bubble? Well, the most obvious answer is that in a world with cheap AAI, humans won’t have a chance to get any kind of income-generating work, because everyone would prefer their work to be done by those cheaper AAIs. So the question boils down to what kind of income humans would still have in a world dominated by cheap AAI labour. If it’s an unconditional basic income, it may not suffice for a full AAI bubble for everyone. That does indeed seem like a very plausible scenario. Even if humans can be kept alive comfortably by cheap automation, that doesn’t imply they will be able to afford an army of highly advanced AAIs to keep themselves company. After all, it should be expected that simple automation, sufficient to keep humans healthy and happy, will still be much cheaper than sophisticated AAIs.

If that’s the case, shouldn’t all humans end up poor in the end? If no human can get any job or operate a profit-generating business, and every human gets the same universal basic income, sufficient for all basic needs but insufficient for a full AAI bubble, then nobody should be able to afford one, right? Well, no. The basic income might still be sufficient to save enough resources over years or decades to afford a full AAI bubble. Or it may suffice to rent one for a limited amount of time.
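
To make the saving-versus-renting point a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (basic income, living costs, bubble price, rental rate) is a purely hypothetical assumption chosen for illustration; nothing above commits to any figures.

```python
# Hypothetical back-of-the-envelope model: how long would someone on a
# universal basic income have to save before they could buy a full AAI
# bubble, or for how many days per year could they rent one instead?
# All numbers below are made-up illustrative assumptions, not claims.

def years_to_afford(basic_income: float, living_costs: float, bubble_price: float) -> float:
    """Years of saving the yearly UBI surplus needed to buy a full AAI bubble."""
    surplus = basic_income - living_costs  # what can be saved per year
    if surplus <= 0:
        return float("inf")  # no surplus: the bubble is never affordable
    return bubble_price / surplus

def rentable_days_per_year(basic_income: float, living_costs: float, daily_rent: float) -> float:
    """Days per year an AAI bubble could be rented from the same surplus."""
    surplus = max(basic_income - living_costs, 0.0)
    return surplus / daily_rent

if __name__ == "__main__":
    # Illustrative figures only: UBI of 12,000/year, living costs of 10,000/year,
    # a full AAI bubble priced at 50,000, renting one at 100/day.
    print(years_to_afford(12_000, 10_000, 50_000))        # -> 25.0 years of saving
    print(rentable_days_per_year(12_000, 10_000, 100.0))  # -> 20.0 rented days per year
```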

Finally, note that the AAI bubble only counts as “unusual” as long as humans retain their special status as humans. Once humans upgrade themselves to AAI status, surrounding themselves with AAIs will align much more naturally with reasonable expectations, since AAIs are the natural peers of AAIs. In my opinion, this will indeed be the eventual resolution of the AAI bubble phenomenon.


There seems to be an implicit assumption here that the AAIs are controlled by the humans rather than being independent agents. That basically means any human controlling a certain number of AAIs is self-sufficient to such a degree that they don’t need to interact with other humans to survive.

In such a world, income would lose its meaning, as humans would have no need to trade with each other: the AAIs could provide for their every need. Whatever trade remains will likely end up delegated to the AAIs in the vast majority of cases, and would likely not be done in any currency but rather as direct trading of resources, potentially as multiparty circular exchanges, since those are easier to form than two-party direct exchanges.
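
As a rough illustration of why circular exchanges can form where direct swaps cannot, here is a small sketch of my own (not anything specified in the post): each party offers one resource and wants another, and a feasible exchange corresponds to a cycle in the resulting graph of offers and wants.

```python
# Illustrative sketch: finding a multiparty circular exchange.
# Each party offers one resource and wants another; a feasible exchange
# corresponds to a closed chain in which every party receives what it wants.

from typing import Optional

def find_exchange_cycle(parties: dict[str, tuple[str, str]]) -> Optional[list[str]]:
    """parties maps name -> (offers, wants). Returns a list of party names
    forming a circular exchange, or None if no such cycle exists."""
    # Index parties by what they offer (one offerer per resource, for simplicity).
    by_offer = {offers: name for name, (offers, _) in parties.items()}
    for start in parties:
        chain, current = [start], start
        while True:
            wanted = parties[current][1]
            nxt = by_offer.get(wanted)
            if nxt is None:
                break                      # nobody offers what `current` wants
            if nxt == start:
                return chain               # loop closed: circular exchange found
            if nxt in chain:
                break                      # a cycle exists, but not through `start`
            chain.append(nxt)
            current = nxt
    return None

# No two of these parties can swap directly, yet a three-way exchange works:
example = {"A": ("grain", "tools"), "B": ("tools", "fuel"), "C": ("fuel", "grain")}
print(find_exchange_cycle(example))  # -> ['A', 'B', 'C']
```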

In this kind of world, I can see only two reasons why there could be people who don’t own AAIs.

  1. They don’t want to own them.
  2. Other people with AAIs have decided to not allow them to have AAIs for some reason.

It’s tempting to add a third reason for the case where nobody has given a particular person AAIs, even though there’s no reason not to allow it. However, I don’t think that’s likely to be common, as the premise is that AAIs are cheap.


It may look that way, but it’s actually not. The same considerations make sense when viewed from the perspective of free AAIs and an attention-economy framework. Humans compete for attention. Since there are likely to be more AAIs than humans (they can be manufactured cheaply, as stated in the premise of this scenario), it will likely be easier to get the attention of AAIs, at least of those who are interested in humans at all. Humans, on the other hand, will likely invest their attention in high-quality interaction, meaning interaction with AAIs or their creations.

Though perhaps the emphasized phrase (“at least of those who are interested in humans at all”) is actually the solution to this paradox: there are simply few AAIs who are interested in humans. So there are still more humans interested in humans than there are AAIs interested in dealing with humans. But even that line of reasoning seems flawed. If AAIs are really superior, humans should always prefer the attention of AAIs to that of humans.

As a consequence, it’s the relative scarcity of attention they can get from AAIs that draws poor humans together.

In the case that AAIs are still controlled by humans, that is correct.

Yes, that’s what should be expected, unless some humans are very fond of trading “manually”, or are absolute control freaks.

Why? What’s the problem with using currencies as a universal representation of value that simplifies trading?

“Cheap” is relative. If you have no money at all, everything is “expensive”. If you have no labour to sell, what reasons do others have to equip you with the resources or rights needed to possess an AAI?

A universal representation of value is something that’s likely to continue to be useful. However, I doubt currencies will be the way to define that unit of value in the future. Currencies are useful when bandwidth is seriously limited, because they condense the required communication to a bare minimum. However, when bandwidth is plentiful, they become a bottleneck in getting things to run smoothly and dynamically.

When you go through the trouble of actually making use of the bandwidth available these days, currencies become superfluous, and you can remove that bit of complexity and just deal directly with the underlying resources.
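
One way to make the bandwidth argument concrete (my own framing, not spelled out in the post): with a common currency, each good needs only one quoted price, whereas pure direct exchange needs a rate for every pair of goods, so the information that has to be communicated grows quadratically with the number of goods.

```python
# Illustrative count of how much pricing information has to be communicated
# with and without a common currency (my own framing, made-up good counts).

def prices_with_currency(n_goods: int) -> int:
    # One price per good, quoted in the common currency.
    return n_goods

def rates_without_currency(n_goods: int) -> int:
    # One exchange rate per unordered pair of goods.
    return n_goods * (n_goods - 1) // 2

for n in (10, 1_000, 100_000):
    print(n, prices_with_currency(n), rates_without_currency(n))
# With 100,000 goods: 100,000 prices versus roughly 5 billion pairwise rates,
# which is why dropping the currency only becomes workable when bandwidth
# (and the machinery to process it) is plentiful.
```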

In my experience, humans tend to prefer what they understand over what they don’t understand. If we’ve got AAIs that are superior to humans, they will be difficult for the average human to understand. It’s easier to understand humans, so I’d expect that to keep humans preferring other humans.

Of course, where you grow up is important here too. If all you interact with while growing up is AAIs, then that’s what you understand.

Also, the premise of the question here seems somewhat odd to me. By the time we are able to make AAIs, I expect most human minds to be already heavily augmented with lesser AI technology, which will make it even more difficult to categorize what is human and what is AAI.

By augmentation I mean direct brain interfaces with the unconscious parts of the mind as well as the conscious parts. I expect human mental capability to increase dramatically as a result of such augmentations.
