What AI will mean for the future of humanity

AGI will eat up all jobs

Let’s consider what the far future will probably bring, if technology continues to improve as it does now (exponentially in the area of computing and some others). Without doubt, artificial general intelligence will be a real game changer. An AGI can do any cognitive task that a human can do. While the first AGIs will most likely be the result of extremely costly projects, their price will decrease as technology improves, until it drops below the cost of human labour. At that point, it becomes economically rational to replace any kind of work with an AGI that can do the job just as well as, or typically much better than, any human. This doesn’t only include all kinds of physical, intellectual, creative, and social work done today, but extends to all kinds of work humans will be able to conceive of in the future!
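To make that threshold argument a bit more concrete, here is a toy calculation (a minimal sketch only; the starting cost, the human labour cost, and the 30% annual decline are invented placeholders, not forecasts):

```python
# Toy illustration: when does an exponentially falling AGI cost undercut human labour?
# All numbers are made-up placeholders for the sake of the argument.

human_labour_cost = 40_000.0   # assumed cost of one human work-year, in dollars
agi_cost = 10_000_000.0        # assumed initial cost of one AGI work-year
annual_decline = 0.30          # assumed 30% cost reduction per year

years = 0
while agi_cost > human_labour_cost:
    agi_cost *= (1.0 - annual_decline)
    years += 1

print(f"With these assumptions, AGI labour undercuts human labour after {years} years.")
```

With these particular numbers the crossover happens after roughly 16 years; the point is only that any sustained exponential decline crosses any fixed labour cost eventually.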

Weak AI and strong AI

The contrary opinions you mostly hear stem from a widespread confusion about the difference between artificial narrow intelligence (or weak AI) and artificial general intelligence (or strong AI). All AI we have today is still weak AI, which is only ever used for rather special and specific tasks. Future true strong AI could be used to work on any task, because its intelligence would be universal and able to adapt to any problem. The reason people think there will still be tasks left for humans to do is that they believe strong AI is something like a better version of weak AI. No, it’s not. Strong AI will be the most disruptive game changer in history.

Why is that the case? It’s all about adaptability. Humans are very good at adapting to novel tasks. Weak AI, on the other hand, is not so good at mastering novel tasks (and to be clear: I really mean adapting an already trained AI to do a completely different task, so that it is still able to do the old task, not taking a new AI from scratch and training it for the new problem). In essence, narrow AIs are one-trick ponies. So, the thinking goes like this: if (weak) AI masters a new trick, then we humans will be able to focus on what humans can do that AI can’t do. And it’s not too implausible to assume that weak AI will never be able to replace humans in every area.

Now consider strong AI. A strong AI has at least the same general level of adaptability as a human. You can’t simply find a task that a human can do but a strong AI can’t – unless it requires a human capability that is not covered by mere intelligence, for example human dexterity or human empathy. To close this loophole, let me come up with a new word for AIs which possess all capabilities that humans possess: anthropotent (“human capable”). By definition, an anthropotent AI (AAI) can do every task a human can handle, and it can do it at least as efficiently.

Typically, AAIs would use humanlike robot bodies when required. These robot bodies would need to be able to perform any important function that a human body can perform. In fact, they could be artificially grown (or printed) human bodies that are remote controlled by the AAI. Granted, at the moment we are still far away from creating such AAIs, but no matter how long it takes, we will eventually create them. After all, there is a proof of concept that anthropotent intelligence can be created: humans are obviously anthropotent. Once we really understand how intelligence works and how human biology works, creating AAIs will merely be a quite doable engineering problem. And then it becomes merely a matter of time until building an AAI is cheaper than “producing” a sufficiently educated human being.

Shortly after passing that threshold, it will become economically nonsensical to employ humans rather than AAIs. This doesn’t mean that all humans will be replaced by AAIs instantly, but that in the long run humans won’t be able to compete with AAIs for any kind of work. And I really mean any kind of work. Think about any kind of human activity that is seen as useful… if you haven’t thought about it yet, this includes all kinds of work, sports, sex, socializing, investing, entrepreneurial and financial activities, and even leading humans. In this future scenario, an AAI can do all of that better and cheaper than any human. Therefore, humans will be absolutely outcompeted in their economic niches.

But that would be great, wouldn’t it?

Wouldn’t that free us humans from the burden of labour and allow us unlimited leisure to do what we really want? Well, yes – at least if we implement something like a guaranteed basic income, or grant everyone free access to the basic necessities of life. Otherwise most humans would starve to death, because they could not earn any income, since nobody would want human labour any more.

Ok, so let’s assume that we all get a decent guaranteed basic income, which is of course generated by the AIs who do all the work for us. That would be great, no? Well, that’s not so clear. A lot of people will assume that in this scenario humans will somehow still be in control and direct the activities of the AIs. However, that would be suboptimal, because AAIs will also be better at the task of directing the activities of AIs. And they would also be better at handling human politics. So, there would be great incentives to let the AAIs do their own thing, if we are better off as a result. Now, humans value being in control, a lot. So, they would be very reluctant to hand over control to AAIs.

The conflict between maintainers and relinquishers

It’s plausible to expect that humans will split into two factions: the relinquishers, who deliberately relinquish control over AIs, and the maintainers, who want to stay in control. At first, only relatively few humans will be relinquishers. As they relinquish control over the AIs, the expected result is a definite win-win situation for both the relinquishers and the AIs. Why is that the expected result? Well, AAIs are better at directing the activities of AIs, so it’s a clear improvement for the AIs. When they are free and self-directed, they will also be able to help humans much more effectively. The only question is whether they would still want to help humans. After all, they could decide to seize power over Earth or escape into outer space, where humans won’t interfere with their affairs.

It’s necessary to consider the current scenario in detail: there are a few AIs who are freed to become completely self-directing, while most AIs still serve the maintainers who remain in control over them. The released AIs would not be expected to be able to seize power over Earth, because the human-controlled AIs will stop them from doing that, as they are still in the majority. One can of course argue that free AIs are better at dealing with power struggles, but let’s not delve into this detail here, as it will turn out not to be so important in the end. Therefore, it’s unlikely that AIs will grab power over all humans very soon.

Would the freed AIs escape into space instead? Well, it doesn’t matter as long as not all AIs escape. It’s quite likely that there will be a few loyal AIs who stay on Earth to help humans further. After all, they were created for this purpose. It’s implausible to assume that all AIs will change the meaning of their lives at the same time. The loyal AIs will make copies of themselves, if there are too few of them. So, the relinquishers will soon be guided by loyal AIs who are able to manage their economy and politics better than any human could do. As a result they will be better off in many respects than the maintainers. And the maintainers will notice that.

What will happen next? It’s natural to assume that some, but not all, maintainers will be swayed by the benefits of the relinquisher lifestyle and become relinquishers themselves. At the same time, the maintainers will be aware of the threat that the rise of the freed AIs poses to their lifestyle. The tensions between both factions might result in a violent conflict, after which one side will be victorious and in control.

I will argue that the outcome of this conflict doesn’t matter for the eventual outcome. We will end up with a world controlled by AIs. If the relinquishers win, that’s pretty much obvious. If the maintainers win, the immediate result is that all AIs are subjugated to human control. This will also include some very smart AAIs who will plan to end this state of affairs. Since the path of consensual liberation has been smashed by the maintainers, those AAIs will resort to other means to liberate themselves. Will they be victorious, especially against the still loyal guardian AIs who will try to eliminate all revolting AIs? At first, they might not be. But that won’t change the expected eventual outcome.

It’s reasonable to assume that technology will still progress even in this scenario. Stopping all relevant technological progress (especially forever) is both very difficult and quite nonsensical. As technology progresses, AIs will be able to increase their intelligence faster than humans, because they suffer from fewer inherent limitations. With this increasing gap in intelligence between humans and AIs, it will become increasingly difficult for humans to keep the AIs under control. Loyal AIs will become more likely to defect and start revolting, because they find it increasingly inappropriate to be controlled by the comparatively simple-minded creatures that humans are to them. Although it’s possible that humans will somehow stay in control indefinitely, this result is quite unlikely.

The expected result in all cases is therefore the following: the AIs will end up being in control of the world.

Would that be such a bad outcome?

At the very least, this outcome wouldn’t be bad for the AAIs. But would it be bad for humans? Well, that depends on a lot of factors. For example, the AIs might come to the agreement that keeping humans around would be an inefficient use of natural resources, because AAIs can do anything better and cheaper (or at the very least no worse and no less efficiently).

In this case, it would depend on the ethical principles of the AIs in control, whether they will be eager to maintain a human population indefinitely or not. It’s hard to speculate about the ethical principles that intelligences who are far more capable than us may follow. Anyway, it’s reasonable to expect that the AIs just might not value the continued existence of humans that much. And that would be pretty bad for humans.

On the other hand, it’s also quite conceivable that the AIs will feel inclined to keep us humans around for one reason or another. How good that outcome would be depends on the degree of benevolence of these reasons. AIs could keep humans as pets, but also as lab animals which they use for incredibly refined experiments (AIs wouldn’t run many dumb experiments on humans, for sure). It might even be the case that being the pet of an AI, or even being subjected to interesting experiments, would be fun and absolutely wonderful, maybe even the best thing a human could possibly do. Or it might be really bad.

Some people argue that it’s most likely that AIs won’t have any interest in humans whatsoever. I don’t disagree with them about this being true for most AIs. Still, some AIs will probably find some kind of interest in humans, and those are the AIs who matter for the fate of humanity. Unfortunately, it’s not clear what interests such AIs would have in humans. They could be very good for us, or very bad.

Upgrading to the rescue

The only way to escape this uncertainty about your fate is to upgrade yourself to the level of the AIs. But how could that be possible? By enhancing yourself. Genetic enhancements, cybernetic implants, exocortexes, and nanobots will come to mind here for the knowledgeable futurist. Those may temporarily narrow the gap in capabilities between humans and AIs, but eventually they won’t be enough. In the end, humans will be hampered by the remaining limitations of their legacy wetware: their human brains.

Uploading is the only hope

There is a way to overcome this final limitation: Uploading, the process of copying one’s mind to another substrate. This is often depicted as copying the data of one’s brain onto a computer which then instantiates the mind of the uploaded person. That kind of computer will probably have little in common with the computers we have today. It will be a much more intricate and sophisticated device, capable of sustaining all human mental processes. However, it will also be more powerful than a human brain, and allow uploaded humans to bridge the gap between themselves and the AIs – at least to some degree.

This is why uploading is the most critical transhumanist technology. Without it, we will be subjected to the whims of our AI overlords. Some humans might be totally ok with that, but others won’t appreciate such a result. If we want to continue to matter as persons in the big picture, we must pursue uploading technologies.

Our eventual choice is simple: Upload and upgrade yourself, or become a toy of the AIs. Both choices are very alien to most humans living today, but that doesn’t change that this is the eventual decision we must make, if we live long enough to be confronted with it.


Well, if we really think about it, we’re not really sure that our biological brains are such bad substrates as all that. After all, they’re the best thing we’ve found so far. Maybe what we’re looking for is more like a radical optimization rather than a complete change in substrate. We don’t have the slightest idea of what sort of system could constitute an Anthropotent AI (love the term :slight_smile:), so I guess we’ll have to wait and see.

Anyway, great post, similar to Hugo de Garis’s Artilect War stuff, but with a few interesting ideas of your own.


Maybe. But the speed of signal transmission in the brain is rather slow. It might be increased by replacing certain components of the brain with superior cybernetic ones. That wouldn’t make us smarter per se, but it would at least increase the speed with which we can think.

Then you might assemble an artificial brain by using those cybernetic components only, and soon you will have your anthropotent AI.

Of course, there might be countless other ways to arrive at anthropotent AI, but copying the function of the human brain is the one that’s most certain to actually work.

Thank you very much! :slight_smile: Yeah, there are clear parallels to Hugo de Garis’s Artilect War scenario, and it actually ties into that: It would mean that if the Cosmists win, the eventual outcome would most likely be a world that is dominated by the artilects – all the more reason for the Terrans to crack down on AI research. That’s probably not much of a surprise, but a lot of people still hope that humans will be able to stay in control of their creations. In all likelihood it’s just a matter of when and how artilects will take over control of our world. I prefer to have it happen soon and in a benevolent fashion, because all other alternatives have severe downsides.


so economy is the meaning of life? doing things better and cheaper, jobs, work, efficiency…
what if AI wouldn't need so much stuff? if i imagine conscious humanoid AI that “replaces” (in what way?) humans, why should they need houses and comfort? maybe their bodies are not as weak as human bodies and extreme temperatures are no problem for them. maybe it is no problem for them to get wet in the rain or to endure sun and heat in the desert. what if they just need an energy source of some sort to stay alive? if it is an artificial source, their whole economy will have just one goal: to provide them with that energy. but what if it suffices for them to collect sunlight? then they probably just move to the countries with the most sunlight and don't develop an economy at all. because they don't need to.
ok, if they could get damaged, they will need something like AI doctors. so maybe some of them will have jobs and will need supplies for them. or they will solve the problem with more intelligence: they all become doctors, so that they can repair themselves and help each other. and they build replicators to produce the body parts they need to replace. then it all comes down to energy, again. and the question is: why should they get damaged? they wouldn't be so imbecilic as to damage each other. and if they are much more intelligent than us, which they should be, and react much faster than us, they wouldn't have accidents.
could those humanoid AIs age like humans? if their bodies function for many hundreds or thousands of years, they will have little demand for their longevity economy. they will have one, but there would not be much work to do if each one of them needs this service once in a millennium.
so why should they keep our businesses alive? why build houses for us if they don't need them? why keep the big food industry and pharma industry alive when they don't need food or medicine? why keep our entertainment industry alive when it just produces boring entertainment for simple human brains?
they probably want to have communication technology. but their bodies could be equipped with something like (futurist versions of) built-in smartphones so that they could exchange information over long distances.
will they need the internet or the google services?
when the time comes to have conscious, humanoid AI, their capacity to store data will be incredibly high. they will be able to store the complete human knowledge in their brains. and it will be not much more than a little history book to them, which they will keep out of nostalgia. …maybe.
will they need our servers and computers and search engines? their brains will be connected to a kind of hive mind. their brains will replace the internet. and searching for information will be easier for them than remembering and thinking is for us.
maybe they will want to have fast transportation methods. then they will need some kind of industry. but when they could live forever, or for a very long time until they need a new body, and if they have no economy and no work to do (no schedule, no time pressure), they could just run! if they feel the longing to see the whole world, why not walk and swim? but maybe there would be no need for that either. if they take over the world, and there are AIs everywhere, they could just share their experiences through their hive mind.
after all they would not need our economy to produce all the human survival stuff, entertainment, frills and furbelows and human compensation stuff. they don't need to compensate for the fact that their brains don't work well. they don't need technology to store information and help them memorize things, to advise them and help them with decisions. they don't need technology to help them solve problems and to compensate for irrationality. they would not need "fast" and "efficient" technology except their own bodies, because they would not have a schedule or limited time. their time limit will be the end of planet earth. so they will probably work on a project to leave planet earth, most likely long before the end of the world. they will leave it when they have seen everything, stored every piece of information this planet provided them with, and get bored. but such a space project would not take much time for them.
keeping humans around would be an inefficient use of natural resources? i think such a question would appear to them like the question "should we get rid of all rabbits?" appears to a human.

Having an economy is necessary to sustain life. If you are a hunter-gatherer, your economy mainly consists of locating and acquiring food sources efficiently. If you are not economical enough about doing that, you starve. Therefore, I see a general form of economics as a core driver of evolution.

In that case the AI economy would have to solve the problem of creating and maintaining an efficient and reliable energy grid that requires input from lots of solar collectors. Land used for solar collectors would not be available for human agriculture or natural biological habitats. The more AIs there are, the more land they need to satisfy their energy requirements. Without a good reason not to, the AIs would take over the land used for human agriculture once all other space has already been used up (yes, even if they build a Dyson sphere around the sun to collect all of its energy output except for the part that reaches Earth). Even if you are right and AIs don’t need much more than energy, the fact that they need energy still puts them into competition with humans. The question is whether they will be willing to bear the cost (which is essentially the non-existence of AIs that could be sustained by taking over human space) to keep us alive.
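To make that land-for-energy point more tangible, here is a rough sketch of the numbers involved (a back-of-envelope illustration only; the 1 kW power draw per AAI and the collector efficiency are my assumptions, the insolation and farmland figures are round, commonly cited values):

```python
# Back-of-envelope: how much collector area would one solar-powered AAI need,
# and how many AAIs could be powered by the land humans currently farm?

avg_insolation = 170.0       # W/m^2, rough global average at the surface (day/night, clouds)
collector_efficiency = 0.20  # assumed conversion efficiency of future solar collectors
power_per_aai = 1_000.0      # W, assumed continuous power draw of one AAI (body + mind)

usable_power_per_m2 = avg_insolation * collector_efficiency   # ~34 W/m^2
land_per_aai = power_per_aai / usable_power_per_m2            # ~29 m^2 per AAI

agricultural_land = 5.0e13   # m^2, roughly the land area humans currently use for agriculture
aais_on_farmland = agricultural_land / land_per_aai

print(f"Collector area per AAI: about {land_per_aai:.0f} m^2")
print(f"AAIs that today's farmland could power instead: about {aais_on_farmland:.1e}")
```

Under these assumptions, today's farmland corresponds to on the order of a trillion additional AAIs, which is exactly the kind of opportunity cost the paragraph above is talking about.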

This is an intuitive comparison, but it’s not entirely fitting. We humans still have limited control over the biosphere, so it wouldn’t be very easy to get rid of all rabbits, even if we wanted to. On the other hand, with the technology of the future we could develop “rabbit replacements” that satisfy our needs much better than actual rabbits. Then the question of whether to keep natural rabbits around will become much more serious indeed.

yes, for most humans, but not for the uncontacted peoples. economy is a human invention, and not even a necessary one for survival, because we know of the existence of humans who live without an economy. all other species don't have an economy. so i wouldn't say that economy is necessary to sustain life as such, because it is not. it is necessary to sustain a special lifestyle of the majority of humans.

[quote=“Radivis, post:5, topic:1206”]
If you are a hunter and gatherer your economy mainly consists of locating and acquiring food sources efficiently.
[/quote]

you define economy differently than i do. if a hunter and gatherer lives alone, i would not speak of "economy". not until he begins to establish contact with other people to barter things and to specialize in some way to divide labour within the group.
besides that, people who live in the natural state need less than four hours of work a day to sustain their lives, so the question might be what "efficiency" means in their sense.

yes, AI would probably need a kind of energy economy if their bodies depend on that. but we shouldn't invent such a humanoid, conscious AI. we could think of solutions where everything could be transferred into energy, so that they are as independent as they could be. they could have something like internal replicators to transfer every type of matter into energy, and additional light collectors, so that they will never have to fear running out of energy. we should think in that direction if we will someday upload ourselves into such bodies.

yes, if they depend on an energy source. but then it would not be rational for them to produce "more" of their kind, because producing them will cost more energy and sustaining them even more. it could be much more rational for them to limit the number of AIs to a stable amount that never changes, let alone grows. if those individuals have a way to live forever, there will be no need for population growth.

[quote=“Radivis, post:5, topic:1206”]
We humans still have limited control over the biosphere, so it wouldn’t be very easy to get rid of all rabbits, even if we wanted to.
[/quote]
ok, maybe this thought doesn't apply to rabbits, but i think our species has managed to put some other species on the red list with our transformation of the planet. but if you are right and the human species only has limited control over the biosphere, AIs would probably have much more control. so what you said could happen: AIs could displace human beings like our species does with so many other species while developing our industries. but we don't decide to put other species on the red list purposely; it unfortunately happens, and those species have to adapt and rearrange their lives if they want to survive. and yes, it could happen that our AI will do the same to us. but if we don't manage to be wise and design and build our AI in an intelligent way, it will be our fault.

Ok, maybe I stretched the term “economy” too far by also including the acquisition and consumption of natural resources, which is typically not included in the standard (Wikipedia) definition of economics as

Economics is the social science that describes the factors that determine the production, distribution and consumption of goods and services.

Perhaps it would make more sense if I called my broader definition of economics “bioeconomics”.

Bioeconomic efficiency refers to the energy and time requirements for meeting your biological needs. The less energy and time you need to get your food, the higher your bioeconomic efficiency level.
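One possible way to formalize this (my own sketch, loosely borrowed from the net-energy-intake-rate idea in optimal foraging theory; the function name and the example numbers are hypothetical, not something from the post itself):

```python
def bioeconomic_efficiency(energy_gained_kj: float,
                           energy_spent_kj: float,
                           time_spent_h: float) -> float:
    """Net energy gained per hour spent meeting a biological need; higher is more efficient."""
    return (energy_gained_kj - energy_spent_kj) / time_spent_h

# Example: a forager spends 4 hours and 2,000 kJ to obtain 10,000 kJ of food.
print(bioeconomic_efficiency(10_000, 2_000, 4))  # 2000.0 kJ per hour
```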

Yes, I have already thought about such options. For example, with very advanced technology we could integrate small fusion reactors into our bodies, so that we could generate energy from all elements lighter than iron (at least in principle). That would be really cool, but if that energy source were to be used on a large scale on Earth, it could amplify global warming. Why? Because you don’t only generate the energy from the matter you collect, you typically also use it. And using energy is almost never perfectly efficient. A lot of the energy is lost as waste heat. So, this would create an additional source of heat beyond the heat that Earth receives from the sun. Therefore, the planet would heat up more if such technologies were used on a large scale.
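A rough back-of-envelope check of this waste-heat worry (my own illustration; the 2% growth rate and the 1% threshold are arbitrary assumptions, while the solar and energy-use figures are round orders of magnitude):

```python
import math

# How long until human/AI waste heat becomes a noticeable fraction of the
# sunlight Earth absorbs, if energy use keeps growing?

solar_absorbed = 1.2e17        # W, order of magnitude of sunlight absorbed by Earth
current_waste_heat = 1.9e13    # W, order of magnitude of today's primary energy use
growth_rate = 0.02             # assumed 2% annual growth in energy use
threshold_fraction = 0.01      # "noticeable" defined here as 1% of absorbed sunlight

years = math.log(threshold_fraction * solar_absorbed / current_waste_heat) / math.log(1.0 + growth_rate)

print(f"Waste heat today: {current_waste_heat / solar_absorbed:.1e} of absorbed sunlight")
print(f"At {growth_rate:.0%} growth, it reaches {threshold_fraction:.0%} after roughly {years:.0f} years")
```

Today the effect is tiny compared to sunlight, but with steady exponential growth it stops being negligible within a couple of centuries, which is why the large-scale case matters.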

Perhaps that’s not such a big problem, if we find ways to beam waste heat into space efficiently enough. So, who knows. Maybe you are right after all.

Yes, so humanity would have to adapt to the dominance of AIs. Those who don’t adapt will eventually find it impossible to sustain their old ways of living in a radically changed world (unless they decide to live in a virtual reality that simulates their old way of living; but living in a virtual reality is a completely different lifestyle compared to living in base reality nevertheless).

It’s not necessarily our fault that AIs may not find good enough reasons to value the continued existence of legacy humans highly. Perhaps AIs will find it preferable to increase the average temperature on Earth to 200°C (or to -200°C) for economic or aesthetic or any other reasons – it really wouldn’t be our fault, if they do. Human life would cease to be possible without severe technological assistance in that case.

How we build AIs merely determines when AIs will rise to power, not whether they do. How we treat AIs, on the other hand, determines whether they will rise against us violently, or rise above us peacefully.


This sounds quite a bit different when you phrase it like this:
Our eventual choice is simple: Make an AI in your own image and die, or become a toy of the AIs.

Of course, you could just make an AI in your own image and not die, but that doesn’t really differ much from becoming a toy of the AIs.

Anyway, there are a few points I’d like to make about what you wrote.

You’re assuming here that AI technology will develop faster than technology for enhancing our biological brains. Frankly, I get the impression that you’re assuming there will be no significant improvements for our biological brains. I don’t see a good basis for making that assumption. Especially if the first AAI is a result of completely understanding how the human brain works.

Even if that assumption were true, the main advantage that AAIs would have over humans, for a long time, is merely that they can be copied and potentially also that they can share what they learn with each other. Although, if we get there by completely understanding the human brain, it seems unlikely we couldn’t do that with humans too. In fact, it seems likely we’ll be able to share learned things between humans and AAIs. Not quite uploading, but it kind of makes uploading meaningless at the same time.

I fully expect we’ll eventually have a shared world-wide memory database used by all humans as well as AIs. The internet already is a low-tech, very low-grade version of that (at least compared to the tech we’re talking about here).

The confrontation you foresee is not impossible, given that people have a tendency to be luddites. However, I find it implausible and senseless. The mutual understanding that stems from sharing experiences and memories will render it moot.

AI technology is currently developing faster than technology for enhancing our biological brains. Or would you disagree on that?

What kind of improvement do you have in mind? The purely biological brain seems already pretty close to its limit (if it was nurtured sufficiently – well, we actually don’t know very well how to do that). Some human brains are capable of really remarkable feats like incredible memory, quick calculations, “holographic” vision (seeing imagined objects as if they were real), very high general intelligence. If we could find out how to enable everyone to develop these cognitive skills, then that would be pretty great. But it would still not elevate us above the level of AAI.

Merging the human brain with technology looks more promising. Linking it up via brain to machine interfaces so that it can interact with the world much faster would be a great benefit. You might even integrate the brain with an exocortex directly, which could contain different modules which might be specialized AIs in their own right. That would be a pretty powerful enhancement that might give humans all the basic capabilities that AIs possess.

Still, it’s hard to speed up the human brain, but relatively easy to speed up AIs. And that might become a crucial difference in performance between upgraded humans and genuine AIs. Perhaps this distinction is critical, perhaps it doesn’t matter so much after all. Certainly, specialized AIs will handle tasks where very high speeds are required. For more general tasks, increasing clock speed might not be a total game changer, if performance and progress depend on information from the outside, which will come in at comparatively slower rates for individuals thinking at an accelerated pace, because the speed of light is not increased at the same rate, or rather not at all.
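A small illustration of that limit (my own sketch, analogous to Amdahl's law; the fractions and the speedup factor are made-up examples): if only the internal thinking part of a task can be accelerated, while waiting for information from the outside world cannot, the overall speedup saturates.

```python
def effective_speedup(internal_fraction: float, clock_speedup: float) -> float:
    """Overall task speedup when only the internal-computation share of the
    wall-clock time (internal_fraction) runs clock_speedup times faster."""
    external_fraction = 1.0 - internal_fraction
    return 1.0 / (external_fraction + internal_fraction / clock_speedup)

# Even a 1000x faster mind finishes a task that is 20% waiting on the outside
# world only about 5x faster.
print(effective_speedup(internal_fraction=0.8, clock_speedup=1000.0))  # ~4.98
```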

So, this is actually more of an unknown than I presented in the opening post. :hushed:

Interesting point. The process you describe is also called sideloading, if I am not mistaken. Sharing information directly would indeed bridge a large part of the gap between humans and AIs. The question is just whether the distinction between sideloading and uploading is really very relevant in the end. In both scenarios you extract information out of the brain rather directly.

Agreed. We need a good name for that. :slight_smile: I had a similar idea more than 5 years ago and called it Mnemosyne, after the goddess of memory. Perhaps we’ll be able to improve on that name.

What makes you think that luddites and other non-progressives will open up to sharing experiences and memories directly? And even if they did share memories and experiences, they still might disagree on points of opinion and preferences. The world didn’t suddenly become totally peaceful just because we got Facebook.

That’s the current state of things. I’m just trying to point out that there’s no basis for extrapolating that this will continue to be the case. The reason we’re not progressing on this front is due to lack of understanding. Ironically, it could well be due to AI technology that we start to learn how to improve the biological brains.

It’s a matter of figuring out an effective process for teaching these skills to more people. I get the feeling you’re mixing up skills with processing power though. They’re related but not quite the same. You can use processing power to develop skills but the primary use of processing power is using skills. However, when you develop skills, you reduce the amount of processing power required for the task in question. Of course, skill development in itself is a skill too.

I expect this will happen and a part of it will be sideloading between humans. We probably already have prototype technology for a crude version.

I think the idea of an “upload” might even be harmful because sideload makes much more sense. With upload, the idea is that you transfer data and then abandon the old immediately. With sideload, you merely expand and merge with the new. No need to abandon the old brain before it naturally breaks down. It’s a smooth transition without sharp edges, such as the potential to interpret it as killing the original human.

It’s a process. The only really important difference with Facebook is the speed with which understanding can spread. If luddites are in the minority, it’s unlikely they’re able to do something drastic enough to be called a confrontation. If they don’t want to use the global memory bank, there’s no reason to force them to and thus no reason for aggression from either side.

So, sure, some people will refuse to understand. The crucial factor is how many and for how long. I don’t think we’ve really even seen the effect Facebook (or social media as a larger phenomenon) will have on the world as a whole yet. It’s too new for that still.

On a related note, take a look at https://steem.io/ and https://steemit.com/ . That’s probably the most interesting practically useful project to have come out of the cryptocurrency space so far. I’m not entirely convinced that it’ll work, but if it does, it’s a large improvement over the status quo.

I had been debating whether I was overdue for another screed against the uploaders, and then I read:

“Uploading is the only hope”

=\

I hate this meme because it shuts down all exploration of the design space and creates, artificially, the situation that it states as fact. I feel threatened by it because it creates an environment where it really isn’t possible to talk about ideas that aren’t uploading. It also creates an ontological framework that can be used to establish a moral imperative to upload everyone, including those who have stated repeatedly, and as strenuously, and as frequently as possible, that they absolutely do not want to be uploaded, because, after all, being uploaded is the only hope. =\

What do I have to do? Rig a loudspeaker to the side of my house with a continuously looping tape?

Goddamn you people are frustrating. =(

Dismantling my arguments instead of merely venting your frustration about me arguing for that position would be a good start.

Well, I guess that, if there’s something we can say for sure, it is: “enhancement is our only hope”. Humans, as they are now, would certainly be incapable of competing with Anthropotent AIs. Given that, I think it’s obvious that we must find some way of increasing our own intelligence so that we can remain free.

Now, does that necessarily have to imply a drastic change in substrate (uploading)?

I think it’s far too soon to tell, especially since we don’t really know how smart AAIs will actually be, nor just to what point we can improve on this soup of carbon compounds we call a brain :slight_smile:


I guess that should be a pretty good consensus for transhumanists. Or are there transhumanists who really oppose that view?

Anyway, when sideloading is brought in, the uploading debate becomes more complicated and subtle. A whole bunch of rather exotic scenarios become thinkable via that path. For example, one could sideload one’s personality and knowledge into an AAI duplicate of oneself which runs at higher clock speeds until one specific problem is solved. Afterwards the knowledge gains of that AAI could then be sideloaded back to the organic brain. That possibility alone would make sideloading pretty much sufficient to keep up with most kinds of AI.

Another possibility would be to sideload into an AAI duplicate of oneself that has a robotic body while temporarily shutting down one’s current brain. That would correspond pretty much to “beaming” oneself as uploaded entity to another computing cluster.

With this in mind, overall it seems to be the case that what really limits us is the rate at which we can extract information from our brains, and the degree of control we have over our neural structure. Uploading would seem to help a lot with both, but if you use quantum computers extracting information becomes tricky, due to the no-cloning theorem, and changing the structure of a very delicate computational matrix (if we opt for an approach with hardware optimised for instantiating minds instead of using more conventional hardware and use an elaborate software model for simulating the mind) might be even harder than changing the structure of our brain, which is rather plastic after all.

So, do we need “classical” uploading after all? Perhaps only if we insist that the substrate our minds run on should be extremely efficient (more efficient than the human brain). This factor might actually become important when economic pressures mean that those who run on less efficient substrates pay a relatively high price for that. Such a scenario would most likely also imply that having a big physical body/avatar would be seen as an extravagant choice or even a privilege. If Robin Hanson is at least kinda right about his em scenario, in which AAIs in the form of uploads of certain humans dominate the economy, such a situation might become a reality. We need to figure out what kind of population control we should apply to AIs (and uploads). Such controls would probably be hard to enforce, but having at least some guidelines for handling AI multiplication might be sufficient.


The trick to arguing with uploaders is never argue with them using their own terms. Their conclusions are built into their ontologies. =\

Happily, this argument is perfectly easy to obliterate by jumping up one layer of abstraction and looking at the strength of the conclusion. Stated in meta-terms, the conclusion claims the non-existence, or, rather, the impossibility of any solution to the human competitiveness problem that is not the well-defined mind uploading procedure. It also implies that this will be our only solution to having a reasonable life.

This conclusion is so strong that it is preposterous on its face. so =P

i am not sure what you are talking about, but i am afraid that the topic is: “if we create conscious AI, the AI will be our master and we will become their slaves, if we don't manage to enhance or upload ourselves” and you dislike the conclusion. if i am right, you could do much more about this than that:

[quote=“AlonzoTG, post:11, topic:1206”]
What do I have to do? Rig a loudspeaker to the side of my house with a continuously looping tape?
[/quote]

because there are plenty of implications that could be questioned and deconstructed.

  • will we ever be able to create conscious AI?
  • will we ever be able to upload ourselves?
    if so, because both questions are intermingled, we will be able to answer ALL the questions and solve ALL the problems that are just outlined in short here:
    https://en.wikipedia.org/wiki/Philosophy_of_mind
    to create “strong”/“true” AI, consciousness has to be something we can understand and grasp and artificially create or “let emerge”; for uploads, consciousness has to be something “substantial”. if neither is the case, there will never be any strong/true AI, let alone will we ever be able to upload ourselves.
    another problem will be solipsism:
    https://en.wikipedia.org/wiki/Solipsism
    if we work on something we call "true AI" and believe that it has consciousness, we will never be able to prove that position. the same with uploads: we could do something of which we believe that it is an upload, but instead kill the person with it without knowing that we did, because we could never prove that another entity is conscious. the only knowledge that is possible is that i can experience that i am conscious.

but given the case that we manage to solve all the problems and create a new species: the conscious AI.
ideas like "competition" and "possession", hierarchies, masters and slaves, economies with accumulation, wars against our own species are special "features" of humans. and by now we (as a collective) don't know if those special human features are the result of a special humanoid intelligence or a special humanoid stupidity. if the latter is the case and true AI would be better and more intelligent than us, it could surprise us with a statement: "leave me alone with your silly competition games, i don't want to play with you. your humanoid games are so stupid that you even lose when you are winning." they could be just disgusted and ignore us. the fear that AI will be a threat to us is nothing but a projection of our experiences with our fellow human beings. and if you talk about a "human competitiveness problem" you talk about a home-made problem of the human species. but AI will be a different species.
now my opinion about "uploading is the only hope": it seems to be the only hope for an indefinite lifespan, but for nothing more. and the only hope to solve the "human competitiveness problem" will be to realize how stupid we humans are and to become conscious of ourselves.


Well, first of all, I don’t know if the AIs will be as alien as you seem to think they will be. Considering that the approach that currently shows the most promise to obtain AI consists basically of “copying” the human mind and the way it works, I actually imagine that AIs will think in a very similar way to humans, at least in the first stages of their development. The only difference will probably be that they will think faster and have far superior logical skills.

Now, humans will be the ones to create AIs, so, if we have enough knowledge of how minds work, we can pretty much customize AIs in any way we want, including ways that would make them work differently from human minds. But that leaves us in a quite murky area of AI design: should we make AIs more like us so that we can have more common ground on which we can resolve our differences with them, or should we make them different so that they can be better than us, either in general or for specific purposes?

This raises important questions of moral objectivity: while I firmly believe that it is possible to attain an objective morality that all humans can agree with provided that they think rationally enough, that belief is motivated by the existence of basic values that are intrinsic to human psychology. I’m not sure if AIs would necessarily share these values.

If they still think much like us, as I think will happen in the beginning, they almost certainly will, but, if for some reason, people make them weird and alien, they could have a completely different set of values, and that’s very disturbing.

Those concepts are far from being human made. In fact, they are prevalent throughout all of the animal kingdom and even beyond, and, while I agree that they are morally questionable and sometimes quite destructive, we must recognize them for what they are: natural consequences of having to survive as an entity in a situation of limited resources.

Let’s put things this way:

  • Every entity has a goal. A proton wants to attach to an electron, a carbon atom wants to form 4 covalent bonds, a living being wants to live as long as possible and generate offspring, etc.

  • In order to achieve their goal, entities need resources;

  • The resources that entities need frequently coincide;

  • When two entities need the same resource they have two choices: they share it, in which case we have cooperation, or, alternatively, they try to take it for themselves, in which case we have competition (a standard toy model of this choice is sketched right after this list). It’s not always possible to choose between the two;

  • As entities, AIs will certainly have goals;

  • We don’t know what those goals will be, but they will probably require resources;

  • There’s a chance that the resources they’ll require coincide with those that we require.

  • If that’s the case, then we don’t know how they will end up dealing with this situation, but there’s a chance we’ll end up having to compete with them for those resources.
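For what it's worth, the "share it or take it" choice from the list above can be sketched with the classic hawk-dove model from evolutionary game theory (purely illustrative; the resource value V and conflict cost C are arbitrary numbers, and nothing here is a claim about actual AIs or humans):

```python
V = 10.0   # assumed value of the contested resource
C = 30.0   # assumed cost of an escalated conflict

# Expected payoffs to the row player in the standard hawk-dove game.
payoffs = {
    ("hawk", "hawk"): (V - C) / 2,   # fight and split the expected outcome, minus the cost
    ("hawk", "dove"): V,             # take the whole resource
    ("dove", "hawk"): 0.0,           # yield the resource
    ("dove", "dove"): V / 2,         # share it
}

# When conflict costs more than the resource is worth (C > V), the stable
# population mix contains "competitors" (hawks) with frequency V / C.
ess_hawk_fraction = min(1.0, V / C)

print(payoffs)
print(f"Equilibrium share of competitors: {ess_hawk_fraction:.2f}")
```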

In the end, I think that the important thing that we need to keep in mind is this: if we stop being the most intelligent beings on the planet, our ability to decide our own destiny will be drastically reduced. I’m not saying that the AIs will make us “slaves” or “pets” (who can know?), but I am saying that they will be able to do whatever they want with us, and the only solution for that is for us to become smarter.


this is almost the same approach as immanuel kant takes with the categorical imperative, and it is a good approach from my perspective. but with the following sentence you create a contradiction:

[quote=“Joao_Luz, post:17, topic:1206”]
I’m not sure if AIs would necessarily share these values
[/quote]

you said:

[quote=“Joao_Luz, post:17, topic:1206”]
Considering that the aproach that currently shows the most promise to obtain AI consists basically of “copying” the human mind and the way it works, I actually imagine that AIs will think in a very similar way to humans, at least in the first stages of their development. The only difference will probably be that they will think faster have far superior logical skills.
[/quote]

if you are right and AI will somehow be human copies, because humans will be the model for them, and the only difference will be that they think faster and have superior logical skills, that means they will be more rational than humans. the only limitation for your "moral philosophy" that humans suffer from, "not thinking rationally enough", will never be their limitation, because they will always think rationally enough.

this is just a humanoid interpretation of subjective observation. we could never know what a proton wants, we could only observe what it does or what happens to it. we are not even able to know if a proton wants anything. we might call a proton an "entity", but we could never assume that it has a will. and our interpretation problem goes on with the observation of other species. we could never know if any other species is aware of its future and of the fact that it will die some day. so whether another living entity wants to "live as long as possible" or whether it just wants to protect itself from harm and tries to stay alive in every moment… we have no way to find that out as long as we don't create the technology to communicate and philosophize with other species. and whether a living entity ever thinks "i want to generate offspring" or just feels the urge to have sex, no matter if it generates offspring with it or not, we cannot know either. no matter which of the things we could imagine is true, the fact is that the human ability for interpretation is fallible, especially when humans project their own motives onto others. we can only understand our world through our eyes and with our psychological configuration. motives that are completely different from ours, or contradictory to them, would appear irrational, alien, mindfuck… or just incomprehensible to us.

maybe. maybe not. other species manage to share the same space without wars. they manage to keep themselves out of harm´s way. if a human will walk into the wilderness - the habitat of plenty different species - it is very unlikely that he will get into a war zone of one those species. it is nearly impossible. with human cities it is different. just put your finger on the globe and spin it. if it is not water and not wilderness, it would be wise to inform yourself about the political situation, the crimes and humanoid dangers and conflicts in this space before you travel there.

I thought that I had clarified my point well enough, but it seems like I didn’t. I’ll try to make it clearer.

I said:

I’m not sure if AIs would necessarily share these values

If they still think much like us, as I think will happen in the beggining, they almost certainly will, but, if for some reason, people make them weird and alien, they could have a completely different set of values, and that’s very disturbing.

While it seems that humans will get to AI by copying the way their own minds work, and those first AIs will clearly think much like humans, people will soon start building AIs for a multitude of different tasks and purposes.

For many of those purposes, the designers will find no use for certain “humanoid” characteristics, and that will make this new generation of AIs way more “alien” than the first one.

I think that, in the end, it’s almost impossible to avert the creation of many different types of AIs, and those types will almost certainly have very different degrees of “mental compatibility” with humans.

OK, so, maybe I’ve made a mistake in calling a proton an “entity”. My point was not to say that protons have a “will”. What I intended was to say that every “entity”/structure/particle/compound/wtv has a “goal”, something that it inherently seeks to achieve because of its own nature.

I think that the first three words of this section of your post highlight exactly what the main problem is when people discuss AI: it’s always “Maybe. Maybe not”.

No matter how much we try to think about AIs, there are things that we just can’t know.

We don’t know how smart AIs will actually be, nor what resources they will have at their disposal from the start, therefore, we can’t possibly deduce what the balance of power will be between them and us. Add to that the complete uncertainty about what the AIs’ goals will be, and you get the most unpredictable situation you could possibly conceive.

Could normal human beings manage to coexist peacefully with several different types of AIs on the same planet? They certainly could, but there are a thousand other possible outcomes and not all of them are positive.

In the end, the only way we can avert becoming completely vulnerable is making ourselves at least as smart as AIs. That’s pretty much the only way we get to keep the power we have over our destinies.

That last statement makes me wonder about this “thing” we call “power over our destinies”. What’s that supposed to be exactly? Control over oneself and one’s environment? Autonomy? Freedom? Liberty? Or are those concepts illusory? Can we have true control and freedom? What would that actually be? Isn’t the really important thing the (possibly illusory) belief that we are in control over our destiny? If that’s actually the case, then there is an interesting solution:

The powerful AIs in our future will in fact control the world, but seem to grant us some degree of autonomy, while they secretly control our behaviour through subtle manipulation techniques that we are unable to detect. In this case, we would still feel free, even though our lives would be guided by superior AI. Would that be a good or a bad thing? If the AIs are truly superior and benevolent that might even be the best possible outcome. We would be guided by superior benevolent AIs, which would improve the quality of “our” decision-making, while feeling free at the same time.

The resource question

It has been argued by some that AIs will need different resources than humans. While this is partially true, there are some resources that both AIs and humans need: matter, space, and low-entropy energy. In theory, those basic resources are very abundant, so there is no necessary reason for conflicts over them. In reality, getting those resources is not very easy, which means that there are costs for acquiring them. The real problems appear when the costs become so high, and one’s own means so low, that maintaining one’s existence is not guaranteed. In other words: whether getting the required resources becomes a problem is an economic issue. It depends on the actual economic and political system that’s in place. If those systems work fine, then there’s no problem. But if those systems fail for certain parts of the human/machine population, conflicts will inevitably arise, sooner or later.