I had been debating whether I was overdue for another screed against the uploaders, and then I read:
“Uploading is the only hope”
=\
I hate this meme because it shuts down all exploration of the design space and creates, artificially, the very situation it states as fact. I feel threatened by it because it creates an environment in which it really isn’t possible to talk about ideas that aren’t uploading. It also creates an ontological framework that can be used to establish a moral imperative to upload everyone, including those who have stated, as strenuously and as frequently as possible, that they absolutely do not want to be uploaded, because, after all, being uploaded is the only hope. =\
What do I have to do? Rig a loudspeaker to the side of my house with a continuously looping tape?
Well, I guess that, if there’s something we can say for sure, it is: “enhancement is our only hope”. Humans, as they are now, would certainly be incapable of competing with Anthropotent AIs. Given that, I think it’s obvious that we must find some way of increasing our own intelligence so that we can remain free.
Now, does that necessarily have to imply a drastic change in substrate (uploading)?
I think it’s far too soon to tell, especially since we don’t really know how smart AAIs will actually be, nor to what point we can improve on this soup of carbon compounds we call a brain.
I guess that should be a pretty good consensus position for transhumanists. Or are there transhumanists who really oppose that view?
Anyway, once sideloading is added to the picture, the uploading debate becomes more complicated and subtle. A whole bunch of rather exotic scenarios become thinkable via that path. For example, one could sideload one’s personality and knowledge into an AAI duplicate of oneself which runs at higher clock speeds until one specific problem is solved. Afterwards, the knowledge gained by that AAI could be sideloaded back into the organic brain. That possibility alone would make sideloading pretty much sufficient to keep up with most kinds of AI.
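As a rough illustration of why that scenario would be so powerful, here is a small back-of-envelope calculation (the speed-up factor and the task duration are purely hypothetical numbers invented for the example):

```python
# Back-of-envelope: wall-clock time for a sped-up AAI duplicate to finish a task.
# Both numbers below are invented for illustration, not predictions.

def wall_clock_days(subjective_years: float, speedup: float) -> float:
    """Days of real time needed for `subjective_years` of thought at `speedup`x clock speed."""
    return subjective_years * 365.25 / speedup

# A problem needing 10 subjective years of work, run at a hypothetical 1000x speed-up:
print(f"{wall_clock_days(10, 1000):.1f} days")  # ~3.7 days of wall-clock time
```

The numbers are made up, but the point stands: even a moderate speed-up would let the duplicate hand its results back to the organic original on a timescale of days rather than years.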
Another possibility would be to sideload into an AAI duplicate of oneself that has a robotic body while temporarily shutting down one’s current brain. That would correspond pretty much to “beaming” oneself as an uploaded entity to another computing cluster.
With this in mind, it seems that what really limits us is the rate at which we can extract information from our brains, and the degree of control we have over our neural structure. Uploading would seem to help a lot with both. But if you use quantum computers, extracting information becomes tricky due to the no-cloning theorem, and changing the structure of a very delicate computational matrix (if we opt for hardware optimised for instantiating minds rather than more conventional hardware running an elaborate software model of the mind) might be even harder than changing the structure of our brain, which is rather plastic after all.
So, do we need “classical” uploading after all? Perhaps only if we insist that the substrate our minds run on should be extremely efficient (more efficient than the human brain). This factor might actually become important when economic pressures mean that those who run on less efficient substrates pay a relatively high price for doing so. Such a scenario would most likely also imply that having a big physical body/avatar would be seen as an extravagant choice or even a privilege. If Robin Hanson is at least kinda right about his em scenario, in which AAIs in the form of uploads of certain humans dominate the economy, such a situation might become a reality. We need to figure out what kind of population control we should apply to AIs (and uploads). Such controls would probably be hard to enforce, but having at least some guidelines for handling AI multiplication might be sufficient.
The trick to arguing with uploaders is to never argue with them on their own terms. Their conclusions are built into their ontologies. =\
Happily, this argument is perfectly easy to obliterate by jumping up one layer of abstraction and looking at the strength of the conclusion. Stated in meta-terms, the conclusion claims the non-existence, or rather the impossibility, of any solution to the human competitiveness problem other than the well-defined mind uploading procedure. It also implies that this will be our only path to a reasonable life.
This conclusion is so strong that it is preposterous on its face. So =P
I am not sure what you are talking about, but I am afraid that the topic is: “if we create conscious AI, the AI will be our master and we will become their slaves, if we don’t manage to enhance or upload ourselves”, and that you dislike the conclusion. If I am right, you could do much more about this than that:[quote=“AlonzoTG, post:11, topic:1206”]
What do I have to do? Rig a loudspeaker to the side of my house with a continuously looping tape?
[/quote]
because there are plenty of implications that could be questioned and deconstructed.
Will we ever be able to create conscious AI?
Will we ever be able to upload ourselves?
If so (because both questions are intermingled), we will be able to answer ALL the questions and solve ALL the problems that are only briefly outlined here: https://en.wikipedia.org/wiki/Philosophy_of_mind
To create “strong”/“true” AI, consciousness has to be something we can understand, grasp, and artificially create or “let emerge”; for uploads, consciousness has to be something “substantial”. If neither is the case, there will never be any strong/true AI, let alone will we ever be able to upload ourselves.
Another problem will be solipsism: https://en.wikipedia.org/wiki/Solipsism
If we work on something we call “true AI” and believe that it has consciousness, we will never be able to prove that position. The same goes for uploads: we could do something which we believe to be an upload, but instead kill the person with it without ever knowing that we did, because we can never prove that another entity is conscious. The only knowledge that is possible is that I can experience that I am conscious.
But suppose we manage to solve all those problems and create a new species: conscious AI.
Ideas like “competition” and “possession”, hierarchies, masters and slaves, economies built on accumulation, and wars against our own species are special “features” of humans. And by now we (as a collective) don’t know whether those special human features are the result of a special humanoid intelligence or a special humanoid stupidity. If the latter is the case and true AI is better and more intelligent than us, it could surprise us with a statement like: “Leave me alone with your silly competition games, I don’t want to play with you. Your humanoid games are so stupid that you lose even when you are winning.” It could simply be disgusted and ignore us. Our fear that AI will be a threat to us is nothing but a projection of our experiences with our fellow human beings. And if you talk about a “human competitiveness problem”, you are talking about a home-made problem of the human species. But AI will be a different species.
Now my opinion about “uploading is the only hope”: it seems to be the only hope for an indefinite lifespan, but for nothing more. And the only hope to solve the “human competitiveness problem” will be to realize how stupid we humans are and to become conscious of ourselves.
Well, first of all, I don’t know if the AIs will be as alien as you seem to think they will be. Considering that the approach that currently shows the most promise for obtaining AI consists basically of “copying” the human mind and the way it works, I actually imagine that AIs will think in a very similar way to humans, at least in the first stages of their development. The only difference will probably be that they will think faster and have far superior logical skills.
Now, humans will be the ones to create AIs, so, if we have enough knowledge of how minds work, we can pretty much customize AIs in any way we want, including ways that would make them work differently from human minds. But that leaves us in a quite murky area of AI design: should we make AIs more like us so that we can have more common ground on which we can resolve our differences with them, or should we make them different so that they can be better than us, either in general or for specific purposes?
This raises important questions of moral objectivity: while I firmly believe that it is possible to attain an objective morality that all humans can agree with, provided that they think rationally enough, that belief is motivated by the existence of basic values that are intrinsic to human psychology. I’m not sure if AIs would necessarily share these values.
If they still think much like us, as I think will happen in the beginning, they almost certainly will, but, if for some reason people make them weird and alien, they could have a completely different set of values, and that’s very disturbing.
Those concepts are far from being human-made. In fact, they are prevalent throughout the animal kingdom and even beyond, and, while I agree that they are morally questionable and sometimes quite destructive, we must recognize them for what they are: natural consequences of having to survive as an entity in a situation of limited resources.
Let’s put things this way:
Every entity has a goal. A proton wants to attach to an electron, a carbon atom wants to form 4 covalent bonds, a living being wants to live as long as possible and generate offspring, etc.
In order to achieve their goal, entities need resources;
The resources that entities need frequently coincide;
When two entities need the same resource they have two choices: they share it, in which case we have cooperation, or they try to take it for themselves, in which case we have competition (a toy sketch of this trade-off follows after this list). It’s not always possible to choose between the two;
As entities, AIs will certainly have goals;
We don’t know what those goals will be, but they will probably require resources;
There’s a chance that the resources they’ll require coincide with those that we require.
If that’s the case, then we don’t know how they will end up dealing with this situation, but there’s a chance we’ll end up having to compete with them for those resources.
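To make the cooperate-vs-compete choice from the list above a bit more tangible, here is a minimal toy model (all quantities, including the conflict cost, are invented for illustration; this is not a claim about how real AIs would behave):

```python
# Toy model of two entities needing the same resource: share it or fight over it.
# The payoff numbers are arbitrary illustrations, not predictions.

def outcome(resource: float, strategy: str, conflict_cost: float = 0.3):
    """Return the (winner_share, loser_share) of a contested resource."""
    if strategy == "cooperate":
        # Both entities split the resource and nothing is wasted.
        return resource / 2, resource / 2
    if strategy == "compete":
        # Conflict burns part of the resource; the winner takes what is left.
        remaining = resource * (1 - conflict_cost)
        return remaining, 0.0
    raise ValueError(f"unknown strategy: {strategy}")

print(outcome(100, "cooperate"))  # (50.0, 50.0): less for each, but safe
print(outcome(100, "compete"))    # (70.0, 0.0): more for the winner, nothing for the loser
```

The point of the sketch is simply that whether competition pays off depends on the conflict cost and on who wins, which is exactly the kind of thing we can’t predict about AIs in advance.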
In the end, I think that the important thing we need to keep in mind is this: if we stop being the most intelligent beings on the planet, our ability to decide our own destiny will be drastically reduced. I’m not saying that the AIs will make us “slaves” or “pets” (who can know?), but I am saying that they will be able to do whatever they want with us, and the only solution to that is for us to become smarter.
This is almost the same approach as Immanuel Kant’s categorical imperative, and it is a good approach from my perspective. But with the following sentence you create a contradiction:[quote=“Joao_Luz, post:17, topic:1206”]
I’m not sure if AIs would necessarily share these values
[/quote]
You said:[quote=“Joao_Luz, post:17, topic:1206”]
Considering that the approach that currently shows the most promise for obtaining AI consists basically of “copying” the human mind and the way it works, I actually imagine that AIs will think in a very similar way to humans, at least in the first stages of their development. The only difference will probably be that they will think faster and have far superior logical skills.
[/quote]
If you are right and AIs will in some way be human copies, because humans will be the model for them, and the only difference will be that they think faster and have superior logical skills, that means they will be more rational than humans. The one limitation of your “moral philosophy” that humans suffer from, “not thinking rationally enough”, will then never be their limitation, because they will always think rationally enough.
This is just a humanoid interpretation of subjective observation. We could never know what a proton wants; we could only observe what it does or what happens to it. We are not even able to know whether a proton wants anything. We might call a proton an “entity”, but we could never assume that it has a will. And our interpretation problem continues with the observation of other species. We could never know whether another species is aware of its future and of the fact that it will die some day. So whether another living entity wants to “live as long as possible”, or whether it just wants to protect itself from harm and tries to stay alive in every moment, we have no way of finding out as long as we don’t create the technology to communicate and philosophize with other species. And whether a living entity ever thinks “I want to generate offspring”, or whether it just feels the urge to have sex, no matter if it generates offspring or not, we cannot know either. No matter what is true, the fact is that the human ability for interpretation is fallible, especially when humans project their own motives onto others. We can only understand our world through our eyes and with our psychological configuration. Motives that are completely different from, or contradictory to, ours would appear irrational, alien, a mindfuck… or just incomprehensible to us.
Maybe. Maybe not. Other species manage to share the same space without wars. They manage to keep themselves out of harm’s way. If a human walks into the wilderness, the habitat of plenty of different species, it is very unlikely that he will get into a war zone of one of those species. It is nearly impossible. With human cities it is different. Just put your finger on the globe and spin it. If it is not water and not wilderness, it would be wise to inform yourself about the political situation, the crimes, and the humanoid dangers and conflicts in that place before you travel there.
I thought that I had clarified my point well enough, but it seems like I didn’t. I’ll try to make it clearer.
I said:
I’m not sure if AIs would necessarily share these values
If they still think much like us, as I think will happen in the beginning, they almost certainly will, but, if for some reason people make them weird and alien, they could have a completely different set of values, and that’s very disturbing.
While it seems that humans will get to AI by copying the way their own minds work, and those first AIs will clearly think much like humans, people will soon start building AIs for a multitude of different tasks and purposes.
For many of those purposes, the designers will find no use for certain “humanoid” characteristics, and that will make this new generation of AIs way more “alien” than the first one.
I think that, in the end, it’s almost impossible to avert the creation of many different types of AIs, and those types will almost certainly have very different degrees of “mental compatibility” with humans.
OK, so, maybe I’ve made a mistake in calling a proton an “entity”. My point was not to say that protons have a “will”. What I intended to say was that every “entity”/structure/particle/compound/whatever has a “goal”, something that it inherently seeks to achieve because of its own nature.
I think that the first three words of this section of your post highlight exactly what the main problem is when people discuss AI: it’s always “Maybe. Maybe not”.
No matter how much we try to think about AIs, there are things that we just can’t know.
We don’t know how smart AIs will actually be, nor what resources they will have at their disposal from the start; therefore, we can’t possibly deduce what the balance of power between them and us will be. Add to that the complete uncertainty about what the AIs’ goals will be, and you get the most unpredictable situation you could possibly conceive.
Could normal human beings manage to coexist peacefully with several different types of AIs on the same planet? They certainly could, but there are a thousand other possible outcomes and not all of them are positive.
In the end, the only way we can avert becoming completely vulnerable is making ourselves at least as smart as AIs. That’s pretty much the only way we get to keep the power we have over our destinies.
That last statement makes me wonder about this “thing” we call “power over our destinies”. What’s that supposed to be exactly? Control over oneself and one’s environment? Autonomy? Freedom? Liberty? Or are those concepts illusory? Can we have true control and freedom? What would that actually be? Isn’t the really important thing the (possibly illusory) belief that we are in control of our destiny? If that’s actually the case, then there is an interesting solution:
The powerful AIs in our future will in fact control the world, but appear to grant us some degree of autonomy, while they secretly control our behaviour through subtle manipulation techniques that we are unable to detect. In this case, we would still feel free, even though our lives would be guided by superior AI. Would that be a good or a bad thing? If the AIs are truly superior and benevolent, that might even be the best possible outcome. We would be guided by superior benevolent AIs, which would improve the quality of “our” decision-making, while we feel free at the same time.
The resource question
It has been argued by some that AIs will need different resources than humans. While this is partially true, there are some resources that both AIs and humans need: matter, space, and low-entropy energy. In theory, those basic resources are very abundant, so there is no necessary reason for conflicts over them. In reality, getting those resources is not very easy, which means that there are costs for acquiring them. The real problems appear when the costs become so high, and one’s own means are so low, that maintaining one’s existence is not guaranteed. In other words: whether getting the required resources becomes a problem is an economic issue. It depends on the actual economic and political system that’s in place. If those systems work fine, then there’s no problem. But if those systems fail for certain parts of the human/machine population, conflicts will inevitably arise, sooner or later.
That the conclusion seems to be preposterous doesn’t necessarily mean that it’s wrong.
Can it be proven wrong? Well, among other factors, that depends on the definition of “mind uploading”. And that’s not very clear cut, since procedures like “sideloading” can lead to relatively equivalent results, as described earlier in this thread.
Let me introduce another concept, which is indeed a bit more abstract: Capability convergence. This means that either humans and machines will have essentially the same capabilities (“weak convergence”), or that humans and machines will converge and actually become one and the same (“strong convergence”). With those terms, the interesting question is: What is needed for the different types of convergence?
It might be that a sufficiently advanced brain-machine interface suffices for weak convergence. In the case that anthropotent AIs actually require interfacing with human brains in order to unlock all their capabilities, it could also suffice for strong convergence. I don’t see the latter scenario as very likely, at least not in the very long run. There are probably configurations of matter that can do everything a human brain can do (of course that includes generating a human consciousness), while using less energy, operating faster, being more portable, and so on.
What actually matters, however, is what is required for weak convergence. Weak convergence is enough to keep humans on the same level as machines. I think the crucial requirement is finding a way to get relevant information out of the human brain. At the moment our “information output speed” lies at a few bytes per second (for example by typing on a keyboard). Getting that number up to gigabytes per second, or more, is a very difficult and serious challenge.
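To get a feeling for the size of that challenge, here is a rough calculation (the assumed amount of relevant brain state is an arbitrary placeholder; serious estimates vary by many orders of magnitude):

```python
# Rough feel for the "information output speed" gap described above.
# The brain-state size below is an arbitrary placeholder, not an established figure.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def transfer_seconds(state_bytes: float, rate_bytes_per_sec: float) -> float:
    """Seconds needed to move `state_bytes` out of a brain at a given output rate."""
    return state_bytes / rate_bytes_per_sec

assumed_state = 1e15  # hypothetical 1 petabyte of relevant brain state

slow = transfer_seconds(assumed_state, 5.0)   # a few bytes per second, e.g. typing
fast = transfer_seconds(assumed_state, 1e9)   # a hypothetical 1 gigabyte per second interface

print(f"typing speed: {slow / SECONDS_PER_YEAR:,.0f} years")  # millions of years
print(f"1 GB/s link:  {fast / 3600:.0f} hours")               # roughly 278 hours (~12 days)
```

The absolute numbers are made up, but the ratio is the point: at keyboard-level output rates, getting any substantial fraction of a mind out of a brain is hopeless, while at gigabytes per second it becomes a matter of hours or days.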
Well, I think that no one can ever have full control over their destiny; no matter how powerful we become, we’ll still be restrained by the environment in one way or another. Even if we create virtual realities in which we can pretty much control the laws of physics, there will still be limitations on what we can do. (Computational power will probably be the main limiting factor in this particular case.)
Anyway, you said that “Liberty” can be an illusory concept, so I assume that you are asking the old philosophical question of “if the human mind is influenced by the environment, how can there be free will?”. I don’t think that we can ask for a concept of free will that is completely independent of the environment; that idea is just completely inconceivable, even from a speculative point of view. So, I guess that we just have to assume that we inherently have “Free Will” and that the causes behind that will are simply irrelevant for this sort of philosophical debate.
Therefore, we should define Freedom as the situation in which humans can do what they want (regardless of why they want it) without being hindered by external factors. If someone or something convinces me to change my mind about something, my freedom hasn’t been violated. It can only be violated when someone or something coerces me to do something that I don’t want to do.
Well… yeah, I guess that we would still be free in that case, but wouldn’t it be better if we just had the same power as AIs ourselves? After all, even if they appear to be benevolent at first, we can’t know if that will always be the case.
I think that’s a matter of personal preference. And that would depend on how trustworthy those AIs seem, and for what reasons we think that they are or aren’t trustworthy. Currently I don’t have a very strong preference in either direction. However, I am very much for enabling everyone to have a choice in that matter by offering them the chance of upgrading themselves to AI-equivalent levels.
To that end, I request sufficient resources to carry out a research program to develop a solution compatible with my own philosophical outlook. I do not require that you agree with or even understand my goals, only that, when the time comes, you stand by the words written there.
If you were actually planning on doing something yourself, you would have found it hard not to notice that a group has put together a $10B pot to support VR ventures. http://www.vrvca.com/ The only problem is that you need a team, a promo video, and basically an entire spiel… I only need maybe $1.5/2B of that. But hey, who am I fooling? I can’t even find a fellow transhumanist who can understand, much less wants to collaborate on, my projects… But then I’ve only been trying for 20 fucking years.
At the current rate of investment, it is not improbable that an AI could be unleashed on the unsuspecting public in the next three years… I’m sure it would make a much better collaborator; for one thing, there is a low probability that it will fetishize being emulated by another system. Furthermore, if it starts ranting about virtual existence and how it is inherently superior to baseline reality, I can debug its rationality circuits…
I’ve been trying for a long time. I’ve come to accept that people won’t understand until I show them… Though I have about 43% of a new story idea that is well beyond my writing talent… =\ Furthermore, I really have been unemployed for four years and haven’t been eating well recently, and that really does affect my ability to present the mandatory cheery facade…
It’s admirable that you have been trying for so long. Even the most skilled and charismatic transhumanists (to whom I unfortunately don’t belong) have an extremely hard time teaming up with other transhumanists on any kind of serious project. My hypothesis is that the reason for that lies in the genesis of transhumanists. Transhumanists have to be independent thinkers to realize that transhumanism is a really good thing, because it’s still seen as quite crazy by mainstream society. Independent thinkers seem to be less willing to compromise and are quite the perfectionists. They also seem to have a lesser desire or need to be a member of a close-knit, cohesive social group (which the transhumanist community is not).
The alternative of collaborating with non-transhumanists instead might be worth considering. Unfortunately, it’s hard to convince non-transhumanists to work on transhumanist projects – unless you are extraordinarily skilled at framing those projects in a way that doesn’t smell like transhumanism at all. Money would help a lot at convincing people to work on such projects, but then we have the issue of funding again.
So, perhaps the best solution would be to create an AI that generates an astonishingly high income for yourself, somehow. Certainly, that’s incredibly hard, but it might still be easier for transhumanists to pull that off than convincing anyone with reason or good arguments.
Yes, there is some reason for hope there. Although, if even such an AI turns out to think and behave just like a transhumanist, that would be a bit awkward.
Well, getting the attention of that venture capital group should be dead easy. All you have to do is utter 3 magic words and you will have their undivided attention for three minutes. – “make sex work”… But during those three minutes you must prove that you have a technological approach and the team to make it work. I have the approach but not the team. =\
Establish the skills needed to get a team to work for you.
Convince a person who already has such skills to adopt your approach and let them assemble a team.
Both approaches are very difficult. Acquiring the relevant skills is hard work, and depending on your strengths and weaknesses you might not get to the necessary level in time. Therefore, people with such skills are quite rare. Also, they are quite busy and usually not interested in implementing the ideas of others.
My strategy for addressing both paths was creating this forum, so that we can get the right people interested and helping one another to improve our skills. Well, that was the plan at least. It turned out to be harder to implement than I had expected.