That the conclusion seems to be preposterous doesn’t necessarily mean that it’s wrong.
Can it be proven wrong? Well, among other factors, that depends on the definition of “mind uploading”. And that’s not very clear-cut, since procedures like “sideloading” can lead to essentially equivalent results, as described earlier in this thread.
Let me introduce another concept, which is admittedly a bit more abstract: capability convergence. It means either that humans and machines will have essentially the same capabilities (“weak convergence”), or that humans and machines will actually merge and become one and the same (“strong convergence”). With those terms in place, the interesting question is: what is needed for each type of convergence?
It might be that a sufficiently advanced brain-machine interface suffices for weak convergence. In the case that anthropotent AIs actually require interfacing with human brains in order to unlock all their capabilities, it could also suffice for strong convergence. I don’t see the latter scenario as very likely, at least not in the very long run. There are probably configurations of matter that can do everything a human brain can do (which of course includes generating a human consciousness), while using less energy, operating faster, being more portable, and so on.
What actually matters, however, is what is required for weak convergence. Weak convergence is enough to keep humans on the same level as machines. I think the crucial requirement is finding a way to get relevant information out of the human brain. At the moment, our “information output speed” lies at a few bytes per second (for example, by typing on a keyboard). Getting that number up to gigabytes per second, or more, is a very difficult and serious challenge.
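To put rough numbers on that gap, here is a quick back-of-the-envelope sketch in Python; the typing-rate figures are my own assumptions for illustration, not measurements:

import math

# Rough figures for a fast typist (assumptions, not measurements).
WORDS_PER_MINUTE = 60
BYTES_PER_WORD = 6  # ~5 ASCII characters plus a space

typing_bytes_per_second = WORDS_PER_MINUTE * BYTES_PER_WORD / 60  # = 6 B/s
target_bytes_per_second = 1e9  # the gigabyte-per-second goal mentioned above

gap = target_bytes_per_second / typing_bytes_per_second
print(f"Typing output: ~{typing_bytes_per_second:.0f} bytes/s")
print(f"Target output: {target_bytes_per_second:.0e} bytes/s")
print(f"Gap: ~{gap:.1e}x, about {math.log10(gap):.0f} orders of magnitude")

Even with that optimistic typing rate, a brain-machine interface would have to close a gap of roughly eight orders of magnitude.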
Well, I think that no one can ever have full control over their destiny. No matter how powerful we become, we’ll still be constrained by the environment in one way or another. Even if we create virtual realities in which we can pretty much control the laws of physics, there will still be limitations to what we can do. (Computational power will probably be the main limiting factor in this particular case.)
Anyway, you said that “Liberty” can be an illusory concept, so I assume that you are asking the old philosophical question of “if the human mind is influenced by the environment, how can there be free will?”. I don’t think that we can ask for a concept of free will that is completely independent of the environment; that idea is inconceivable, even from a speculative point of view. So I guess that we just have to assume that we inherently have “Free Will” and that the causes behind that will are simply irrelevant for this sort of philosophical debate.
Therefore, we should define Freedom as the situation in which humans can do what they want (regardless of why they want it) without being hindered by external factors. If someone or something convinces me to change my mind about something, my freedom hasn’t been violated. It can only be violated when someone or something coerces me to do something that I don’t want to do.
Well… yeah, I guess that we would still be free in that case, but wouldn’t it be better if we just had the same power as AIs ourselves? After all, even if they appear to be benevolent at first, we can’t know whether that will always be the case.
I think that’s a matter of personal preference. And that would depend on how trustworthy those AIs seem, and for what reasons we think that they are or aren’t trustworthy. Currently I don’t have a very strong preference in either direction. However, I am very much for enabling everyone to have a choice in that matter by offering them the chance of upgrading themselves to AI-equivalent levels.
To that end, I request sufficient resources to carry out a research program to develop a solution compatible with my own philosophical outlook. I do not require that you agree with or even understand my goals, only that, when the time comes, you stand by the words written there.
If you were actually planning on doing something yourself, you would have found it hard to miss that a group has put together a $10B pot to support VR ventures: http://www.vrvca.com/ The only problem is that you need a team, a promo video, and basically an entire spiel… I only need maybe $1.5–2B of that. But hey, who am I fooling? I can’t even find a fellow transhumanist who understands, much less wants to collaborate on, my projects… But then I’ve only been trying for 20 fucking years.
At the current rate of investment, it is not improbable that an AI could be unleashed on the unsuspecting public in the next three years… I’m sure it would make a much better collaborator; for one thing, there is a low probability that it will fetishize being emulated by another system. Furthermore, if it starts ranting about virtual existence and how it is inherently superior to baseline reality, I can debug its rationality circuits…
I’ve been trying for a long time. I’ve come to accept that people won’t understand until I show them… Though I have about 43% of a new story idea that is well beyond my writing talent… =\ Furthermore, I really have been unemployed for four years and haven’t been eating well recently, and that really does affect my ability to present the mandatory cheery facade…
It’s admirable that you have been trying for so long. Even the most skilled and charismatic transhumanists (a group to which I unfortunately don’t belong) have an extremely hard time teaming up with other transhumanists on any kind of serious project. My hypothesis is that the reason for that lies in the genesis of transhumanists. Transhumanists have to be independent thinkers to realize that transhumanism is a really good thing, because it’s still seen as quite crazy by mainstream society. Independent thinkers seem less willing to compromise and tend to be perfectionists. They also seem to have a weaker desire or need to be a member of a close-knit, cohesive social group (which the transhumanist community is not).
The alternative of collaborating with non-transhumanists might be worth considering. Unfortunately, it’s hard to get non-transhumanists to work on transhumanist projects – unless you are extraordinarily skilled at framing those projects in a way that doesn’t smell like transhumanism at all. Money would help a lot in convincing people to work on such projects, but then we run into the issue of funding again.
So, perhaps the best solution would be to somehow create an AI that generates an astonishingly high income for you. Certainly, that’s incredibly hard, but it might still be easier for transhumanists to pull that off than to convince anyone with reason or good arguments.
Yes, there is some reason for hope there. Although if such an AI turns out to think and behave just like a transhumanist, that would be a bit awkward.
Well, getting the attention of that venture capital group should be dead easy. All you have to do is utter three magic words and you will have their undivided attention for three minutes – “make sex work”… But during those three minutes you must prove that you have a technological approach and the team to make it work. I have the approach but not the team. =\
1. Establish the skills needed to get a team to work for you.
2. Convince a person who already has such skills to adopt your approach and let them assemble a team.
Both approaches are very difficult. Acquiring the relevant skills is hard work, and depending on your strengths and weaknesses you might not reach the necessary level in time. That is also why people with such skills are quite rare. On top of that, they are quite busy and usually not interested in implementing the ideas of others.
My strategy for addressing both paths was to create this forum, so that we can get the right people interested and help one another improve our skills. Well, that was the plan, at least. It turned out to be harder to implement than I had expected.