i am too lazy to write another post, but i suggest you read about the Einstein-Podolsky-Rosen thought experiment.
so you don't have to reinvent the wheel
I meant, if universes are completely identical in every way, they are one and the same, so we only have one universe. If there are nearly identical universes that are completely isolated from one another, the difference that they reside somewhere else is enough to state that they are not the same but two different universes.
Ok, my AI idea was not that important but pretty unclear, i see…
Radivis mentioned supervenience:
If the individual experience of a person depends on the configuration of particles the person consists of, it might be very difficult (for practical reasons) to determine whether two copies have the same configuration of particles. Because if just one particle differs, one copy feels an itch on the nose and the other does not. So if the identity of a human being requires a specific configuration of all the particles his body consists of, we would have to compare the particle configurations of both copies if we want to find out whether they are in the same mental state and their identities are identical. This comparison might turn out to be impossible. Maybe we will never be able to beam persons as described in Star Trek. In a simple materialist view, people are nothing more than a tower of toy building blocks, except that the blocks are much tinier and far more numerous. But if we deconstruct a person in one place, send the construction information of the exact configuration of her body to another place and reconstruct her there, will we have just transported her? Or did we kill the original person and construct a new one who happens to have the memories of the original and claims to be the original although she is not? We could never determine that. But a set of data remains the same if we send it somewhere. And if we are ever able to create strong AI, and the identity of one AI “only” depends on the configuration of data and not on the configuration of particles of a body, maybe we will be able to beam the AI without killing it first. Maybe we will even be able to copy an AI completely with the same identity… but this is just speculation.
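The point that a set of data, unlike a particle configuration, can be compared and transmitted exactly can be sketched in a few lines. This is only a toy illustration; the state fields and function name are made up for the example:

```python
import hashlib
import json

def state_fingerprint(state: dict) -> str:
    """Serialize a state deterministically and hash it."""
    canonical = json.dumps(state, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# A (fictional) data-based identity and a transmitted duplicate of it.
original = {"memories": ["first day of school"], "mood": "curious"}
copy = json.loads(json.dumps(original))  # round-trip = "beamed" copy

# Unlike two physical bodies, two sets of data can be compared exactly:
assert state_fingerprint(original) == state_fingerprint(copy)

# And the tiniest difference (the itch on the nose) is detectable:
mutated = dict(copy, mood="itchy nose")
assert state_fingerprint(original) != state_fingerprint(mutated)
```

For data, "same configuration" is a well-defined, checkable property; for particle configurations of bodies, no such practical comparison exists.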
No, that wasn’t the problem: I didn’t understand your idea grammarwise.
This sentence doesn’t make sense:
Artificial Intelligences will be easier to compare, if their identities are based on data.
yes, and after the upload of someone's mind, it will be possible to beam him too.
i see an analogy to the problem of the first-person perspective.
when someone uploads his consciousness into a computer, which implies an analogue-to-digital conversion (this is imho what @zanthia meant by distinguishing between the configuration of the particles of someone's body and the configuration of the data of an AI), we have the same problem mentioned above by @Radivis about the identity of two (or more) persons.
let's imagine someone connects his brain to an AI to upload his mind. while the upload is going on, there will be a moment when this person changes his perspective, because a part of his consciousness is transferred or copied into the AI (the computer), and this person becomes two persons, one in his human body and another in the AI, connected by some kind of network (a Neuralink, Elon Musk). or maybe (and this is what i am expecting as a transhumanist) a totally different kind of person emerges from this process (a cyborg).
it is similar to the beaming problem mentioned by @zanthia.
in my reflections i saw a solution for this problem in the synchronicity and non-locality described in the Einstein-Podolsky-Rosen paradox, but i am not sure whether a quantum process will work in a macroscopic application.
hence the real and obvious application for @Radivis' superworld superposition problem and its derived implications is the upload problem.
edit: i see that i have to explain a premise made above:
imho the upload can not take place in a computer as into an empty bottle (this is the darkest way to hell); there must be an AI to receive him into his new habitat.
the question is whether or not this AI has to be embodied (connected to the outer world by a sensorium), or whether the AI may only access a cyberspace.
what will happen to the uploaded person? will he be one, or two, or a third, or just lose his mind (ok, this will not be an option). and what will happen when the uploading person and the uploaded mind are separated, like the cutting of the umbilical cord separates the newborn child from the placenta feeding it?
What is identity? Identity is merely a construct. In the objective view it’s a societal construct. A person is an identity that distinguishes that person from other persons by specific criteria, which often aren’t strictly defined, but rather work on similarity heuristics. My body and mind today are similar to my body and mind a year ago, so I count as the same identity. It is conceivable that society could work differently and everybody gets a new identity whenever a sufficiently large change to body or mind happens. Of course, that would make societal interactions more complicated, which is the main reason why identity is considered to be stable throughout the whole life of a person. Under closer scrutiny, this construct is obviously quite flawed. A human at age 2 is very different from that human at ages 16 or 60. Why see all of them as the same person? Because making a clear cut transition is hard. And also unnecessary.
With a technology that allows a duplication of a person, the simplest solution of seeing identity as constant and monolithic becomes more obviously problematic. If I consider an original person and their copy as one and the same person, I run into different problems:
- The illusion of identity is much harder to maintain, because I see two systems that are however supposed to count as one. That’s still doable, but quite unnatural.
- Each instance of that identity is responsible for the whole identity. Especially when both instances diverge, this can easily become a big problem. If one instance commits a murder, both are to be held accountable.
Such problems make it seem reasonable to give both the original and the copy their own respective identities that are not the same.
Still, identity in the context of societal roles, rights, and duties, is a social construct that could in theory be constructed with arbitrary definitions and criteria. In a reasonable society the identity constructs will however mostly turn out to be practical, if not even pragmatic. Almost nobody wants to live in a society in which the definition of identity is impractical, because that would be a major cause of suffering. Imagine that you counted as a new person every time you woke up. You would have to get a new ID first thing in the day before being allowed to do anything in relation to other people. You would also have to get a new job every day and purchase stuff every day, because you start out with nothing, unless you inherit stuff from a previous identity. Life in such a society is conceivable, but hellish. Similar considerations imply that identities should remain as stable as possible. Switching to a different body or computational substrate shouldn’t change identity. Moving from one place to another, whether slowly (by walking) or fast (by beaming), shouldn’t change identity either. Of course, societies are free to define identity as they please, but societies pay a high price in terms of added superfluous complexity by making the concept of identity more complicated than necessary.
From the subjective view identity doesn’t seem like a construct, but even there it’s a psychological construct. At the basic level of subjectivity there is just a stream of subjective perception – no sense of self or identity. The idea of a subjective identity is an idea that is formed to order certain parts of the stream of subjective perception. Some parts of that perception refer to a “self”, while others refer to a “not self”. This is basically a neural network based classifier at work. Also in this case, the “self” classifier could have any possible configuration. As in the case of the societal identity construct, the personal psychological self construct is however required to be at least somewhat practical. Otherwise all kinds of psychological problems may appear whenever a person classifies something as “self” or “not self” when it’s not very appropriate to do that.
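The "self"/"not self" classifier can be sketched as a toy model. The features, weights, and threshold below are invented purely for illustration; a real classifier would be a trained network over the whole perceptual stream:

```python
# Toy "self" classifier over the stream of perception.
# Each perception is scored by hand-set weights standing in for
# what a trained neural network would have learned: body-internal
# signals push toward "self", external visual signals away from it.

def classify_self(perception: dict, threshold: float = 0.5) -> bool:
    weights = {
        "proprioceptive": 0.6,    # felt from inside the body
        "tactile": 0.3,           # felt on the body surface
        "visual_external": -0.4,  # seen out in the world
    }
    score = sum(weights[k] * perception.get(k, 0.0) for k in weights)
    return score > threshold

itch_on_own_nose = {"proprioceptive": 1.0, "tactile": 1.0, "visual_external": 0.0}
passing_car = {"proprioceptive": 0.0, "tactile": 0.0, "visual_external": 1.0}

assert classify_self(itch_on_own_nose) is True   # score 0.9 -> "self"
assert classify_self(passing_car) is False       # score -0.4 -> "not self"
```

Changing the weights changes the boundary of the "self", which is exactly the point above: the classifier could have any possible configuration, but only some configurations are practical.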
Of course, both identity constructs, the societal and the personal one, do share certain relations to each other. I am not free to declare my self identity construct to be something completely different from what society sees as my societal identity construct. If I say that I am my body and my car, this is already quite eccentric, but if I say that I am a whole nation and people are supposed to do what I say, this is sufficient reason for me to be locked up in an asylum, unless I am a kind of absolutist dictator.
indeed you are the king of apodictic statements
but i see in your disambiguation of the term “identity” no debatable contribution to the upload problem, whose importance for transhumanist concerns can not be overestimated.
will the upload make you another person (give you another identity)?
here is some information about the stream of consciousness
a thought experiment: imagine you fall into a Sleeping Beauty sleep and wake up with complete amnesia. you have no memories and no idea about your identity. but you still feel like a person.
here is my problem: if it is possible to feel like a person even in a state of complete amnesia, what is the core of you that has to be uploaded into a computer for you to remain the felt “you”?
this question is imho of crucial importance for the upload problem, because the knowledge acquired in a person's life can easily be obtained by going online, and the personal memories may in most cases be superfluous. so what remains to be uploaded for saving the felt “me”, for preserving the continuity of the “identity”?
in German there is a word describing this entity: “Wesenskern” (essential core).
that is what i am looking for.
i have an idea about it: don't look at the content, but at the structure.
- does any person (better: any being) have an individual structure of mind, disregarding the phenomenon of consciousness?
- might this distinctive structure of mind be identical to the concept of “identity”, and not, as we intuitively assume, the phenomenal perceptions, the contents of our mind, which we call “consciousness”?
- can it be sufficient to copy this structure in order to transfer someone's “self” to an uploading AI?
- if this AI adopts this structure, will it be identical to the uploading “self”?
- if several copies of this same structure, regardless of the body or the environment they inhabit, had the possibility to communicate - will they form ONE self?
- the transhumanist debate is about modular bodies, but i have the idea that the solution may be modular minds
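The "structure, not content" idea can be illustrated with a toy sketch. The mind representation and the helper function here are entirely hypothetical; a real "structure of mind" would of course be vastly more complex:

```python
# A "mind" as a tiny graph: each node maps to (links, stored content).
# The structure is which nodes connect to which; the content is what
# each node stores (memories, perceptions).

def mind_structure(mind: dict) -> frozenset:
    """Reduce a 'mind' to its bare connection structure,
    discarding all stored contents."""
    return frozenset(
        (node, frozenset(links)) for node, (links, _content) in mind.items()
    )

# Two minds with entirely different contents but the same structure:
alice = {"a": (["b"], "the smell of coffee"), "b": ([], "a red bicycle")}
upload = {"a": (["b"], "no memories at all"), "b": ([], "static noise")}

assert mind_structure(alice) == mind_structure(upload)  # same structure
assert alice != upload                                  # different contents
```

If identity lived in the structure, the upload above would count as the same "self" even though it shares no memories with the original, which is exactly the question the bullet points raise.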
(please give me some likes or i will never write a word in this forum again )
Let’s ignore that part of Radivis’ hypothesis at first: Let’s say, we just happen to live in a simulated world and after our deaths the simulation re-incarnates us into “the real world”. What do ye imagine it to be like?
I hope it’s a highly developed cosmic civilization that dominates space and time and where life is like a paradise for most of its inhabitants.
Akin to the science fiction franchises “The Culture” and “Orion’s Arm”, only better.
What does this have to do with this topic?
So, does anybody here know those “The Culture” books or want to browse the “Orion’s Arm” homepage?
the title of this topic is “superworld superposition”. superposition is a physical concept
i have found a link where the hypothesis of the holistic universe is explained at length.
this hypothesis is imho a mind-opener, which allows us to liberate our thinking from the telluric cognition of our existence and to take a bird's eye view of the world, like the demiurge does.
How do ye think dying and reincarnating feels if this hypothesis is true?
Say I fly a plane, lose control and am about to hit a mountain. Then it comes, I hit the rock and then…
…it gets wet, glass shatters, I hit the floor and have a new body into which my soul was downloaded, in another world?
It feels like absolute terror about your impending death. Then the world around you dissolves and you feel memories of your whole life rushing into your mind. It’s like escaping out of a thick fog that stopped you from thinking clearly your whole life.
First of all, if you die in any world, your patterns have degraded so much that reconstructing them is highly problematic. That’s why it’s much more reasonable to take a snapshot of your body-mind-patterns shortly before your death. That snapshot is then cut out of this world and pasted into another world. Your body is then augmented so that it is reinstantiated at its historic peak condition. Your mind is altered to reintegrate all the memories and skills you have ever possessed at the same time.
Why would simulators do that? Because it’s not too difficult for them and the resurrected will experience all of that as a great gift. If the simulators want to manipulate or even merely educate the simulated, then this would be a quite reasonable strategy. Further augmentations are of course optional.
Of course other scenarios are possible. If the simulators want to instrumentalize us, they may cut out parts of our memories and minds, or bodies, that are useless for their purposes. That probably wouldn’t feel too traumatic, because the parts of your mind that would be traumatized by such an experience would likely also be removed or silenced. You might not even notice that something crucial is missing.
There’s always a chance that the simulators would do something even worse to you, if you die. Dying is not to be taken lightly, even if it doesn’t mean the subjective end of experience. In any case, it means the end of a chapter (or rather book) of your existence. The end of a story, and the beginning of a new one.
Such a dramatic experience will make many people feel a loss of meaning. They lose their friends, relatives, jobs, and the missions of their past lives. They also wake up in a world that could kill them with the sheer future shock alone. Reorienting oneself after such an experience would surely be very challenging. At the very least for that reason, death is something that should be rightfully feared.
Is it possible that the Disney duck universe is real and I’ll be reincarnated there? Because it was through Donaldistic scientific journals (research on Duckburg) that I first encountered the idea that many worlds exist. https://disney.fandom.com/wiki/Duck_universe
Possible, yes. Likely? No.
We would need to consider the reasons for simulators to reincarnate us into a different world. Why would they do that? After all, it’s easier and cheaper for them not to continue our existence in any way.
The Drama
Perhaps our world is something like a drama for the simulators. In that case, they might want to interact with the protagonists of that drama personally. They might have preferences about the world in which they wanted to interact with us. As we act most like ourselves in the world we actually live in, it would be natural for them to reincarnate us in a world that is similar to ours, but which lets the simulators interact with us directly. They might not be too interested in transferring us to their “real” world, or into worlds of our choosing.
The Ethical Obligation
If they reincarnate us according to some ethical considerations, any world that is not too terrible will suffice for us. In that case, life in the world we get reincarnated into might be quite boring, because it happens to be a cheap accommodation for resurrected simulated people (“ressims” as I call them). (But of course we won’t really feel bored, because we get great cheap drugs that make us not feel bored.)
The Scientific Experiment
If the simulators are just curious what kind of life emerges in a world like this, they will have relatively little motivation to resurrect us anywhere.
The Game
This world could also be a game for the simulators. Purely simulated humans would then play the role of non-player characters. Since the game is about the players, there would be little motivation to resurrect NPCs.
The Historical Simulation
This world may be a simulation of the past of the “real” world. As such we play the roles of historical figures. This situation is quite similar to the first case, but we might have better chances to get resurrected into the real world, because maybe some simulators hope that our characters may have some positive effect in their real world.
It seems to be difficult to find plausible motivations for simulators to elevate us to their own levels of power, wealth, and influence. Exceptions are simulators that are very ethical, or very interested in us (and then the most important question would be “why?”). If we rise to sufficient wealth and freedom in the “real world”, we may then be able to retire into the worlds in which we want to live. Or we may take vacations there.
Or maybe they will resurrect me into that Duckburg universe because they want to do me a favour?
Besides, what happens when one dies in a world into which he was resurrected, or in the ‘natural’ world, e.g. from an overdose of those mentioned drugs?
A favour? What for?
Why should that make any difference?
Btw: I just recalled that I once had a dream in which I was resurrected in a purgatory-like landscape in which I had to fight demon-like beings. Apparently this was supposed to further my character development. But I believe it was rather an excuse for the simulators to see some brutal action. Even if we live forever through different worlds, some of them will probably be extremely unpleasant.
- Since you are a fan of animation (MLP, Furry, anime), you probably know “Adventure Time”. I imagine that world to look like the Nightosphere from that show.
- Wouldn’t “the simulators” have to answer to an ethics committee?
That’s an interesting thought. In some religions there is the belief that you get rewarded for heroic actions with rewards in the afterlife. How plausible is that in a setting in which simulators simulate our world? Why would they value our heroic actions in our world? It’s not obvious that they would. Perhaps heroic actions disturb their actual plans. In that case, we could even get punished for performing heroic actions.
What constitutes a heroic action, after all? Is it defined by overcoming fear? If so, then that’s a valuable skill to have. Performing a heroic action would prove that you possess that skill.
It would probably be easy for the simulators to grant everyone the skill of heroism after they die. But it would probably be even easier just to select from the most heroic people in the first place. So, the idea that people are rewarded for heroic deeds is not unlikely. Though, the reward will probably be something that requires you to perform heroic deeds in certain roles. So, you are probably sent into increasingly challenging adventures, if you prove your worth to the simulators in this world.
Probably something like that. But it doesn’t need to have anything to do with our ideas of ethics.
There’s a post I’ve written that summarizes different ethical positions when it comes to simulations: