Superworld Superposition

That is a possibility. With increasing technological and scientific sophistication it will be increasingly hard for the simulators to fool us into thinking that we live in a base reality. And then what? Contact with the simulators perhaps? Or would they rather revert this game to the last save point before we figured out that we live in a simulation, and improve the quality of the simulation so that we won’t find out this easily?

But of course, with superworld superposition, both cases will happen. If the simulators contact us, we will know that we live in that kind of simulated world. The superposition will collapse (at least with regard to our direct superworld). In the latter case, the superworld superposition remains intact and we will be none the wiser.

As an aside, Elon Musk jokingly stated that humanity might be the bootloader for AI. So, if this is a simulation, perhaps running through the history of the universe, including all of human history, is just the start-up sequence for the really interesting stuff that will happen once the AI takes over.


You made a critical error here. I stated that we model N as a non-simulated world. Therefore, h_N does not live in a simulated world, and therefore simulators know nothing about h_N. Consequently, they can’t ‘kill’ h_N.

But let’s assume that there is this world S* and we have two simulations S1 and S2 that are identical up to t=75. From the perspective of inside the simulation there is just one function h_S, which is identical for both simulations until t=75. A bifurcation of h_S only happens if the simulators somehow intervene, either within one of the simulations, or by creating a continuation of h_S. Let’s assume a small intervention at t=75: in S1 nothing happens, but in S2 the simulators rearrange some molecules in the nose of that human, so that it itches a bit and the person scratches it. Now we clearly have two different functions h_S1 and h_S2 that coincide on the interval [0,75], but differ on the interval (75,t_{death}].
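To make the bifurcation precise, one way to write it down in the notation from above is:

h_S1(t) = h_S2(t) for all t in [0, 75]
h_S1(t) ≠ h_S2(t) for at least some t in (75, t_{death}]

The second line is deliberately weak: a single rearranged molecule is already enough to make the functions distinct, even if they stay similar almost everywhere.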

We can of course extend the function of any individual to the whole universe, and even to the superworlds simulating that universe. Our function h_S would not only keep track of that human, but also of anything happening in S and S*. That way we can make a distinction between humans that are identical except for being simulated in different worlds. I would call that a “superobjective view”, because it goes beyond any information that is even theoretically accessible from a simulated world.

Maybe, but in a materialist view the individual experience of a person should supervene on the configuration of the individual particles that person consists of. If you want a different individual experience, you need a different configuration of particles. If there are identical copies of me in multiple simulations, I can’t tell any difference between them. I don’t know which copy I am. The most reasonable model I have for that is that I am all of those copies. At least until something happens that triggers a bifurcation in the configuration of particles that define me across different worlds. Say, some simulator makes my nose itch in one simulation, but not in the others. Then the “I” will objectively bifurcate into different versions, but from “my” own perspective I still feel like a monolithic “I”. Especially because I have no information about the other versions of me.

Now that opens an interesting thought experiment: Assume that the simulators revealed to us that they had run slightly different versions of this world in parallel and then gave us all the information about the different versions of us. How would we refer to our other versions? We only refer to our “own” version as “I”. When we talk about other versions, we perceive them from the outside and refer to them as “them”. If the simulations have numbers, we would refer to them by those numbers. Maybe I am Michael 789 and know that there is a version Michael 421 with a particularly itchy nose, but otherwise pretty much the same as Michael 789.

And that’s also the interesting thing about superworld superposition. Before the simulators break the superposition by revealing that we live in a particular simulation, from my own perspective there is no Michael 789 or Michael 421. There is only Michael. In a way, what the collapse of superworld superposition does to us is add more numbers to our identity / identities. Repeated collapses of superworld superposition create a branching tree of identities, indexed with the names or numbers of the branches that are taken.
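To make that branching tree concrete, here is a minimal sketch (the class and method names are made up purely for illustration) of an identity as a growing path of branch indices:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    name: str
    branches: tuple = ()  # branch indices revealed by collapses so far

    def collapse(self, branch: int) -> "Identity":
        # each collapse of superworld superposition appends one more index
        return Identity(self.name, self.branches + (branch,))

michael = Identity("Michael")            # before any collapse: just "Michael"
michael_789 = michael.collapse(789)      # first collapse: "Michael 789"
michael_789_5 = michael_789.collapse(5)  # repeated collapses extend the path
print(michael_789_5.branches)            # (789, 5)
```

The tree structure falls out for free: two versions share an identity prefix exactly up to the collapse at which their branch indices diverge.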

Does adding more numbers to one’s identity change one’s identity? Well, it would certainly change my subjective perspective on my existence and identity. Because then I would need to contextualize myself in a much larger world. And that’s pretty hard. But it would also be quite exciting.


hum, can you explain your worldview in a more precise way?
in my understanding, every time a person throws a die, the world splits into six branches, and so on … you have a bifurcation at every decision made… that is the problem of the many-worlds hypothesis: the explosion of the number of possible worlds

Does it really matter for the thought experiment if N is simulated or not? You said yourself that alternatively we can consider N as a simulation in which the simulators don’t bother to resurrect h. I just chose this version.

But even if we choose the version with one simulated and one non-simulated world, the information about the difference between both h is there, because it is our thought experiment and we know it. Call that a “superobjective view” as well, but no matter what, it is a theory we need a mathematical model for. The same problem remains if we choose my version with two simulated worlds. If one h is resurrected and the other is not, we need a description of this reality, whether the subjects or the simulators know this difference or not.

It seems so, but we know more. We have all talked about the “I”, although we cannot explain what it really is.

Good point. If we are one day able to create strong AI, it might be possible to reduce the “problem” of the “configuration of particles”. Here is another thought experiment: if mental states supervene on the configuration of particles in any human, mental states must supervene on the configuration of data the AI consists of. We can copy data without any loss or difference of any kind (hopefully). If we create two AIs with the same data, they would not know which copy they are either. But if we give each of them a body (even if we give them two identical robot bodies, the bodies will not be as identical as copied data is, but let us assume we could build completely identical bodies), they will experience the necessity of occupying a different space. And this difference is enough, like an itch on the nose, to change the mental states of both, because their input of data (their experience of their surroundings and their bodies) varies. Like an itch on the nose or the occupation of a different space, it seems to me that only a little difference is necessary to divide two identical persons into two different persons.
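To illustrate how small the triggering difference can be in data terms, here is a toy sketch (the update function is entirely made up):

```python
def step(state: dict, percept: str) -> dict:
    # toy "mind update": deterministically fold the percept into the state
    new_state = dict(state)
    new_state["log"] = new_state.get("log", "") + percept + ";"
    return new_state

a = {"log": ""}
b = dict(a)                                  # a perfect copy of the same data
a, b = step(a, "sunrise"), step(b, "sunrise")
assert a == b                                # same inputs: still indistinguishable
a, b = step(a, "itch"), step(b, "no itch")   # the first differing percept
assert a != b                                # one tiny input difference splits them
```

As long as the update rule is deterministic, the two copies stay bit-identical until the first differing input, and permanently diverge afterwards.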
Another interesting problem comes with your statement that you are all of your copies. In your model of h_S and h_N there is only one person until the specific point in time (t=75) when one simulator decides to resurrect h_S and the other decides to keep h_N dead, or, as you prefer, a natural, non-simulated world leads to death without resurrection afterwards. So the decision of the simulator of S to resurrect h_S against the “decision” of the universe N not to resurrect h_N rips apart a person that is at first one and the same (function), and creates two.

Would you agree that the occupation of a different space is trigger enough for a bifurcation? If you agree, then my question would be why the completely different residence of h_S and h_N should not be enough. Another problem is: if h_N and h_S are the same until the age of 75, why should h_N not become h_S and live on? Or, if you consider my version of your model, where we have two different simulators deciding the fate of one person, will they experience a tug of war over the fate of h? And if S wins, is h resurrected?

So here are my remaining questions/problems:

  1. If nobody knows (not the subject h, not the simulator S, and no “superobjective view”) that there are two identical universes S and N, are there really two? Or will they automatically merge into one universe U if they happen to be identical? Or is it the other way round:
  2. Even if nobody knows that there are two universes N and S, the information is there, no matter if it is part of a brain or just implemented in the cosmos like a natural law. And the existing information of a difference forbids calling N and S identical.
  3. If two simulators quarrel over the decision of what to do with h after he dies at the age of 75, would h remain in a state like Schrödinger’s cat until the decision is made? And what is the consequence of a decision:
    a) would h_N and h_S, who are the same identical person h_U until the age of 75, split up into two persons h_N and h_S, or is it the other way round:
    b) do h_N and h_S merge into one of both, depending on which simulator wins the quarrel and decides the fate of h? Because:
  4. Your version is strange. You posit that there are two persons h_N and h_S, although h would never know. So you have the “superobjective view”. You posit a difference between them and at the same time you declare them identical. Wouldn’t that be the same as suggesting that one equals two? Back to 1. and 2.:
  5. If there are two identical universes S and N but they occupy a different spacetime, or a different quantum reality, they are not the same. And if they occupy the same spacetime, they merge into one. If one is simulated and the other is not, that suffices to view them as not identical, even if they occupy the same spacetime (if that is possible). Or is there another possibility?

Yes. Because we must occupy a different space when we meet those versions, otherwise there would be no different versions. This alone should suffice for different mental states, because we perceive our surroundings differently.

Another interesting question is why we refer to a person as “I” who doesn’t share the same time. “Yesterday, I did x…” is possible because we perceive a continuation in time. But what makes our own version our “own”?

here we meet a neglected problem: the “others”
who reminds you, every day of your lifetime, who you are? the people around you.

as pink floyd said:

“Another Brick in the Wall” (from The Wall):

We don’t need no education
We don’t need no thought control
No dark sarcasm in the classroom
Teachers leave them kids alone
Hey! Teachers! Leave them kids alone!
All in all it’s just another brick in the wall.
All in all you’re just another brick in the wall.
We don’t need no education
We don’t need no thought control
No dark sarcasm in the classroom
Teachers leave them kids alone
Hey! Teachers! Leave them kids alone!
All in all it’s just another brick in the wall.
All in all you’re just another brick in the wall.
“Wrong,
Do it again!”
“If you don’t eat yer meat, you can’t have any pudding.
How can you have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”

here they refer (the age of aquarius) to carlos castaneda, who said: people are building walls around you.

didn’t you hear the whispers behind the walls?
they say: we know who you are. we know where you are. we are everywhere.

you cannot wake up one morning and decide to be another person, because you are just another brick in the wall. :skull:

Yes, that’s true. According to the many-worlds hypothesis, bifurcation (or multifurcation) should occur every time entropy increases. In the example of the thrown die, the result is a selection of one out of six possible outcomes. That is irreversible. Irreversible actions increase entropy. The die and the thrower multifurcate into six different versions in six different worlds after the result is clear.
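That also quantifies the explosion of worlds you mention. Under the simplifying assumption that every throw is independent and fully resolves into six branches:

number of branches after n throws = 6^n, e.g. 6^10 = 60,466,176

So after just ten throws, the thrower has already multifurcated into roughly sixty million versions.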

In the context of simulations of worlds, it’s conceptually easier to just assume that worlds are deterministic. This avoids complications such as bifurcation happening both through quantum world branching and through interventions of simulators. But of course, both happen, assuming that the many-worlds theory is true and that we live in a simulation.

Then why do I - subjectively - see only one result when I throw a die?

It matters a lot. Deviating from initial assumptions only leads to confusion and makes it harder for anyone to understand what is meant. It’s a didactic sin to redefine things as something completely different when they have already been defined clearly.

What exactly do you mean by that?

Not necessarily. If the AI can’t determine its spatial position, it may very well be in a superposition of different spatial positions, with all of them experiencing the same subjective experiences. This reminds me of a variant of the Schrödinger’s cat thought experiment I thought of: Schrödinger’s van. In this thought experiment, Schrödinger himself is abducted in a dark van. He’s first ambushed and anesthetized with chloroform. When he wakes up, the van has already moved to the home base of his abductors. He doesn’t know where he is, because the back of the van is completely dark and he can’t leave it. There are many possible groups which might have a motive to abduct Schrödinger, but they prefer to keep him alone in the van in order to wear him out.

In this situation, Schrödinger himself is in a superposition of multiple spatial locations. There may be many different possible worlds in which Schrödinger finds himself in that unfortunate situation, but in all of them he experiences the same subjective experiences. But of course, from an objective perspective, those different versions of Schrödinger are not the same, because they ended up in different locations and are abducted by different groups.

From the objective view, yes. From the subjective view not necessarily. See above.

That’s exactly the idea behind subjective immortality through simulator-enabled continuation. There is a version of you that is continued by the simulators, and you can only experience that. You cannot experience not being continued after your death. So, by all reasonable perspectives, your continuation inherits your identity.

Two identical universes are one and the same. Is there any reasonable alternative to that?

If there is any difference between entities, they can’t be the same by definition. Isn’t that obvious?

Yes.

Yes. Just because an entity has the potential to bifurcate doesn’t mean it’s a good idea to model it as already two entities before it has actually bifurcated. Of course you could do that, but you should really have a very good reason for doing that, because by doing so you would violate Ockham’s razor.

From an objective view, the distinction between h_N and h_S wouldn’t make any sense, because the objective view doesn’t include any information about the potential superworlds in which N and S are embedded. This distinction only really makes sense in a superobjective view, which we assume for the sake of argument. It is only possible to make a distinction between h_N and h_S if we also add the superreality S* of S into the picture. As mere functions, h_N and h_S are really identical. Only an extended version of h_S that also adds the information that S is a reality simulated within S* makes the difference between h_N and h_S. But then we would strictly not speak of h_S, but of an extended function that includes information about S*. We would need to call it something like h_S^S* to be precise. And of course h_N and h_S^S* are not the same.
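Spelled out in the notation from above (just one way to formalize the distinction):

h_N(t) = h_S(t) for all t in [0, t_{death}]  (identical as mere functions)
h_S^S* := (h_S, “S is simulated within S*”)  (the function plus the superworld information)
h_S^S* ≠ h_N  (because h_N carries no such second component)

The second component is exactly the information that is inaccessible from inside S, which is why the distinction only exists in the superobjective view.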

Spacetime is a concept that only makes sense inside a certain universe. Two independent universes U1 and U2 usually have no spatiotemporal relation to each other, which is something that is really hard to grasp. Of course, it’s possible to add such a relation by embedding them in some kind of larger universe U3. Or by letting one of them simulate the other, which might, at least under some idealized circumstances, create a temporal relation between both worlds when considering how fast time flows in each of them. Perhaps an hour in the simulation is a second in the simulating superworld, or the other way around.
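If such a temporal relation is uniform, it reduces to a single scale factor (an idealization, of course):

t_superworld = k · t_simulation

with k = 1/3600 if an hour in the simulation passes in a second of the superworld, and k = 3600 for the reverse case. In a real simulation, k would probably vary over time with the computational load.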

At first, all universes are completely isolated from any other universe. Relations between universes are purely optional.

If our surroundings are the same, we cannot experience any difference. See Schrödinger’s van above.

Probably because most versions of “identity” are constructs that we make up in order to be able to operate in social contexts. Also, because the you of yesterday has so many similarities to the you of today that it is justified to see both as effectively the same person.

Only the fact that we decide to see our own version as our own. There are many ways to define identity. And most of them are absurd. We tend to stick to those that are slightly less absurd than the others.


Physicists would throw the catchphrase “decoherence” into the discussion and claim that it explains everything. Of course, it’s not clear how “decoherence” can appear in a holistic, single, deterministic quantum world encompassing every possible outcome. The problem is that it’s actually a big mystery why our perception is so limited that we only perceive singular outcomes of events. It’s similar to the question of why we can’t see or hear everything at once. It’s conceivable that we could, but for some reason we don’t. We are very limited beings. Very simple “stuff”. Perhaps there are beings that can perceive multiple quantum realities at once. Who knows…

Thou dost?

No. But that means it is impossible to state that there are two.

Yes, it is obvious. This is my point from the beginning. If we have Universe N and Universe S, the humans h_N and h_S are two different persons if their residence differs, even if they could not see a difference. But this means that there are no different versions of “me”, because whenever a bifurcation is triggered, a different person than me lives in a different timeline than me.

And if there is a copy of our universe where everything else is the same, but they are completely isolated from each other, then again, that is difference enough to state that there are two. My conclusion is that there is no such thing as two or more “me”. A person that lived the same life, has now the same configuration of particles, and writes this post, but lives in a universe completely isolated from mine, is somebody else, just as a twin is somebody other than their sibling.

i just meant that the configuration of particles is so complex that it might be difficult to determine if there are two identical configurations. AI could be easier to compare when their identity based on data.

Actually, two identical universes would be two different universes that just “are” (look, feel, act etc.) exactly alike. Two identical photos are two different pieces of printed paper anyway.

That’s what I think too. Even when two people are exactly alike from an external point of view, they can never ever be the same person subjectively; that would not make any sense.

Sorry, but I don’t understand that.

i am too lazy to write another post, but i suggest you read about the Einstein-Podolsky-Rosen thought experiment.
so you don’t have to reinvent the wheel

I meant, if universes are completely identical in every way, they are one and the same, so we only have one universe. If there are nearly identical universes that are completely isolated from one another, the difference that they reside somewhere else is enough to state that they are not the same, but two different universes.

Ok, my AI idea was not that important, but pretty unclear, I see…
Radivis mentioned supervenience:

If the individual experience of a person depends on the configuration of particles the person consists of, it might be very difficult (for practical reasons) to determine whether two copies have the same configuration of particles. Because if just one particle differs, one copy feels an itch on the nose and the other does not. So if the identity of a human being needs a specific configuration of all the particles their body consists of, we have to compare the configurations of particles of both copies if we want to find out whether they are in the same mental state and their identity is identical. This comparison might turn out to be impossible. Maybe we will never be able to beam persons like it is described in Star Trek. In a simple materialist view, people are nothing more than a tower of toy building blocks, except that the blocks are much tinier and far more numerous. But if we deconstruct a person in one place, send the construction information of the exact configuration of her body to another place, and construct her there again, will we have just transported her? Or did we kill the original person and construct a new one who happens to have the memories of the original and claims to be the original although she is not? We could never determine that. But a set of data remains the same if we send it somewhere. And if we are ever able to create strong AI, and the identity of one AI “only” depends on the configuration of data and not on the configuration of particles of a body, maybe we will be able to beam the AI without killing it first. Maybe we will be able to copy an AI completely with the same identity… but this is just speculation.
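For data, at least, the “remains the same” claim can be checked directly. A toy sketch (the hash just stands in for any bit-exact comparison):

```python
import copy, hashlib, json

original = {"memories": ["first day of school"], "traits": {"curious": True}}
transported = copy.deepcopy(original)  # "send" the data somewhere else

def fingerprint(state: dict) -> str:
    # bit-exact check via a hash of a canonical serialization
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

# for data, "same configuration" is decidable; for the particle configuration
# of a whole body, the analogous comparison may be practically impossible
assert fingerprint(original) == fingerprint(transported)
```

This is exactly the asymmetry between beaming a person and copying an AI: for the AI, the verification step is trivial.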

No, that wasn’t the problem: I didn’t understand your idea grammarwise.

This sentence doesn’t make sense:

Artificial Intelligences will be easier to compare, if their identities are based on data.


yes, and after the upload of someone’s mind, it will be possible to beam him too.

i see an analogy to the problem of the first-person perspective.

when someone uploads his consciousness into a computer, which implies an analogue-to-digital conversion (this is imho what @zanthia meant by the difference between the configuration of the particles of someone’s body and the configuration of the data of an ai), we have the same problem mentioned above by @Radivis about the identity of two (or more) persons.

let’s imagine someone connects his brain to an ai for uploading his mind. while the upload is going on, there will be a moment when this person changes his perspective, because a part of his consciousness is transferred or copied into the ai (the computer), and this person becomes two persons, one in his human body and another in the ai, connected by some kind of network (a neuralink :smiling_face_with_three_hearts: elon musk). or maybe (and this is what i am expecting as a transhumanist) a totally different kind of person emerges from this process (a cyborg).

it is similar to the beaming problem mentioned by @zanthia.

in my reflections i saw a solution for this problem in the synchronicity and non-locality described in the Einstein-Podolsky-Rosen paradox, but i am not sure if a quantum process will work in a macroscopic application.

hence the real and obvious application for @Radivis’ superworld superposition problem and its derived implications is the upload problem.

edit: i see that i have to explain a premise made above:
imho the upload cannot take place in a computer as in an empty bottle (this is the darkest way to hell); there must be an ai to receive him into his new habitat.
the question is whether or not this ai has to be embodied (connected to the outer world by a sensorium) or whether the ai may only access a cyberspace.

what will happen to the uploaded person? will he be one, or two, or a third, or just lose his mind (ok, this will not be an option :stuck_out_tongue_winking_eye:)? and what will happen when the uploaded person and the uploaded mind are separated, like the cutting of the umbilical cord separates the newborn child from the placenta feeding it?

On identity

Objective identity

What is identity? Identity is merely a construct. In the objective view, it’s a societal construct. An identity distinguishes a person from other persons by specific criteria, which often aren’t strictly defined, but rather work on similarity heuristics. My body and mind today are similar to my body and mind a year ago, so I count as the same identity. It is conceivable that society could work differently, and everybody would get a new identity whenever a sufficiently large change to body or mind happens. Of course, that would make societal interactions more complicated, which is the main reason why identity is considered to be stable throughout the whole life of a person. Under closer scrutiny, this construct is obviously quite flawed. A human at age 2 is very different from that human at age 16 or 60. Why see all of them as the same person? Because making a clear-cut transition is hard. And also unnecessary.

With a technology that allows the duplication of a person, the simplest solution of seeing identity as constant and monolithic becomes more obviously problematic. If I consider an original person and their copy as one and the same person, I run into several problems:

  • The illusion of identity is much harder to maintain, because I see two systems that are nevertheless supposed to count as one. That’s still doable, but quite unnatural.
  • Each instance of that identity is responsible for the whole identity. Especially when both instances diverge, this can easily become a big problem. If one instance commits a murder, both are held accountable.

Such problems make it seem reasonable to give both the original and the copy their own respective identities that are not the same.

Still, identity in the context of societal roles, rights, and duties is a social construct that could in theory be constructed with arbitrary definitions and criteria. In a reasonable society, the identity constructs will however mostly turn out to be practical, if not even pragmatic. Almost nobody wants to live in a society in which the definition of identity is impractical, because that would be a major cause of suffering. Imagine that you counted as a new person every time you woke up. You would have to get a new ID first thing in the day before being allowed to do anything in relation to other people. You would also have to get a new job every day and purchase stuff every day, because you would start out with nothing, unless you inherited stuff from a previous identity. Life in such a society is conceivable, but hellish. Similar considerations imply that identities should remain as stable as possible. Switching to a different body or computational substrate shouldn’t change identity. Moving from one place to another, whether slowly (by walking) or fast (by beaming), shouldn’t change identity either. Of course, societies are free to define identity as they please, but they pay a high price in terms of added superfluous complexity by making the concept of identity more complicated than necessary.

Subjective identity

From the subjective view, identity doesn’t seem like a construct, but even there it is one: a psychological construct. At the basic level of subjectivity there is just a stream of subjective perception, with no sense of self or identity. The idea of a subjective identity is formed to order certain parts of the stream of subjective perception. Some parts of that perception refer to a “self”, while others refer to a “not self”. This is basically a neural-network-based classifier at work. Also in this case, the “self” classifier could have any possible configuration. As in the case of the societal identity construct, the personal psychological self construct is however required to be at least somewhat practical. Otherwise, all kinds of psychological problems may appear whenever a person classifies something as “self” or “not self” when it’s not appropriate to do so.

Of course, both identity constructs, the societal and the personal one, share certain relations with each other. I am not free to declare my self-identity construct to be something completely different from what society sees as my societal identity construct. If I say that I am my body and my car, this is already quite eccentric, but if I say that I am a whole nation and people are supposed to do what I say, this is sufficient reason for me to be locked up in an asylum, unless I am a kind of absolutist dictator.

indeed you are the king of apodictic statements :sweat_smile:
but i see in your disambiguation of the term “identity” no debatable contribution to the upload problem, whose importance for transhumanist concerns cannot be overestimated.
will the upload make you another person (give you another identity)?

here is some information about the stream of consciousness

a thought experiment: imagine you fall into a sleeping-beauty sleep and wake up with complete amnesia. you have no memories and no idea about your identity. but you still feel like a person.
here is my problem: if it is possible to feel like a person even in a state of complete amnesia, what is the core of you that you have to upload into a computer to remain the felt “you”?

this question is imho of crucial importance for the upload problem, because the knowledge acquired in a person’s life can easily be obtained by going online, and the personal memories may in most cases be superfluous. so what remains to be uploaded to save the felt “me”, to preserve the continuity of the “identity”?
in german there is a word describing this entity: “Wesenskern” (the essential core of a being).
that is what i am looking for.
i have an idea about it: don’t look at the content, but at the structure.

  • does any person (better: any being) have an individual structure of mind, disregarding the phenomenon of consciousness?
  • could this distinctive structure of mind be identical to the concept of “identity”, rather than, as we intuitively assume, the phenomenal perceptions, the contents of our mind, what we call “consciousness”?
  • can it be sufficient to copy this structure in order to transfer someone’s “self” onto a receiving ai?
  • if this ai adopts this structure, will this ai be identical to the uploading “self”?
  • if several copies of this same structure, regardless of the body or the environment they inhabit, had the possibility to communicate, would they form ONE self?
  • the transhumanist debate is about modular bodies, but i have the idea that the solution may be modular minds

(please give me some likes or i will never write a word in this forum again :crazy_face:)


Qualia
