
Superworld Superposition

Now that I’m watching the 1973 movie “Welt am Draht” (“World on a Wire”), about simulated worlds, I am reminded of a remarkable semester during my mathematics studies that changed everything for me. I studied mathematical logic, I came up with the idea of superworld superposition, my dad died, and I realized the abundance of immortality. It was terrific. My world was shaken to its core. My view of reality expanded beyond infinity.

Back then I already had contacts in the international transhumanist community, and the idea that this world could be a highly sophisticated computer simulation wasn’t foreign to me. Of course, this idea raises the question whether this world is a computer simulation or not. One is usually compelled to answer this question with “yes” or “no”, but that reveals a mode of thinking rooted in Aristotelian logic, which only accepts “true” or “false” as answers to propositions. And that’s an assumption that simplifies reality, perhaps too much. No, definitely too much. If we try to think beyond Aristotelian logic and entertain the thought that there may be answers beyond “true” / “yes” and “false” / “no”, things start getting really interesting.

Now imagine we are 1000 years in the future and could simulate worlds of complexity comparable to the one we currently live in. Suppose we created a simulated world that merely relied on modelling a world based on very basic and never changing physical laws. And suppose that this world was inhabited by intelligent beings who reasoned about their existence. What are they supposed to think about the question whether they live in a computer simulation or not? From our point of view, the right answer to their question would undoubtedly be “yes”, but that answer would be inappropriate for them! Why? Let’s suppose that somewhere in the multiverse there was a world that happened to coincide exactly with the world we simulated. After all, it’s merely based on simple physical rules, so why shouldn’t that plausibly be the case? Now we have at least two versions of that world in which the inhabitants ask about the nature of their world. What’s the right answer now? Well, “yes” and “no”! Their world is a simulation, and it’s not a simulation. And that’s what I call “superworld superposition”. The worlds that simulate ours are our superworlds. And the fact that there are probably many of them means that our world is in some kind of quantum superposition regarding its state of being simulated in a higher world.

Well, why shouldn’t the same be true for our world? At least that should be plausible unless there was overwhelming evidence that our world would make sense only as a simulation or only as a non-simulation. But given the possibility of us being Boltzmann brains, such evidence is really hard to come by.

How does this connect to the prospect of abundance of immortality? Well, if our world is a simulation, we are packages of data that could be re-initialized in new mind-body complexes in our “superworld”. Reincarnation becoming true. If our world happens not to be a simulation, we simply disappear after our mundane deaths. So, those versions of us who happen to be in a non-simulation wouldn’t really matter after all. Only those versions of us who happen to be in a simulated world, and happen to be resurrected in a superworld, would have a continued existence which validates their point of view.

What suffices for us to be essentially immortal is the mere possibility that we live in a simulated world. And given that we don’t have conclusive evidence that our world can’t be simulated, chances are that our world is in a superworld superposition that includes many situations in which we are resurrected into some kind of superworld. So, there’s quite likely an afterlife for all of us. What would that look like? Well, that’s hard to tell. What motivations do the simulators have to simulate a world like ours? Well, what really matters is what motivations those simulators have who simulate our world and plan to resurrect us into their world after our Earthly end. This closely ties in with “simulation ethics”. In the following thread I’ve reflected on the basic possibilities:

Well, as you might know, Elon Musk has made talking about the possibility of our world being a simulation taboo. The reasoning behind that is rather solid. We can only vaguely speculate about the kind of worlds that our world is being simulated in. Entertaining the prospect of being immortal in a trans-mundane way may soothe us psychologically, but it may negatively impact our ability to make a positive change in this, our world. Or does it? I am not sure about that. Taking this world as some kind of game, movie, challenge, play, or training may very well colour our worldly experience quite a lot and change its nature. Still, the interpretation of what our world truly is remains with us. We are still the prime determinants of our destinies – unless proven otherwise.

We could very well be artificial intelligences in testing. In a testing environment that is supposed to evaluate our fitness for being let loose on the “real” world. How do we “win”? How do we prove that we are ready for the “real” world? Or does that feel like too much of a responsibility? Well, the prospect may be very real, and very scary. Just take a look at the following magnificent thought experiment turned into a YouTube animation series:

Thoughts like these may be dangerous, but they will appear quite naturally – the more naturally, the more highly advanced the civilization in question. This poses a potential issue for simulators. Their sims will increasingly get the idea that they live in a simulated world as their technology and philosophical maturity increase. How should the simulators deal with that? I’ve sometimes caught myself entertaining these thoughts and reflected on how our simulators would think about me having them. Huh. Well, I guess the answer is that this is just all too natural. It’s a normal part of life. Just like occasionally questioning whether you are dreaming or not. Some people do that more often than others, but there’s nothing truly special about such thoughts.

Anyway, we need to come to terms with the likely reality that we live in a simulation. Why not just stop pretending that all of this is “real” and accept the truth that all of this is essentially simulated? Wouldn’t that be the most mature and enlightened thing to do? That doesn’t mean that we shouldn’t take the world we are living in seriously – after all, it’s the only world that we know about.

Then what’s the point of all these thoughts and ideas? Are they merely a signal of our awareness? An awareness that implies that we wouldn’t be truly surprised about anything happening?

For at least nine more seasons!


sorry, my english is not so good; that is always my main problem with your texts.

one idea: any evidence that makes our world look like a simulation has to be something that looks like it was designed by some intelligent being.

for instance: christians try to prove with their “intelligent design” theory that darwin is wrong. maybe they will find out that we are some sort of simulation by an outer-dimensional force. :smiley:

another possibility would be an accident of some sort, so that a bug appears in the simulation, like some sort of buffer overflow in temperature, speed or whatever.

edit: of course i can understand english, but i prefer listening to english over reading it, especially if the text is not that easy.

We already wrote about this in private messages and I am very interested in and fascinated by your theories. Yet, just like the user @begeistert above, and me before in our PM conversation: I haven’t understood most of this. But let’s try and see if I am right or not (if the latter, then please correct me!):

There is an infinite number of worlds, so each of us exists in an infinite number of versions in infinite worlds. Some of those worlds are simulations where people will be resurrected into “the real world” when they die.

So, I’ve noticed one problem in your hypothesis: If I die now, and the world I am in is not a simulation, BUT there is another world which is like ours, except that it is a simulation and everybody who dies there will be downloaded into the physical world, that doesn’t benefit ME in any way! Because the “I” from the other world is another person with his own consciousness. He just happens to be exactly like I am, but he isn’t the same person as I am, because only I am that one. Or is he?


Actually, those are two separate hypotheses. Hypothesis 1 is that there is an infinite number of worlds. Hypothesis 2 is that each of us exists in an infinite number of versions in infinite worlds.

Hypothesis 1 is a much weaker hypothesis than hypothesis 2. If we replace “infinite” with “astronomically many”, then many cosmological models say that hypothesis 1 should be true, including eternal inflation, an infinite spatial size of our universe, and the many-worlds interpretation of quantum mechanics.

Hypothesis 2 is much stronger and is in particular implied by modal realism and its variants. My own philosophy states that the world is effectively the totality of all mathematical structures, which also includes all possible physical worlds (strictly speaking, those are two different statements which could be true or false independently of each other).

If hypothesis 1 turns out to be true, this doesn’t mean that hypothesis 2 is automatically true. It could very well be the case that the world is infinite in extent, but structured in a way that stops it from producing all possible variations of individuals and individual-world relations. Yet, that seems very unlikely. It’s still worth pointing out that this case is possible.


Good that you point that out. This is a complication caused by the existence of different ways of defining identity. Let’s start by comparing the objective view with the subjective view.

The objective view of identity

It’s hard to define identity objectively. Humans are systems embedded in a much larger system that we call the “world” or the “universe” or the “cosmos”. Furthermore, humans are not static systems. They change over time. It may therefore be more appropriate to see humans (in the context of them being beings with a history) as system-valued functions. At their birth, a human is a system h(0). At their first birthday, that human may be a system h(1). If that human dies at age 75, they will then be a system h(75). In this sense, the human in question can be seen as the whole function h that is defined on the interval [0,75] (actually [-0.75,75] if you consider the typical time before birth). At each point t in that time, the function h provides a value h(t) that completely defines the properties of the human in question.

A very reductionist approach would be to take the properties of all elementary particles that the human consists of and take that set of properties to be h(t). Of course, that would violate the Heisenberg uncertainty principle, but let’s ignore that for a minute and claim, just for the sake of argument, that we can nevertheless define and measure all those properties at the same time with absolute precision. There are certainly more meaningful ways to define an individual, but this way is at least conceptually one of the simplest.
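The system-valued-function idea can be sketched in a few lines of code. This is a toy illustration only: the `make_person` helper and its placeholder state are hypothetical stand-ins for the full particle configuration described above.

```python
# Toy sketch of a person as a system-valued function h(t),
# defined only on the interval [0, lifespan].
# The returned "state" is a hypothetical placeholder; in the text it
# would be the complete particle configuration at time t.

def make_person(lifespan):
    def h(t):
        if not 0 <= t <= lifespan:
            raise ValueError(f"h is undefined outside [0, {lifespan}]")
        return {"age": t, "state": f"configuration-at-{t}"}
    return h

h = make_person(75)
print(h(0))   # the system at birth
print(h(75))  # the system at death
```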

Now, according to hypothesis 2, h is defined in an infinite number of universes. Let’s consider two universes in which that is the case: universe N and universe S. Universe N is not a simulation, or alternatively is a simulation in which the simulators don’t bother to resurrect h after t=75. In universe S, h is simulated in a world S*, but the simulators decide to resurrect h after their death in the simulation. In mathematical terms this means defining a continuation of the function h that extends beyond t=75. It may be the case that this continuation lasts until t=1 000 000 or so, or until the heat death of universe S*, or to infinity, if S* happens to be some kind of strange universe that lasts forever without degrading in any way.

From this objective view we now have two functions: h_N, which is strictly defined on the interval [0,75], and h_S, which has an extension, let’s call it h_S*, that is defined on a much larger interval than [0,75].

We can of course say that h_N and h_S are two different functions, because they are defined in relation to different universes. Yet, from a purely mathematical perspective, both functions are the same, because they are defined on the same interval and have the same values on that interval. For all practical purposes the persons modeled by h_N and h_S are the same. The only difference is that we say that h_S has a continuation h_S*, but h_N doesn’t.
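The claim that h_N and h_S are mathematically the same can be made concrete: two functions that agree on every point of a shared domain are extensionally one and the same mapping. A minimal sketch, with hypothetical, drastically simplified state dictionaries:

```python
# Two separately defined functions that agree everywhere on a shared
# domain are, extensionally, the same mapping.

def h_N(t):
    return {"age": t}          # defined in relation to universe N

def h_S(t):
    return {"age": t}          # defined in relation to universe S

domain = range(0, 76)          # the shared interval [0, 75], sampled yearly
print(all(h_N(t) == h_S(t) for t in domain))  # True
```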

The subjective view of identity

It’s even harder to define identity subjectively. As a subject, you don’t experience yourself as a configuration of particles, but as a stream of subjective experiences. We currently don’t know how to model subjective experiences mathematically, or whether that’s possible at all (I think it is, but it’s probably very difficult). We don’t even know how subjective experiences relate to objective configurations of particles. This is of course the old mind-body problem.

But what’s relatively clear is that within our subjective stream of consciousness we have no idea what kind of world we are embedded in. It could be universe N or universe S, or anything else. So we have no idea whether we will have a continuation of our subjective experience or not.

However, we cannot experience our own nonexistence. That would simply be impossible. Therefore, the only thing we would be able to perceive after a death in a simulated world would be our resurrection in the world in which we are simulated. In that sense, we are subjectively immortal: we cannot experience permanent death, only subjectively continued existence in the cases where the simulators continue our existence.


Well, I still haven’t understood a lot. But okay, it is late here where I live. So, let me just stick to this one obvious point:

Why could it only be a resurrection, and only into the natural world which simulates our simulated world? Why not other simulated worlds too? If we are packages of data for the simulators, sure, we could be downloaded into physical bodies in their world. But why couldn’t the simulators instead move or send us into other simulations?


I never claimed it could only be a resurrection into a “natural world”. Resurrection into other simulated worlds is probably much easier, and probably also far more common.

That is probably what will actually happen most of the time. I merely started with the case of material resurrection to simplify the discussion.


Okay, let me think this through for some time, until I understand everything.

But, now I wonder about one very obvious question that comes to my mind… WHAT worlds may we get resurrected into? I mean, what are they like? What do they look like, who lives there, how’s the weather there etc.? :thinking:


“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”

But just read some good science fiction books to broaden your horizon. :slight_smile:


Well, I am creating my own cartoon universe. I hope that, when or if I die, the simulation will be kind to me and transfer me into a simulated version of it. :wink:

When I think about it, I imagine the world which does the simulating to look akin to the science fiction universes of “Orion’s Arm” and “The Culture”.

there is only one way to find out the truth about this hypothesis: finding the bugs!
through these bugs it may be possible to exit to the next higher level world, and so on, like a russian matryoshka doll. this is what basic scientific research is doing.

like schrödinger’s super-cat? i think another approach to the theme needs to be more witchcraft-like.

let’s suppose the superworld is not a hypothesis, but exists here and now, and is obviously superposed, and the aim of this game is not to ascend to a higher level world, but to create a lower level world.

reduction of complexity must be the goal.

let me show you what i mean (as a transhumanist):

the human body is composed of 100 000 000 000 000 cells organised in a cooperative way as an individual organism.
at first glance it seems a very complex system. but it is not. it is a reduced system.
100 000 000 000 000 single-cell protozoa in a twenty-gallon water sac are a complex system.
but at a higher level, a cell is made of a billion atoms. and so on …

what is the next lower level? imho it is the super-ai.

what is the motivation of the billions of protozoa to aggregate and form a human body?

there must be one, but i can not see it.

if the super-ai one day asks us about the motivation for creating her, what would we answer? the egomania of elon musk? :rofl: (i am kidding, i am in love with him like a lemon tree loves the sun)

but back to the superworld superposition. actually those worlds are superposed; they exist synchronously: the myriads of myriads of myriads of single atoms, the myriads of myriads of protozoa, the myriads of multicellular organisms, the billions of human beings and the millions of … of what? of transhumanists? :wink:

the key word is: focusing.

focusing means reducing complexity. myriads of protozoa are a very complex system, but not a focused system.
billions of humans are a complex system, but they are not focused (except elon musk :heart_eyes:)

we are the simulation of a computer called earth, which is the simulation of a computer called universe (memento mori to douglas adams). if you search for the answer, it will be 42, but look for the spoor and where it leads.

so imho the spoor, the teleological vector, does not point out to the superworld, but points in to the next smaller puppet of the matryoshka doll, more reduced and more focused, and that is just what is going on with the creation of the super-ai.

and if we have great fortune, we shall see what the super-ai at the after-next layer of the superposed superworld will focus on. and like schrödinger’s super-cat, this next layer of the superworld is superposed: it exists and it exists not … yet …


A mathematical perspective makes two different functions the same? Then both persons h_N and h_S are identical. If that is the case, it is impossible that two identical persons differ in just one aspect. Both persons h_N and h_S lead a predetermined life if they live in simulated worlds. If their simulators talked to each other, one of them could say: “I will kill my h_N at the age of 75 and never resurrect him. It is nice of you to let your h_S live a million years more…” If the simulators can know the difference between h_N and h_S, there should be a way for mathematics to model that difference as well. But the much more serious point that is missing in your mathematical model is the “I” – the individual experience of the person. And I missed an answer to this important remark:

“For all practical purposes” both persons are not the same. I think your distinction between “objective view” and “subjective view” is flawed. The “subjective view” shows clearly that something is missing in your model, and even if “we currently don’t know how to model subjective experiences”, we know that they exist. And if you insist on your objective view, you must admit that simulators N and S would know the difference between seemingly identical persons that live in one scenario 75 years, 1 day or one second, and in the other a million years. If mathematics does not have the means to model our subjective experience and the difference of diverging lifespans, it should be enhanced. Probably by AI. :sunglasses:


That is a possibility. With increasing technological and scientific sophistication it will become increasingly hard for the simulators to fool us into thinking that we live in a base reality. And then what? Contact with the simulators, perhaps? Or would they rather revert this game to the last save point before we figured out that we live in a simulation, and improve the quality of the simulation so that we won’t find out so easily?

But of course, with superworld superposition both cases will happen. If the simulators contact us, we will know that we live in that kind of simulated world. The superposition will collapse (at least with regard to our direct superworld). In the latter case, superworld superposition remains intact and we will be none the wiser.

As an aside, Elon Musk jokingly stated that humanity might be the bootloader for AI. So, if this is a simulation, perhaps running through the history of the universe, including all of human history, is just the startup sequence for the really interesting stuff that will happen once the AI takes over.


You made a critical error here. I stated that we model N as a non-simulated world. Therefore, h_N does not live in a simulated world, so simulators know nothing about h_N. Consequently, they can’t “kill” h_N.

But let’s assume that there is this world S* and we have two simulations S1 and S2 that are identical up to t=75. From the perspective inside the simulation there is just one function h_S, which is identical for both simulations until t=75. A bifurcation of h_S only happens if the simulators somehow intervene, either within one of the simulations, or by creating a continuation of h_S. Let’s assume a small intervention at t=75. In S1 no intervention happens, but in S2 the simulators rearrange some molecules in the nose of that human, so that it perhaps itches a bit and the person scratches it. Now we clearly have two different functions h_S1 and h_S2 that coincide on the interval [0,75], but differ on the interval [75,t_{death}].
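The bifurcation can be sketched directly in code. Everything here (the “nose” attribute, the form of the intervention) is a hypothetical toy state, not a serious model of a particle configuration:

```python
# Sketch: simulations S1 and S2 share one trajectory until the
# simulators intervene in S2 at t=75.

def h_S1(t):
    return {"t": t, "nose": "calm"}

def h_S2(t):
    if t < 75:
        return h_S1(t)                 # identical before the intervention
    return {"t": t, "nose": "itchy"}   # molecules rearranged at t=75

print(all(h_S1(t) == h_S2(t) for t in range(75)))  # True: coincide before t=75
print(h_S1(80) == h_S2(80))                        # False: differ afterwards
```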

We can of course extend the function of any individual to the whole universe, and even superworlds simulating that universe. Our function h_S would not only keep track of that human, but also anything happening in S and S*. That way we can make a distinction between humans that are identical, except for being simulated in different worlds. I would call that a “superobjective view”, because it goes beyond any information that is even theoretically accessible from a simulated world.

Maybe, but in a materialist view the individual experience of a person should supervene on the configuration of the individual particles that person consists of. If you want a different individual experience, you need a different configuration of particles. If there are identical copies of me in multiple simulations, I can’t tell any difference between them. I don’t know which copy I am. The most reasonable model I have for that is that I am all of those copies. At least until something happens that triggers a bifurcation in the configuration of particles that defines me among different worlds. Say, some simulator makes my nose itch in one simulation, but not in the others. Then the “I” will objectively bifurcate into different versions, but from “my” own perspective I still feel like a monolithic “I”. Especially because I have no information about the other versions of me.

Now that opens up an interesting thought experiment: Assume that the simulators revealed to us that they had run this world in parallel, slightly different versions, and then gave us all the information about the different versions of us. How would we refer to our other versions? We only refer to our “own” version as “I”. When we talk about other versions, we perceive them from the outside, and refer to them as “them”. If the simulations have numbers, we would refer to them by those numbers. Maybe I am Michael 789 and know that there is a version Michael 421 with a particularly itchy nose, but otherwise pretty much the same as Michael 789.

And that’s also the interesting thing about superworld superposition. Before the simulators break the superposition by revealing that we live in a particular simulation, from my own perspective there is no Michael 789 or Michael 421. There is only Michael. In a way, what the collapse of superworld superposition does to us is add more numbers to our identity / identities. Repeated collapses of superworld superposition create a branching tree of identities indexed with the names or numbers of the branches that are taken.
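This branching tree can be pictured as an identity that accumulates one branch index per collapse. The names and numbers below are the hypothetical ones from the example above:

```python
# Each collapse of superworld superposition appends the index of the
# revealed branch to the identity.

def collapse(identity, branch_index):
    return identity + [branch_index]

michael = ["Michael"]             # before any collapse: just "Michael"
michael = collapse(michael, 789)  # first revealed branch
michael = collapse(michael, 421)  # second revealed branch
print(michael)  # ['Michael', 789, 421]
```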

Does adding more numbers to one’s identity change one’s identity? Well, it would certainly change my subjective perspective on my existence and identity. Because then I would need to contextualize myself in a much larger world. And that’s pretty hard. But it would also be quite exciting.


hum, can you explain your worldview in a more precise way?
in my understanding, every time a person throws a die, the world splits into six branches, and so on … you have a bifurcation at every decision made … that is the problem of the many-worlds hypothesis: the increase in the number of possible worlds

Does it really matter for the thought experiment whether N is simulated or not? You said yourself that alternatively we can consider N as a simulation in which the simulators don’t bother to resurrect h. I just chose this version.

But even if we choose the version with one simulated and one non-simulated world, the information about the difference between both h is there, because it is our thought experiment and we know it. Call that the “superobjective view” as well, but no matter what, it is a theory we need a mathematical model for. The same problem remains if we choose my version of two simulated worlds. If one h is resurrected and the other is not, we need a description of this reality, whether the subjects or the simulators know this difference or not.

It seems so, but we know more. We all talked about the “I” although we cannot explain what it really is.

Good point. If we are one day able to create strong AI, it might be possible to reduce the “problem” of the “configuration of particles”. Here is another thought experiment: if mental states supervene on the configuration of particles in any human, mental states must supervene on the configuration of data the AI consists of. We can copy data without any loss or difference of any kind (hopefully). If we create two AIs with the same data, they will not know which copy they are, either. But if we give each of them a body (even if we give them two robot bodies, they will not be identical in the way copied data is, but let us assume we could build completely identical bodies), they will experience the necessity of occupying a different space. And this difference is enough, like an itch on the nose, to change the mental states of both, because their input of data (their experience of their surroundings and their bodies) varies. Like an itch on the nose or the occupation of a different space, it seems to me that only a little difference is necessary to divide two identical persons into two different persons.

Another interesting problem comes with your statement that you are all of your copies. In your model of h_S and h_N there is only one person until the specific point in time (t=75) when one simulator decides to resurrect h_S and the other decides to keep h_N dead, or as you prefer: a natural, non-simulated world will lead to death without resurrection afterwards. So the decision of simulator S to resurrect h_S, against the “decision” of universe N not to resurrect h_N, rips a person that is at first one and the same (function) apart and creates two.

Would you agree that the occupation of a different space is trigger enough for a bifurcation? If you agree, then my question would be why the completely different residence of h_S and h_N should not be enough. Another problem is: if h_N and h_S are the same until the age of 75, why should h_N not become h_S and live on? Or, if you consider my version of your model, with two different simulators deciding the fate of one person, will they experience a tug of war over the fate of h? And if S wins, is h resurrected?

So here are my remaining questions/problems:

  1. If nobody knows (not the subject h, not the simulator S, and no “superobjective view”) that there are two identical universes S and N, are there really two? Or will they automatically merge into one universe U if they happen to be identical? Or is it the other way round:
  2. Even if nobody knows that there are two universes N and S, the information is there, no matter if it is part of a brain or just implemented in the cosmos like a natural law. And the existing information of a difference forbids calling N and S identical.
  3. If two simulators quarrel over the decision what to do with h after he dies at the age of 75, would h remain in a state like Schrödinger’s cat until the decision is made? And what is the consequence of a decision:
    a) would h_N and h_S, who are the same identical person h_U until the age of 75, split up into two persons h_N and h_S, or is it the other way round:
    b) would h_N and h_S merge into one of both, depending on which simulator wins the quarrel and decides the fate of h? Because:
  4. Your version is strange. You posit that there are two persons h_N and h_S, although h would never know. So you have the “superobjective view”. You posit a difference between them and at the same time you declare them identical. Wouldn’t that be the same as suggesting that one equals two? Back to 1. and 2.:
  5. If there are two identical universes S and N but they occupy a different spacetime, or a different quantum reality, they are not the same. And if they occupy the same spacetime, they merge into one. If one is simulated and the other is not, that suffices to view them as not identical, even if they occupy the same spacetime (if that is possible). Or is there another possibility?

Yes. Because we must occupy a different space when we meet those versions, otherwise there would be no different versions. This alone should suffice for different mental states, because we perceive our surroundings differently.

Another interesting question is why we refer to a person as “I” who doesn’t share the same time. “Yesterday, I did x…” is possible because we perceive a continuation in time. But what makes our own version our “own”?

here we meet a neglected problem: the “others”.
who reminds you, every day of your lifetime, who you are? the people around you.

as pink floyd said:


We don’t need no education
We don’t need no thought control
No dark sarcasm in the classroom
Teachers leave them kids alone
Hey! Teachers! Leave them kids alone!
All in all it’s just another brick in the wall.
All in all you’re just another brick in the wall.
Do it again!”
“If you don’t eat yer meat, you can’t have any pudding.
How can you have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”

here they refer (the age of aquarius) to carlos castaneda, who said: people are building walls around you.

didn’t you hear the whispers behind the walls?
they say: we know who you are. we know where you are. we are everywhere.

you cannot wake up one morning and decide to be another person, because you are just another brick in the wall. :skull:

Yes, that’s true. According to the many-worlds hypothesis, bifurcation (or multifurcation) should occur every time entropy increases. In the example of the thrown die, the result is a selection of one out of six possible outcomes. That is irreversible. Irreversible actions increase entropy. The die and the thrower multifurcate into six different versions in six different worlds once the result is clear.
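The growth in the number of branches is easy to make explicit: n die throws multifurcate one world into 6**n worlds, each labelled by its history of results. A toy sketch under that counting assumption:

```python
# Many-worlds toy model: every die throw splits each world into six
# branches, so the number of worlds grows as 6**n.

def branches_after(throws, outcomes=6):
    worlds = [()]  # one initial world with an empty history of results
    for _ in range(throws):
        worlds = [w + (r,) for w in worlds for r in range(1, outcomes + 1)]
    return worlds

print(len(branches_after(1)))  # 6
print(len(branches_after(2)))  # 36
```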

In the context of simulated worlds it’s conceptually easier to just assume that worlds are deterministic. This avoids complications such as bifurcation happening both through quantum world branching and through interventions of simulators. But of course, both happen if the many-worlds theory is true and we live in a simulation.

Then why do I - subjectively - see only one result when I throw a die?

It matters a lot. Deviating from initial assumptions only leads to confusion and makes it harder for anyone to understand what is meant. It’s a didactic sin to redefine things as something completely different when they were already defined clearly.

What exactly do you mean with that?

Not necessarily. If the AI can’t determine its spatial position, it may very well be in a superposition of different spatial positions, with all of them experiencing the same subjective experiences. This reminds me of a variant of the Schrödinger’s cat thought experiment I thought of: Schrödinger’s van. In this thought experiment Schrödinger himself is abducted in a dark van. He’s first ambushed and anesthetized with chloroform. When he wakes up, the van has already moved to the home base of his abductors. He doesn’t know where he is, because the back of the van is completely dark and he can’t leave it. There are many possible groups which might have a motive to abduct Schrödinger, but they prefer to keep him alone in the van in order to wear him out.

In this situation Schrödinger himself is in a superposition of multiple spatial locations. There may be many different possible worlds in which Schrödinger finds himself in that unfortunate situation, but in all of them he has the same subjective experiences. Of course, from an objective perspective those different versions of Schrödinger are not the same, because they ended up in different locations and were abducted by different groups.

From the objective view, yes. From the subjective view not necessarily. See above.

That’s exactly the idea behind subjective immortality through simulator-enabled continuation. There is a version of you that is continued by the simulators, and you can only experience that. You cannot experience not being continued after your death. So, from all reasonable perspectives, your continuation inherits your identity.

Two identical universes are one and the same. Is there any reasonable alternative to that?

If there is any difference between entities, they can’t be the same by definition. Isn’t that obvious?


Yes. Just because an entity has the potential to bifurcate doesn’t mean it’s a good idea to model it as already two entities before it has actually bifurcated. Of course you could do that, but you should really have a very good reason for doing that, because by doing so you would violate Ockham’s razor.

From an objective view the distinction between h_N and h_S wouldn’t make any sense, because the objective view doesn’t include any information about the potential superworlds in which N and S are embedded. This distinction only really makes sense in a superobjective view, which we assume for the sake of argument. It is only possible to distinguish h_N from h_S if we also add the superreality S* of S into the picture. As mere functions, h_N and h_S are really identical. Only an extended version of h_S that also adds the information that S is a simulated reality of S* makes the difference between h_N and h_S. But then we would strictly not speak of h_S, but of an extended function that includes information about S*. We would need to call it something like h_S^S* to be precise. And of course h_N and h_S^S* are not the same.
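The point about h_N versus h_S^S* can be made concrete with a minimal sketch. I’m assuming here that a world history can be represented as a mapping from time steps to states; the names h_N, h_S, and S* follow the notation above and are purely illustrative.

```python
# As bare histories (mere functions), h_N and h_S are identical.
h_N = {0: "big bang", 1: "stars", 2: "observers"}
h_S = {0: "big bang", 1: "stars", 2: "observers"}

assert h_N == h_S  # indistinguishable as mappings

# Only the extended function h_S^S*, which also carries the information
# that S is simulated within the superworld S*, differs from h_N.
h_S_extended = {"history": h_S, "superworld": "S*"}
h_N_extended = {"history": h_N, "superworld": None}

assert h_N_extended != h_S_extended
print("identical as functions, distinct once S* is included")
```

The design choice mirrors the argument: equality of the bare mappings captures the objective view, while the annotated versions capture the superobjective view in which S* is visible.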

Spacetime is a concept that only makes sense inside a certain universe. Two independent universes U1 and U2 usually have no spatiotemporal relation to each other, which is something that is really hard to understand. Of course it’s possible to add such a relation to them by embedding them in some kind of larger universe U3. Or by letting one of them simulate the other, which might, at least under some idealized circumstances, create a temporal relation between both worlds when considering how fast time flows in each of them. Perhaps an hour in the simulation is a second in the simulating superworld – or the other way around.

By default, all universes are completely isolated from any other universe. Relations between universes are purely optional.

If our surroundings are the same, we cannot experience any difference. See Schrödinger’s van above.

Probably because most versions of “identity” are constructs that we make up in order to be able to operate in social contexts. Also, because the you of yesterday has so many similarities to the you of today that it is justified to see both as effectively the same person.

Only the fact that we decide to see our own version as our own. There are many ways to define identity. And most of them are absurd. We tend to stick to those that are slightly less absurd than the others.
