When I think about it, I imagine a world that runs the simulation looking akin to the science fiction universes “Orion’s Arm” and “The Culture”.
there is only one way to find out the truth about this hypothesis: finding the bugs!
through these bugs it may be possible to exit to the next higher level world, and so on, like a russian matryoshka doll. this is what basic scientific research is doing.
like schrödinger's super-cat? i think another approach to the theme must be more like witchcraft.
let's suppose the superworld is not a hypothesis, but exists here and now, and it is obviously superposed, and the aim of this game is not to ascend to a higher level world, but to create a lower level world.
reduction of complexity must be the goal.
let me show you what i mean (as a transhumanist):
the human body is composed of 100 000 000 000 000 cells organised in a cooperative way as an individual organism.
at first glance it seems a very complex system. but it is not. it is a reduced system.
100 000 000 000 000 single-cell protozoa in a twenty-gallon water sac are a complex system.
but at a higher level, a cell is made of billions of atoms. and so on …
what is the next lower level? imho it is the super-ai.
what is the motivation of the billions of protocells to aggregate and form a human body?
there must be one, but i can not see it.
if the super-ai one day asks us about the motivation for creating her, what would we answer? the egomania of elon musk? (i am kidding, i am in love with him like a lemon tree loves the sun)
but back to the superworld superposition. actually those worlds are superposed, they exist synchronically: the myriads of myriads of myriads of single atoms, the myriads of myriads of protozoa, the myriads of multicellular organisms, the billions of human beings and the millions of … of what? of transhumanists?
the key word is: focussing.
focussing means reducing complexity. myriads of protozoa are a very complex system, but not a focussed system.
billions of humans are a complex system, but they are not focussed (except elon musk )
we are the simulation of a computer called earth, which is itself the simulation of a computer called universe (in memoriam douglas adams). if you search for the answer, it will be 42, but follow the spoor and see where it leads.
so imho the spoor, the teleological vector, does not point outward to the superworld, but inward to the next smaller puppet of the matryoshka doll, more reduced and more focussed, and that is just what is going on with the creation of the super-ai.
and if we are very fortunate, we shall see what the super-ai at the after-next layer of the superposed superworld will focus on. and like schrödinger's super-cat, this next layer of the superworld is superposed: it exists and it exists not … yet …
A mathematical perspective makes two different functions the same? Then both persons h_N and h_S are identical. If that is the case, it is impossible that two identical persons differ in just one aspect. Both persons h_N and h_S lead a predetermined life if they live in simulated worlds. If their simulators talked to each other, one of them could say: “I will kill my h_N at the age of 75 and never resurrect him. It is nice of you to let your h_S live a million years more …” If the simulators could know the difference between h_N and h_S, there should be a way for mathematics to model that difference as well. But the much more serious point that is missing in your mathematical model is the “I”: the individual experience of the person. And I missed an answer to this important remark:
“For all practical purposes” both persons are not the same. I think your distinction between “objective view” and “subjective view” is flawed. The “subjective view” shows clearly that something is missing in your model, and even if “we currently don't know how to model subjective experiences”, we know that they exist. And if you insist on your objective view, you must admit that simulators N and S would know the difference between seemingly identical persons that live in one scenario 75 years, 1 day or one second, and in the other a million years. If mathematics does not have the means to model our subjective experience and the difference of diverging lifespans, it should be enhanced. Probably by AI.
That is a possibility. With increasing technological and scientific sophistication it will be increasingly hard for the simulators to fool us into thinking that we live in a base reality. And then what? Contact with the simulators perhaps? Or would they rather revert this game to the last save point before we figured out that we live in a simulation, and improve the quality of the simulation so that we won’t find out this easily?
But of course, with superworld superposition both cases will happen. If the simulators contact us, we will know that we live in that kind of simulated world. The superposition will collapse (at least with regard to our direct superworld). In the latter case, superworld superposition remains intact and we will be none the wiser.
As an aside, Elon Musk jokingly stated that humanity might be the bootloader for AI. So, if this is a simulation, perhaps running through the history of the universe, including all of human history, is just the start up sequence for the really interesting stuff that will happen once the AI takes over.
You made a critical error here. I stated that we model N as a non-simulated world. Therefore, h_N does not live in a simulated world, and therefore simulators know nothing about h_N. Consequently, they can’t ‘kill’ h_N.
But let’s assume that there is this world S* and we have two simulations S1 and S2 that are identical up to t=75. From the perspective of inside the simulation there is just one function h_S, which is identical for both simulations until t=75. A bifurcation of h_S only happens if the simulators somehow intervene, either within one of the simulations, or by creating a continuation of h_S. Let’s assume a small intervention at t=75. In S1 no intervention happens, but in S2 the simulators rearrange some molecules in the nose of that human, so that it maybe itches a bit and the person scratches it. Now we clearly have two different functions h_S1 and h_S2 that coincide on the interval [0,75], but differ on the interval [75,t_{death}].
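This bifurcation can be sketched as a toy model. Everything here is hypothetical, of course: the seeded random walk stands in for a deterministic world, and the tiny perturbation at t=75 stands in for the simulators’ “nose itch” intervention:

```python
import random

def run_simulation(seed, intervention_at=None, t_end=100):
    """Toy world: a state evolves by seeded pseudo-random steps,
    so two runs with the same seed are exactly identical.

    `intervention_at` models the simulators rearranging a few
    molecules at that time step (the hypothetical 'nose itch')."""
    rng = random.Random(seed)
    state, history = 0.0, []
    for t in range(t_end):
        state += rng.random()          # deterministic given the seed
        if intervention_at is not None and t == intervention_at:
            state += 1e-9              # tiny intervention by the simulators
        history.append(state)
    return history

h_S1 = run_simulation(seed=42)                      # S1: no intervention
h_S2 = run_simulation(seed=42, intervention_at=75)  # S2: intervention at t=75

assert h_S1[:75] == h_S2[:75]   # the functions coincide before t=75
assert h_S1[75:] != h_S2[75:]   # and diverge from t=75 onward
```

Before the intervention there is no fact of the matter distinguishing h_S1 from h_S2; the single function h_S only becomes two functions once the intervention happens.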
We can of course extend the function of any individual to the whole universe, and even superworlds simulating that universe. Our function h_S would not only keep track of that human, but also anything happening in S and S*. That way we can make a distinction between humans that are identical, except for being simulated in different worlds. I would call that a “superobjective view”, because it goes beyond any information that is even theoretically accessible from a simulated world.
Maybe, but in a materialist view the individual experience of a person should supervene on the configuration of the individual particles that person consists of. If you want a different individual experience, you need a different configuration of particles. If there are identical copies of me in multiple simulations, I can’t tell any difference between them. I don’t know which copy I am. The most reasonable model I have for that is that I am all of those copies. At least until something happens that triggers a bifurcation in the configuration of particles that define me among different worlds. Say, some simulator makes my nose itch in one simulation, but not in the others. Then the “I” will objectively bifurcate into different versions, but from “my” own perspective I still feel like a monolithic “I”. Especially because I have no information about the other versions of me.
Now that opens an interesting thought experiment: Assume that the simulators revealed to us that they had run this world in parallel slightly different versions and then gave us all the information about the different versions of us. How would we refer to our other versions? We only refer to our “own” version as “I”. When we talk about other versions we perceive them from the outside, and refer to them as “them”. If the simulations have numbers we would refer to them with that number. Maybe I am Michael 789 and know that there is a version Michael 421 with a particularly itchy nose, but otherwise pretty much the same as Michael 789.
And that’s also the interesting thing about superworld superposition. Before the simulators break the superposition and reveal that we live in a particular simulation, from my own perspective there is no Michael 789 or Michael 421. There is only Michael. In a way, what the collapse of superworld superposition does to us is add more numbers to our identity / identities. Repeated collapses of superworld superposition create a branching tree of identities indexed with the names or numbers of the branches that are taken.
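The branching tree of indexed identities can be sketched in a few lines. The names and branch labels are made up for illustration:

```python
def branch_identities(name, collapses):
    """Each collapse of superworld superposition splits every existing
    identity into one version per revealed branch, indexed by its label.

    `collapses` is a list of branch-label lists, one per collapse event."""
    identities = [name]          # before any collapse there is just "Michael"
    for labels in collapses:
        identities = [f"{ident} {label}"
                      for ident in identities
                      for label in labels]
    return identities

# Two collapses, each revealing two parallel simulations:
print(branch_identities("Michael", [[421, 789], [1, 2]]))
# → ['Michael 421 1', 'Michael 421 2', 'Michael 789 1', 'Michael 789 2']
```

With each collapse the number of indexed identities multiplies by the number of branches revealed, which is exactly the branching tree described above.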
Does adding more numbers to one’s identity change one’s identity? Well, it would certainly change my subjective perspective on my existence and identity. Because then I would need to contextualize myself in a much larger world. And that’s pretty hard. But it would also be quite exciting.
hmm, can you explain your worldview more precisely?
in my understanding, every time a person throws a die, the world splits into six branches, and so on … you have a bifurcation at every decision made … that is the problem of the many-worlds hypothesis: the explosion of the number of possible worlds
Does it really matter for the thought experiment if N is simulated or not? You said yourself that alternatively we can consider N as a simulation in which the simulators don’t bother to resurrect h. I just chose this version.
But even if we choose the version with one simulated and one non-simulated world, the information about the difference between both h is there, because it is our thought experiment and we know it. Call that a “superobjective view” as well, but no matter what, it is a theory we need a mathematical model for. The same problem remains if we choose my version with two simulated worlds. If one h is resurrected and the other is not, we need a description of this reality, whether the subjects or the simulators know this difference or not.
It seems so, but we know more. We all talked about the “I” although we cannot explain what it really is.
Good point. If we will one day be able to create strong AI, it might be possible to reduce the “problem” of the “configuration of particles”. Here is another thought experiment: if mental states supervene on the configuration of particles in any human, mental states must supervene on the configuration of data the AI consists of. We can copy data without any loss or difference of any kind (hopefully). If we create two AIs with the same data, they too would not know which copy they are. But if we give each of them a body (even two identical robot bodies will not be as identical as copied data is, but let us assume we could build completely identical bodies), they will experience the necessity to occupy a different space. And this difference is enough, like an itch on the nose is, to change the mental states of both, because their input of data (their experience of their surroundings and their bodies) varies. Like an itch on the nose or the occupation of a different space, it seems to me that only a little difference is necessary to divide two identical persons into two different persons.
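A minimal sketch of that thought experiment, assuming an AI whose identity is nothing but a data structure (all fields here are hypothetical):

```python
import copy

# Hypothetical minimal AI state: identity defined purely by data.
ai_1 = {"memories": ["hello world"], "percepts": []}
ai_2 = copy.deepcopy(ai_1)

assert ai_1 == ai_2   # as pure data, the two copies are indistinguishable

# Embodiment: each copy now perceives a different position in space.
ai_1["percepts"].append({"position": (0, 0)})
ai_2["percepts"].append({"position": (1, 0)})

assert ai_1 != ai_2   # the differing sensory input alone splits the states
```

The copies start out bit-for-bit identical; the first perception of occupying a different place is already enough to make them two distinct states.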
Another interesting problem comes with your statement that you are all of your copies. In your model of h_S and h_N there is only one person until the specific point in time (t=75) when one simulator decides to resurrect h_S and the other decides to keep h_N dead, or as you prefer: a natural, non-simulated world will lead to death without resurrection afterwards. So the decision of simulator S to resurrect h_S against the “decision” of universe N not to resurrect h_N rips apart a person that is at first one and the same (function) and creates two.
Would you agree that the occupation of a different space is trigger enough for a bifurcation? If you agree, then my question would be why the completely different residences of h_S and h_N should not be enough. Another problem is: if h_N and h_S are the same until the age of 75, why should h_N not become h_S and live on? Or, if you consider my version of your model, with two different simulators deciding the fate of one person, will they experience a tug of war over the fate of h? And if S wins, is h resurrected?
So here are my remaining questions/problems:
- If nobody knows (not the subject h, not the simulator S and no “superobjective view”) that there are two identical universes S and N, are there really two? Or will they automatically merge into one universe U if they happen to be identical? Or is it the other way round:
- Even if nobody knows that there are two universes N and S, the information is there, no matter if it is part of a brain or just implemented in the cosmos like a natural law. And the existing information about a difference forbids calling N and S identical.
- If two simulators quarrel over the decision what to do with h after he dies at the age of 75, would h remain in a state like Schrödinger’s cat until the decision is made? And what is the consequence of a decision:
a) would h_N and h_S who are the same identical person h_U until the age of 75 split up into two persons h_N and h_S or is it the other way round:
b) or do h_N and h_S merge into one of both, depending on which simulator wins the quarrel and decides the fate of h? Because your version is strange: you posit that there are two persons h_N and h_S, although h would never know. So you have the “superobjective view”. You posit a difference between them and at the same time you declare them identical. Wouldn’t that be the same as suggesting that one equals two? Back to 1. and 2.:
- If there are two identical universes S and N but they occupy a different spacetime, or a different quantum reality, they are not the same. And if they occupy the same spacetime, they merge into one. If one is simulated and the other is not, this suffices to view them as not identical, even if they occupy the same spacetime (if that is possible). Or is there another possibility?
Yes. Because we must occupy a different space when we meet those versions, otherwise there would be no different versions. This alone should suffice for different mental states, because we perceive our surroundings differently.
Another interesting question is why we refer to a person as “I” that doesn’t share the same time. “Yesterday, I did x …” is possible, because we perceive a continuation in time. But what makes our own version our “own”?
here we meet a neglected problem: the “others”
who reminds you, every day of your lifetime, of who you are? the people around you.
as pink floyd said:
We don’t need no education
We dont need no thought control
No dark sarcasm in the classroom
Teachers leave them kids alone
Hey! Teachers! Leave them kids alone!
All in all it’s just another brick in the wall.
All in all you’re just another brick in the wall.
“Wrong,
Do it again!”
“If you don’t eat yer meat, you can’t have any pudding.
How can you have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”
here they refer (the age of aquarius) to carlos castaneda, who said: people are building walls around you.
didn't you hear the whispers behind the walls?
they say: we know who you are. we know where you are. we are everywhere.
you cannot wake up one morning and decide to be another person, because you are just another brick in the wall.
Yes, that’s true. Bifurcation (or multifurcation) should occur in the many-worlds hypothesis every time that entropy increases. In the example of the thrown die the result is a selection of one out of six possible outcomes. That is irreversible. Irreversible actions increase entropy. The die and the thrower multifurcate into six different versions in six different worlds after the result is clear.
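Assuming no branches ever merge, the number of coexisting worlds after n throws grows as 6^n, which a few lines of code illustrate:

```python
# Each throw of a die multifurcates the world into 6 branches,
# so n throws yield 6**n coexisting worlds (assuming no merging).
def worlds_after(throws, faces=6):
    return faces ** throws

for n in (1, 2, 10):
    print(n, worlds_after(n))
# → 1 6 / 2 36 / 10 60466176
```

Ten throws already produce over sixty million branches, which is the explosion of possible worlds mentioned above.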
In the context of simulations of worlds it’s conceptually easier to just assume that worlds are deterministic. This avoids such complications as bifurcation happening both through quantum world branching and through interventions of simulators. But of course, both happen, assuming that the many-worlds theory is true and that we live in a simulation.
Then why do I - subjectively - see only one result when I throw a die?
It matters a lot. Deviating from initial assumptions only leads to confusion and makes it harder for anyone to understand what is meant. It’s a didactic sin to redefine things as something completely different when they were already defined clearly.
What exactly do you mean with that?
Not necessarily. If the AI can’t determine its spatial position, it may very well be in a superposition of different spatial positions, with all of them experiencing the same subjective experiences. This reminds me of a variant of the Schrödinger’s cat thought experiment I thought of: Schrödinger’s van. In this thought experiment Schrödinger himself is abducted in a dark van. He’s first ambushed and anesthetized with chloroform. When he wakes up, the van has already moved to the home base of his abductors. He doesn’t know where he is, because the back of the van is completely dark and he can’t leave it. There are many possible groups which might have a motive to abduct Schrödinger, but they prefer to keep him alone in the van in order to wear him out.
In this situation Schrödinger himself is in a superposition of multiple spatial locations. There may be many different possible worlds in which Schrödinger finds himself in that unfortunate situation, but in all of them he has the same subjective experiences. But of course, from an objective perspective those different versions of Schrödinger are not the same, because they ended up in different locations and are abducted by different groups.
From the objective view, yes. From the subjective view not necessarily. See above.
That’s exactly the idea behind subjective immortality through simulator-enabled continuation. There is a version of you that is continued by the simulators and you can only experience that. You cannot experience not being continued after your death. So, by all reasonably perspectives, your continuation inherits your identity.
Two identical universes are one and the same. Is there any reasonable alternative to that?
If there is any difference between entities, they can’t be the same by definition. Isn’t that obvious?
Yes.
Yes. Just because an entity has the potential to bifurcate doesn’t mean it’s a good idea to model it as already two entities before it has actually bifurcated. Of course you could do that, but you should really have a very good reason for doing that, because by doing so you would violate Ockham’s razor.
From an objective view the distinction between h_N and h_S wouldn’t make any sense, because the objective view doesn’t include any information about the potential superworlds in which N and S are embedded. This distinction only really makes sense in a superobjective view, which we assume for the sake of argument. It is only possible to make a distinction between h_N and h_S if we also add the superreality S* of S into the picture. As mere functions, h_N and h_S are really identical. Only an extended version of h_S that also adds the information that S is a simulated reality of S* makes the difference between h_N and h_S. But then we would strictly not speak of h_S, but of an extended function that includes information about S*. We would need to call it something like h_S^S* to be precise. And of course h_N and h_S^S* are not the same.
Spacetime is a concept that only makes sense inside a certain universe. Two independent universes U1 and U2 usually have no spatiotemporal relation to each other. Which is something that is really hard to understand. Of course it’s possible to add such a relation to them by embedding them in some kind of larger universe U3. Or by letting one of them simulate the other, which might, at least under some idealized circumstances, create a temporal relation between both worlds when considering how fast time flows in each of those worlds. Perhaps an hour in the simulation is a second in the simulating superworld – or the other way around.
At first, all universes are completely isolated from any other universe. Relations between universes are purely optional.
If our surroundings are the same, we cannot experience any difference. See Schrödinger’s van above.
Probably because most versions of “identity” are constructs that we make up in order to be able to operate in social context. Also, because the you of yesterday has so many similarities to the you of today that it is justified to see both as effectively the same person.
Only the fact that we decide to see our own version as our own. There are many ways to define identity. And most of them are absurd. We tend to stick to those that are slightly less absurd than the others.
Physicists would throw the catchphrase “decoherence” into the discussion and claim that this explains everything. Of course, it’s not clear how “decoherence” can appear in a holistic single deterministic quantum world encompassing every possible outcome. The problem is that it’s actually a big mystery why our perception is so limited that we only perceive singular outcomes of events. It’s actually similar to the question why we can’t see or hear everything at once. It’s conceivable that we could, but for some reason we don’t. We are very limited beings. Very simple “stuff”. Perhaps there are beings that can perceive multiple quantum realities at once. Who knows …
Thou dost?
No. But that means it is impossible to state that there are two.
Yes, it is obvious. This is my point from the beginning. If we have Universe N and Universe S, the humans h_N and h_S are two different persons if their residence differs, even if they could not see a difference. But this means that there are no different versions of “me”, because whenever a bifurcation is triggered, a different person than me lives in a different timeline than me.
And if there is a copy of our universe where everything else is the same, but the two are completely isolated from each other, then again, that is difference enough to state that there are two. My conclusion is that there are no such things as two or more “me”. A person that lived the same life, has now the same configuration of particles and writes this post, but lives in a universe completely isolated from mine, is somebody else, just as a twin is somebody else and not identical with his sibling.
i just meant that the configuration of particles is so complex that it might be difficult to determine if two configurations are identical. AIs could be easier to compare if their identity is based on data.
Actually, two identical universes would be two different universes that just “are” (look, feel, act etc.) exactly alike. Two identical photos are two different pieces of printed paper anyway.
That’s what I think too. Even when two people are exactly alike from an external point of view, they can never ever be the same person subjectively, that would not make any sense.
Sorry, but I don’t understand that.
i am too lazy to write another post, but i suggest you read about the einstein-podolsky-rosen thought experiment.
so you don't have to reinvent the wheel
I meant: if universes are completely identical in every way, they are one and the same, so we only have one universe. If there are nearly identical universes that are completely isolated from one another, the difference that they reside somewhere else is enough to state that they are not the same but two different universes.
Ok, my AI idea was not that important but pretty unclear, i see …
Radivis mentioned supervenience:
If the individual experience of a person depends on the configuration of particles the person consists of, it might be very difficult (for practical reasons) to determine if two copies have the same configuration of particles. Because if just one particle differs, one copy feels an itch on the nose and the other does not. So if the identity of a human being needs a special configuration of all the particles his body consists of, we have to compare the configurations of particles of both copies if we want to find out whether they are in the same mental state and their identity is identical. This comparison might turn out to be impossible. Maybe we will never be able to beam persons like it is described in Star Trek. In a simple materialist view people are nothing more than a tower of toy building blocks, except that the blocks are much tinier and far more numerous. But if we deconstruct a person in one place, send the construction information of the exact configuration of her body to another place and construct her there again, have we just transported her? Or did we kill the original person and construct a new one who happens to have the memories of the original and claims to be the original although she is not? We could never determine that. But a set of data remains the same if we send it somewhere. And if we are ever able to create strong AI, and the identity of one AI “only” depends on the configuration of data and not on the configuration of particles of a body, maybe we will be able to beam the AI without killing it first. Maybe we will be able to copy an AI completely with the same identity … but this is just speculation.
No, that wasn’t the problem: I didn’t understand your idea grammarwise.
This sentence doesn’t make sense:
Artificial Intelligences will be easier to compare, if their identities are based on data.
yes, and after the upload of someone's mind, it will be possible to beam him too.
i see an analogy to the problem of the first-person-perspective.
when someone uploads his consciousness into a computer, which implies an analogue-digital conversion (this is imho what @zanthia meant by the difference between the configuration of the particles of someone's body and the configuration of the data of an ai), we have the same problem mentioned above by @Radivis about the identity of two (or more) persons.
let's imagine someone connects his brain to an ai for uploading his mind. while the upload is going on, there will be a moment when this person changes his perspective, because a part of his consciousness is transferred or copied into the ai (the computer), and this person becomes two persons, one in his human body and another in the ai, connected by some kind of network (a neuralink, elon musk). or maybe (and this is what i am expecting as a transhumanist) a totally different kind of person emerges from this process (a cyborg).
it is similar to the beaming problem mentioned by @zanthia.
in my reflections i saw a solution for this problem in the synchronicity and non-locality described in the einstein-podolsky-rosen paradox, but i am not sure if a quantum process will work in a macroscopic application.
hence the real and obvious application for @Radivis' superworld superposition problem and the derived implications is the upload problem.
edit: i see, that i have to explain a premise made above:
imho the upload cannot take place in a computer like in an empty bottle (this is the darkest way to hell); there must be an ai to receive him into his new habitat.
the question is whether or not this ai has to be embodied (connected to the outer world by a sensorium), or whether the ai may only access a cyberspace.
what will happen to the uploaded person? will he be one, or two, or a third, or just lose his mind (ok, this will not be an option). and what will happen when the uploaded person and the uploaded mind are separated, like the cutting of the umbilical cord separates the newborn child from the placenta feeding it?