The Reparator Paradox

Over the last month I seem to have stumbled upon a deeply worrying philosophical issue that I now call the reparator paradox. I’ve already explained reparationism (previously called compensationism) in the post “Four posthuman ethical frameworks”, and also mentioned it in the post “Principles of the Aonian Exaltation”, where it’s briefly described as follows, with the words in [brackets] being optional:

If you cause suffering to other sentient beings, you are obliged to
compensate for that by causing an at least equal amount of positive
feelings for exactly the affected beings. More concretely, the
compensation must at least be [twice] as high in its total amount as the
inflicted suffering. Of course this implies that suffering and happiness
can be measured meaningfully. [This is a basic assumption that is
affirmed in the Aonian Exaltation universe.]

In the “posthuman ethical frameworks” post I mainly discussed the issue of “simulation ethics”, meaning the ethical principles that guide simulations of worlds that actually contain sentient beings, run on some kind of super-simulation device (or mind). I simply call the beings who run those simulations simulators.

Now I want to define a special class of simulators which I will call reparators. A reparator is a being that:

  1. Has the ability to simulate a world containing sentient beings in full detail.
  2. Can store the mental states of each of those sentient beings at each single point in time.
  3. Can use the thus accumulated data to restore the minds of these sentient beings after their death in the simulation.
  4. Has the resources to create a kind of “heaven” for those restored beings, which is sufficiently good that it outweighs all the suffering the being in question has experienced within her lifetime in the simulation.
  5. Actually does all of the above.

I call a being who meets the criteria 1-4, but not 5, a potential reparator.

All of this is already extremely far-out stuff that probably only the most hardcore transhumanists feel inclined to think about. Theologians would be inclined to think about this, too, if their doctrines and ideologies weren’t so inflexible and mind-numbing. So, what is this philosophical issue that worried me? It’s the following, which I call the reparator paradox:

A reparator can justify each and every action regarding sentient beings in her simulation with the argument that those beings will be more than compensated by “heaven” for their suffering. Since “heaven” is practically eternal (compared to the lifetime within the simulation), and the quality of life in heaven is extremely positive, every amount of suffering can be repaired and even overcompensated. The longer the duration of life in heaven compared to the lifetime in the primary simulation, the less the primary suffering counts. In the limit of infinity all finite suffering eventually becomes irrelevant. Even temporary “hells” can be ethically justified with that line of reasoning. After all, a being that eventually ends up in a virtually eternal heaven, but has endured any finite amount of suffering, is better off with that deal than not having existed in the first place.
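
To spell out the limit argument (a toy utilitarian calculation; the symbols $s$, $h$ and $T$ are my own illustrative shorthand, not part of any established framework): let $s$ be the finite total suffering of a simulated life, $h > 0$ the rate of positive feelings in heaven, and $T$ the duration of heaven. Then

$$U_{\text{total}} = h \cdot T - s, \qquad \lim_{T \to \infty} \frac{s}{h \cdot T} = 0$$

For any finite $s$ the balance becomes positive as soon as $T > s/h$, and the share of the suffering in the overall sum shrinks towards zero as $T$ grows.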

The reparator paradox consists in the apparent conclusion that reparators have an ethical blank cheque for doing anything they want with the sentient beings they simulate. This is very weird, because ethics usually prohibits certain actions, while allowing, or even prescribing, others. The situation that any kind of action is justified under the condition of subsequent “reparation” is hugely counter-intuitive. It feels like the whole ground of ethics is torn apart by the reparator paradox, at least when it comes to consequentialist ethics (see Wikipedia link below). However, I am quite convinced that any ethical framework that is supposed to have a reasonable basis needs to at least invoke consequentialist components, unless it degenerates into naivety or profound arbitrariness (which one might call nihilism).

Nevertheless, the word “paradox” typically describes an only apparent contradiction, not a true contradiction (a true contradiction would be called an antinomy). So, how can the reparator paradox be resolved? There seem to be multiple possibilities:

  1. Ditch ethics, period. This doesn’t sound very wise.
  2. Relinquish all kinds of consequentialist ethics. This may sound more reasonable, but the consequences of that would be pretty close to “1” in my opinion.
  3. Argue that even the finite and temporary suffering in those simulations is ethically prohibited. This seems to be a very reasonable route. For example, it would be a rather obvious solution to say: Hey, why allow for suffering in simulations if reparators can create “heavens” in the first place? Only creating heavens seems to be ethically superior to creating primary simulations containing suffering and then “repairing” that suffering by putting the sufferers in heavens. Luckily, this line of reasoning is intuitively quite plausible. Unfortunately, intuitive plausibility is a far cry from a conclusive proof. It may well be that some forms of suffering enable larger overall utility in the end.
  4. Accept that reparators have the privilege to do what they want, if they make use of their reparator powers! This seems quite radical, but it avoids the problems that solutions 1, 2, and 3 suffer from. It implies a radical shift from “conventional morality” by accepting that there is a class of beings for which “conventional morality” does not hold. Instead, reparators only need to follow “reparator ethics”, which consists in making sufficient (optimal) use of their reparator powers.

I tend to favour solution 4. It’s certainly a hard and bitter pill to swallow, but it’s not riddled with fundamental problems – it’s merely a huge insult to human moral sensibilities, rather than a proof that no kind of ethics can make any sense ever.

Honestly, I actually want solution 4 to be true, because it makes it plausible that we live in some kind of simulation, and that we will eventually be compensated for our suffering with an incredibly blissful existence in a heavenly afterlife provided by reparators that have some form of ethical sensibility. By the way, it’s also a plausible solution to the theodicy problem:

This post is of course no rigorous analysis of the whole reparator paradox. It’s more a reflection of my current lines of philosophical reasoning than anything else. What the reparator paradox does, however, is humble me by demonstrating the extent of my ignorance and confusion, despite my having thought about such philosophical problems for many years. It motivates me to seek more wisdom – in the hope that I will eventually reach some kind of robust philosophical clarity. Until then, I will probably still attach myself quite a bit to “conventional” intuitive/reflected utilitarian reasoning.

So, the question that intrigues me most at this point is: How does reading all of this make you feel?


Hi Michael

I can’t claim to have read too much about AI ethics, but I’ll share my 2 pence worth…

Firstly, if such an ethical law has to exist, can’t it simply be reversed? So that in order to allow a sentient being to feel suffering, it should first have experienced more happiness?

Secondly, I dislike the idea of laws for intelligence. The smarter an intelligence is, the higher its ability to figure out ways to accomplish its goals while bypassing laws. For example, if an AI is instructed to give more happiness than pain, the AI could simply invent different measurements for happiness and pain, or it could prevent happiness from occurring without inflicting any pain, leaving pain to occur by itself, or it could get something else to inflict the pain, instead of doing it directly. Basically, with laws you have all the problems of the current capitalist system in which we find ourselves, where the aim of the game is to stick to the laws while at the same time becoming as rich as possible… thereby indirectly hurting others.

So, if I were to try to make “good” intelligence, I’d simply evolve it, using simulations. Instead of the principle of “survival of the fittest” it would be “survival of those that make sentient beings happiest.” For example, I’d simulate a thousand worlds, each with an AI and other simulated beings. Then I’d keep the AI which lived in the simulated world which ended up with the happiest simulated beings and delete the other 999 AIs. I’d continue the process over and over.
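
A toy sketch of that selection loop (purely illustrative; `make_random_ai`, `simulate_world` and `mutate` are hypothetical stand-ins, and the mutation step is my own addition so the process can actually iterate):

```python
import random

POPULATION = 1000
GENERATIONS = 100

def make_random_ai():
    """Hypothetical stand-in: create a fresh AI with random parameters."""
    return {"kindness": random.random()}

def simulate_world(ai):
    """Hypothetical stand-in: run a simulated world governed by `ai` and
    return the average happiness of its simulated inhabitants."""
    return ai["kindness"] + random.gauss(0, 0.1)

def mutate(ai):
    """Copy the winning AI with a small random variation."""
    return {"kindness": ai["kindness"] + random.gauss(0, 0.05)}

# "survival of those that make sentient beings happiest"
ais = [make_random_ai() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    best = max(ais, key=simulate_world)  # keep the AI whose world is happiest
    # delete the other 999 and refill the population with variants of the winner
    ais = [best] + [mutate(best) for _ in range(POPULATION - 1)]
```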

find a solid ground for ethics. in your considerations i found four ingredients you use like unalterable truths: the idea of suffering vs happiness, the idea of the outcome/result = consequentialism, the idea of reparation, and the idea of a realm of gods, which contains the ideas of afterlife and superpowers like in many religions. if it is all up to gods and their superpowers and their will, the paradox is much greater, because it would not make sense to apply ethics to humans and their interactions if they are not free to choose, because gods simulate them.
consequentialism is not the only way to go in ethics, because it makes a difference for how we experience suffering whether we get hurt by somebody purposely or by mistake.
and what if happiness only occurs when we overcome suffering? if that is the case, a certain amount of suffering is needed and maybe wise gods will provide us with that to give us the chance to strive for something. maybe paradise is boring…

this is a good idea. but in your thoughts and in radivis’ thoughts i found the same implied, unquestioned idea:

i think you are right, and it wouldn’t make sense to subject gods or AIs to a law if they have the power to bypass it. but why would they?

the implied idea i found is: if any entity has the power to do harm, it would want to do so if there is not something to prevent it from doing so. please tell me if i am wrong here. if i am not, i want to add the question: why, if happiness can be considered the most plausible sense/meaning of life, would any powerful entity, or such a powerful species as the human species, do so much against it and want to cause harm?

The order you propose comes with technical problems. It’s easy to append a “heaven” at the end of a sentient being’s lifespan, but prepending one before that being has even developed poses a huge problem. And: Why would the order even matter?

Do you dislike the idea of laws in general? If yes, how should the behaviour of humans be regulated? If no, why treat AI any differently than humans?

That sounds like an unusually wasteful and cruel process. It might be effective in the end, but it sounds like a huge number of things could go wrong with this approach.

Why not simply hold a competition between different AIs with the goal to maximize happiness? The AIs who win reach the next round. Those who lose need to apply for a different job. :unamused:

Are you implying that being simulated would absolve us of ethical responsibilities? Why would that be, if we cannot distinguish between the situation of being simulated or not? Or do you think ethical responsibilities don’t make sense in any case?

What kind of argument is that exactly?

That sounds semi-plausible, actually. At least certain kinds of pleasure may depend on partial “suffering”. But I don’t know what kind of suffering you have to experience to enjoy a sunrise or the beauty of a flower. If paradise is boring, we will surely create our own interesting simulations (dreams / games).

Not necessarily. Perhaps eventually there’s a way to gain anything that’s really worth gaining without doing harm. But I find that possibility implausible. The alternative is that there are some values whose fulfilment requires doing some harm. You mentioned such a possibility:

you are completely right that if we could not know whether we are simulated or not, the requirement for ethics is the same. if i ran some experiments with rats and gave them different environments – one which i consider the rat paradise and one in which everything is scarce and the space is overcrowded – it becomes predictable for me as a rat-god in which simulation violence will occur, and thoughts about ethics would make sense for me and not for the rats. if simulators have superpowers they could play with the living entities like we play with dolls. and if a simulated entity decides to act ethically good, any simulator-god could prevent it from doing so. ethics for gods and ethics for dependent entities can’t be the same. to solve the reparator problem i would at first divide and analyse all inner perspectives of the agents. imagine the simulators would start simulations of two similar worlds with similar entities and a similar “game”. the only difference is the belief of the entities. in one simulation all the entities believe that they will cease to exist with death, and in the other simulation all the entities believe what is really going to happen: that they will have a blissful afterlife and will be overcompensated for all their suffering in life. i hope it becomes obvious that the same harmful events will be experienced differently in each world: that losses, illnesses, wounds, violence, wars, poverty and so on will be hard to endure in the first world and could be perverted into a welcome experience in the second world. if the simulators are wise they will compensate the first-world entities more, because the amount of suffering they experience will be higher. but there are more serious consequences for the simulated entities. if both entities have the impression that they have free will and could cause harm and suffering themselves, they both require ethics. and again, i think it becomes obvious what a profound difference in ethics we will discover if both entities follow the inner logic of their belief. the first world has to develop a high standard to avoid harm and violence against each other. the second world would consequently develop quite the opposite: it has to be considered an ethically good action to cause maximum harm! and all entities will be eager to experience suffering because of their religious conviction of reparation and compensation. we can find elements of that in every religion and in the fictional construct of the klingon culture. the warriors are eager to kill and get killed in a brutal fight, for they will be rewarded in sto’vo’kor.
but the darkest ages for ethics will be in simulations where the entities believe that they are powerless and have no free will. in the first world i mentioned, where they don’t believe in gods, they could explain every harm they caused with their nature and the seemingly brutal natural setting they live in. in the second world they could explain every harm with god’s will, god’s punishment or god’s ordeal, or as the price for a good afterlife. but both entities would not feel responsible for their actions. so from the perspective of the simulated entities, the best ethics would be developed when they believe that they have free will and could cause harm, and when they believe that they cease to exist with death. the situation would only change if they had proof of the existence of the simulators. but proof will be impossible. even if they all would be sent to the blissful afterlife and would be sent back, they could never know if they all just had a dream… and what remains is always the same: belief.
for now i will end here, because i need more time to think about the perspectives of the simulators. please tell me if my analysis of the perspective of the simulated entities is consistent or not.

that means that it feels different for me when somebody attacks me in rage and steps on my toe purposely, or when somebody steps on my toe by mistake in a crowd. although the consequence – a hurting toe – is the same, i would not want to report the one in the crowd to the police. and i am glad that our law respects motives and intentions as well as consequences. but you can go a step deeper and take all the feelings into account. in the aforementioned example, the violent attack creates a feeling of fear in me, and i feel hurt in my dignity, as additional consequences of the physical pain. this way you can explain every moral value of an action with consequentialism, because it already includes the consequences of intention. but a consequentialism that defines itself by excluding motives and intentions is flawed.

try this every day around the clock (with a simulation it is possible): enjoy the flower and the sunrise nonstop… and then try this with a miner or a shift worker who has suffered from missing the sun and beautiful nature for a long time. the experience and the amount of happiness would be very different.

how could we know that this isn’t already the case with us, here and now…

for many years i have asked myself a contradictory kind of question: is it really possible to win something when causing harm?


I think I’ve misunderstood something if the order doesn’t matter.

Yes, I do dislike the idea of laws.

Firstly, there are too many laws, and therefore too many loopholes, and it’s very difficult to know all of the laws.

If we were to summarize the laws into something that’s simple and makes sense, it would be to be kind. A single law like that would be too broad to enforce… e.g. should everyone be fined for saying something nasty about someone?

I wouldn’t claim to have a perfect solution, but I’d prefer it if laws were phased out, and an environment which encourages collaboration and friendship was phased in.

I don’t think it’s wasteful – it’s just a simulation… just energy. I don’t see how it’s cruel either. I’m not talking about sentient beings. I’m talking about artificial intelligence. What could go wrong with this approach?

Why not just delete them? I’m guessing by AI, you’re thinking of something like us, that’s conscious and has feelings, whereas I’m thinking of digital pattern recognition and decision-making software that runs on silicon.

That was an excellent, very insightful, interesting and coherent analysis of the ethics of simulated (or real) entities based on their belief systems. :smile_cat: It would have been even better if it had been structured more clearly instead of being one big block of text. :blush: It’s really dense, and there’s so much to think about there. Personally, I think it would be best visualised as a 2x2 matrix with the alternatives “believe in compensation in afterlife” vs. “don’t believe in compensation in afterlife” and “accept personal responsibility” vs. “reject personal responsibility”.
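
Something like this (a rough sketch; the entries in the four quadrants are my tentative reading of your two worlds and the free-will cases):

| | believe in compensation in afterlife | don’t believe in compensation in afterlife |
| --- | --- | --- |
| accept personal responsibility | causing harm can appear ethically good (“inverse utilitarianism”) | high ethical standards to avoid harm |
| reject personal responsibility | harm explained as god’s will, punishment or ordeal | harm explained by one’s nature and a brutal environment |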

I wouldn’t say that I wholeheartedly agree with your analysis, but it’s a very good starting point. It might also be worth exploring whether the ethics of people would actually be appropriate to the situation of actually being simulated and compensated for their suffering in an afterlife, or, respectively, of not being simulated or compensated. What seems to be a bad idea in any case is the rejection of personal responsibility, because that seems to make people feel less empowered and thus react from a position of resignation and blind passive acceptance.

That seems to be the most thought-provoking part of your analysis. It really baffles me that it actually seems to be possible to justify some kind of local inverse utilitarianism on a rational basis! I’m not sure how to react to that. I will have to think about it…

Well said. I agree.

Certainly, but this doesn’t mean that people who aren’t deprived of the sun and nature aren’t able to enjoy them. What would you prefer: A world without suffering in which you are free to enjoy everything to some degree, or a world full of suffering in which the redeeming qualities of beautiful things are exalted by their rarity? This is not a rhetorical question!

Yeah, we don’t know that. Perhaps we live in a simulation that provides some interesting drama for us and our transcendent spectators. I wouldn’t feel bad about that. It actually seems quite plausible to me that this is the case.

What about knowledge and wisdom?


Yes, I do feel the same way. This is certainly a most ambitious and worthwhile goal. I think the only sustainable solution that leads to that kind of utopian result is that people continuously work on themselves to become kinder and wiser.

Well, it seems that we still have a problem with a lack of clear terminology about sentient vs. non- or negligibly sentient (artificial) intelligence / minds / entities / beings. And I think that this is a serious problem that needs to be resolved as quickly as possible! I’m just not sure how to actually do that. If you have any ideas, please shoot! :smile:

About the problems that could come with your approach: There is a science fiction story by Greg Egan that examines exactly that. It’s called Crystal Nights and can be read online for free! I highly recommend reading it!

i would prefer a world without suffering. but not without contrast.

i think the term “suffering” is too strong to explain what i mean. there is a grey area of many lightly negative emotions that motivate us to act. to be a little bit hungry could not be called “suffering”, but it increases the pleasant anticipation when the waiter arrives. though being hungry for a long time is one of the worst forms of suffering. and there is, in fact, a kind of inability to enjoy something you’ve had enough of: if you cook too much for yourself and overeat, it could be considered suffering if you are coerced to eat more. so, i can’t agree completely with your assertion that people who are not deprived of sun and nature could enjoy them. because it suffices to stay a working day indoors – a light form of deprivation – to feel the contrast of the warm sunlight outdoors. but with many days in the sun on a long vacation you could really have enough of it and feel a longing for an exemplary good, dark and cool rainy day.

what do you mean by that? …that you have to cause harm to gain knowledge? that it hurts to become wise? i doubt the first one. i don’t think that it is necessary to cause harm with experiments to gain knowledge.

the perspective of the simulators:

the necessary foundation stone for the requirement of ethics is given when the simulators know that their actions cause effects and that they are no puppets (for which ethics would never make sense) but the puppet players. the conscious experience of causing something is the core of power, and the more power you have, the more the requirement for high-standard ethics grows with this power, up to the point when the power becomes unlimited. then the situation changes. a q-like entity with superpowers is no longer bound to cause and effect, because it can influence causality itself: it could reverse it or make things unhappen. so for entities with no power and entities with unlimited power ethics doesn’t make sense, but for all entities with limited power it does. you defined four abilities of the simulators. the question is whether their abilities to store and restore conscious beings and to create simulations and heavens are unlimited: could they reverse time and make things unhappen? if that is the case, there is no need to compensate for anything, because they could give all beings the impression of living in paradise just by eradicating every harmful experience out of time. but this is also partly the case if they could give their simulated beings the impression of paradise just by eradicating every harmful event out of their minds, as long as no physical damage is done. then every harmful experience will be less than a nightmare: a nightmare they never had. in a way it could be considered ethically good behaviour IF the simulators manipulate time or minds to make suffering unhappen. but they could create an apocalyptic simulation that lasts 10000 years and then decide to make it unhappen, and it doesn’t matter. they could create an indefinitely long-lasting apocalypse, because the question is not when they will make it unhappen but only if. and this is a real problem (the same occurs with time travel into the past).

if they have unlimited power concerning the simulations, they would never need to use two types of lives: a limited one and the eternal afterlife. they could just start with simulations of the eternal afterlife. they could let the simulated entities die as often as they want to, and the entities will never know that their lives are eternal and that they are already in heaven. the simulators could give them the impression of reincarnation or of a single limited life, let them experience heaven or hell, or just end their conscious perception.

but there are other “problems” with superpowers. with unlimited abilities the simulators will also have the power of laplace’s demon, if they should decide to subject themselves to the restrictions of causality with their simulations: they will always know the progression and outcome and end of every simulation and every simulated being at every point in time, without even starting the simulation in the first place. but if they decide not to respect any natural law their simulations are restricted to, the situation becomes even worse: like a play that has no rules and therefore is not worth playing.

simulations usually have the purpose of observing the kind of causality which could not easily be predicted. when you are a little child, you maybe want to smash drinking glasses to experience how they break, you want to get soaked in heavy rain or flush the toilet just because you want to experience what is going to happen. when you are an adult, you already know the causality and you will need much more complex and unpredictable tests to satisfy your curiosity. maybe you want to run simulations like the stanford prison experiment, because interactions of humans are much more interesting and complex than flushing the toilet. but when you are a q-like being and your life has a beginning but no end, you will at some point have experienced all kinds of complex simulations, hellish scenarios and feelings and events stored in your mind, and even the most complex simulation will appear to you as interesting as flushing the toilet.

you have created a fractal cosmos, because the most important problem the simulators will never solve is the question whether they themselves are simulated or not. and this question makes them equal to their simulations. like the simulated entities, they rely on their perception and impression and therefore believe that they can cause something and that they are no puppets themselves in the hands of higher creatures; and with those creatures it will be the same, for they will never know who created them… and so forth. the only thing that always remains is perception itself. and this is why i analyse the inner perspectives of the agents.

Thanks for the link to the story. I’ll take a look :smile:

You know, these are the basic abilities of the human brain. People do create worlds with characters, some more detailed, others less detailed. I personally know someone who could talk about his world for weeks and not run out of things to talk about.

I think this is the key to why this isn’t actually a paradox at all. If a being can do 1-4, then you can very reasonably argue that the suffering is being inflicted on the being itself. The simulated entities aren’t really separate entities. Therefore, ethically there’s no problem whatsoever, unless you hold that there are ethical limits on what a being can acceptably do to itself.

So, in short, the illusion here is thinking about the simulated sentient beings as something separate from the simulator.


This leads me to the question whether simulated characters within one’s mind possess their own sentience, or whether it’s a kind of “borrowed” sentience, or whether sentience is missing in that case. I guess @sandu has some interesting thoughts on that matter.

That seems like a reasonable assumption, though it’s not necessarily true. The simulation doesn’t have to be deeply integrated with the mind of the simulator. Such a simulation would probably run on some very specialized hardware. How mind-like the software running on that hardware would be is probably an interesting question, but it would lead to quite the tangent here.

Ok, if we assume that the simulated entities are just parts of the mind of the simulator, and that the simulator experiences in full detail everything the simulated entities experience, then we have a really interesting situation. Such a simulator would need a really good reason to simulate a world with large amounts of suffering – otherwise the disincentive from the suffering would be too big to simulate such a world.

Should there nevertheless be limits on what beings are able to do to themselves? That’s a very important question. The transhumanists promoting morphological and personal freedom would probably say no: Entities should be able to do with themselves what they want. And I think any deviation from this position needs a really strong justification.

Perhaps such a justification would be something like: “We shouldn’t create simulations that involve large amounts of suffering, because suffering is inherently bad”. A negative utilitarian would be more likely to agree with such an argument than a classical utilitarian. I’m a classical utilitarian however, so I’m more inclined to say that simulations with large amounts of suffering are ok, if their simulator truly experiences that suffering, but has an overriding reason to run the simulation nevertheless.

I think a more interesting phrasing is “what would be sufficient reason to consider a simulated entity separate from the simulator?”. I mean, sentience obviously isn’t missing, otherwise we wouldn’t be asking the question at all.

In such a case, you have the simulator and someone who can control the simulation. The simulator is still the entity that experiences all the suffering, otherwise it can’t really simulate it.

I think you’re forgetting to ask a key question here though. What is suffering? How do you tell if something simulated is suffering? Or never mind the simulated part, how do you tell someone is suffering? For humans, that’s pretty easy, because we have an innate ability to sympathize with other humans. Some just use it more.

However, to even think of talking about suffering in a simulator, you’d first need to define suffering.

That seems like a plausible perspective, but it doesn’t consider the question whether the “simulation engine” has control over itself. Who is responsible for launching the simulation? If the simulation engine is just an automaton or a slave, it’s not legitimate to ascribe agency to it. The simulator is the entity who has control over the process of simulating a world.

If I let my optimism speak for me, it would tell you that any reparator should know the answer very well. Since we are not reparators, it’s not a question we need to have a good answer for right now.

But yes, it’s an important question. One which probably deserves its own thread. If that question is really important to you, you should create a new topic for it! :slightly_smiling:

In a way, as far as anything in the simulation is concerned, the simulator is God. The ultimate being that creates and is behind everything. Something that is able to control the simulation without actually experiencing it has only really superficial control over it. The majority of the simulation will be unknown to the so-called controller. He may have launched the simulation, but unless he’s somehow intimately involved in the process of simulation itself, he’ll not be much more than a bystander when you consider the things he can do to affect the simulation once it’s running. This effect becomes more and more pronounced the more complex the simulation is.

In other words, the closer you are to being the simulator itself, the more you can affect the simulation and the more motivation you’ll have to do so. Conversely, the farther you are from the simulation, the less motive and less ability you have to touch it. The most extreme distance is probably where you just have the figurative start/stop button.

An extreme example of this is when the simulation is being done in a completely deterministic simulator. In such a case you can even have multiple simulators simulate the exact same thing. Would those count as examples of this paradox? Would it make sense to require removing the determinism, since for some worlds that’s the only way to stop the suffering? Or would we discount the ethical responsibility on the basis of the controller’s inability to do anything? They’re binding themselves to the rules of the simulation, after all. Also, since someone else may be simulating the same world, would even stopping the simulation be meaningful in any way?

This is related to modal realism, too. Does it matter that there may be real events somewhere that also happen to be simulated in a practically identical way in a simulator elsewhere?

Oh? Really?


You are asking some very profound questions here, some of which I have already pondered. It’s really hard to tackle such questions.

First of all: Removing the determinism for a deterministic simulation would stop the simulation from being a simulation of an actually deterministic world. Perhaps you might be interested in what I’ve written in the post

In any case, running such simulations poses hard ethical questions. It would most certainly be possible to stop such simulations, but that comes with its own problems. Would we want our world to be stopped, just because there is too much suffering in one region of our cosmos that we can’t causally affect?

Anyway, what motivations would different simulators have to simulate exactly the same deterministic world? Replication or verification of scientific findings perhaps? Multiple redundancy?

Would the suffering of those worlds be added up, or simply counted once, regardless of how many identical copies exist? I think the latter is the “more correct” framework. But wouldn’t that mean that it doesn’t matter how often a world is simulated? Well, it mostly wouldn’t matter. From the perspective of the beings in the simulated world the multiplicity of that world is not a property of their world, so it doesn’t change anything for them.

On the other hand, the multiplicity of a simulation may matter because of the prospect of resurrection of its inhabitants in the world of the simulators: The more often a world is simulated (especially by reparators), the more likely it would seem that one is actually resurrected at the end of one’s life or the end of the simulation. And the more reparators there are, the more likely it would seem that one wakes up in a really heavenly place.
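
A crude way to put a number on that intuition (my own toy formalization, glossing over all the subtleties of self-locating probability): if a world runs as $N$ causally identical instances and $k$ of those instances are run by reparators, a naive self-location estimate of one’s chance of resurrection is

$$P(\text{resurrection}) \approx \frac{k}{N}$$

which grows with the share of reparators among the simulators of one’s world.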

it’s funny how you see someone mention your name so you kinda get invited into a discussion ^^ and flattering, of course. so i’m gonna answer that one without having read the whole topic.

first, like elriel says, we should ask what we actually mean by sentience, or which are the criteria for a being to be called sentient. i don’t think that’s an easy question, so i couldn’t possibly give any definition or full answer, but to look for the criteria seems more promising. in the normal context, sentience is ascribed to humans and other animals, for example, when they suffer. and suffering we ascribe to them when we see and hear their pain-reactions, or even just when we recognize they are in a situation which normally leads to suffering in beings like them. so we can even tell someone might be suppressing their suffering, but we know she must actually suffer in such-and-such a situation, unless she’s actually a very different kind of being or sedated or something like that.

i think what applies here is empathy. because, strictly speaking, we could not tell what others feel unless we feel it ourselves. it’s a kind of projection from our own experience in similar situations, but it’s not flawed just because it’s a projection. it really seems to work, at least when we do empathize in an appropriate way, e.g. really getting a grasp of the actual situation the other is in.

what would we do, then, if we are in a dream, and people seem to suffer? i would say, if our empathy is strong, we would be forced to empathize with dream-figures also. the solipsist question, whether we alone do really feel, could not be answered, neither in a dream nor in the waking state. but it doesn’t need to be, for we have the ability to empathize, and this is the ground on which we judge whether someone is suffering or not – it is the criterion. there is no other, “higher” truth to search for in the realm of ontology, i think. because our concept of suffering does not apply to ontology actually, just to our normal way of living and perceiving the world and others.

anyways, there are differences in simulations of suffering. e.g. you could slaughter people in a game, but depending on how complex their reactions and their overall personality are coded, it would stop us from doing so (given the empathy) or not. if it’s still clear to us that they are not really suffering, then we might have fewer objections to doing them harm, for we could always tell ourselves: there is no real harm involved, it’s just a simulation. and the same applies in dreams, for example. dream figures could be very one-dimensional, so it seems clear they are simulated and you could do whatever you want to them. but think of a good movie in which some of the main characters are going through much suffering. even though we know it is just a movie and in reality the actors are not suffering those situations – if we sympathize with the characters, we wish the suffering to end, we are relieved if they get out of the situation. and if the actors are good actors, they actually really do experience the suffering of their roles as well.

so, there also seems to be an implied question: what is a simulation at all? star trek also treats this issue. the doctor is a simulation, a hologram. but it seems he is capable of sentience, and not only this but also of wishes, of own experiences, and so on. so when would we stop considering him a mere hologram, a mere simulation? when the simulation begins to act as if it were real. it’s not just that we can’t tell the difference between real and simulated anymore, the more complex a simulation becomes. the difference itself dissipates, because, again, our concepts don’t consider ontological issues, they just apply to our way of living and perception.

so if a dream figure, for example, acts as if it had an own perspective, so it can surprise you by its answers and actions, and when it also seems to have some own interests and motives and it also seems to react with suffering or joy to given situations – i would say we HAVE to ascribe to it being a real person and having real sentience. as long as it exists, at least. and we would intuitively empathize (if we do empathize normally) and take its wellbeing into account in our ethical considerations.

this means a really good simulation is no simulation anymore. it’s a created reality.

also, the issue of complexity creating consciousness was already posted in this thread:


Interesting, so you base the ability to perceive suffering on your own experiences of suffering? You compare the situation of a similar entity to a situation in which you suffered, and look for the kind of visible behaviour you displayed when suffering. If those signs of suffering are present, our estimated probability that the entity is suffering increases. If they are not present, that probability decreases. Outside of the realm of personal subjective experience, the epistemology of suffering (or of any feelings at all, really) seems to be restricted to a probabilistic (Bayesian) approach.

So, if an entity consistently showed signs of suffering when we expect it to suffer, then our subjective probability that it actually suffers should converge towards 100%. Therefore, a sufficiently sophisticated actor, or zombie, could fool us into believing that he really suffered without actually suffering, in theory, at least. However, I expect that displaying such behaviour without the usually associated situations or emotions should be really hard. Consequently, I think that the probabilistic approach is quite justified, even if it’s not perfect.
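
A minimal sketch of that Bayesian updating (the prior and the likelihoods are invented toy numbers, purely for illustration):

```python
# Toy Bayesian update: P(suffering) after repeatedly observing signs of suffering.
p_suffering = 0.5           # prior probability that the entity actually suffers
p_sign_if_suffering = 0.9   # P(shows sign | actually suffering)
p_sign_if_not = 0.2         # P(shows sign | not suffering), e.g. a skilled actor

for observation in range(10):   # ten consecutive observed signs of suffering
    numerator = p_sign_if_suffering * p_suffering
    denominator = numerator + p_sign_if_not * (1 - p_suffering)
    p_suffering = numerator / denominator   # Bayes' rule
    print(f"after sign {observation + 1}: P(suffering) = {p_suffering:.4f}")

# The probability converges towards (but never reaches) 100%, which is why a
# sufficiently sophisticated actor could in principle still fool us.
```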

> So, if an entity consistently showed signs of suffering when we expect it to suffer, then our subjective probability that it actually suffers should converge towards 100%. Therefore, a sufficiently sophisticated actor, or zombie, could fool us into believing that he really suffered without actually suffering, in theory, at least.

yes, but it’s not just the immediate behaviour of the other which gives us grounds for believing they are suffering. also, what is meant by behaviour is very complex. there can be signs of suffering which are kind of subtle. and there can be no behavioural signs at all but, like i said earlier, contextual ones. so if someone is experiencing a situation which normally produces suffering, and we know this person can suffer and is conscious at this moment, we would tend to believe this person suffers now, even if we don’t see any suffering-behaviour (like a spartan who suppresses it) or if we just don’t know anything about their behaviour (say, we just receive word of the overall situation, but nothing behavioural).

of course, we can be fooled in many instances. but in principle it is always possible to figure out if someone suffers. “in principle” means we don’t always have enough knowledge to do so, but if we had, we could find out. and this kind of knowledge doesn’t necessarily need to be direct access to their qualia, as if we were now this other person and experienced everything from their perspective.

if we would like to know whether some actor is tricking us into believing she is suffering, we might not purely watch her normal behaviour, but investigate the matter. we could try different approaches to rule out the possibility that this person is only acting. for example we could watch her in a situation in which she might feel alone and unwatched, and we could wait for or induce some kind of harm, and watch her behaviour (of course, this is an unethical scenario). if she had no pain-reaction when feeling unwatched and getting harmed, we would know she really doesn’t suffer.

if someone could act all the time, it would seem this person had no real life. so if we assume “acting”, we certainly would find some part of this person’s life where she is not acting. it’s just an example of how we could investigate. it might be a cyborg programmed to always show this kind of behaviour, no matter if watched or not. but then we would not call that “acting”, but something else.

also we should keep in mind that suffering-behaviour is not only shown in crying and such direct behaviour, but also in more complex conduct: someone would tend to avoid the source, or at least the conscious feeling, of the suffering in any way, if it’s too uncomfortable.

so if a “robot” were capable of crying every time he gets bumped, but would in no way make any effort to avoid the source of harm (given that it would be able to do so), we would tend to say it is not really suffering, it’s just mimicking it. likewise, if a robot (or a zombie, or a real person) would repeatedly say he loves another, but there is no sign of love in his overall behaviour, we would tend to say this person is lying or really confusing love with something else (like with being in love or so). so the more complex kind of conduct actually tells us very much, and it’s not only about the direct expression.

although feeling certainly seems to have some physiological responses and a different qualia compared to “purely logical conduct” (which could also try to avoid harm, because it seems logical to do so. but then again, the logic has an end somewhere.)
