you are completely right: if we cannot know whether we are simulated or not, the requirement for ethics is the same. if i ran experiments with rats and gave them different environments, one which i consider a rat paradise and one in which resources are scarce and the space is overcrowded, it becomes predictable for me as a rat-god in which simulation violence will occur, and thinking about ethics would make sense for me and not for the rats. if simulators have superpowers, they could play with living entities the way we play with dolls. and if a simulated entity decided to act ethically good, any simulator-god could prevent it from doing so. ethics for gods and ethics for dependent entities can't be the same.

to approach the reparator problem, i would first separate and analyse all the inner perspectives of the agents. imagine the simulators start simulations of two similar worlds with similar entities and a similar "game". the only difference is the belief of the entities. in one simulation all the entities believe that they will cease to exist at death; in the other simulation all the entities believe what is really going to happen: that they will have a blissful afterlife and will be overcompensated for all their suffering in life. i hope it becomes obvious that the same harmful events will be experienced differently in each world: losses, illnesses, wounds, violence, wars, poverty and so on will be hard to endure in the first world and could be perverted into a welcome experience in the second. if the simulators are wise, they will compensate the first-world entities more, because the amount of suffering they experience will be higher. but there are more serious consequences for the simulated entities: if the entities of both worlds have the impression that they have free will and could cause harm and suffering themselves, then both require ethics.
and again, i think it becomes obvious what a profound difference in ethics we will discover if the entities of both worlds follow the inner logic of their belief. the first world has to develop a high standard to avoid harm and violence against each other. the second world would consequently develop quite the opposite: it would have to consider it an ethically good action to cause maximum harm! and all entities would be eager to experience suffering because of their religious conviction of reparation and compensation. we can find elements of this in every religion, and in the fictional construct of the klingon culture: the warriors are eager to kill and to be killed in a brutal fight, for they will be rewarded in sto'vo'kor.
but the darkest ages for ethics will occur in simulations where the entities believe that they are powerless and have no free will. in the first world i mentioned, where they don't believe in gods, they could explain every harm they caused by their nature and by the seemingly brutal natural setting they live in. in the second world they could explain every harm as god's will, god's punishment, god's ordeal, or the price for a good afterlife. but in neither world would the entities feel responsible for their actions. so from the perspective of the simulated entities, the best ethics would be developed when they believe that they have free will and could cause harm, and when they believe that they cease to exist at death. the situation would only change if they had proof of the existence of the simulators. but proof is impossible. even if they were all sent to the blissful afterlife and then sent back, they could never know whether they had all just had a dream… and what remains is always the same: belief.
for now i will end here, because i need more time to think about the perspectives of the simulators. please tell me whether my analysis of the perspective of the simulated entities is consistent or not.
that means that it feels different for me when somebody attacks me in rage and steps on my toe on purpose than when somebody steps on my toe by mistake in a crowd. although the consequence, that my toe hurts, is the same, i would not want to report the person in the crowd to the police. and i am glad that our law respects motives and intentions as well as consequences. but you can go a step deeper and take all the feelings into account. in the aforementioned example, the violent attack creates a feeling of fear in me, and i feel hurt in my dignity, as additional consequences beyond the physical pain. this way you can explain every moral value of an action with consequentialism, because it already includes the consequences of intention. but a consequentialism that defines itself by excluding motives and intentions is flawed.
try this every day around the clock (with a simulation it is possible): enjoy the flower and the sunrise nonstop… and then try the same with a miner or a shift worker who has suffered for a long time from missing the sun and beautiful nature. the experience and the amount of happiness would be very different.
how could we know that this isn't already the case with us, here and now…
for many years i have asked myself a contradictory kind of question: is it really possible to win something by causing harm?