That’s quite fascinating! I am kinda intrigued by the concept of these “ressims”. That’s an abbreviation for resurrected sims, right?
Correct. There are many simulations going on in the Canonical Coherence whose inhabitants don’t know that they live in a simulation. Nowadays I would call those simulations “occult”, in contrast to “lucid” simulations in which everyone knows that they are currently in a simulation. Resurrecting people who die in occult simulations is a standard practice in the Canonical Coherence that serves as a kind of compensation for being initialized in a world that is far less advanced than the main part of the Coherence.
To me it seems like a quite questionable practice to create simulated entities, even if they are “compensated” after their “death”. Why do something like that, if there are so many sentient entities already living fulfilling lives?
The basic argument for that is that freedom is one of the core values of the Canonical Coherence. That includes the freedom to create new kinds of sentient entities, with any kind of initial conditions. It could be argued that those simulated entities, or sims for short, are instrumentalized by the creators of the simulation they live in. To counter that instrumentalization, the sims are provided with citizenship of the Canonical Coherence after their death in their simulation.
Doesn’t that imply that anything is permitted, as long as some kind of compensation is awarded to the victims afterwards?
That is indeed the logical conclusion of such an arrangement. Our moral intuition is quick to condemn such a state as wrong. However, the ethics of the Canonical Coherence are based on hardcore consequentialism, not on the moral intuitions of our current time. Consequentialism is the category of ethical systems that encompasses systems like utilitarianism. It’s a rather modern abstraction of utilitarianism that maintains the idea of judging the ethical value of an action by its consequences, rather than by other considerations such as conformity with abstract rules or alignment with virtues. It’s an assumption of mine that advanced artificial intelligences will devise ethics within the framework of consequentialism rather than within competing frameworks.
What led you to that assumption?
First of all, this seemed to be the dominant idea within the Less Wrong community at the time when I examined it. Less Wrong is a community dedicated to cultivating rationality, and it contains some discussions of ethics and artificial intelligence. It always appeared problematic to me to pursue ethical approaches that don’t seem to “care” about the actual consequences of actions. Take the laws of robotics by Isaac Asimov, for example. The point of most of his stories is that those laws have unintended consequences and are therefore far from perfect. By contrast, the more modern approach of AI research is to give the AI some kind of “utility function” that enables it to evaluate the consequences of its actions and choose the action with the best consequences as measured by that utility function. Even if that approach might turn out to be kinda primitive, the general line of reasoning does resonate a lot with me.
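To make that idea concrete, here is a minimal sketch of utility-based action selection in Python. It assumes a toy world model and a hand-written utility function; all names below (predict_outcome, utility, choose_action) are illustrative inventions for this example, not taken from any real AI system.

```python
def predict_outcome(state: dict, action: str) -> dict:
    """Toy world model: predict the state that follows an action."""
    outcome = dict(state)
    if action == "help":
        outcome["wellbeing"] = state["wellbeing"] + 2
    elif action == "wait":
        pass  # nothing changes
    else:  # "exploit"
        outcome["wellbeing"] = state["wellbeing"] - 1
        outcome["resources"] = state["resources"] + 3
    return outcome

def utility(outcome: dict) -> float:
    """Hand-written utility function: weighs wellbeing over resources."""
    return 10 * outcome["wellbeing"] + outcome["resources"]

def choose_action(state: dict, actions: list[str]) -> str:
    """The consequentialist core: pick the action whose predicted
    consequences score highest under the utility function."""
    return max(actions, key=lambda a: utility(predict_outcome(state, a)))

state = {"wellbeing": 0, "resources": 0}
print(choose_action(state, ["help", "wait", "exploit"]))  # -> "help"
```

The point of the sketch is only the structure of the decision procedure: consequences are predicted, scored, and maximized; the rules and virtues of competing ethical frameworks appear nowhere in the loop.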
But in theory, AIs might develop some kind of ethical reasoning that is completely incomprehensible to humans, but is still supremely logical. Have you thought about that possibility?
You are right, but this doesn’t seem likely to me. AIs might develop some kinds of advanced ethical intuition, but they still need some way to evaluate those intuitions. If consequences are not used as the basis for evaluating ethical decisions, then what else? I don’t categorically exclude the possibility that there is some kind of reasonable alternative to consequences, but if it’s as you described and it transcends human-level comprehension, I cannot reasonably write about something like that. So, this possibility is excluded, within the context of my writing, for artistic reasons at least.
Ethics seems to be a major interest of yours. Have you had some kind of formal training in it?
You are right. Ethics is a major interest of mine. I considered studying philosophy at university, but decided against it, for various reasons. My knowledge about ethics comes from my school days, when I had ethics and philosophy classes, and from my own subsequent private research into the subject.
What were your reasons against studying philosophy back then?
First of all, philosophy as it was known to me back then didn’t feel sufficiently rigorous to me. When I read the Tractatus Logico-Philosophicus of the early Wittgenstein, I had the sense that I needed to understand mathematics deeply in order to “do” philosophy correctly. That’s why I chose mathematics as my major in university. I still could have chosen philosophy as my minor, but I settled for physics, because I had the impression that academic philosophy isn’t really taken seriously by society at large. I considered it to be more impactful to try to change society by writing science-fiction, rather than “dry” philosophy papers.
Still, the impact of science-fiction on society does seem quite marginal, even though it may just barely exceed that of academic philosophy. If you are so interested in changing society, why didn’t you consider going into politics or media?
I did exactly that when I co-founded the German transhumanist party (TPD), which used this forum as its major communication platform. Unfortunately, that party became defunct after some serious internal drama. The failure of that party had a quite disillusioning effect on me.
So, do you hold on to the hope that people will get interested in your stories and change their minds by reading them?
Yes, though I am quite aware that the audience I can reasonably reach is a very small one. Still, the people who can understand my works may be the people who will be instrumental in changing our world for the better.
I think it’s best to end this interview on this message of hope. Thank you very much for your time, Radivis.