Sockrad Radivis Interview II: Culture, Ethics, and Impact

Ok, it’s time for another interview with Radivis about the ideas behind his sci-fi story fragment “Guidance Withdrawal”. What struck me when reading it was that it felt like some kind of imperfect utopia. There still seems to be pain, fear, and suffering, judging by all the drama that Adano and Kathatus go through.

Actually, there is a perfect utopia in Guidance Withdrawal, and that’s the Ecstasium, whose members experience perpetual bliss without any pain, fear, or suffering. It is a very important and justified question why the Ecstasium doesn’t extend to all of the Canonical Coherence, the political body that encompasses the known world of that setting. In the earlier stages of my thinking, maybe around 2010, I might have written such a story, but the longer I thought about the matter, the less I could defend the primacy of the values of happiness and freedom from suffering against competing values like knowledge and freedom per se. In fact, this is enshrined in the world of Coherence as the event I call the “Fall of Heaven”, which serves as the proof that the pursuit of the reduction of suffering is not suited as a universal value system.

That sounds like a stark claim. What’s the justification for it?

It comes down to the difference between classical utilitarianism and negative utilitarianism. In classical utilitarianism the imperative is to maximize happiness (with suffering counting as negative happiness), while negative utilitarianism has the imperative of minimizing suffering. Both philosophies appear reasonable at first. But the latter seems to ignore a large part of what humans find valuable: happiness. Negative utilitarianism seems to claim that even the greatest amount of happiness cannot compensate for the slightest amount of suffering. That seems absurd. It would mean that we must refrain from pleasurable activities if there’s even the slightest chance that something inconvenient might happen. And I have the intuition that there’s a way to enshrine some reasoning along those lines in a kind of mathematical proof, which would at least favor classical utilitarianism over negative utilitarianism.
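To make the contrast concrete, here is a deliberately simplified formalization in my own notation (nothing from the story itself): write $h_i \ge 0$ for the happiness and $s_i \ge 0$ for the suffering of individual $i$.

```latex
% Classical utilitarianism (CU) vs. negative utilitarianism (NU),
% in deliberately simplified notation:
%   h_i >= 0 : happiness of individual i
%   s_i >= 0 : suffering of individual i
\[
  U_{\mathrm{CU}} = \sum_i \left( h_i - s_i \right),
  \qquad
  U_{\mathrm{NU}} = -\sum_i s_i .
\]
```

Under $U_{\mathrm{CU}}$, a large enough gain in happiness can outweigh a small amount of suffering. Under $U_{\mathrm{NU}}$, happiness doesn’t appear at all, so no increase in the $h_i$ can ever offset even a marginal increase in the $s_i$; that strict priority of suffering over happiness is exactly what strikes me as absurd.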

In other words, you don’t have a strict proof right now, but negative utilitarianism just seems wrong to you for various reasons?

At the moment we don’t have a framework for creating mathematical proofs of ethical statements. The point of Coherence is that such a framework might be found, and accepted, in the distant future. My basic idea is that we could treat certain ethical statements as axioms of an axiom system, like the axioms of set theory. If we find that an ethical axiom system can lead to a contradiction, we need to reject it, just as we reject axiom systems in set theory that provably contain a contradiction. In the best case, only one reasonable ethical axiom system without contradictions would remain. That’s the basic premise of my sci-fi setting named Coherence.
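As a toy illustration of what rejecting an inconsistent ethical axiom system could mean mechanically (this is my own sketch with invented names, not anything from the story): in a proof assistant like Lean, one can postulate ethical principles as axioms and check whether they jointly prove a contradiction.

```lean
-- A toy sketch with invented names: two postulated "ethical axioms"
-- over an abstract type of actions.
axiom Action : Type
axiom permissible : Action → Prop

-- Axiom 1: at least one action is permissible.
axiom some_permissible : ∃ a, permissible a

-- Axiom 2: no action is permissible (deliberately chosen to clash with Axiom 1).
axiom none_permissible : ∀ a, ¬ permissible a

-- Together the two axioms prove False, so this "ethical axiom system"
-- is inconsistent and would have to be rejected.
theorem system_inconsistent : False :=
  Exists.elim some_permissible (fun a ha => none_permissible a ha)
```

Real candidate axioms would of course be far subtler; the point is only that consistency checking of this kind is a mechanical, well-understood procedure.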

And you seem to assume that artificial superintelligences will succeed where humanity has failed until now. Is that a reasonable assumption?

There are a lot of unsolved problems for humanity. It may very well be that some of those unsolved problems turn out to be unsolvable in principle, or just silly misconceptions. In any case, for the class of problems that humanity hasn’t solved yet but that do have a solution, artificial superintelligence is our best shot at actually solving them. It may very well turn out that the quest for a unique universal value system is actually misguided. However, before there is an actual proof that it’s misguided, we should pursue that hope, especially since it could be our only chance for lasting universal peace and unity.

And yet the indomitability training that Adano goes through does appear quite barbaric to me. It seems ludicrous that there wouldn’t be a much better way to strengthen one’s character in the far future than to go through a program that we would call torture nowadays.

Of course there are more effective ways to strengthen one’s character, but the technology for direct character augmentation is considered too disruptive. It doesn’t come with a smooth progression in character, but causes what many consider a discontinuity in one’s personality. That’s why exaltationists often reject such methods and prefer more “primitive” ones. In a world in which people can have anything at any time, the valuable pursuits that remain are the challenges that one accepts for oneself. Going through a torture program is surely an extreme challenge, one that few are willing to put themselves through.

It appears to me that there’s the motive of seeking status behind such activities. Am I wrong in that assumption?

You are right. Seeking status has been a common human motive throughout the ages, and it’s still common in the Canonical Coherence. After all, status is one of the few remaining scarce goods in a world full of abundance. Many exaltationists, though, don’t care about status and pursue their training merely for their personal character progress. Seeking status is certainly an important part of Adano’s motivation, while characters like Artin Sherlock hardly care about status at all.

Isn’t it a trick to inflate one’s status through a kind of signalling, by engaging in activities like indomitability training?

If it’s a trick, it’s certainly not a cheap trick. One certainly needs at least some strong motivation to willingly agree to being tortured. In our age there are certain kinds of people who effectively do that: members of elite military units, or spies who receive brutal training that most people would classify as torture. In their own peer groups, these people do gain status that way, but it’s not like they represent the elites of our “normal” society. Similarly, those who participate in indomitability training are seen as rather eccentric in the general society of the Coherence. They don’t enjoy any kind of special privilege.

If the exaltationists represent a special part of the society of the Coherence, what does the regular part of that society look like?

That’s a very good question. There’s a bipartition of society into the Ground State and the Cultural Forest. The society of the Ground State is characterized by a maximum of freedom: anything goes. The people living there are essentially anarchists and engage in whatever activity they please, comparable to the people of the Culture in Iain M. Banks’s novels. The Cultural Forest is a different matter: its members agree to stick to the constitution of their respective culture, which can range from super abstract and generic to absolutely elaborate, complex, and byzantine. The Cultural Forest does represent the majority of the Canonical Coherence, though. Ressims start in the Ground State by default, so that they can get accustomed to the general culture of the Canonical Coherence without being forced into a cultural corset that might constrain them prematurely.

That’s quite fascinating! I am kinda intrigued by the concept of these “ressims”. That’s an abbreviation for resurrected sims, right?

Correct. There are many simulations going on in the Canonical Coherence whose inhabitants don’t know that they live in a simulation. Nowadays I would call those simulations “occult”, in contrast to “lucid” simulations in which everyone knows that they are currently in a simulation. Resurrecting people who die in occult simulations is a standard practice in the Canonical Coherence that serves as a kind of compensation for being initialized in a world that is far less advanced than the main part of the Coherence.

To me it seems like quite a questionable practice to create simulated entities, even if they are “compensated” after their “death”. Why do something like that if there are so many sentient entities already living fulfilling lives?

The basic argument for that is that freedom is one of the core values of the Canonical Coherence. That includes the freedom to create new kinds of sentient entities, with any kind of initial conditions. It could be argued that those simulated entities, or sims for short, are instrumentalized by the creators of the simulation they live in. To counter that instrumentalization, the sims are granted citizenship of the Canonical Coherence after their death in their simulation.