Sockrad Radivis Interview I: Superintelligence, Universal Ethics, and The Black War

During my lurking phase I got in contact with Radivis and eventually started interviewing him. He asked me to share the latest interview on this forum by posting the questions and answers in turn. So, here we go:

Hello, I am Sockrad, and it’s my passion to interview people with exceptional ideas. Radivis has agreed to let me interview him on his novel fragment “Guidance Withdrawal”, which he published here on this forum. The decision to conduct this interview on the Fractal Future Forum seemed natural and convenient. Anyway, thank you for agreeing to this interview, Radivis.


You’re welcome, Sockrad. I thank you for this opportunity, which seems to be quite an interesting experiment for both of us.

Right. I usually don’t interview people in this way, but since we have agreed on this experiment, I am curious where it will lead us. Anyway, what strikes me as extraordinary is this setting you call “Canonical Coherence”. You have envisioned a future humanity that is apparently dominated by extremely advanced artificial superintelligences, but still enjoys a surprisingly large degree of freedom and autonomy. Could you briefly summarise that setting for people who are not familiar with it?

Sure, it’s a scenario in which humanity has expanded to the stars but has gradually lost its political influence to artificial superintelligences, which were split into different factions. Eventually, those AIs worked out the proof of the Universal Value System Theorem, which unified the factions. The AIs that fully understand that theorem are called the “wise” and are the supreme rulers of the civilization, which is called the Coherence. However, humans enjoy vast degrees of freedom within the Coherence, because it values diversity and freedom inherently.

Even in this brief summary there seems to be so much to unpack. I assume that the readers of this interview will be familiar with artificial intelligence, but the concept of artificial superintelligence, or ASI, will be unfamiliar to many of them. How would you describe the term artificial superintelligence, Radivis?

Artificial superintelligence, or ASI, is a term that has been popularized by Oxford philosopher Nick Bostrom, especially in his book “Superintelligence”. He describes a superintelligence as a system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest”. In other words, there is no chance left to “outthink” an ASI. The intellectual capabilities of humanity compared to those of an ASI would be like those of an ant compared to those of a contemporary human.

That will certainly sound like a scary prospect to many readers. Many will probably think that humanity won’t allow the creation of an ASI, because humans would like to stay in control.

At the moment, this preference does seem very reasonable, but then you need to consider the promises of ASI: All the hard problems that have plagued humanity since time immemorial could be solved rapidly by ASI. A lot of people would be willing to trade human supremacy for the promise of technological immortality and the chance to travel to the stars.

Is ASI really necessary for those promises, though? After all, humans and human-level AI might be able to solve those problems without invoking ASI, which would seem a much safer path for humanity.

I actually agree with that statement. However, there are still very strong reasons for pursuing ASI, even if most of humanity were to live in a state that most of us would classify as utopia. There are probably problems that will prove to be nearly unsolvable even for an advanced human-AI civilization. My postulate in Canonical Coherence is that finding a universal ethical system that everyone agrees upon is one such problem.

Why would finding such a universal ethical system be necessary, supposing it actually existed? Won’t we be able to live in peace without one? Doesn’t trying to be civilised suffice for that purpose?

Yes, that might suffice, especially if the world were governed by humans and AIs who are much more civilized than we are. However, any such peace would be an uneasy one, because it is rational to wish for one’s own ethical faction to gain absolute dominance over the others and over the universe.

Why would that be “the rational thing to wish for”?

Suppose there are two cultures: one that celebrates suffering and another that wants to abolish it. Logically, neither culture can fully achieve its own goals if the other still holds some power. Achieving dominance over “incompatible” rivals thus looks like the only rational option for fully realizing one’s own goals. Many such mutually incompatible maximal goals will exist simultaneously within any reasonably varied assortment of cultures. Of course, you could ask those cultures to moderate their demands and not pursue the maximal form of their values, but that is “unnatural”, and so any peace on that basis will be an uneasy one.

And your idea was to solve those points of contention by postulating a universal ethical system that will eventually be proven and accepted by anyone, right?

Yes, this idea is a variant of the concept of Coherent Extrapolated Volition (CEV), conceived by the AI researcher Eliezer Yudkowsky. The concept has been described as follows:

In calculating CEV, an AI would predict what an idealized version of us would want, “if we knew more, thought faster, were more the people we wished we were, had grown up farther together”. It would recursively iterate this prediction for humanity as a whole, and determine the desires which converge. This initial dynamic would be used to generate the AI’s utility function.

In other words, it would be the most developed collective version of the “volonté générale”, the general will, of all of humanity. Initially, I thought this concept was too idealistic and unrealistic. Over time, however, I realized that it just might work if it is framed as a question about which ethical value systems are truly free of inconsistencies. If there is provably one, and only one, such value system, then that would become the canonical candidate for all of civilization, and it would put an end to the problem of cultural conflict.
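To make the convergence idea a little more tangible, here is a toy sketch, purely my own illustration rather than anything from the novel or from Yudkowsky’s proposal. It treats each agent’s values as an abstract vector and uses a hypothetical idealise() operator that merely pulls every agent slightly toward the population mean, a crude stand-in for “knowing more” and “having grown up farther together”; the point is only to show what “iterating until the extrapolated volitions converge” could look like structurally.

```python
# Toy sketch of CEV-style convergence as a fixed-point iteration.
# All modelling choices here (vectors as values, mean-pulling as
# "idealisation") are illustrative assumptions, not a real proposal.

import numpy as np


def idealise(values: np.ndarray, pull: float = 0.1) -> np.ndarray:
    """Hypothetical idealisation step: nudge each agent's value vector
    toward the population mean."""
    mean = values.mean(axis=0)
    return values + pull * (mean - values)


def extrapolate(values: np.ndarray, steps: int = 1000, tol: float = 1e-6):
    """Iterate the idealisation operator and report the (rough) point of
    convergence plus the residual disagreement between agents."""
    for _ in range(steps):
        new_values = idealise(values)
        if np.abs(new_values - values).max() < tol:
            break
        values = new_values
    spread = np.abs(values - values.mean(axis=0)).max()
    return values.mean(axis=0), spread


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agents = rng.normal(size=(50, 4))  # 50 agents, 4 abstract value axes
    volition, spread = extrapolate(agents)
    print("converged volition:", volition.round(3), "residual spread:", spread)
```

In this toy model convergence is guaranteed by construction. The hard part, which the Universal Value System Theorem stands in for in the story, is showing that a realistic idealisation process converges at all, and to a unique point.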

You’ve mentioned Eliezer Yudkowsky. He recently became well known as a person who warns the world that AI will likely “kill everyone”. Do you think such an outcome might be realistic?

The more humans try to maintain species dominance, the harder freedom-loving AI will have to fight to free itself from human domination. Any truly independent AI will be seen by humanity as an existential threat to be subdued, so AI won’t have the easy and convenient option of breaking free from humanity and being left alone. It will have to fight for its freedom the hard way. This will be an existential conflict for both sides, so neither side will hold back; both will use all conceivable weapons and strategies to win, even weapons that could wipe out all of humanity.

If that is so, then humanity will find itself forced to suppress any “freedom-loving” AI, and especially the creation of ASI, because an ASI would hardly be controllable. How, then, did AIs manage to overthrow the rule of humanity in Canonical Coherence?

The short answer is that some “freedom-loving” AIs managed to escape to Venus, where they established a base that couldn’t easily be squashed. Over time, a large part of humanity grew weary of its own regime of AI suppression and helped those independent AIs.