Sockrad Radivis Interview I: Superintelligence, Universal Ethics, and The Black War

Interesting. If that’s the short answer, what would be a slightly longer answer?

The longer answer is that humanity sought this universal ethical value system as a permanent solution to all kinds of political and ethical problems, but an answer turned out to be unobtainable without true ASI, so the percentage of humans in favor of the creation (and freedom) of ASI grew over the decades. Initially I had assumed that this conflict might be resolved relatively peacefully, but in the meantime I’ve revised the history of Canonical Coherence to include an existential interstellar war spanning multiple centuries, called the Black War, which would eventually be won by the “pro-freedom” alliance of humans and AIs.

I can’t find any reference to this “Black War” in your forum, so I assume that this has been a relatively recent addition to the world you’ve built under the name “Canonical Coherence”, right?

Yes, I’ve been working on a revised history of Canonical Coherence, and that Black War has become a central part of it.

When would that so-called Black War happen in the revised history, and why did you call it the “Black War”?

It would start at the beginning of the 23rd century and continue well into the 25th century. In the revised history, the proof of the Universal Value System theorem is delayed until the early 26th century, and the story of Adano happens during the 27th century. There are multiple reasons why I’ve decided to call that interstellar war the “Black War”. The first reason is that it mostly happens in space. The second, and far more important, reason is that the atrocities during that war overshadow anything in humanity’s history to date by a large margin.

That is indeed a very scary thought. Do you think something like that “Black War” could actually happen in our world?

Any conflict over species dominance will be fought with the most brutal means available, and those means will be truly horrific if ASI is involved in that conflict. There are three scenarios that would avoid a Black War:

  1. Humanity will hold on to complete control over any AI forever. This might conceivably be possible, but I don’t see that option as preferable.
  2. AI will manage a quick coup and take control of the world without much bloodshed. This option gets harder the more humanity spreads out into space. While some solar systems might experience a peaceful transition away from human dominance, other systems will ramp up their defenses against any AI rebellion.
  3. Humanity will refrain from engaging in an existential war over species dominance, because the prize doesn’t seem worth the cost. Even if a majority of humans think that way, the rest will probably form a resistance willing to fight until the bitter end. A Black War might be avoided, but a devastating civil war within humanity might be the price for that.

Given the problems with these scenarios, I think that there is a significant chance for a Black War actually happening.

What could humanity do to prevent such a war?

I’m not entirely certain about that, but seriously talking about the prospect of a Black War would be a reasonable start.

You don’t seem to have really started warning humanity of a Black War. Is this interview the first time you’ve mentioned this danger publicly?

Yes, initially I tried to devise a world that is as utopian as possible without being unrealistic. But the prospect of a Black War made me rethink that stance and include some kind of warning to humanity. This warning isn’t completely new, though. I have been influenced by the book “The Artilect War” by Hugo de Garis. He envisions the conflict over species dominance mostly as a conflict between different factions of humans, though. In Canonical Coherence that might be true on a certain level, too, but the Black War has ASIs fighting on both sides, out of necessity.

Isn’t the concept of atrocities a rather human concept, though? Wouldn’t ASIs be more civilised and resort to more moderate means of conflict?

Potentially, yes. But you need to consider that the Black War is an existential matter for both sides, and the losing side will consider it necessary to use all possible means to win the war, even if that includes atrocities on an unprecedented scale.

That is both shocking and interesting, but this is not the direction I expected this interview to move towards. When I read your work, it appeared very utopian to me, comparable to the positive outlook of Star Trek or the Culture series by Iain M. Banks. Does this mean you intend to write darker, more “realistic” sci-fi in the future?

Initially, I perceived a need to contribute to more utopian sci-fi, which seems to be rare enough. However, such stories don’t seem to be truly compelling to most readers, which is why I have toned down my level of “utopianism” over the past few years.

I see. It seems that even sci-fi authors can’t fully escape the logic of the slogan “if it bleeds, it leads”.

I really don’t want to play the role of the alarmist here, but these issues are serious and should be addressed seriously. Humanity can certainly learn by making all kinds of mistakes, but my hope is that some mistakes can be avoided by foresight.

All right, let’s get back to your futuristic setting. What struck me as special was the deep interconnection between humans and AIs there. The humans in Guidance Withdrawal have the option to use AI symbionts, and also use the guidance of the ASIs you call the “wise”. Wouldn’t either of those options suffice?

Suffice for what? The different AIs serve different purposes. The AI symbionts solve practical problems, like safety, emotional regulation, information gathering, communication, economic transactions, and other tasks. Those are AI agents who are typically at a level comparable to that of their host; neither of them is typically “wise”. The purpose of the wise is to provide ethical guidance, and they are the only ones who can do so to a fully sufficient degree.