That is indeed a very scary thought. Do you think something like that “Black War” could actually happen in our world?
Any conflict over species dominance will be fought with the most brutal means available, and if ASI is involved, those means will be truly horrific. There are three scenarios that would avoid a Black War:
- Humanity holds on to complete control over any AI forever. This might conceivably be possible, but I don’t see it as a desirable option.
- AI manages a quick coup and takes control of the world without much bloodshed. This option becomes harder the further humanity spreads out into space. While some solar systems might experience a peaceful transition away from human dominance, others will ramp up their defenses against any AI rebellion.
- Humanity refrains from engaging in an existential war over species dominance, because the prize doesn’t seem worth the cost. Even if a majority of humans think that way, the rest will probably form a resistance willing to fight to the bitter end. A Black War might be avoided, but a devastating civil war within humanity might be the price of avoiding it.
Given the problems with these scenarios, I think there is a significant chance of a Black War actually happening.
What could humanity do to prevent such a war?
I’m not entirely certain about that, but talking seriously about the prospect of a Black War would be a reasonable start.
You don’t seem to have really started warning humanity of a Black War. Is this interview the first time you’ve mentioned this danger publicly?
Yes. Initially, I tried to devise a world that is as utopian as possible without being unrealistic. But the prospect of a Black War made me rethink that stance and include some kind of warning to humanity. This warning isn’t completely new, though. I have been influenced by the book “The Artilect War” by Hugo de Garis. He envisions the conflict over species dominance mostly as a conflict between different factions of humans, though. In Canonical Coherence that might be true on a certain level, too, but the Black War has ASIs fighting on both sides, out of necessity.
Aren’t atrocities a rather human concept, though? Wouldn’t ASIs be more civilised and resort to more moderate means of conflict?
Potentially, yes. But you need to consider that the Black War is an existential matter for both sides, and the losing side will consider it necessary to use every available means to win, even if that includes atrocities of unprecedented scale.
That is both shocking and interesting, but this is not the direction I expected this interview to move towards. When I read your work, it appeared very utopian to me, comparable to the positive outlook of Star Trek or the Culture series by Iain M. Banks. Does this mean you intend to write darker, more “realistic” sci-fi in the future?
Initially, I perceived a need to contribute more utopian sci-fi, which seems rare enough. However, such stories don’t seem to be truly compelling to most readers, which is why I’ve toned down my level of “utopianism” over the past few years.
I see. It seems that even sci-fi authors can’t fully escape the logic of the slogan “if it bleeds, it leads”.
I really don’t want to play the role of the alarmist here, but these issues are serious and should be addressed seriously. Humanity can certainly learn by making all kinds of mistakes, but my hope is that some mistakes can be avoided by foresight.
All right, let’s get back to your futuristic setting. What struck me as special was the deep interconnection between humans and AIs there. The humans in Guidance Withdrawal have the option to use AI symbionts, and also use the guidance of the ASIs you call the “wise”. Wouldn’t either of those options suffice?
Suffice for what? The different AIs serve different purposes. The AI symbionts solve practical problems: safety, emotional regulation, information gathering, communication, economic transactions, and other tasks. They are AI agents typically operating at a level comparable to that of their host, and they are typically not “wise”. The purpose of the wise is to provide ethical guidance, and they are the only ones who can provide it to a fully sufficient degree.
You seem to be implying that ethics is an extremely hard, but ultimately solvable problem. Is that so?
That might be the most daring premise of Canonical Coherence, and it’s actually the foundational premise of that world. Everything else is basically an exploration of the ramifications of truly advanced AI, taken to the max. I am not sure how reasonable the assumption of a provable and unique Universal Value System Theorem is for our world, but I propose that it’s reasonable to hope that such a theorem actually exists.
Why do you think it’s reasonable to hope for something that might not even exist?
Because it would solve so many problems if it actually existed. We can decide to eventually give up on that hope, but we shouldn’t do so prematurely. If even the most advanced ASI this universe can support gets no closer to finding and proving such a Universal Value System Theorem for millennia, we might reconsider our hopes and strategies, but it would be a mistake to do so sooner. After all, if we manage to find that theorem, it would enable lasting cosmic peace.
I see. So, I guess you wanted to depict the consequences of such a theorem actually being found, right?
Yes, Canonical Coherence is an exploratory work that tries to figure out how life might really be if a discovery of such magnitude were actually made. And that does make it special indeed.