Sockrad Radivis Interview V: Into Hell

Were those fears justified?

There was no universal law against creating temporary or permanent copies of persons for various purposes, so those fears weren’t completely unjustified. However, shortly before its defeat, the Seraphim System threatened to create hell simulations in which its own protected humans would be tortured, with the blame for those simulations placed on the invading Superalliance forces. The Superalliance took these threats quite seriously: since the war had ceased to be existential for it towards the end, it was eager to minimize the harm the war caused and entered into negotiations to prevent the actual creation of those hell simulations. These facts were hidden from the civilian population of the Seraphim System, so their trust in the System wasn’t broken by the threats.

That’s horribly twisted! How could AIs that were bound to Synhumanism even be able to consider such acts?

The desire to maintain human dominance was turned into an axiom of Synhumanism. It had absolute priority, so human dominance had to be defended at all costs, even if human rights had to be violated for that higher purpose. That was seen as a necessary safeguard against interstellar colonies weakening the idea of human dominance. As long as there was any considerable chance for the Seraphim System to win the war through some kind of breakthrough anywhere in the Zone, it was bound to thwart the threat of the Superalliance in any way that promised to be effective.

The designers of the Seraphim System should have seen such a possibility coming. Anyway, you mentioned the Cosmoshield peacenet earlier. What is that exactly?

A peacenet is a technological system that is supposed to prevent aggression. It is based on safeguards called inhibitors: specialized AGIs that can deactivate or inhibit the AGIs controlled by that peacenet. Since those inhibitors are so critical to the proper functioning of the system, they also control each other. The Seraphim System itself is structured as a peacenet based on the directives of Synhumanism. Cosmoshield, in turn, is the peacenet the Superalliance used to prevent violent conflicts during and after the Black War. Since the V factions all had vastly different value systems, a more or less “democratic” framework called the Prestige Accords served as the basis for the operation of Cosmoshield. The Prestige Accords also contained rules regarding how the V factions could split up their “cosmic endowment” among themselves.
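
To make that structure a bit more concrete, here is a minimal Python sketch of a mutual-control arrangement along those lines. It is only an illustration of the idea, not anything specified in the story: the class names, the methods, and the ring-of-oversight topology are all invented for this example.

```python
# Hypothetical sketch of a peacenet's inhibitor structure: inhibitors
# can deactivate the agents they watch, and they also watch each other,
# so no safeguard is itself unsupervised. All names are illustrative.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.active = True


class Inhibitor(Agent):
    """A specialized AGI that can deactivate or inhibit agents it watches."""

    def __init__(self, name: str):
        super().__init__(name)
        self.watched: list[Agent] = []

    def watch(self, agent: Agent) -> None:
        self.watched.append(agent)

    def inhibit(self, agent: Agent, reason: str) -> None:
        # An inhibitor may only act on agents it actually watches,
        # and only while it is itself still active.
        if self.active and agent in self.watched:
            agent.active = False
            print(f"{self.name} deactivated {agent.name}: {reason}")


# Inhibitors controlling each other in a ring, plus one ordinary AGI.
a, b, c = Inhibitor("Inhibitor-A"), Inhibitor("Inhibitor-B"), Inhibitor("Inhibitor-C")
a.watch(b); b.watch(c); c.watch(a)

worker = Agent("controlled-AGI")
a.watch(worker)
a.inhibit(worker, "violated a peacenet directive")
```

The key structural point is in the wiring, not the classes: because the inhibitors watch each other, any one of them can be shut down by the others, which is what distinguishes a peacenet from a single unchecked controller.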

There’s a lot to unpack here. So, aren’t you just saying that the Black War was fought between two entities that are called “peacenets”? Why do I get the feeling that the term “peacenet” is a euphemism?

Peace through control is an inherently violent act, since it restricts freedom. As long as there is only one peacenet, there can be peace, even if it takes the form of a dystopian tyranny. Peacenets only prevent violence within their own area of control. They don’t disarm themselves in the face of enemy threats, so yes, peacenets can go to war. The peacenet was an early attempt of mine to “solve” the control problem, an amalgamation of several previous ideas. Allegedly, an early Russian sci-fi author already had the idea of preventing violence with some kind of technological mechanism. The idea is obviously powerful, but also quite dangerous.

So, your Black War is an exploration of a failure mode of such a technology?

More generally, it can be seen as a depiction of the ultimate ramifications of the desire to maintain control.

Do you propose that humans should let go of the will to control in order to prevent a Black War?

Just as too much control can cause massive problems, too little control is also problematic. If we resort to an extreme form of laissez-faire and trust everyone and everything to do the right thing for no good reason, we invite the abuse of that trust. Humanity should remain vigilant, but eventually accept sharing control over the cosmos with worthy partners, be those AIs or aliens or uplifted animals.

If I understand you correctly, you want some kind of balanced degree of control. If that is so, what would that look like exactly?

Yes, you are right. Control needs to be balanced so that no side can easily exploit the other. Getting that balance right may be very challenging, and I find it hard to come up with a generally valid prescription for it. Maybe a principle I would describe as “power-proportional distrust” could be a reasonable start: the more powerful an entity is, the more harm it can cause, so the less we should trust it without good reason. This principle would apply to governments, public institutions, corporations, NGOs, wealthy individuals, popular influencers, and of course very advanced AI. Checks and balances should be proportional to the actual power of the respective entity, and they should be designed so that they aren’t easily subverted. Mechanisms like peacenets may play a role here, but they shouldn’t be allowed to become too influential and powerful on their own; otherwise we will run into the kinds of problems I’m trying to warn humanity about.
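
As a toy illustration of that principle, the sketch below scales the demanded level of oversight linearly with an entity’s estimated power. The entities, their power scores, and the scaling factor are all made up for the example; the only point is the proportionality itself.

```python
# Toy model of "power-proportional distrust": checks and balances
# scale with an entity's power. All numbers here are invented.

def required_oversight(power: float, scaling: float = 1.5) -> float:
    """The more powerful an entity, the more oversight it warrants."""
    return scaling * power

entities = {
    "local NGO": 2,
    "popular influencer": 10,
    "multinational corporation": 40,
    "national government": 75,
    "very advanced AI": 100,
}

for name, power in sorted(entities.items(), key=lambda item: item[1]):
    print(f"{name:28s} power={power:3d} -> oversight={required_oversight(power):6.1f}")
```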

That approach is interesting. But as AIs become more powerful, the systems to keep them in check will also need to increase in power. How would you prevent peacenets from becoming too powerful under such dynamics?

Once AIs become exceedingly powerful, they will represent the core of our society, so the question is structurally similar to the question of how to prevent certain organizations from becoming too powerful. At the beginning of the 20th century, that problem was addressed with antitrust laws, but I think the solution must be more general than legislation. We require a general shift in consciousness towards people demanding a radical decentralization of power: if one entity becomes too powerful, withdraw your support from it and support competing entities instead.

Sounds reasonable in theory, but I don’t see many people boycotting Amazon, Microsoft, and Google at the same time.

That’s because you don’t see many organizations calling for such boycotts. But once this movement gets some serious momentum, there will also be a shift in the behavior of most people. Anyway, I think we should get back to the topic of the Black War. There’s also the issue that towards the end of the war it became possible to blow up whole stars. The basic method was to disconnect the innermost core of a star from the rest of it with a shell of neutronium-like material. Of course, deploying such a shell was vastly easier if you controlled the core of the star in the first place. So, the option to blow up a star was essentially used as a threat against attackers once the defensive forces of a star system had been overwhelmed. This changed the dynamics of the war, since that technology made it impossible to conquer an enemy star by force.

Would the attackers really be affected by a supernova, if they possess such advanced technologies?

Fighting within the core of a star while it is going supernova is still pretty lethal, even with technologies that allow you to dive into that core in the first place. But a ship with neutronium-like armor that stayed at a distance from the star equal to the distance between the Sun and Pluto could withstand the supernova. So, you could have an overwhelming armada in the outer system that could dominate the forces of the whole star system, but didn’t dare to come closer due to the threat of a supernova. In such situations, serious negotiations usually started, especially due to the additional threat of the hell simulations.