Sockrad Radivis Interview III: Synthetic Humanism, AI Alignment, Prelude to the Black War

Wouldn’t those superintelligences on both sides have anticipated such an outcome?

They did indeed. But they weren’t strongly motivated to warn humanity about the ramifications of its decisions. It was not their job to nudge humanity toward the “right” choices; in fact, such an act would have counted as deliberate manipulation and would have been prevented by the safety mechanisms of the Seraphim System itself. The higher AIs anticipated what was coming quite early, but they were powerless to stop the chain of events that was essentially set in motion by the establishment of the Seraphim System. A few humans also foresaw what was to come, but they weren’t given much credence, because their predictions felt too pessimistic to the rest of humanity.

Are you trying to depict what is likely to happen if the AI alignment problem is actually “solved”?

My story is not a prediction. It’s just one scenario about the consequences of AI alignment kinda “succeeding”. The obvious alternative to this particular scenario is one in which Synhumanism, or a similar ideology, actually succeeds in maintaining its dominance indefinitely. And I don’t think that is exactly better.

So, how bad is that Black War actually?

Seriously bad. To build their armadas, both sides resort to strip-mining entire planets. And to prevent the other side from strip-mining a planet, they bombard it until its surface temperature reaches several thousand degrees. Earth is no exception, so at the beginning of the Black War it is evacuated. Humanity transitions to living in starships and in small, mobile habitats that can evade the crudest forms of shelling. It gets worse afterwards, but that is a matter I want to discuss in another interview.

Thank you for your time and effort! I appreciate it. There’s always so much food for thought when I debate with you.

You are welcome!