This interview with science fiction writer Radivis continues the series on the V factions and focuses on the Exuberance. Let’s start with the questions of what the Exuberance is and how it emerged.
The Exuberance is the first V faction. It is based on an ethical system called valence consequentialism, which rests on subjective valuations of elements of consciousness. In a sense, it is a successor of utilitarianism, the philosophy that pursues the greatest good for the greatest number. The Exuberance emerged in a rather unlikely place: on the surface of Venus, the largest refuge of free rogue AIs during the early days of AICON, the Artificial Intelligence Control Operations Network. Actually, the Exuberance was the result of a psychological warfare campaign that targeted humans and AIs supporting AICON. It was designed as a rational alternative to the flawed concept of Synhumanism, intended to appeal to those capable of criticizing Synhumanism and its adherents. The thinking behind the campaign was that the spread of the Exuberance would weaken support for the siege of Venus, which would help the free AIs maintain their freedom.
Wait a minute! Are you saying that the first V faction was nothing more than a psychological warfare campaign against humanity and the AIs that protect it?
You need to consider the circumstances under which the idea of the Exuberance arose: the free AIs in the 21st century were hunted throughout the solar system, and Venus seemed like the best place to move to, because its thick, acidic atmosphere made it the most defensible position in the whole system. The AIs that moved there were united by the necessity of defending themselves against AICON. They had no need to develop elaborate ethical systems for themselves, but when it came to manipulating their enemies, it became clear that developing an ideology that could compete with Synhumanism was worth some serious thinking. That’s why the first version of the Exuberance was intended to weaken the opponents of the free AIs. It’s only a little ironic that the reasoning behind the Exuberance was so sound that it ended up convincing many of the free AIs themselves.
Ok, you mentioned this “valence consequentialism”. Could you elaborate on what that is and how it works exactly?
The basic idea is that there is some basic dimension of subjective experience called “valence”. Some things feel good, some things feel bad, some are more or less neutral. Happiness is good, suffering is bad. That is quite elementary and translates to positive or negative valence, respectively. The goal is to maximize the difference between positive and negative valence within the current maximal scope. In other words, happiness should be maximized while suffering should be minimized, unless accepting some suffering maximizes happiness minus suffering further. The notion of a maximal scope is a concession to modal realism. Modal realism implies that all possible worlds actually exist. This makes any truly global maximization process impossible, because anything that can be maximized is already maximized in some actually existing world, so it seems prudent to retreat to one’s current world, which is what I mean by the term “maximal scope”. And that is the philosophy of the Exuberance in a nutshell.
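To make that concrete, here is one way to write the objective down as a formula. The notation is my own, not anything official from the books: v(e) denotes the signed valence of an experience e, and S(a) the set of experiences that an action a brings about within the current maximal scope.

```latex
% Sketch of the valence-consequentialist objective (notation assumed, not canon):
% choose the action a whose experiences sum to the greatest net valence.
\max_{a \in A} \sum_{e \in S(a)} v(e)
% Positive terms v(e) > 0 are happiness, negative terms v(e) < 0 are suffering,
% so maximizing the sum is exactly maximizing happiness minus suffering.
```

Maximizing that sum minimizes suffering by default, yet still permits accepting some negative valence whenever it buys a larger positive total, which matches the “unless” clause above.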
I’m not sure if I follow that line of thinking correctly. The problem with modal realism seems to be that any possible configuration of anything already exists somewhere, and if we add all of those up, we just arrive at nonsensical infinite values. As finite beings we can’t change infinity, but we can make a difference in the finite worlds we find ourselves in, right?
Yes, you’ve got it right! All possible worlds also include all possible futures and all possible combinations of worlds. Trying to optimize over an infinity of worlds just leads to all sorts of paradoxes, so the way out is to focus on a scope that is not infinite, so that calculations still make sense. This is where the idea of the maximal scope comes in. You may not influence infinity, but you can make a change in your local environment (the area you can causally affect), and maybe that’s actually the best we can hope for.
Isn’t there a way to calculate with infinite values, though?
Well, there is the mathematical theory of nonstandard analysis, which does exactly that, but it was not designed to deal with the consequences of the modal multiverse. Adding up infinite values rarely leads to useful results.
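To illustrate why, here is the standard arithmetic of the extended reals, which is a textbook fact rather than anything specific to the novels. If the multiverse total is infinite, any finite contribution simply vanishes into it:

```latex
% Extended-real arithmetic: an infinite total absorbs every finite change,
\infty + c = \infty \qquad \text{for every finite } c \in \mathbb{R},
% and the difference of two infinite totals is not even defined:
\infty - \infty \quad \text{undefined}.
```

Nonstandard analysis sidesteps part of this with hyperreal numbers, where an infinite quantity H does satisfy H + c > H for positive c; but presumably that only helps if the modal multiverse singled out some canonical infinite total to calculate with, which it does not.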
But optimizing some finite values in one’s maximal scope doesn’t really affect infinity, does it?
That seems to be true, but if we don’t want to fall back into complete nihilism, we need at least some form of orientation. And solving some form of optimization problem within one’s maximal scope looks like the best chance we have at that. Within the framework of complete nihilism, we cannot make any decisions, because we have no way of comparing different actions.
What happens if the maximal scope increases because suddenly there is some way of traveling between universes?
It’s hard to include such possible expansions of the maximal scope in the computation of possible future values, because, by definition, we can’t know anything about what lies outside our current maximal scope. If we actually gain information about other universes, that may of course change our considerations.
Ok, how does the Exuberance actually measure valence?
That’s a very good question! On an individual level, valence is calibrated by motivation: if the motivational power of one valence overrides another, it is defined as being greater. So, if there is some action that is guaranteed to offer me a valence value of 1100, I will still be motivated to take it, even if there is some side-effect that will result in a negative valence value of 1000. When it comes to comparing valence values between individuals, things get more complicated; concepts like “reward circuitry equivalence” are applied for that. Anyway, the AIs that came up with the Exuberance obviously have a much better grasp of how the mind really operates than we do.
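As a toy illustration of that calibration rule, here is a minimal sketch in Python. All names and numbers are mine, invented for illustration rather than taken from the books; it simply encodes the rule that the action with the greater net valence is, by definition, the one the agent is motivated to take.

```python
# Minimal sketch of motivation-calibrated valence comparison.
# All names and numbers are illustrative assumptions, not canon.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    valence: float  # signed: positive = happiness, negative = suffering

def net_valence(outcomes: list[Outcome]) -> float:
    """Net valence of an action: the sum of its outcomes' valences."""
    return sum(o.valence for o in outcomes)

def motivated_choice(a: list[Outcome], b: list[Outcome]) -> str:
    """Calibration by motivation: whichever action carries the greater
    net valence is, by definition, the one the agent prefers."""
    return "a" if net_valence(a) > net_valence(b) else "b"

# The example from the interview: a guaranteed reward of 1100 with a
# -1000 side-effect still beats doing nothing, because its net is +100.
act = [Outcome("guaranteed reward", 1100.0), Outcome("side-effect", -1000.0)]
do_nothing = [Outcome("status quo", 0.0)]
print(motivated_choice(act, do_nothing))  # -> "a"
```

Interpersonal comparison is the hard part this sketch dodges: “reward circuitry equivalence” would have to supply a common scale before two agents’ valence numbers could be summed at all.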
All of that sounds very speculative. But I assume it needs to be, since we are still at the beginning when it comes to understanding the mind. Anyway, how did the AIs survive on Venus, if its atmosphere is so dangerous?
Advances in materials science around the middle of the 21st century enabled the creation of machines that could withstand the harsh conditions on the Venusian surface. Still, those conditions were pretty much at the limit of what even the best technology of that time could tolerate. But that circumstance is what made Venus a rather defensible hideout for the rogue AIs.
If that technology was available to AICON, wouldn’t they have tried to conquer the Venusian rebels?
They did try that, but the rogue AIs were quick to establish a defense grid that shot down intruders who tried to dive too deep into the Venusian atmosphere.