I have been thinking about p-factions in the context of my “Exaltation” sci-fi universe. A very important difference between many of those factions is their ethical framework. So far, I’ve identified four frameworks that make sense (that is, are coherent and not based on silly moral dogmas) in a posthuman context (the names of these frameworks might be improved later on):
- Opportunism: You can do whatever you want, if you can get away with it
- Pacifism / Nonviolence: You can do whatever you want, as long as you don’t do any harm
- Eu-consequentialism: You can do whatever you want, as long as you do more good than bad
- Compensationism: You can do whatever you want, as long as you compensate everyone for all the harm you caused them
Note (2015-09-17): I’ve renamed “compensationism” to “reparationism”
Edit (2015-06-02): On Facebook, I’ve been criticized for leaving out utilitarianism from this list. This is of course a big omission. Utilitarianism certainly is one of the best frameworks for a posthuman world, but at the same time it’s also very difficult to interpret and apply correctly. These difficulties call for simplified frameworks, which are the frameworks presented in this post.
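To make the distinctions between these four rules more concrete, here is a toy formalization in Python. It is only a sketch under strong simplifying assumptions: harms, benefits, and compensations are idealized as numbers on a single comparable scale, and it is perfectly known who harmed whom. All function and variable names are my own illustrative inventions, not established terminology.

```python
# Toy permissibility predicates for the four frameworks.
# harms, benefits, compensations: dicts mapping individuals to (idealized) amounts.
# caught: whether the Leviathan detected the transgression.

def opportunism(harms, benefits, compensations, caught):
    # Anything goes, as long as you get away with it.
    return not caught

def pacifism(harms, benefits, compensations, caught):
    # No action may cause any harm at all.
    return all(h == 0 for h in harms.values())

def eu_consequentialism(harms, benefits, compensations, caught):
    # The total good must outweigh the total harm (an aggregate condition).
    return sum(benefits.values()) >= sum(harms.values())

def compensationism(harms, benefits, compensations, caught):
    # Every harmed individual must be fully compensated (a per-individual condition).
    return all(compensations.get(i, 0) >= h for i, h in harms.items())

# A scenario that eu-consequentialism permits but compensationism forbids:
harms = {"alice": 5, "bob": 0}
benefits = {"bob": 100}
compensations = {}  # Alice is never made whole
print(eu_consequentialism(harms, benefits, compensations, caught=False))  # True
print(compensationism(harms, benefits, compensations, caught=False))      # False
```

The final example already hints at the core tension explored below: an aggregate surplus of well-being can coexist with individuals who are never compensated.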
Now let’s analyse each of those frameworks separately:
Opportunism
This "machiavellian" framework sounds rather unpleasant, but at least it could be seen as brutally honest: The universe does not have any built-in moral framework, so everyone is free to try to make the best out of his chances without needing to respect the interests of other contenders. A society made up of opportunist needs mechanisms that make it viable, so that it doesn't end up in a brutish fight of everyone against everyone. Essentially, some kind of a [Hobbesian Leviathan][2] that enforces a social contract (respectively laws) is needed. Laws can be broken, but if the Leviathan is strong enough, perpetrator is punished. This punishment is simply bad luck for those who have succumbed to the Leviathan. The better, or luckier, opportunists break laws and get away with it.That is a reality that might be seen as unacceptable. If it’s really unacceptable for those in power, a future society might create a kind of super-Leviathan that actually prevents everyone from breaking any law in the first place. For example, everyone could get an inhibitor implant that paralyses him if he is about to break a law. Opportunists would want to escape the grip of this super-Leviathan, but whether they will succeed with that is another question. In fact, such a super-Leviathan might be implemented even in a society in which opportunists are rare, just in order to be safe from them.
Pacifism / Nonviolence
For simplicity's sake, I call adherents of the principle of nonviolence pacifists, even though "pacifism" is generally used in the context of the rejection of war.

Negative utilitarianism could be seen as a version of pacifism. Negative utilitarianism focuses on the minimization of pain and suffering. Radical utilitarians like David Pearce want to eliminate involuntary suffering through technological means, a position that is called (bioethical) abolitionism.
The abolition of involuntary suffering is also framed as a desirable goal in the original Transhumanist Declaration, but it doesn’t seem to have the status of an ethical imperative there:
Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.
The same goes for Dirk Bruere’s updated version of the Transhumanist Declaration:
Humanity stands to be profoundly affected by science and technology in the future. We assert the desirability of transcending human limitations by overcoming aging, enhancing cognition, abolishing involuntary suffering, and expanding beyond Earth. We intend to become more than Human.
This seems to suggest that it’s not actually necessary for transhumanists to be bioethical abolitionists, even though abolitionism is seen as a desirable philosophy to follow.
In a posthuman pacifist context, bioethical abolitionism can be justified by the technical feasibility of eliminating involuntary suffering, together with the view that not preventing harm is morally equivalent to actively doing harm. Of course, the latter view can be contested, so posthuman pacifism only implies bioethical abolitionism if that view is actually accepted.
Pacifism seems to be an intuitively sound and noble ethical philosophy, but its practical feasibility, and even its desirability, can be questioned. Most important is the question of what does and does not constitute actual harm:
- Do I harm the environment by emitting or simply breathing out CO2?
- Do I harm someone else by standing in his way?
- Does free speech harm others?
- Do surgeons necessarily do harm even though they have the intention to reduce greater harm with an operation?
- Is owning private property a form of harm if I’m not willing to share it with others?
- Is my immune system doing harm to the microorganisms trying to live inside of me?
These questions teeter on the brink of the absurd, but they demonstrate that a clear delineation between harm and non-harm is very difficult.
But there is a significant reason why true pacifism may be undesirable in an advanced posthuman world:
Simulation Ethics
Simulation ethics refers to the ethical considerations related to simulating worlds with sentient beings living within them.

Why would anyone do such a thing? Well, there are multiple plausible motivations:
- Simulating a game world with realistic “natural” inhabitants that behave authentically, because they are authentically simulated sentient beings. Such games could be hugely popular, even though they might seem ethically questionable.
- Scientific curiosity: Is there actually interesting life in certain simulated universes?
- Creating an ancestor simulation to find out how our ancestors lived and possibly to bring them back to life by creating new bodies for them and downloading their simulated minds into those new bodies.
- Playing through possible scenarios of the future, in order to make better decisions in the present.
- Because a sufficiently superintelligent mind might necessarily become so vast that it effectively contains small simulated worlds with models of sentient minds that are so complex that they are sentient in their own right!
All of these points make it seem plausible that any posthuman civilization will have a hard time resisting the temptation to create simulations that contain sentient beings.
While it may be possible to create harm-free simulations, the vast majority of interesting simulations will necessarily contain some involuntary suffering. The following frameworks deal with that issue by allowing such simulations, while still maintaining some noble ethical guidelines.
Eu-consequentialism
Eu-consequentialism’s solution to simulation ethics is that you may cause harm to the simulated entities, but at the same time you have to compensate for that suffering by creating so much positive emotion or well-being that it outweighs the created harm.

In a classical utilitarian posthuman society, the question may be whether having some simulations with suffering sentient beings is necessary to maximize overall happiness, because the gain for the simulators may not only outweigh the suffering of the simulated entities, but could actually be impossible to achieve with suffering-free methods! In that case, eu-consequentialism could be seen as the minimal acceptable ethical standard for simulating worlds with sentient beings.
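Stated as a rough inequality, and assuming (quite idealistically) that suffering and well-being can be measured on a single common scale, the eu-consequentialist admissibility condition for a simulation might look like this:

```latex
% s_i: involuntary suffering of simulated being i
% b_j: well-being or gains created for being j (inside or outside the simulation)
\[
\text{simulation permissible} \iff \sum_j b_j \;\ge\; \sum_i s_i
\]
```

Note that this is a purely aggregate condition: it says nothing about who receives the compensating well-being.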
But of course, higher standards might be preferable:
Compensationism
**Note (2015-09-17):** I've renamed "compensationism" to "**reparationism**"

There is a serious problem with the eu-consequentialist framework: It’s subjectively terrible for the simulated entities who suffer a lot. While their suffering may be globally compensated by mood-enhanced simulated sentient beings living in heavenly simulations, this doesn’t help the suffering beings in the less pleasant simulations in any way.
In this situation, compensationism comes to the rescue: Everyone is compensated for all of their suffering! If you are a being in a rather unpleasant simulated world, and the simulators are compensationists, you are not so unlucky after all: Your suffering will be compensated by being relocated into a much more pleasant world, or by becoming radically mood-enhanced, so that you will experience levels of bliss that really outweigh all the suffering you have ever experienced!
The idea of compensationism might seem fairly weird, but it makes a lot of sense: You simply make up for the damage that you have caused. It could be seen as a form of karmic logic. Or it could simply be called responsibility.
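Under the same idealized common-scale assumption as above, the compensationist constraint is strictly stronger than the eu-consequentialist one: per-individual compensation implies the aggregate condition, but not the other way around:

```latex
% c_i: compensation eventually delivered to being i
% h_i: harm suffered by being i
\[
\underbrace{\forall i:\ c_i \ge h_i}_{\text{compensationism}}
\;\Longrightarrow\;
\underbrace{\sum_i c_i \ge \sum_i h_i}_{\text{aggregate (eu-consequentialist) condition}}
\]
```

The converse fails: a large surplus of well-being for some beings can mask uncompensated harm to others, which is exactly the scenario described above.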
One can make the case that compensationism is preferable to pacifism for various reasons:
- A posthuman compensationist civilization may have higher overall well-being than a pacifist one, if the “problematic” simulations do create benefits that cannot be achieved in any other way
- It can be argued that it’s preferable to live in a simulation that allows for severe suffering, but to be compensated for that suffering in the end, than never to have existed in the first place.
- Posthuman pacifist civilizations may be comparatively weak, so compensationism might be seen as the least evil among the realistically viable options, compared with the alternatives of eu-consequentialism or even opportunism
- If one really values resurrecting the dead, then compensationism seems like the best framework for legitimizing that act
- If one believes in modal realism, then all worlds are simulated anyway by all the different posthuman civilizations. If you are simulated in a world created by compensationists, at least your life will have a positive continuation. The compensationists take on the task of making up for the subjective suffering caused by themselves and by other simulating civilizations.
Discussion
- Do my concepts make sense to you?
- Are there other comparable coherent ethical frameworks?
- What framework do you find most preferable?