One can define transhumanism as the philosophy that aims at improving the human condition with science and technology. Formulated that way, it is actually an incomplete philosophy, because it doesn’t specify what counts as “good”, “better”, or an “improvement”. In other words, the core philosophy of transhumanism needs to be complemented with a philosophy that tells you what is “good”. This is what ethics is all about, at least insofar as it concerns what is good in a social context.
There are three main classes of ethical philosophies:
- Virtue ethics focuses on the character of persons and defines the good as good character traits, a.k.a. virtues. Thus, virtue transhumanism would aim at improving the character of humans through technology. This is consistent with efforts under the label of “moral enhancement”.
- Deontology is concerned with rules guiding good behaviour. If people behave according to certain ethical rules, their actions are good; otherwise they are bad. A deontological transhumanism would probably aim to increase the ability of humans to follow certain rules. Framed more positively, deontological transhumanism wants to make humans more rational and intelligent, so that they can make better decisions. The goal of intelligence enhancement makes perfect sense when seen through the lens of deontological transhumanism. Also related is the idea of artificial (general) intelligence, which should help us solve difficult problems. Interestingly, there are efforts to create “friendly AI” that is programmed so that it can only act in ways which are beneficial for humanity. Note that without an ethical frame, the question of what is “beneficial” isn’t clearly defined, either. But within a deontological frame, friendly AI would have to follow stricter ethical rules (something like “Asimov’s laws of robotics on steroids”) than normal humans are supposed to follow.
- Consequentialism defines good acts as acts that have good consequences. Now what are good consequences? There are different schools of thought about this question. While consequentialism in general doesn’t answer this question directly, there is a large ethical philosophy within consequentialism that does provide various answers to it: utilitarianism. Utilitarianism is an altruistic consequentialist philosophy, so it is not about the personal good, but about the good of as many people as possible. It comes in different flavours, each having a different definition of what this “good” is:
- Hedonistic utilitarianism is based on the idea of hedonism: the primary good is simply what we call happiness. What is the best thing for hedonistic utilitarianism? That which maximizes global happiness (see the formal sketch after this list). Hedonistic utilitarianism was the original form of utilitarianism and also the first version of consequentialism. Hedonistic utilitarian transhumanism is likely to prescribe genetic enhancements which raise people’s happiness set-point. It would also promote the development and use of drugs and implants that make people happier without doing significant harm.
- Preference utilitarianism aims to satisfy the preferences of those who actually have preferences. As many preferences as possible should be satisfied, no matter who the “owner” of those preferences is. Interestingly, preference utilitarianism has been taken as a prominent basis for arguing for animal rights. For preference utilitarian transhumanists, the development of technologies which increase our power over the world would be a priority, because those would allow us to align the world with our preferences. An example of such a technology is atomically precise manufacturing, which would allow the cheap creation of extremely high-quality goods with incredible properties. If subjective preferences can be satisfied easily without the requirement that these preferences refer to the “real world”, the creation of highly realistic and compelling virtual worlds would be a highly valued goal for preference utilitarian transhumanists. Finally, curing ageing and death would be a high priority, because nobody really wants to suffer from ageing and death.
- Desire utilitarianism is a position that has been formulated, but hasn’t received much attention so far. It’s about the satisfaction of emotional desires rather than “cognitive” preferences. It is also associated with the “wanting” system of the brain, instead of the hedonic “liking” system. In its goals, desire utilitarian transhumanism would be pretty similar to preference utilitarian transhumanism.
- Eudaimonic utilitarianism focuses on the Greek concept of eudaimonia, which some identify with “happiness”, but which rather means holistic well-being or flourishing. In some sense, eudaimonic utilitarianism could be seen as a more sophisticated and balanced version of hedonistic utilitarianism. Eudaimonic utilitarian transhumanism would aim to improve the conditions for well-being and flourishing with rational and technological interventions suited for that purpose. It is not clear a priori which interventions these would be, but finding out what really increases well-being and what makes it less likely would be a rather scientific endeavour. Eudaimonic utilitarian transhumanists are perhaps the most eager to pursue a holistic approach to improving the human condition, also focusing on social reforms instead of just improvements on the level of the individual person.
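To make the contrast between the first two flavours a bit more concrete, here is a minimal formal sketch. The notation is my own illustrative assumption, not a canonical formulation: let $A$ be the set of available actions, let $h_i(a)$ denote the happiness of person $i$ if action $a$ is taken, and let $s_j(a)$ be 1 if preference $j$ is satisfied by $a$ and 0 otherwise. Then, roughly:

$$a^*_{\text{hedonistic}} = \arg\max_{a \in A} \sum_i h_i(a) \qquad\qquad a^*_{\text{preference}} = \arg\max_{a \in A} \sum_j s_j(a)$$

Of course, the hard philosophical problems hide inside these symbols: whether happiness can be measured and compared across persons at all, and how preferences should be counted and weighted.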
As already mentioned, different ethical ideas seem to be naturally associated with different foci for technological or scientific development when they are combined with the idea of transhumanism. That’s not to say that supporters of these specific schools would focus exclusively on the mentioned technologies. It just means that there are different primary motivations, and that some technologies are associated more closely or directly with these motivations.
Also, let’s not forget that most people don’t know an awful lot about different ethical philosophies and rarely subscribe fully and explicitly to any one of them. The generic transhumanist is no clear exception to this rule of thumb. While most transhumanists have clear personal motivations for pursuing transhumanist goals, few of them connect these motivations to deeper philosophical considerations.
What about me? I would be best described as a eudaimonic utilitarian transhumanist, though I usually self-identify as a hedonistic utilitarian for simplicity.
Now, what is the deeper point of this little essay? It should demonstrate that there are different flavours of transhumanism which pursue different goals, even though all of them involve science and technology as tools for their respective purposes. This diversity of goals is a natural cause of tension between different kinds of transhumanists, a tension that is sometimes more and sometimes less apparent.
Finally, transhumanism can be complemented not only by ethical philosophies, but also by ideologies or political philosophies which are not primarily concerned with ethics, for example:
- Anarchism (yes, there are many different flavours of that, too)
- Communism
- Democracy
- Fascism
- Libertarianism
It should be quite clear that such different schools of specific ideological transhumanism have quite different visions of what their transhumanist utopia would look like.
So, transhumanism in itself, without being coupled with any specific complementary philosophy, provides only a rather modest level of ideological coherence. Therefore, the amount of compromise required within any party based solely on transhumanism should be expected to be relatively high. It would be quite natural to expect the emergence of different philosophical wings within transhumanist parties, each trying to maximize its influence on the politics of the whole party.
In fact, the alternative would be either to split transhumanist parties up into their coherent philosophical wings, which would minimize their overall effectiveness and influence, or not to base transhumanist politics on any coherent and sound philosophical principles, which would just make the party seem directionless and unprincipled. Given these alternatives, it seems reasonable to embrace the partition of transhumanist parties into more or less explicit philosophical wings.
I, for one, am willing to act openly and explicitly as a eudaimonic utilitarian transhumanist within the emerging Transhumanist Parties. Better to be clear about one’s principles and goals and compromise later, than to start with an ill-defined compromise in the first place.
What kind of transhumanist are you or do you want to be?