Real Philosophical Transhumanist Factions

Today I read the following article on Singularity Weblog: Transhumanism Needs to Establish a Meaning to Life. It is a few months old, but it seems quite relevant, especially to political transhumanism.

It is important that the transhumanist movement establish a consensus on the meaning of life. Failure to do so will result in conflict, the extent of which is difficult to predict. As it stands today, transhumanism is a divided movement of various competing interests promoting values which are contradictory in nature. It seems the only agreement the movement has reached thus far is that the proper course of action is to promote the widespread adoption of transhumanism.

From what I have seen, the three primary justifications for transhumanism are utilitarianism, freedom and meaning, all of which conflict with one another to a certain degree…

These different justifications could correspond to different transhumanist philosophical factions (p factions for short): utilitarian transhumanists, “libertarian” transhumanists, and “spiritual” transhumanists.

It is interesting to see that there are various positions taken in the comments:

  • Some people seem to identify very strongly with one of the proposed factions. You can clearly identify certain commenters as utilitarians or “libertarians”.
  • Some hope that with higher intelligence, or with technology that connects minds much more closely, there will be a convergence of the different p factions into one form of unified transhumanism.
  • There’s also the notion that the different elements of justification need to be combined to reach some kind of consensus.
  • On the other hand, there are people who pretty much claim that differences of opinion and thus conflict will be inescapable.

What is my position on this issue?

  1. It might be that future progress could lead to a convergence of opinions, but I think that’s very unlikely. Anyway, this would be some kind of “best case scenario”.
  2. In the case that different opinions persist indefinitely, there will be conflict between the different p factions, but it doesn’t need to be violent. Our ability to prevent conflicts from being resolved violently might become extremely sophisticated in the far future, so that we can have world peace even with wildly divergent philosophies and values. (I have some ideas for a kind of “Peace Guarantee Network” which I will explain at some later time.)
  3. It could be this non-violent conflict between the different p factions that provides the most meaning in such a world (in the sense of deriving meaning from “heroically” supporting one’s own faction). So, the “meaning” part would be satisfied by having a more or less permanent conflict between different utilitarian and “libertarian” p factions and p subfactions.
  4. For a long time I have tried to create something like an “idealistic utilitarianism” that places a high value on principles like freedom while still being based on the maximization of positive emotional valence. This line of thinking eventually led me to conceive a fictional p faction that I call the Exaltation (which I’ll probably also need to explain in detail in a separate post later).

What do you think about all of this?


I believe there is benefit in considering different positions and applying the method of pro, contra, and conclusion. But I also believe that a complete consensus is possible and that there is “the one and only truth” we can discover. Our differences are only an indicator that none of us has found this truth yet.
I don’t think that the variety of opinions is a value in itself, but every opinion is valuable in the search for truth.

I like your idea. It is a good starting point for the search, and for developing this ethical position further.

It is interesting to see how my opinion has changed over the last 5 months. Now I think that it is much more likely that there will be a convergence of philosophies towards a singular Ultimate Wisdom. That may still be wrong, but unless we try to move in that direction, we won’t know what’s actually the case.

The basic idea is to see the philosophy of idealistic utilitarianism as an optimization problem. The goal is to maximize utility under the boundary condition that individual freedom is respected.

This poses a real problem, however, since we are still quite far away from a state in which individual freedom is fully respected. So, we would probably need to quantify the degree of freedom to find out how far away we are from the realm of full individual freedom.
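
As a very rough sketch of that framing (the symbols $U$, $F_i$, $f_{\min}$ and $x$ are placeholders I’m introducing here for illustration, not established definitions):

$$\max_{x}\; U(x) \quad \text{subject to} \quad F_i(x) \ge f_{\min} \;\text{ for every individual } i$$

Here $x$ is a possible world-state, $U(x)$ is the aggregate positive emotional valence in that state, $F_i(x)$ is some quantified degree of freedom of individual $i$, and $f_{\min}$ is the threshold at which freedom counts as “fully respected”. The problem mentioned above is precisely that we don’t yet have agreed measures for $F_i$ or $f_{\min}$.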


The answer will be given by progress in AI, biotechnology, and human genome editing research.
Mainly it is a question of engineering, not of philosophy.
If you read the articles of Ray Kurzweil from Google, you will be convinced that AI is the key to everything.
If you read the books of George Church and Nick Bostrom, you will be convinced that human genome editing with brain upgrading is the key to everything.

With every technological advance, the requirement for ethics grows. That has been the case throughout human history, and it will never change. If humanity fails to consider appropriate ethics, we will create a dystopia.

This is what I hope as well.

Hello Wildkatze, and welcome to the Fractal Future Forum! :smile:

But then we will have to build an even bigger AI, and improve our genome even further to find out what the answer was in the first place. :joy: Oh, wait, I’ve read that novel somewhere… :smiley:

And you know that because … ?

And if you read all of them, then your brain will be blown! :triumph: Ray Kurzweil himself says that merging with the AIs is the real key. Michio Kaku says that, too. And it’s also my own position. I want my photonic cyberbrain and metamorphic body as soon as possible! If I don’t get it by 2045, I will go on hunger strike :wink:

Yes, someone should write a book “Ethics Module for Transhumanists”! :smiley:

Cool, then both of us can be alone in our quest for Ultimate Wisdom together! :wink:

I strongly doubt that. Even if the semantics of ontology, epistemology, and ethics could be completely formalized in a mathematical framework (which I deem infeasible), the incompleteness theorems would still apply: any sufficiently powerful consistent formal system contains true statements it cannot prove within itself. But I’m pretty sure that I’ve found one particular fragment of ultimate wisdom: Uncertainty is here to stay.

Of course, a philosophy of ultimate wisdom would tell us how to deal with uncertainty in an optimal way. Even if uncertainty is a part of nature, science and technology can always help us deal with its nastiness. So, I don’t see a fundamental obstacle to a convergent philosophy of universal wisdom here.