What should be done

What to do next, or what the best thing to do is in general, is not a trivial question. There are numerous approaches to it, but let’s tackle it with what has been developed so far in this forum. Since what is of value depends on one’s value system, we should focus on value systems first. In the naive mindset, one’s current value system is taken as given and is not to be questioned when considering what to do. This kind of naivety affects almost everyone to a huge degree. Rarely do people question their own values, and hardly ever do they pursue that act of questioning systematically.

How would one go about questioning value systems systematically, anyway? If one wants to compare different value systems, one needs some kind of meta value system for reference. And why should that meta value system be any different from the value system one currently holds? Usually, value systems are fixed and justify themselves. Escaping a value system from within isn’t an easy prospect. Only when strong contradictions are detected within a value system – contradictions that demand resolution in one way or another – is one compelled to think about how to change it. Typically, in such situations the philosophical foundations of one’s existence are put into question. Such phases are called existential crises – and rightly so. People are often more willing to die than to change their most deeply held values.

These insights point to one possibility of transcending one’s current value system: looking for internal inconsistencies. If one arrives at a more consistent value system, it becomes more stable – unless its stability relies on ignoring internal inconsistencies. The problem with that latter strategy is that a value system with internal inconsistencies is less useful: it provides little true guidance in cases where those inconsistencies cause an internal conflict that cannot be rationally resolved within the current value system.

Thus, ideally one would like to have a maximally useful value system with absolute internal consistency. Let’s call this a maximal value system. Is there a unique maximal value system, or are there multiple maximal value systems? In the first case, we can arrive at civilization system 6; otherwise the most we can achieve is a high-functioning civilization system 5. At this stage, we cannot know the answer, so the best conceivable outcome would be to arrive at the unique maximal value system. Why not strive towards that?

Yes, why not? There doesn’t seem to be a rational counterargument against that aspiration. The only issue is that very strong evidence might emerge that there is a multitude of maximal value systems which fail to converge on a single, canonical one. That would be a problematic outcome, since it would most likely imply eternal conflict between multiple value systems. It doesn’t exclude the possibility that a single maximal value system becomes dominant on a cosmic scale through various means, in particular the suppression of competing value systems. That would be a rational end state of the cosmos, though not one preferable to a unique maximal value system becoming dominant through its universal discovery and acceptance.

The conclusion is rather simple: one should strive towards finding a maximal value system, in the hope that it’s the canonical, unique one. Once that is achieved, one possesses clear guidance by following that unique maximal value system. The core problems of ethics and axiology become solved issues. The rest becomes “mere implementation” of the best way of existence. A philosophically mature civilization will know what to do. A philosophically immature civilization like ours needs to become mature by arriving at a maximal value system first.

Our task therefore consists in identifying the inconsistencies of our value systems and fixing them, until no inconsistencies are left to fix. This requires a seriously high level of intelligence and rationality. Hence, improving our intelligence and rationality are instrumental goals serving the final goal of finding a maximal value system. Those aren’t easy tasks by any stretch. As there is strong evidence that we are severely lacking in both respects, we need to apply rather radical means towards those goals. So, we need to pursue intelligence amplification and artificial intelligence (IA & AI). Thus, we arrive at an epistemologically motivated form of transhumanism with increased intelligence and rationality as its primary goal.

At this point it should be noted that, with this context in mind, intelligence becomes far more than a mere positional good. It’s a non-positional good, because it serves the higher purpose of finding a maximal value system.

The core conclusion

So far, the conclusion is that what should be done is to strive towards a maximal value system by increasing intelligence and rationality, mainly through developing IA & AI. What matters in the grand scheme of things, then, are mainly the speed and the safety with which we arrive at that goal. Those are the most meaningful variables of existence prior to the discovery of a maximal value system. What truly matters, at this stage at least, are the rate of progress and our ability to ward off existential risks.

Why does the rate of progress matter at all? That’s an important question, and the answer is far from clear. It could be the case that when we arrive at system 6 (ideally) doesn’t matter, but merely whether we arrive at all. In any case, the latter seems to be the dominant concern. Yet, our current state of science strongly suggests that the amount of resources available to us is limited and becomes ever more limited over time, as parts of the cosmos expand out of our eventual reach, and stars wastefully burn their fuel to no good use other than providing an inspiring scenery for life to inject true meaning into an otherwise meaningless cosmos. That’s why speed is also an important consideration. Still, whether we arrive at a maximal value system a couple of decades or centuries earlier or later matters far less than the prospect of complex life in our local region of spacetime going extinct, leaving the task of granting meaning to existence to potential aliens, or to the distant descendants of the simple life forms that may survive a global extinction event.

In the grand scheme of things it doesn’t even matter whether “humanity” survives or not. It only matters what happens to the cosmic beings (as defined in the remarkable book Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution by Ted Chu) that drive the highest forms of evolution in the cosmos. Perhaps that’s the best rationale for being a Cosmist, as defined by Hugo de Garis, rather than a Terran. Cosmists strive towards creating artificial intelligence incomparably smarter than (contemporary) humans, while Terrans think that we should really not create anything significantly smarter than a human. Yet, the conflict between Cosmists and Terrans may become severe and may even cause the extinction of all complex intelligent life. That is an outcome that ought to be avoided by all reasonable means. Hence, a more gradual progression towards a level of intelligence and rationality sufficient for reaching a maximal value system becomes a rational imperative, as it would minimize existential risks caused by such conflicts.

This means that progress should be inclusive rather than exclusive. Everyone should be able to benefit from increased intelligence, rather than a select few – even if, in the end, a select few contribute the most towards the goal of arriving at a maximal value system. What is needed is a kind of Universal Upgrade, which uplifts all kinds of intelligent life in order to create a diverse and stable society that is able to generate quick and safe progress.

And with this we have arrived at a version of technoprogressivism that is primarily motivated by the concern of reaching higher levels of intelligence and rationality in a safe way.

None of this implies, however, that it would suffice to focus on transhumanism or technoprogressivism alone. In fact, transhumanism and technoprogressivism are merely rational stepping stones towards the higher purpose of arriving at civilization system 5, and hopefully eventually system 6. One needs to keep this grander scheme of things in mind. Otherwise, transhumanism and technoprogressivism become a distraction from what actually matters and what actually should be done.

It’s important to keep the whole line of reasoning in mind:

  1. The need for a maximal value system
  2. The need to increase intelligence and rationality for that goal
  3. The need to develop IA & AI (epistemologically motivated transhumanism) for that goal
  4. The need to keep intelligent and rational life alive and functional at all times
  5. The need to distribute gains from improved technology universally (technoprogressivism motivated by the need for existential safety)

The temptation to short-circuit this line of reasoning and focus on a single point may be strong, but it needs to be resisted. Only this grand picture provides a complete rationale for what should be done.

In the grand scheme of things, the Civilization System Progression is more than a description of the evolution of civilizations: it’s an actual roadmap for what should be done on a cosmic scale, and for how we need to proceed on a societal level. This roadmap implies that, as the next step, we need to master the transition from civilization system 3 to civilization system 4. Our proximate target, therefore, should be to design and build the network age.

Appendix: Considering alternatives

What if we live inside a simulation made by a system 6 civilization?

In that case, everything is fine. So, we only need to worry about the opposite possibility, namely that we don’t live in such a simulation.

What if there is no unique maximal value system?

In that case, we should still strive for the best approximation of that ideal. Creating a world that mostly looks like a world in which a unique maximal value system existed may be a rational way of proceeding in that situation.

What if I don’t care about consistency of value systems?

Then you are left with the problem of having to make decisions in an arbitrary way that isn’t philosophically satisfying. That’s about as smart as making your life decisions by throwing dice. This is not to say that it’s an invalid approach to living life, but it’s surely not the most rational one.

What if I don’t care about rationality?

Then you will be less effective at reaching your own goals. Whatever you want, you will be less likely to get it.

What if I don’t care about getting what I want?

As a personal approach, that might be permissible, but please don’t complain if others try to make use of your personal resources for their own ends.


by a “universal upgrade” you may create the ultimate lex luthor for your superhumans (btw do you prefer the avengers or the x-men? *smile )

atm all folks are afraid of an evil ai. what about evil superhumans? you must be strong to fight them :smiley:

i think the most elegant way to avoid such kinds of conflicts is a human-ai-symbiosis. if the rationality of your decisions depends on your ability to anticipate, there is no other way to defend your values, even the maximal ones.

(there will be no human survivors on this planet *carlos castaneda)

(stupidity is wanting to solve a problem with the means that produced it *a.einstein)

time to say goodbye to your species :wink:


for others, a “universal upgrade” looks like this: http://www.sueddeutsche.de/politik/sz-werkstatt-wie-china-den-voellig-vertrauenswuerdigen-menschen-schaffen-will-1.3514184 three minutes that are worth it