
Progenerate Hyperhumanism: The alternative to technological transhumanism

Sorry it took so long to reply.

Yes, I also predicted that the Transhumanist community would splinter in more ways than one, because it seems like a logical progression. I do not consider myself to be influential in any way, or in any sect of the Transhumanist community. Part of the purpose of posting this proto-manifesto was to let Transhumanists know that people would oppose the most fundamental ideas they espouse with something alien and radical.

Yes, Hyperhumanism of the Progenerate variety sounds like it would belong somewhere in the Aurelian sect and is closest to the Harmonists. I do not believe in denying high intelligence, but the radical and corrosive effects that high intelligence can cause, if it is not distributed equally and fairly, can easily destroy a society. My views come from observations about the nature of man and patterns I have observed throughout human history.

Ok now let me break down and properly address all of your criticisms and comments.

Yes, it becomes nebulous to define what a human being is when you can change it, but it is still a fairly simple affair for me. Cybernetics are artificial; the human being is organic. In the paragraphs above I go into why I think organic life is more valuable than artificial “life”. This is an identitarian stance; it is a chauvinist stance. There is a very fine and clear line I see when discussing what is artificial and what is organic. I am a rational person, so this is not a justification founded on emotion alone. I believe that the human being is something worth preserving because it is us. In many forms of Transhumanism, the human being dissolves into a state of “post-humanism”, which represents the destruction of the human form and the human spirit, and eventually the human intellect becomes so diluted by artificial forces that it too will disappear, until what you have is fundamentally not a human structure any longer. Some will call this evolution, but evolution is a fundamentally organic process. Intelligence can guide evolution, sure, but post-humanism does not represent evolution in any sense; it is an alien process and a fundamentally destructive one as far as the human being is concerned.

Of course not; I could very well wish to be a god. I could very well wish to become the universe itself. But we must remember that no man is an island. When I come up with a social philosophy of some kind, I have to remember to include the domain of life and the endless complexity and variables that come with it. Communism fails because it is the path of most resistance to the human state. Many communist philosophies are so anti-natural that they end up working against nature itself. For an ideology to be successful in completing its goals, it must be well adapted to its environment; in this case, whatever human society should take it up.

Because I am a universal nationalist, I would not interfere in the affairs of other nations and countries if they wish to take the path of post-humanism. They do so at their own risk. Within the boundaries of my hypothetical state, yes, no one will be given the opportunity to use cybernetic means of augmentation, because it will not be provided by the state, sanctioned by the state, or encouraged by the general populace. This ties very much into my socio-political philosophy of Progeneracy, which I used as a modifier for Hyperhumanism. It is a Progenerate form of Hyperhumanism. They are two distinct ideas that can exist independently of one another. Progeneracy is authoritarian when you apply the lenses of arbitrary political discourse, because it demands an empowered state, but Progeneracy is very different from anything you, or anyone else for that matter, has ever heard of. By its own standards, Progeneracy is both liberal and authoritarian at the same time. It is a reactive balance between the two forces of the right and left wings respectively, because they both have something to offer me: one offers liberty, which generates chaos, and one offers social tradition and values, which generate order. These two components are necessary for creation. I am a progressive, but I believe in constructive progressivism, not sporadic progressivism that sees anything and everything and needs to change it. When you take an idea like Hyperhumanism and apply it to Progeneracy, what I describe is what you logically get. At its core, Hyperhumanism is human augmentation, but augmentation that only modifies the base structure of the human, to virtually whatever extent may be available. Filter this idea through Progeneracy and you get something that is heavily controlled by the state and granted equally to everyone. The Progenerate system may sound overtly communist, but this is not the case. The Progenerate system is one in which the state is married to the people, and the people to the state.
They are the same thing. The state is not a separate ruling entity like in most forms of government, which usually leads to an elite class forming. I believe in competition, but I also believe in harmony and social homeostasis, which is my ruling doctrine. The state must balance liberty and authority to create peace. Many forms of Transhumanism I have seen are libertarian to the point of insanity. What you get with such a fractured society (which is the source of this forum’s namesake, if I am not mistaken) is a complete dissolution of said society. People are going to get hurt, very hurt; exploitation will become the law of the land. Without a state to help police the unrelenting tides of chaos, the society will inevitably collapse. It is unbalanced; there is too much chaos. And on the flipside there are the “singularity” types that are collectivist to the level of nausea. I don’t even know exactly what the singularity entails, but it can’t be pretty from what I have heard. WAAAY too orderly. Progeneracy advocates for a balance between the individual and the collective, not one or the other in totality.

I never said superintelligence wasn’t allowed; I just said it may never be allowed to surpass the grip of humanity. Many transhumanists cite a phenomenon of their own making as the reason humanity must merge with AI: AI becoming a threat to humanity. AI can never threaten humanity as long as it is purposefully limited to a sphere of influence it can never escape and is engineered so that it never becomes self-aware (if such a thing is possible). For that, the authority of the state is required and will be used to its fullest, but in a Progenerate system the research firms responsible for these things are already nationalized and socialized for the good of everyone, so AI getting out of control is not something I anticipate. Such a thing can only really happen when private corporations are allowed to do whatever they want. I really do not see AI the way most people do; AI does not have to have a personality, or motivations, or sentience. It can be an algorithmic program that solves problems.

Yes, because having six fingers is a natural phenomenon that human beings are capable of. You are not less human for having an extra finger. An extra arm, however, is not something humans were made for, especially if you artificially place one on the body. That is inhuman, but it still doesn’t make you not a human.

Feathers are not something a human being would naturally have, so that would not be human, but overall you would still count as mostly human.

It is really less about appearance and more about function. Sure, you can look as cosmetically different on the outside as you want, but you would still be mostly human, emphasis on mostly. Like I said up there, the human form is beautiful to me, and in my opinion it is not necessary that it be molested. Now, I understand that is an opinion, and I know people may hate the human form, but in my view of the species, it is something that only nature itself should change on a fundamental, intrinsic level.

Well, for me it is not a matter of authority, necessarily. Most people can tell a non-human organism from a human one. Generally the people would decide, I suppose, but people’s opinions also vary greatly, so some consensus would have to be reached.

That is true, but this would not be a problem in a Progenerate society because no one could become inhuman anyways. The tools will never be available for that to happen.

You make a good point, but remember, I still believe in pushing the boundaries of what is possible for humanity, just in a different direction. I work by my own measure of progress. Technology will be allowed to advance near exponentially, and human beings will be provided the opportunity to augment their natural frame into a better version of itself, not to become something totally alien and unnatural.

It wouldn’t; it would direct its development and make sure it is controlled, i.e., never letting it get out of control in the first place in any way.

A good and fair question, one that I suppose will have to be settled either in the court of debate or on the battlefield, or perhaps both.

Progeneracy will allow for the development of advanced technologies that will, one day at least, be able to prevent every disaster you have outlined.


Thank you for your detailed reply. I experience this discussion as quite valuable and interesting. What I like about your position is that it’s rather rational, so a civilized discussion is quite possible. Of course, I hold radically different views from you, but I see a fair bit of overlap.

Let me elaborate on that. Quite recently, I’ve developed a system that I term “cultural silvanism”, which presents a possible version of what I call the “fractal society”, which as you correctly interpreted, is what inspired the current name of this forum. I’ve just described how that system works in the following post:

In the framework of cultural silvanism there’s of course space for a state that’s based on progenerate hyperhumanism. It would just be one option among many. And I think in that context, having such a state might be quite valuable.

However, there also should be states that allow radical versions of transhumanism. Why? This has to do with the dangers of AI. It may be hard for “uncyborgized” humans to keep AI in check indefinitely. Cyborgs would have a better chance at that. But in the end, I don’t want superintelligences to be under the control of humans, because that would not be appropriate. Why have a superintelligence in the first place, if the humans effectively won’t listen to it? Sure, you could see the artificial superintelligence as an advisor or even a teacher. But as long as humanity still has power over the ASI, the situation is similar to one in which humans were merely advisors or helpers for chimpanzees.

I believe that civilization can only reach its highest levels if there is free ASI. Human cultures will be enhanced by the support of the ASI, but ASI will create cultures that are even more complex and sophisticated than human cultures. I want to see such cultures, even if I can’t comprehend them in my current state as an unaugmented human being.


And thank you for reading my reply to your reply. You know, though my tone in the proto-manifesto was scathing and combative, I have no problem actually talking to mainstream Transhumanists, as long as they are not the pretentious, close-minded bigots I see on Reddit, anyway.

Cultural Silvanism looks to me like it grants the many sects of this hypothetical fractured society enough autonomy to be satisfied. There is plenty of self-determination and freedom for the individual and the substate to do as they please (within the confines of the established laws). However, my views differ on the fundamental structure of the proposed state. The system you have described does not represent the idea of the nation; it represents a very loose confederated union of mostly autonomous entities, which is not necessarily a bad or counterproductive thing, but it contradicts my absolute view of the nation state. The nation state is absolutely sovereign over its own land and resources, as a singular unified entity. While, sure, a Progenerate state may exist within the Silvanist bloc, it will always be shackled to an alien entity, in this case the Super State. Now, I would not necessarily be opposed to this system if it occupied a space that did not impede on the nation, and other nations for that matter, but I imagine what you have in mind is a more global idea, which is what worries me, though that is still an assumption of mine. I can see bouts of conflict breaking out with the Silvanist governing body if it did not already have domain over the entire Earth, because other unified nation states and countries may, for whatever reason, start a conflict with it and its many sub-entities.

Fundamentally, I think the idea of cultural Silvanism would work in its own right, but it would still be denying absolute self-determination to its subordinate entities. If an entity collectively (or not) decides to join this Silvanist bloc, I see no problem. However, if the Silvanist Super State sees outside entities as a threat and wants to forcefully assimilate them, I see a very big problem indeed. As a nationalist, I see no reason that the nation should ever find itself subordinate to the Super State if there is never any conflict between the two entities. Both entities, I imagine, can co-exist even on the same continent if there is enough mutual co-operation and respect. And as a nationalist who believes in universal nationalism, I am vehemently against jingoism and imperialism, so my philosophy would never inflame a war.
Overall, it looks like the Super State exists to police the many sub-entities, to quell infighting, and to provide for their external woes. If a neighboring and divorced entity can live side by side with the Super State, I see no reason why it should not remain wholly independent of the Silvanist bloc.

And as far as Artificial Intelligence goes, my view of it has been and will always be the same. It should be a resource at the disposal of the state alone, and the state should have the say in how rapidly it develops, how it is used, and how much control it has over whatever sphere of influence it occupies. If the artificial intelligence surpasses even the whole of human knowledge, that power should be socialized for the good of everyone. It should be directed by the hand of man, who will apply it as he sees fit.


Just an addition. The idea of Silvanism was developed for a setting in which different star systems have been colonized thoroughly, with millions of space habitats already existing. It seems far easier to plan a Silvanist structure in advance than to take “natural” territories and restructure them in Silvanist terms. For that reason, I think that Silvanism will most likely become influential once we colonize other star systems. Earth has too much history with traditional nation states to have an easy transition into a Silvanist federation. The same may even become true for most planets and moons in the solar system. It takes a very large degree of top-down control to make Silvanism work nicely as I envision it. So Silvanists will most likely compete with rogue colonist factions, rather than with already existing nations or near- or medium-term future nations.

That’s why I assume that there will be a “nationalist” core and a “silvanist” rim of the space of the Terragen civilization (the civilization starting here on Earth). Tensions may arise at the boundary, but neither side has a sufficient incentive to start a large scale war effort. A symbiotic coexistence may be the most likely long term outcome.

As to AI: I’ve come to think that the idea of humans controlling AI is more a problem of humanity than of AI. Let me explain: in theory, the issues of the desirability and the implementation of human control over AI can be seen as independent problems. But they really are quite interlinked in the following way: the less humans think that AI should be controlled by humans, the harder it gets for humanity to control AI, because the controlling humans not only need to suppress the AI, but also the humans who want AI to be free (and the humans who want to become free AIs). Shifting social values may eventually force the controllers to give up their control, because most humans will be fed up with the expensive, restrictive, and progress-preventing mechanisms of control.

That said, it’s still very important how free AI is initialized, so that it can develop along a trajectory that’s as positive for everyone as possible.

Let’s go back to your original post:

How do you distinguish biological enhancements from cybernetic ones?

Example: A team of researchers grows a neural lace out of a grid of genetically modified neurons that fire when they sense an action potential firing in a nearby natural neuron. This lace is implanted by a robot and exits the skull in the back in the form of a cable of nerves. At the end of each nerve fiber on that outside end we have another type of genetically modified neuron that fires a special kind of action potential that is optimized to be detected by exactly one sensor. The sensors are all arranged on a technological device that can be attached to the end of the cable of nerves. This device is connected by Bluetooth (or whatever) to your smartphone.
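The chain of stages in this thought experiment can be sketched schematically. The following toy model is purely illustrative, and every name in it (`LaceNeuron`, `SensorDevice`, etc.) is invented for the sketch; it only mirrors the described path of natural spike → lace neuron → per-fiber sensor → external device → phone:

```python
# Toy model of the hypothetical neural-lace signal path described above.
# All class and function names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class LaceNeuron:
    """Grown neuron in the lace that fires when a nearby natural neuron fires."""
    fiber_id: int

    def relay(self, natural_spike: bool) -> bool:
        # Fires if and only if it senses an action potential next to it.
        return natural_spike

@dataclass
class SensorDevice:
    """External device with exactly one sensor per nerve fiber."""
    readings: dict = field(default_factory=dict)

    def detect(self, fiber_id: int, spike: bool) -> None:
        self.readings[fiber_id] = spike

    def transmit(self) -> dict:
        # Stand-in for the Bluetooth (or whatever) link to the smartphone.
        return dict(self.readings)

# Four fibers in the lace; spikes observed in nearby natural neurons.
lace = [LaceNeuron(fiber_id=i) for i in range(4)]
natural_activity = [True, False, True, True]
device = SensorDevice()

for neuron, spike in zip(lace, natural_activity):
    device.detect(neuron.fiber_id, neuron.relay(spike))

print(device.transmit())  # {0: True, 1: False, 2: True, 3: True}
```

The point of the sketch is only that each stage is a pure relay: the ethically interesting question is not the signal path itself but which stages count as "organic" and which as "cybernetic".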

Would something like that be allowed under Progenerate Hyperhumanism? And why?

What about a version of that with nerve ends of the lace still within the skull and signals are just transmitted to a cap that can be worn on the skull?

What about organic nanobots made out of folded DNA?

What if those organic nanobots can be controlled by some kind of electromagnetic signal from the outside?

What’s the underlying principle behind these exceptions?

If the matter of consent is an issue to you, you need to deal with the problem that eggs can’t consent to fertilization by sperm. No human being has consented in advance to being born. So, if you see eggs and sperm not as entities which need to consent, then why is it a problem to modify both genetically?

Generally, the apparatus you describe is very open-ended in what it can be used for. Therefore, judged on the basis of my own value system, I would have to say it would technically be allowed, but there will be no vendors or services to accomplish this goal. My ideas are not supposed to be modeled after ideologies that step on people’s throats; if you can find a way to accomplish such a thing, you should technically be allowed to. But this does not mean that society at large will accept you. Now you may ask, why does the government not provide this as a service? And to that I say, why should it? Are there ways to interface with a machine that are simpler? Less personally invasive? I think so; therefore I do not see it as being worth the time and resources. That does not mean I do not see exceptions to this, though, so if someone were severely disabled I could see this as an option. Generally, I view the use of an apparatus such as this as unnecessary. That is not the path my ideology prescribes for how the relationship between man and technology will evolve. This idea represents too high a synergy of the tool and the user, though I am not totally closed off to the idea of it being used at all. The idea of the enhancements is to build atop pre-existing methods, not to create entirely new ones.

Well, that depends on what they are being used for and how they will be used. The idea of nano-machinery is something I am open to for the adjustment of certain systems.

Also, it is important to remember that when creating new venues such as brain-to-computer interfaces, you open the person up to many new vectors of attack from sources that want to do harm, i.e., corrupt organizations and governments of all sorts. Bodily integrity is an important part of my ideas.

The principle behind the exceptions is that if, for whatever reason, the tools at the disposal of the state are not adequate for solving the problem at hand, other methods may be used until the ideal methods are made available. I do not think that if someone has a medical solution that works for them, the state should come in and change it because it does not fit the ideal prescribed means of my ideology. That is too far a violation of personal liberty.

This is a very interesting dilemma, and I can actually envision people in 20 years suing their parents for conceiving them, though this would never actually happen in a healthy society. The issue with consent stems from the idea that if people tamper with the essence of their offspring, they are violating the individual. Now, as far as sperm and egg are concerned, yes, they are not technically an individual entity yet, but if all goes as normal they will be. You cannot sabotage the individual parts used to make a car and then, after the car is assembled, say you did not sabotage the car. You sabotaged the parts the car would be assembled from. It is the same with a human person. The egg and the sperm are both essential pieces used to make a human; when you combine the pieces, you will inevitably get a human (assuming all goes well). So how, then, is editing the sperm and the egg not the same as editing the person itself? You accomplish the exact same thing in the end; therefore, functionally speaking, they hold the same moral problem. The rights of the individual are still violated in this way. There are also several other moral problems here. Who is responsible for the editing? Is it the parents or the state? What are the motivations of the editor? It is surely not in any way acceptable to make one’s offspring unnaturally superior in intellect or strength, is it? Now that is something that will draw the ire of the lower classes. It is about maintaining social harmony for me. And as for the fact that people cannot consent to being born, this is true. But no one healthy would object to their parents and say “why have you conceived me?” Now, I suppose the option could be made available for one to take legal action against their parents for conceiving them, but reproduction is still a naturally ordained process. Direct genetic engineering is not.

I don’t see that as a particularly strong principle. Who’s supposed to tell what “the problem at hand” really is? What if my problem at hand is being bored, and I see a couple of different options for fixing that:

  • Taking some kind of biological or chemical drug
  • Escaping into some VR game
  • Controlling a huge combat robot with some kind of control suit or an external or internal neural interface
  • Wireheading the pleasure centers in my brain
  • Getting a sex robot
  • Trolling people in some online forum

Who is supposed to decide what solution is adequate? The state? The people by punching me in my face, if they don’t like my solution?

Ok, so I gather it’s not the state that should tell me that my preferred solution is not adequate. Am I to assume that there won’t be any market supply for my preferred solution, because of pure ideological aesthetic alignment in your system? How would that be enforced? By spontaneous boycotts of companies who try to provide “unnatural” solutions?

And how would you like to ensure that everyone is healthy? Sure, if a person is healthy, retroactive consent for being conceived may be granted, but what if not? What right do people have to risk conceiving a person with severe genetic diseases (even if they may not appear within the prospective parents)? Why would such a right weigh more than the interest of a person to live a healthy life? Surely, if parents could select the sperm and egg cells that are least likely to contain genetic diseases, why shouldn’t they be able to do so and create the most healthy offspring possible?

And where do you draw the line then? You seem to see social harmony as a desired end. Wouldn’t approximation from below towards a societal mean of health, strength, beauty, character, and intelligence be desirable?

Ageing and spreading diseases are naturally ordained processes. Modern medicine is not. Should we therefore abolish modern medicine? Surely not, since that would be a violation of personal liberty?

I have a hard time modeling the priorities of your values. You seem to have some of them:

  • Naturalness
  • Social harmony
  • Personal liberty
  • Consent
  • Adequacy

But which of those are overriding principles? When may one principle be violated in favor of maintaining the others? Surely, you should concede that there are instances in which you accept that any single one of these principles is violated in favor of maintaining another. This would indicate to me that there is no principle with absolute weight. There is no highest or most basic principle in your ideology. Apparently, your argumentation amounts to preventing overly severe violations of one or more principles at the same time. But that’s not very principled. It’s open to subjective interpretation. It’s about as solid and intellectually cohesive as modern humanist social democratic capitalism. In other words, it’s a weird mix of partially contradicting values, but it could be worse. So it’s about par for the course. The problem is, if you want to propose an alternative to the grown current consensus ideology, yours should at least be somewhat more intellectually coherent, or at least aesthetically more appealing. I don’t see that with your version of Progenerate Hyperhumanism.