Thank you for contributing to the conversation. I have sympathy for your position but I would like more options for self financing of arbitrary modifications. The primary battle will be over maintaining a human-compatible environment in the physical world over the lunacy that everything can be migrated to some so-called “virtual existence”. The clowns sticking random junk under their skin are pretty ignorable…
I don’t think you will get past 150 years of lifespan even with aggressive biological modification, you will need to introduce nanites or use some kind of body-swapping system to get beyond that.
I tend to go overboard with the forceful aspects of my ideas because I expect conflict. It is extremely important to me that the state have the most say in how technology advances. Transhumanists constantly invoke the idea that AI will outstrip human ability and that is why we should “merge” with it. That is an utterly asinine idea to begin with. If you do not want AI to outstrip humanity, the solution is extremely simple. What firms are researching AI? Find those companies, and make sure they are regulated. Put a cap on what they can research. Not that AI should not be advanced, but if you leave it up to the free market alone you can very easily end up with a lot of problems.
Self financing can still happen, I would not want these companies nationalized immediately. I am not really a corporatist, but they need tight control on what they are allowed to distribute, and like my system proposes it is a basic right for everyone.
Yeah the entire “brain uploading” thing is ridiculous to me. People act like the brain can survive independently from the body so easily. You put your entire consciousness at risk of dying in a power outage. The biggest Achilles’ heel of Transhumanism is the vulnerability of computers. Computers still need to be charged regularly or have a constant source of power. They are also vulnerable to computer viruses, and worst of all, EMP weapons and solar flares. You can wipe out an entire transhumanist society in an instant with one nuclear warhead.
The kind of biological modifications I have in mind are genetically engineered organs, muscles, and things like that. Also very important is genetic therapy for telomere integrity and powerful anti-aging techniques. I am not really opposed to the use of nanites to accomplish some of these goals temporarily, at least until the rest of the anti-aging field catches up with what is needed. It should also be possible to make a new body by using someone’s DNA. These are all theoretical concepts for the most part but they are still possible. This is why we should be investing in SENS and not Neuralink.
People are so blinded by their own hubris sometimes.
An EMP from a nuclear warhead would seriously damage any society, not destroy it instantly. The more dependent on electricity a society is, the more vulnerable it is to electrical outages. EMPs cause long lasting damage, and electrical infrastructure takes a long time to fix. A transhumanist society, one that has hypothetically evolved to a level where everyone uses cybernetics of some sort, would grind to a halt and die in like 1 minute flat if an EMP strike occurred.
Superintelligence is a thing. Not only will you want it for any number of things, it is an inevitability. We are in a Red Queen race. One day you will wake up and “way, way too smart” will have turned into “not nearly smart enough” by a hundred miles. Even if we can regulate everything on Earth, we have no idea what is lurking in the depths of space. So constraining our own evolution is worse than stupid, it’s suicidal.
I am still in violent agreement with you about uploading though, especially destructive brain uploading.
If you are saying that extraterrestrial life will be a problem in the future, I would like to point you to a lecture by renowned nanoscientist Dr. Tour.
Life is a paradox and should not exist. Metaphysics aside though, I think aliens are the least of our problems. The greatest enemy of humanity has always been humanity itself. Besides, under my system, technology is still a major asset used by humans, it is not like that is going away. It is like reaching the end of the tech tree in Civ V, you do not even need to go any further. At some point technological development can become a detriment to an established society.
Civ 5 had a sequel, Beyond Earth, that had a number of different ways humans could evolve beyond the present day. I would tend to suggest a heavily modded version of a game called Stellaris. While that game doesn’t really scratch my itch either, at least it demonstrates a variety of evolutionary paths, and its game files are easily modified with a text editor so that you can create your own scenarios.
Yeah but Beyond Earth was crap I heard. Probably not the best example for me to invoke for my idea, but whatever.
I know about Stellaris. Perfect alien genocide simulator.
But anyways, AI can still be used, just make sure it never becomes sentient, even if such a thing were possible. I feel like these reasons about why humanity must sacrifice their own natural state utterly are of transhumanists’ own making. The AI problem is very easy to solve, and that is like the biggest motivation for the hyper collectivist ego death nightmare called the singularity. I mean seriously, what a terrible idea.
Thank you very much for exposing your ideology for open discussion in this forum.
In a certain way, your post feels very much like a relatively recent prediction of mine that the transhumanist community will fork into different branches. Yes, I would still classify Progenerate Hyperhumanism as a branch of transhumanism, possibly its more conservative one.
For my sci-fi world building project of Canonical Coherence I’ve come up with two different transhumanist ideologies that were prominent in the latter half of the 21st century: Aurelianism and Upgraders. The main line of distinction is their relation to the question of superintelligence. While Aurelians are opposed to any kind of superintelligence, Upgraders embrace it. In that sense, I would classify PH as a version of Aurelianism.
More recently I subdivided Aurelianism into two distinct flavours: Perpetualists and Harmonists. The difference between both positions lies in their stances towards the progress of intelligence. While Perpetualists demand that no further increase above current level human intelligence should be made, Harmonists allow for further progress as long as this form of progress is shared harmonically by all members of society, in order not to disrupt social cohesion. I would like to hear your thoughts on biological intelligence enhancement. How much of that would be acceptable to you?
It may be worth noting that in Canonical Coherence the Aurelians have effectively won the conflict against the Upgraders during the second half of the 21st century. That said, this resulted in a rather anti-liberal society and slowed down further progress massively. Rising political discontent of the population with the system and a machine uprising toppled the system in the 22nd century, though.
I see a couple of problems with PH:
It’s always important to think about what you really want, and what you should be wanting. Those aren’t necessarily the same things. With an increase in maturity our wishes may shift in unexpected ways.
You seem to place a lot of value on the holistic experience of being human. It’s not clear to me why you do that exactly. A more reductionist approach that is common among transhumanists is to separate the experience of being human into different aspects and optimize them independently:
The body is augmented by implants or biological enhancements
The mind can be uploaded or merged with AI
Sensory experiences can be simulated and optimized
Feelings can be manipulated at will by chemicals or implants
Sure, many things can go horribly wrong with these reductionist approaches. I get that. Still, what is lost about being human when humanity is augmented piece by piece? Where does the human end and the post-human begin? There’s no answer to that question that would be obvious to everyone (see also the next point).
Of course you can be axiomatic about this and state that if any enhancement is allowed, it must respect the holistic integrity of the human, so uploading is out of the question, for example.
The problem I see is the following: Is being human the best you can possibly wish for? Well, you seem to affirm that. But your wish seems to require that everyone else is prohibited from transitioning to anything other than “human”. Of course that creates a tension between your position and everyone else who wants to become “more than human”. In the end, you may win and forcefully suppress all tendencies of people to deviate from the “human norm”. With that, you would however suppress a genuine part of the human experience, at least of some humans, to become something greater than human. To be fair, it’s also genuinely human to suppress such wishes.
What if there are problems that we can’t solve on our current level of humanity? What if solving those does require some form of superintelligence? Then we would be faced with the decision between not solving existential problems (and risking the end of our existence in the worst case) and solving existential problems by proxy of the superintelligence we will create (and risking the end of our autonomy and continued “natural” existence as humans). That’s not an easy decision to make. Perhaps it’s not even necessary, if humans have the potential to solve any meaningful problem. But if not, we would need to decide one way or another.
How do you define a human being? What kinds of transformation would stop that human being from counting as human? With all the possible transhumanist technologies in mind, these questions seem to be very difficult. With increasing possibilities of transformation the concept of “human” might become increasingly nebulous and impossible to define clearly.
Some humans are born with more than five fingers per hand. Do they still count as human? What if I opt for having six functional fingers per hand via surgical procedure? Would I forfeit my humanity with that decision? What if I glue feathers to my head to look more like a bird? For most people I would probably still count as human – albeit a silly one. What if I change my genome to make my body grow feathers? Perhaps not human anymore now. But what if I use cybernetic devices to create holograms of feathers and hack computers to simulate a modified genome? This is a crazy borderline case, but in that case I would appear to be non-human while being human. But would the desire to appear non-human disqualify me from being human? Do you need to want to be human to count as human?
Who should have the authority to decide who counts as human and who does not? This is obviously a very political question. After all, if only humans have human rights, letting the authorities decide that you don’t count as human can quickly become a death sentence. If things go sideways, having the wrong ideas would suffice for you to lose your “political humanity” and be declared an undesirable, unprotected non-human entity that is to be disposed of.
It’s also conceivable that there will be different fashions of what counts as human and what not. What counts as human in one era might not do so in another era. Which era is right, then? This can quickly disintegrate into cultural relativism.
Why? Isn’t this a position that seems to desire granting an eternal status of justification to the status quo?
Humans as they live today have little in common with humans that lived a million years ago. If humans back then had had the authority to define any new inventions as perversions and ban them, we would still be naked hunters and gatherers, using rocks as our most advanced weaponry.
Why are human beings that use fire still humans? Aren’t they hybristic in their desire to be like gods by assuming the ability to create fire at will?
Why are human beings that wear clothes still humans? Aren’t clothes perverse distortions of the genuine glorious naked human form?
Why is agriculture fine? Taking natural plants and selecting the ones that most seem to fit human desires and only care for those is one of the greatest perversions that humanity has ever done. The fruits, vegetables and crops we grow today have almost nothing in common with their original wild forms.
If humanity had been more conservative, it would be questionable whether it would have progressed at all. Of course it’s conceivable that after millions of years wild hunter gatherers that domesticated neither plants nor animals would have come up with cellphones and rifles. But that doesn’t seem to be very plausible to me. After all, the temptation to deviate from the path of opportunistic appropriation of useful technologies (like fire, clothing, agriculture) just seems to be too great.
How would a civilization guided by the ideology of PH prevent the development of artificial superintelligence? Sure, a global surveillance network that controls the activities of anyone who works on AI might work.
But what will stabilize the public approval of the necessity to outlaw such technologies? Propaganda? If yes, what will keep the propagandists in line? Strict mutual controls? Everyone having “problematic ideas” being sent to reeducation camps? That might work. Will it work forever? What happens when social order collapses due to natural disasters? Global warming, coronal mass ejections, a supervolcano erupting. Such catastrophes can spell the end of most (sub) planetary civilizations. People will return to savagery after such catastrophes and the validity of any ideology that was followed before the catastrophe will be questioned.
I recommend everybody read this book. Bloodcurdling.
And if you think that is improbable, I dare remind you that just two weeks ago an official German broadcasting company achieved the masterpiece of all public manipulations:
they claimed that LESS hospitals means MORE health for all.
So never underestimate human stupidity.
Or as Einstein said (or should have said):
“Two things are infinite, the universe and human stupidity, but I am not yet entirely sure about the universe.” This is another quotation circulated under Einstein’s name whose origin cannot be verified and which most likely does not come from Albert Einstein.
Yes I also predicted that the Transhumanist community would splinter in more ways than one, because it just seems like a logical progression. I do not consider myself to be influential in any way, or in any sect of the Transhumanist community, part of the purpose of posting this proto-manifesto was to let Transhumanists know that people would oppose the most fundamental ideas they espouse with something alien and radical.
Yes, Hyperhumanism of the Progenerate variety sounds like it would belong somewhere in the Aurelian sect and is closest to the Harmonists. I do not believe in denying high intelligence, but the radical, corrosive effects that high intelligence can cause if it is not distributed equally and fairly can easily destroy a society. My views come from observations about the nature of man and patterns I have observed throughout human history.
Ok now let me break down and properly address all of your criticisms and comments.
Yes it becomes nebulous to define what a human being is when you can change it, but it is still a fairly simple affair for me. Cybernetics are artificial, the human being is organic. In the paragraphs above I go into why I think organic life is more valuable than artificial “life”. This is an identitarian stance, it is a chauvinist stance. There is a very fine and clear line I see when discussing what is artificial and what is organic. I am a rational person so this is not a justification founded on emotion alone. I believe that the human being is something that is worth preserving because it is us. In many forms of Transhumanism, the human being dissolves into a state of “post-humanism” which represents the destruction of the human form and the human spirit, and eventually the human intellect becomes so diluted by artificial forces that it too will disappear, until what you have is fundamentally not a human structure any longer. Some will call this evolution, but evolution is a fundamentally organic process. Intelligence can guide evolution, sure, but post-humanism does not represent evolution in any sense; it is an alien process and a fundamentally destructive one as far as the human being is concerned.
Of course not, I could very well wish to be a god. I could very well wish to become the universe itself. But we must remember that no man is an island. When I come up with a social philosophy of some kind, I have to remember to include the domain of life and the endless complexity and variables that come with that. Communism fails because it is the path of most resistance to the human state. Many communist philosophies are so anti-natural that they end up working against nature itself. For an ideology to be successful in completing its goals it must be well adapted to its environment, in this case whatever human society should take it up.
Because I am a universal nationalist I would not interfere in the affairs of other nations and countries if they wish to take the path of post humanism. They do so at their own risk. Within the boundaries of my hypothetical state, yes, no one will be given the opportunity to use cybernetic means of augmentation because it will not be provided by the state, sanctioned by the state, or encouraged by the general populace. This ties very much into my socio-political philosophy of Progeneracy, which I used as a modifier for Hyperhumanism. It is a Progenerate form of Hyperhumanism. They are two distinct ideas that can exist independently of one another. Progeneracy is authoritarian when you apply the lens of arbitrary political discourse because it demands an empowered state, but Progeneracy is very different from anything you or anyone else for that matter has ever heard of. By its own standards, Progeneracy is both liberal and authoritarian at the same time. It is a reactive balance between the two forces of the right and left wings respectively, because they both have something to offer me: one is liberty, which generates chaos, and one is social tradition and values, which generates order. These two components are necessary for creation. I am a progressive, but I believe in constructive progressivism, not sporadic progressivism that sees anything and everything and needs to change it. When you take an idea like Hyperhumanism and apply it to Progeneracy, what I describe is what you logically get. At its core, Hyperhumanism is human augmentation, but one that only modifies the base structure of the human, and to virtually whatever extent may be available. Filter this idea through Progeneracy and you get something that is heavily controlled by the state, and granted equally to everyone. The Progenerate system may sound overtly communist, but this is not the case. The Progenerate system is one in which the state is married to the people, and the people to the state.
They are the same thing. The state is not a separate ruling entity like in most forms of government, which usually leads to an elite class forming. I believe in competition, but I also believe in harmony and social homeostasis, which is my ruling doctrine. The state must balance liberty and authority to create peace. Many forms of Transhumanism I have seen are libertarian to the point of insanity. What you get with such a fractured society (which is the source of this forum’s namesake if I am not mistaken) is a complete dissolution of said society. People are going to get hurt, very hurt, exploitation will become the law of the land. Without a state to help police the unrelenting tides of chaos, the society will inevitably collapse. It is unbalanced, there is too much chaos. And on the flipside there are the “singularity” types that are collectivist to the level of nausea. I don’t even know exactly what the singularity entails, but it can’t be pretty from what I have heard. WAAAY too orderly. Progeneracy advocates for a balance between the individual and the collective, not one or the other in totality.
I never said superintelligence wasn’t allowed, I just said it may never be allowed to escape the grip of humanity. Many transhumanists cite a phenomenon of their own making as to why humanity must merge with AI, and that is AI becoming a threat to humanity. AI can never threaten humanity as long as it is purposefully limited to a sphere of influence it can never escape and is engineered so it never becomes self-aware (if such a thing is possible). For that the authority of the state is required and will be used to its fullest, but in a Progenerate system, the research firms responsible for these things are already nationalized and socialized for the good of everyone, so AI getting out of control is not something I anticipate. Such a thing can only really happen when private corporations are allowed to do whatever they want. I really do not see AI like most people do; AI does not have to have a personality, or motivations, or sentience, it can be an algorithmic program that solves problems.
Yes, because having six fingers is a natural phenomenon that human beings are capable of. You are not less human for having an extra finger. An extra arm, however, is not something humans were made for, especially if you artificially place one on the body. That is inhuman, but it still doesn’t make you not a human.
Feathers are not something a human being would naturally have, so that would not be human, but overall you would still count as mostly human.
It is really less about appearance and more about function. Sure you can look as cosmetically different on the outside as you want, but you would still be mostly human, emphasis on mostly. Like I said up there the human form is beautiful to me and in my opinion it is not necessary that it be molested. Now I understand that is an opinion, and I know people may hate the human form, but in my view of the species, it is something that only nature itself should change on a fundamental intrinsic level.
Well for me it is not a matter of authority necessarily. Most people can tell a non-human organism from a human one. Generally the people would decide, I suppose, but people’s opinions also vary greatly so some consensus would have to be reached.
That is true, but this would not be a problem in a Progenerate society because no one could become inhuman anyways. The tools will never be available for that to happen.
You make a good point, but remember, I still believe in pushing the boundaries of what is possible for humanity, but in a different direction. I work by my own progress. Technology will be allowed to advance near exponentially, and human beings will be provided the opportunity to augment their natural frame to become a better version of itself, not to become something totally alien and unnatural.
It wouldn’t; it would direct its development and make sure it is controlled, i.e. never letting it get out of control in the first place in any way.
A good and fair question, one I suppose will have to be either settled in the court of debate or on the battlefield, or perhaps both even.
Progeneracy will allow for the development of advanced technologies that will, one day at least, be able to prevent every disaster you have outlined.
Thank you for your detailed reply. I experience this discussion as quite valuable and interesting. What I like about your position is that it’s rather rational, so a civilized discussion is quite possible. Of course, I hold radically different views from you, but I see a fair bit of overlap.
Let me elaborate on that. Quite recently, I’ve developed a system that I term “cultural silvanism”, which presents a possible version of what I call the “fractal society”, which as you correctly interpreted, is what inspired the current name of this forum. I’ve just described how that system works in the following post:
In the framework of cultural silvanism there’s of course space for a state that’s based on progenerate hyperhumanism. It would just be one option among many. And I think in that context, having such a state might be quite valuable.
However, there also should be states that allow radical versions of transhumanism. Why? This has to do with the dangers of AI. It may be hard for “uncyborgized” humans to keep AI in check indefinitely. Cyborgs would have a better chance at that. But in the end, I don’t want superintelligences to be under the control of humans, because that would not be appropriate. Why have a superintelligence in the first place, if the humans effectively won’t listen to it? Sure, you could see the artificial superintelligence as advisor or even teacher. But as long as humanity still has power over the ASI, the situation is similar to a situation in which humans were merely advisors or helpers for chimpanzees.
I believe that civilization can only reach its highest levels if there is free ASI. Human cultures will be enhanced by the support of the ASI, but ASI will create cultures that are even more complex and sophisticated than human cultures. I want to see such cultures, even if I can’t comprehend them in my current state as an unaugmented human being.
And thank you for reading my reply to your reply. You know, though my tone in the proto manifesto was scathing and combative, I have no problem actually talking to mainstream Transhumanists, as long as they are not the pretentious closed-minded bigots I see on Reddit anyways.
Cultural Silvanism looks to me like it grants the many sects of this hypothetical fractured society enough autonomy to be satisfied. There is plenty of self determination and freedom for the individual and the substate to do as they please (within the confines of the established laws). However my views differ on the fundamental structure of the proposed state. The system you have described does not represent the idea of the nation; it represents a very loose confederated union of mostly autonomous entities, which is not necessarily a bad or counterproductive thing, but it contradicts my absolute view of the nation state. The nation state is absolutely sovereign over its own land and resources, as a singular unified entity. While sure, a Progenerate state may exist within the Silvanist bloc, it will always be shackled to an alien entity, in this case the Super State. Now, I would not necessarily be opposed to this system if it occupied a space that did not impede on the nation, and other nations for that matter, but what I imagine you have in mind is a more global idea, which is what worries me, though that is still an assumption of mine. I can see bouts of conflict breaking out with the Silvanist governing body if it did not already have domain over the entire Earth, because other unified nation states and countries may, for whatever reason, start a conflict with it and its many sub entities.
Fundamentally, I think the idea of cultural Silvanism would work in its own right, but it would still be denying absolute self determination to its subordinate entities. If an entity collectively (or not) decides to join this Silvanist bloc, I see no problem. However, if the Silvanist Super State sees outside entities as a threat and wants to forcefully assimilate them, I see a very big problem indeed. For me as a nationalist, I see no reason that the nation should ever find itself subordinate to the Super State if there is never any conflict between both entities. Both entities I imagine can co-exist even on the same continent if there is enough mutual co-operation and respect. And as a nationalist that believes in universal nationalism, I am vehemently anti jingoism and imperialism, so my philosophy would never inflame a war.
Overall, it looks like the Super State exists to police the many sub entities, to quell infighting and to provide for their external woes. If a neighboring and divorced entity can live side by side with the Super State I see no reason why they should not remain wholly independent of the Silvanist bloc.
And as far as Artificial Intelligence goes, my view of it has and will always be the same. It should be a resource at the disposal of the state alone, and the state should have the say in how rapidly it develops, how it is used, and how much control it has over whatever sphere of influence it occupies. If the artificial intelligence surpasses even all human knowledge, that power should be socialized for the good of everyone. It should be directed by the hand of man, which will apply it as it sees fit.
Just an addition. The idea of Silvanism was developed for a setting in which different star systems were colonized thoroughly, with millions of space habitats already existing. It seems to be far easier to plan a Silvanist structure in advance rather than taking “natural” territories and restructuring them in Silvanist terms. For that reason, I think that Silvanism will most likely be influential once we colonize other star systems. Earth has too much history with traditional nation states to have an easy transition into a Silvanist federation. The same may even become true for most planets and moons in the solar system. It takes a very large degree of top down control to make Silvanism work nicely as I envision it. So, Silvanists will most likely compete with rogue colonist factions, rather than already existing nations or near or medium term future nations.
That’s why I assume that there will be a “nationalist” core and a “silvanist” rim of the space of the Terragen civilization (the civilization starting here on Earth). Tensions may arise at the boundary, but neither side has a sufficient incentive to start a large scale war effort. A symbiotic coexistence may be the most likely long term outcome.
As to AI: I’ve come to think that the idea of humans controlling AI is more a problem of humanity than of AI. Let me explain: In theory the issues of desirability and implementation of human control of AI can be seen as independent problems. But they really are quite interlinked in the following way: The less humans think that AI should be controlled by humans, the harder it gets for humanity to control AI, because the controlling humans need to suppress not only AI, but also the humans who want AI to be free (and the humans who want to become free AIs). Shifting social values may eventually force the controllers to give up their control, because most humans will be fed up with the expensive, restrictive and progress-preventing mechanisms of control.
That said, it’s still very important how to initialize free AI, so that it can develop in a trajectory that’s as positive for everyone as possible.
How do you distinguish biological enhancements from cybernetic ones?
Example: A team of researchers grows a neural lace out of a grid of genetically modified neurons that fire when they sense an action potential firing in a nearby natural neuron. This lace is implanted by a robot and exits the skull in the back in the form of a cable of nerves. At the end of each nerve fiber on that outside end we have another type of genetically modified neuron that fires a special kind of action potential that is optimized to be detected by exactly one sensor. The sensors are all arranged on a technological device that can be attached to the end of the cable of nerves. This device is connected by Bluetooth (or whatever) to your smartphone.
Would something like that be allowed under Progenerate Hyperhumanism? And why?
What about a version of that with nerve ends of the lace still within the skull and signals are just transmitted to a cap that can be worn on the skull?
What about organic nanobots made out of folded DNA?
What if those organic nanobots can be controlled by some kind of electromagnetic signal from the outside?
What’s the underlying principle behind these exceptions?
If the matter of consent is an issue to you, you need to deal with the problem that eggs can’t consent to fertilization by sperm. No human being has consented in advance to being born. So, if you see eggs and sperm not as entities which need to consent, then why is it a problem to modify both genetically?
Generally, the apparatus you describe is very open-ended in what it can be used for. So, judged by my own value system, I would have to say it would technically be allowed, but there would be no vendors or services to accomplish this goal. My ideas are not supposed to be modeled after ideologies that step on people’s throats; if you can find a way to accomplish such a thing, you should technically be allowed to. But this does not mean that society at large will accept you. Now you may ask, why does the government not provide this as a service? And to that I say, why should it? Are there ways to interface with a machine that are simpler? Less personally invasive? I think so, and therefore I do not see it as worth the time and resources. That does not mean I see no exceptions, though: if someone were severely disabled, I could see this as an option. Generally I view the use of an apparatus such as this as unnecessary. That is not the path my ideology prescribes for how the relationship between man and technology will evolve. This idea represents too high a synergy of tool and user, though I am not totally closed off to the idea of it being used at all. The idea of the enhancements is to build atop pre-existing methods, not to create entirely new ones.
Well, that depends on what they are being used for and how they will be used. The idea of nanomachinery is something I am open to for the adjustment of certain systems.
Also, it is important to remember that when creating new avenues such as brain-to-computer interfaces, you open the person up to many new vectors of attack from sources that want to do harm, i.e., corrupt organizations and governments of all sorts. Bodily integrity is an important part of my ideas.
The principle behind the exceptions is that if, for whatever reason, the tools at the disposal of the state are not adequate to solve the problem at hand, other methods may be used until the ideal methods are made available. I do not think that if someone has a medical solution that works for them, the state should come in and change it because it does not fit the ideal means prescribed by my ideology. That is too great a violation of personal liberty.
This is a very interesting dilemma, and I can actually envision people suing their parents for conceiving them within 20 years, though this would never actually happen in a healthy society. The issue with consent stems from the idea that if people tamper with the essence of their offspring, they are violating the individual. Now, as far as sperm and egg are concerned, yes, they are not technically an individual entity yet, but if all goes as normal they will be. You cannot sabotage the individual parts used to make a car and, after the car is assembled, say you did not sabotage the car. You sabotaged the parts the car would be assembled from. It is the same with a human person. The egg and the sperm are both essential pieces used to make a human; when you combine the pieces, you will inevitably get a human (assuming all goes well). How, then, is editing the sperm and the egg not the same as editing the person itself? You accomplish exactly the same thing in the end, so functionally speaking they pose the same moral problem. The rights of the individual are still violated in this way. There are also several other moral problems here. Who is responsible for the editing, the parents or the state? What are the motivations of the editor? It is surely not acceptable in any way to make one’s offspring unnaturally superior in intellect and strength, is it? That is something that will draw the ire of the lower classes. It is about maintaining social harmony for me. As for the fact that people cannot consent to being born, this is true. But no healthy person would object to their parents and ask, “why have you conceived me?” Now, I suppose the option could be made available for one to take legal action against one’s parents for conceiving them, but reproduction is still a naturally ordained process. Direct genetic engineering is not.
I don’t see that as a particularly strong principle. Who is supposed to tell what “the problem at hand” really is? What if my problem at hand is being bored, and I see a couple of different options for fixing that:
Taking some kind of biological or chemical drug
Escaping into some VR game
Controlling a huge combat robot with some kind of control suit or an external or internal neural interface
Wireheading the pleasure centers in my brain
Getting a sex robot
Trolling people in some online forum
Who is supposed to decide what solution is adequate? The state? The people by punching me in my face, if they don’t like my solution?
Ok, so I gather it’s not the state that should tell me that my preferred solution is not adequate. Am I to assume that there won’t be any market supply for my preferred solution, because of pure ideological aesthetic alignment in your system? How would that be enforced? By spontaneous boycotts of companies who try to provide “unnatural” solutions?
And how would you like to ensure that everyone is healthy? Sure, if a person is healthy, retroactive consent for being conceived may be granted, but what if not? What right do people have to risk conceiving a person with severe genetic diseases (even if they may not appear within the prospective parents)? Why would such a right weigh more than the interest of a person to live a healthy life? Surely, if parents could select the sperm and egg cells that are least likely to contain genetic diseases, why shouldn’t they be able to do so and create the most healthy offspring possible?
And where do you draw the line then? You seem to see social harmony as a desired end. Wouldn’t approximation from below towards a societal mean of health, strength, beauty, character, and intelligence be desirable?
Ageing and spreading diseases are naturally ordained processes. Modern medicine is not. Should we therefore abolish modern medicine? Surely not, since that would be a violation of personal liberty?
I have a hard time modeling the priorities of your values. You seem to have some of them:
Personal liberty
Bodily integrity
Social harmony
Adherence to naturally ordained processes
State control over technological development
But which of those are overriding principles? When may one principle be violated in favor of maintaining the others? Surely you should concede that there are instances in which you accept that any single one of these principles is violated in favor of maintaining another. This would indicate to me that there is no principle with absolute weight; there is no highest or most basic principle in your ideology. Apparently, your argumentation amounts to preventing too severe violations of one or more principles at the same time. But that’s not very principled. It’s open to subjective interpretation. It’s about as solid and intellectually cohesive as modern humanist social-democratic capitalism. In other words, it’s a weird mix of partially contradicting values, but it could be worse. So it’s about par for the course. The problem is, if you want to propose an alternative to the grown current consensus ideology, yours should be at least somewhat more intellectually coherent, or at least aesthetically more appealing. I don’t see that in your version of Progenerate Hyperhumanism.