
Evolution and Eugenics


(Michael Hrenka) #21

There is a need for a solid basis to judge what should count as “better” and what not. This is by no means an easy task. And there are certain contradictory extreme positions, each of which holds its own appeal:

  1. There is a single universal standard for what is better, and we need to find it (this is what I hope)
  2. There is a single universal standard for what is better, and it is identical to my own standard (this position could be called cultural chauvinism)
  3. There are only individual standards for what is better (this may be termed “subjectivism”, or cultural relativism, though the latter comes with a lot of philosophical baggage)
  4. There are no standards (nihilism)

We could discuss the implications of each of those (essentially meta-ethical) positions, but we cannot expect to arrive at a clear conclusion any time soon. Nevertheless, it’s helpful to keep those different positions in mind, because the basic position one assumes shapes the further line of reasoning. Technically, I should call myself an agnostic in this regard, because I do not know which of these basic positions (if any) is true, even though I personally hope that “1.” is true. As an agnostic I cannot base my reasoning fully on any of the basic positions, so I need to pursue a different approach, one that is robust with regard to whichever basic position turns out to be true in the end.

What kind of approach could that be? Unfortunately it’s not an elegant solution, but rather a pragmatic concession to my own ignorance. It’s a pragmatic approach that doesn’t rely on any ultimate philosophical foundation, because I do not have access to whatever constitutes the most fundamental philosophical truth. At the same time, I want to be as rational as possible and to avoid fallacies. Fallacious arguments can and should be attacked, so a position that relies on fallacious reasoning is not very robust.

What about human values? If we want to build an ethical framework on human values, we first need to define and figure out what human values actually are. Since philosophers can’t even agree on what “humans” are, this doesn’t seem to be a promising approach. Instead, if anything, we should be looking for universal values. If we are lucky, “human values” should turn out to be nearly the same as “universal values”, but there’s no guarantee of that. After all, it could be the case that position “1.” is true, but that human values do not meet “universal quality standards”. My own expectation is that human values, once sufficiently abstracted and refined, are probably pretty close to truly universal values, but the devil is in the details of what “sufficiently abstracted and refined” is supposed to mean exactly.

Anyway, I’m currently settling for something of a compromise between hypothetical universal values and actual human values. Let’s consider three candidates that I stumble upon again and again (perhaps because they are sufficiently abstracted and refined?):

  • Well-being
  • Freedom
  • Wisdom

Well-being

For a long time I have taken happiness or well-being as the foundational value with which to judge “goodness”. That’s basically a utilitarian position, and I actually identified strongly as a utilitarian. I’ve criticized this position in my central thread:

It’s probably not the worst philosophical position to take, but it comes with severe practical complications, which make it less than optimal for real world applications:

  • How do we actually define well-being?
  • How can we measure well-being properly?
  • Are we able to predict what will actually (or at least probably) increase our well-being?
  • Whose well-being do we need to take into account, and with what weight?

So, what would happen if we used the criterion of well-being for directing our own evolution? It might turn out wonderfully, or we might mess things up horribly, in ways we might currently not even be able to comprehend, simply because we were too stupid. While that’s something that can go wrong with every basis, my intuition tells me that it’s more likely to happen when optimizing for well-being rather than for freedom or wisdom.

Actually, if you were inclined to optimize only for happiness, you would only need to focus on the happiness set point and increase it as much as possible, since any other intervention plays little role for overall happiness. The result would be humans who are pretty much the same as us, just with more subjective happiness – from the inside that would be a tremendous difference, even though from the outside the difference might hardly register at all.

Wisdom

While I really think that we should focus on increasing our wisdom, this criterion seems to be the least practical when considering what conclusions one could draw from it in the realm of genetic enhancement. What makes a person wiser?

  • More experiences? Yes, pretty much, so we should aim at increased life spans, right?
  • Higher intelligence? Quite likely.
  • More empathy? I guess so.
  • A higher happiness set point? Perhaps not, if that makes people more comfortable and less likely to pursue novel experiences (though that is a questionable assumption).
  • Better health? What if we gain a lot of wisdom from dealing with the challenges posed by ill health?

So, we can’t easily draw clear prescriptions from the basic value of wisdom. The possible conclusions are just too contradictory and speculative.

Freedom

Freedom is the value I retreat to because of the issues faced with the previously considered options of well-being and wisdom. Compared to those two, freedom can be defined relatively easily, as the number of options one can choose from (though this confronts us with the issue of “free will” and related questions).
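To make the “number of options” definition a bit more concrete, here is a toy formalization (my own illustrative sketch, not anything from the literature): model the world as a graph of states, and measure an agent’s freedom as the number of distinct states it could reach within some time horizon. The `rooms` graph and the `freedom` function are hypothetical names invented for this example.

```python
from collections import deque

def freedom(start, neighbors, horizon):
    """Toy measure of freedom: the number of distinct states
    reachable from `start` within `horizon` steps (breadth-first)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == horizon:
            continue  # don't expand beyond the time horizon
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen) - 1  # exclude the starting state itself

# Example: five rooms in a line, 0..4. An agent in the middle room
# has more options than one stuck at the end of the corridor.
rooms = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(freedom(2, lambda s: rooms[s], 2))  # → 4 (rooms 0, 1, 3, 4)
print(freedom(0, lambda s: rooms[s], 2))  # → 2 (rooms 1, 2)
```

On this toy reading, a disease that removes edges from the graph, or an over-specialized adaptation that only works in a few states, directly lowers the freedom score – which is exactly the intuition behind the conclusions drawn below.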

Freedom is intimately related to intelligence, if intelligence is indeed seen as the capability to increase one’s freedom, as is suggested by the Wissner-Gross equation for intelligence (also see an insightful comment on that formula).
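For readers who haven’t seen it, the formula in question (the “causal entropic force” of Wissner-Gross and Freer, 2013; transcribed here from memory of the published form, so check the original paper) is:

```latex
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0}
```

where $S_c(X, \tau)$ is the entropy of the distribution of possible paths through state space over the time horizon $\tau$, and $T_c$ is a constant (the “causal path temperature”) setting the strength of the force. In words: an agent behaving “intelligently” is modeled as being pushed toward states that keep the largest diversity of future paths open – which is precisely the “freedom as number of options” reading.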

Anyway, if we accept the amount of freedom as basis to judge “goodness”, then we can actually draw some relatively strong conclusions for genetic enhancements:

  • Diseases (including ageing) and disabilities should be avoided as much as possible, because they decrease one’s freedom
  • Intelligence should be enhanced, because that usually increases one’s freedom
  • One should not too strongly optimize for certain environmental adaptations, because that would reduce one’s freedom to live in as many different environments as possible (one of the core strengths of human beings)

Still, there are many open questions with regard to freedom as a basis for “evolutionary / eugenic fitness”:

  • Whose freedom do we want to improve? That of human individuals? What about the freedom of groups? What about non-human persons? Could we even define a holistic “cosmic freedom”? How does the freedom of individuals add up?
  • How do we define freedom in the face of determinism or quantum randomness?
  • Doesn’t more freedom allow people to do more stupid things? Freedom without wisdom may not be a good idea, but on the other hand wisdom may not be achievable without having freedom in the first place.
  • Can we really expect to be better at predicting what will increase our freedom than what will increase our well-being or wisdom?
  • Are there perhaps multiple different kinds of freedoms that contradict one another?

Overall, I think that freedom is the best abstract basis for dealing with complex questions such as the quality of genetic enhancements. Do you rather agree or disagree with me?


#22

A good analysis. Morals, laws, rules and worldviews are definitely based on people’s belief systems. To go a step further, those positions can be developed and changed during a lifetime. The second position I would classify as the most common and least wise. It is a position almost every human being starts with. People learn as children what is right and what is wrong from their fellow human beings and internalise positions and standards that already exist. The first hard work in life is to learn existing beliefs, but every conflict and every doubt that arises has the potential to let a person grow out of his culture and become more reflective, become a rebel or a philosopher. To realise that other people, other cultures and completely different belief systems have their inner value and are as legitimate for people as one’s own cultural beliefs could end in a form of confusion and desperation, in feeling helpless and having lost orientation, so that the acceptance of variety that comes with (or: that is) cultural relativism could lead to nihilism. But the parting of ways could also lead to position 1. We could be conscious of our differences as humans, respect them and become agnostic, but at the same time we could have the idea that we as a species must have something in common. The human rights arose from this position, and this was the beginning of a universal standard for humans. But we can go a step further: we could have the idea that we must have something in common with every lifeform on earth.[quote=“Radivis, post:21, topic:1118”]
Overall, I think that freedom is the best abstract basis to deal with complex questions as the quality of genetic enhancements. Do you rather agree or disagree with me?
[/quote]

Yes, freedom is just another word for nothing left to lose :innocent:. No, seriously: freedom is a good idea, but not easy to grasp. The intelligence equation points in the direction of “more options”, but includes predictability. We had this topic here and remained in disagreement: [quote=“zanthia, post:14, topic:1118”]

No. That might apply to other animals that have no imaginative power and no abilities of abstract thinking (although in some situations even other animals can differentiate), or that cannot be informed about their situation. Being able to anticipate how long an unfree situation will last makes a huge difference compared to a situation about which you have no information. If the inability to act sufficed to feel unfree, everybody who uses the elevator would feel like someone in prison (and this might only be the case for people who have phobias). And it makes a huge difference to know the cause.
[/quote]

I can take the elevator if I can anticipate that I will be able to leave it and gain more freedom when I arrive at my destination on a higher level of the building. To me it is intelligent to include our power of imagination and anticipation when we talk about freedom.

And there is another problem with freedom we had here:

[quote=“Joao_Luz, post:13, topic:1118”] [quote]
The free man is the man who is not in irons, nor imprisoned in a gaol, nor terrorized like a slave by the fear of punishment … it is not lack of freedom not to fly like an eagle or swim like a whale.[/quote]

-Helvétius

This is one of the quotes in the Wikipedia article on negative freedom. Isn’t it the most anti-transhumanist thing one could possibly say? Isn’t our objective to allow humans to fly like eagles or swim like whales? That has been my conviction ever since I started identifying as a transhumanist, and I have never seen anyone inside the movement disagree with me in this regard.

That’s why I’m surprised that Amon Twyman used such a retrograde philosophical concept as a basis for transhumanist morality. Negative liberty is something that only appears to make sense in pre-modern political theory. I frankly don’t see how it would be fit to guide the radical transformations to the human species that our movement proposes, especially considering that it isn’t even fit to properly guide our modern lives.
[/quote]

Joao made a very good point. But nevertheless I am not convinced, and as I have stated repeatedly in this thread, I think “negative liberty” is a better point to start from, and it will be a challenge to find something superior (with more freedom to… act?).
If we take the position that freedom includes flying like an eagle and swimming like a whale, it will get us nowhere when we follow that path to its end. Then the consequence has to be that not only the way our bodily structure limits us is a lack of freedom, but the whole fact of having a body is a lack of freedom. Then we probably end up in the collective suicide of humanity to get rid of our bodies, because we must conclude that earth is an unacceptable limit, too. And if we are consistent, we have to consider space an unacceptable limit as well…
To me, this “freedom category” does not, at this time, make much sense as a way to improve our lives.
For practical and rational reasons, the violation of our negative liberty should be the priority to solve, because this is a problem humans cause, and humans have to solve it. It makes no sense if fellow human beings give me a pair of wings, but other fellow human beings lock me up in prison for the rest of my life. The positive liberty to gain a pair of wings only makes sense if my negative liberty is not violated and no human being prevents me from flying. And I would rather be out of prison with only two legs to move than be caged with wings.
So if freedom is our value and we want to maximize it, the first step has to be to change humanity in such a way that we ourselves are no longer the ones blocking our way. @ joao:
We cannot conclude that negative liberty “isn’t even fit to properly guide our modern lives” if we have not managed to respect the negative liberty of all until now.

Intelligence sounds like a good idea for increasing one’s freedom, although this idea becomes complicated when we have a look at savant syndrome. Is it a disability or an indication of astonishing human capabilities that could increase freedom within special areas? Could those abilities be called transhuman if we all achieve them with the help of technology and without the problems they cause on the other hand? (If that is possible… we will need an off-switch for abilities like that.) Could an especially high intelligence cause severe suffering for the intelligent person? Would it necessarily mean more freedom if an intelligent person deliberates over various options to act and comes to the conclusion not to act at all, while a normal person just decides to act out of her feelings, with fewer deliberations or none at all? So while it could be wiser not to act, to avoid predicted problems, blocking one’s own actions with overly careful deliberations does not necessarily mean more freedom. This is a personal trap I have experienced and would like to overcome, like my perfectionism. Maybe I am wrong, but in my experience perfectionism is a special feature of intelligent people and is not so common among average humans. And while I really love and enjoy a perfect work or a perfect piece of art, I have suffered my whole life from blocking myself from acting and risking something imperfect. And this situation feels extremely unfree! (For example: it is a bit of a horror to write here in English because my English is flawed, so there are many posts on my hard drive I never published.)


(João Luz) #23

I’m sorry I took some time to respond. I’ve been really busy lately.

Let’s start by making clear what our bases for ethical theory should be:

The difference is that, while emotions are personal and subjective, intrinsic human psychology is common to every human being on the planet.

While some people cringe when they face “unnatural” things like most transhumanist concepts and some don’t, everyone on the planet wants to be happy and to make other people happy.

That’s why we can base all our ethics on basic human values: because they are universal for humans. People can’t question them any more than they can question that 1+1=2.

It’s not a basic human value, but it’s derived from them. By improving human capacities we’ll be empowering people so that they can become more likely to make themselves, as well as others, happy.

OK, then, as I said before, the first philosophical problem we need to solve is not any of those that we’ve been talking about, but this:

Sentience is a good idea. Basic human values appear to be applicable only to sentient beings. Trees can’t be happy, nor make others happy.

Now, do we have proof that the zygote isn’t sentient?

I don’t know what you mean by “proof”, but the overwhelming majority of neuroscientists today, as well as most philosophers, believe that the “secret” of sentience lies in the anatomical and chemical structure of the brain. I believe this is the case for two reasons:

  1. Only lifeforms that possess a central nervous system have been observed displaying behaviours associated with sentience. Though this doesn’t prove anything, it is a considerable piece of evidence.

  2. Suggesting any other hypothesis, which I assume would have to be based on some sort of spiritualism, would directly violate Occam’s razor, thus passing into the realm of pseudoscience.

So, yes, I guess we can say that the zygote isn’t sentient, since it doesn’t have a brain nor any other structure capable of similar information processing.

The naturalistic fallacy that you are, in my opinion, applying is the concept that you keep repeating over and over again: negative liberty.

That concept places an inherent value on the so-called “natural state” and posits that harm inflicted by nature is not as bad as harm inflicted by other humans. You’re saying two individuals in the same situation are differently free just because they ended up there for different causes.

While I appreciate your arguments about people feeling differently and about the importance of imagination, I think that they are not useful for the task of defining liberty, since they would ultimately lead to a subjective instead of an objective version of this concept.

This is an important thing to consider when we talk about how to obtain more freedom, but not when we are defining or evaluating freedom.

It may be useful at this point to state that freedom isn’t a basic human value; it’s just a good way we have of upholding those values in certain circumstances. Having more liberty is only a good thing when it increases the individual’s capacity to make himself and others happy, as is generally the case but not always.

I think that the problem here is that you are trying to find a moral theory for transhumanism without thinking like a transhumanist.

I don’t know what you mean by “getting rid of our bodies” and “collective suicide”, but most transhumanists dream of mind uploading, which would mean abandoning our biological bodies to live as beings of pure information. I’m not quite as enthusiastic about that as they are, since I would personally prefer to keep enjoying physical reality for at least as long as I can, but I do believe that the development of uploading would increase people’s freedom and their ability to be happy.

If by “collective suicide” you mean actual suicide, the kind in which we all really die and cease to exist, then the question you need to ask is: would that really make anyone happy?

It’s not “unacceptable” for everyone; in fact, there are many transhumanists who also dream of transcending the laws of space and time itself by means of human intervention in elementary physics. I’m not sure if that’s possible, but I don’t see anything wrong with it, and I actually think of it as a noble goal.

Why not? Do you think we should adopt an ethical position that discourages ambition just because the fulfillment of that ambition seems distant? Aren’t you afraid of creating an unjustified moral dogma just to convince people to address the most “pressing” issues first?

If we want to use liberty as a concept in ethics we need to define it in a way that makes it:

  1. Objective

  2. Useful for the fulfillment of basic human values in as many situations as possible

Negative liberty simply does not fulfill these requirements.

While it’s true that humans should strive to end limitations of liberty imposed by other humans we must also seek to eliminate the limitations imposed by other forces.

Of course it doesn’t, that’s why we should consider liberty as one unitary concept and seek to protect it as a whole.

That makes sense, but would you rather be buried alive under the ground for natural reasons or be forbidden to leave your country by the government?


#24

This is a good statement, and it might be the connection to Radivis’ position 1:[quote=“Radivis, post:21, topic:1118”]
There is a single universal standard for what is better, and we need to find it (this is what I hope)
[/quote]

And therefore the chance to develop something like human rights, or to develop them further. But with your following conclusion, a new problem occurs:

You said people “cringe”. OK, but I would not use the limitation to “unnatural” things. Could we agree that there are different things and thoughts that make people “cringe”? You once used the expression “obnoxious”, and that is also a case where people “cringe” about whatever.
Then you complained that I should not impose a morality on you that is based not on your feelings but on mine. Why not? Because it makes you unhappy? What right do we have as transhumanists to impose a morality on many other people that would make them “cringe” and therefore very likely unhappy? Could happiness be the solid ground, the one and only universal value? Addicts could claim to be very happy when they have their drugs.

OK, please compare the following two statements…

  1. We humans are lifeforms and know we have qualia and are sentient. We should assume that all lifeforms possess a form of qualia and therefore a form of sentience, although it might be different from ours.
  2. Probably only lifeforms with a brain and a nervous system possess sentience. Other forms of behaviour that might look like qualia and sentience to us (like, for example, the oenothera) we could explain with instinct and stimulus-and-response. Although we cannot really explain what the difference between stimulus-and-response and sentience is, and we create the problem of the possibility of living zombies, we think it is irrational to believe that more lifeforms than those with a brain and nervous system are sentient. Once we thought only humans (and if we go back in time it gets even darker, but I will skip that topic) were sentient; then science assumed that apes might be sentient and intelligent too, maybe some other animals as well, although we could not really distinguish and explain what we mean by “intelligence”, “sentience”, “consciousness” and “qualia”. Is intelligence without sentience possible? What about bees? They seem to be intelligent, and OK, they have a nervous system, they could be sentient. Or is that too crazy? But if science wants to show us that plants could probably be intelligent, we “cringe”.
    http://www.bbc.com/news/10598926

…and apply Occam’s razor.

I am nearly powerless to avoid natural disasters and accidents. I could take precautions, and humanity could use knowledge and technology to make living on earth safer, but if something happens to me, it serves no purpose to complain about it. But if other people are violent against me, I can do something. Your comparison is flawed, because even a transhumanist could not free himself from natural disasters and accidents.

Now it becomes annoying that you cannot communicate with me without insinuations. I never said that “natural” is better than “unnatural”; I would not even use that strange dichotomy. But you constantly confuse me with the usual enemies of transhumanism. You assume that I “cringe” all the time, although the only thing I really cringe about in this discussion is giving parents instead of scientists the right to decide how the future generation should look, should be, should fit in and should make parents happy. And I never said that uploading should not be done, because I have often wished that I could leave my body and become something different. To me, personally, it would be nice if science could create humans that are more intelligent, more wise and more compassionate than the average humans of today. And no matter whether that goal is achieved with genetics, cybernetics, neuroscience, psychology or whatever, if we want to do the right things and avoid dystopias, we need to find the best ethical ground, as objective as possible, where neither my feelings and wishes nor your feelings and wishes should be relevant to deciding over humanity, just as the feelings and wishes of parents should not be relevant to deciding over humanity.


(João Luz) #25

I was merely giving an example. Of course people can cringe about many things, not only unnatural things.

Because you can’t expect me to adopt a moral position that I simply cannot understand without sharing your own subjective emotions. If we want to create an ethical theory that can be followed by every single human being we need to create one that every single human being can reach on his own, provided that he thinks rationally enough.

But, from what I’ve seen, you don’t really think that ethics should be based on people’s personal feelings, do you? So, what are you trying to get at with this?

I feel like you may be accusing me of some sort of hypocrisy. You may think I’m the one who’s making arguments based on emotions.

I’ll try to rectify this possible situation by clarifying one of my earlier statements.

My statement was not meant to be taken as moral argument, it was merely intended as an example of how greatly people’s feelings regarding this subject can vary.

It was a rhetorical strategy devised to support my argument (that, as you correctly identified, feelings cannot be used as bases for ethical and moral principles).

Now, I do think that people are entitled to act emotionally as long as that makes them or others happy and doesn’t harm anyone else. What they can’t do is impose their emotions on other people and force them to act the same way.

In other words, I think it would be legitimate for a parent to say “I like blue eyes, so my child will have blue eyes”, but it would not be legitimate for him to say “I like blue eyes, so every child on the planet will have blue eyes”.

By the same logic, it would be legitimate for you to say “I don’t like the concept of genetic engineering, so my child won’t be genetically engineered”, but not “I don’t like the concept of genetic engineering, so no child on the planet should be genetically engineered”.

When you put it like that, then yes, it does seem that the first scenario is simpler, but that’s only because you are not addressing its consequences:

  1. If there actually is a clear line between what is alive and what isn’t, then that line has to be a very thin one. What is a lifeform? A being that is made out of cells? A being that possesses genetic information? A self-replicating and complex structure with a carbon-based chemistry? Is a virus alive? Is a prion alive? That question is at least as complex as what is sentient, and it seems to me that any line we draw between the living and non-living worlds will be a mostly arbitrary one.

  2. If you manage to draw that line, you still need to justify why life should be related to sentience. Let’s suppose you draw the line at bacteria. Why should bacteria be sentient and rocks not?

  3. As I understand from your statement, you are not defending spiritualism (which would put into question many of the foundations of modern science, thereby creating a large set of complications all by itself), but you are saying that all other lifeforms may have some sort of structure that allows them to process information in a way that is different from the brain’s but still capable of making them sentient. In that case, you’ll have to explain to me what that different sort of sentience might be like and how we can find a structure inside a bacterium (for example) that would make that sort of sentience possible.

  4. You’ll also have to explain why brains (which clearly appear to be related to sentience and consciousness in animals) would have evolved if consciousness already existed in unicellular organisms.

  5. If unicellular organisms are sentient, doesn’t that mean that every single cell in our body should be sentient as well? If cells can be sentient on their own, why would they lose that sentience when associated with others?

Now, apply Occam’s razor.

I also think that your second statement is sort of a straw man of my positions, but I guess that’s understandable since I didn’t really elaborate on what I think.

Yes, I think that bees can be sentient, and no, I do not cringe when I hear that plants can be intelligent.

No one can completely free himself from everything (that would mean living in an isolated world where you control absolutely everything; well, actually that might be possible with far-fetched futuristic technology, but it would make ethics irrelevant). The only thing we can do is comparatively increase our freedom to make it greater than the one we had before.

There are also many people in the world who could harm you without giving you a chance of doing something about it.

I’m sorry, I never meant to offend you or to confuse you with someone that you are not.

The thing is that I don’t see that much difference between your arguments and those of the people that you call “the usual enemies of transhumanism”.

As I understand it, the concept of negative liberty implies a distinction between the “natural” and the “unnatural”, as only “unnatural” limitations on human actions would be considered factors that diminish one’s liberty. From all I’ve read about the concept, it’s all based on the idea that the “natural state” is the state of “maximum freedom”. Did I understand something wrong? If so, I would like you to clarify it a bit.

Then what did you mean by “collective suicide” and getting “rid of one’s body”? People generally say those things when they are talking about uploading. Did you mean actual suicide?

I don’t understand why that should be a consequence of my definition of liberty. How would it increase one’s ability to do what he wants?

Quite frankly, many times I find myself having absolutely no idea what you are trying to say, and of course that leads to assumptions, and assumptions lead to insinuations.

Yeah I want that too.

The thing is that not everyone wants the same thing, and, as long as a parent doesn’t harm a child (and no harm can come to anyone just from having other people decide something that would otherwise have been decided randomly), he should have the right to make him according to his personal preferences. What right do we, as a society, or nature itself, have to decide what kind of child he should have and raise?

Of course society should have something to say. As we don’t want to create a society of idiots and sociopaths, considering that that would slow down or invert human progress in science and morality, governments should probably institute intelligence and empathy quotas that would be raised over time, and we should definitely forbid any kind of modification that would surely or very probably do far more harm than good to the child.

But none of that means that parents shouldn’t have a say on the genetic characteristics of their children.


#26

Maybe they didn’t. Probably they use a central server for their network :wink: which we call the brain…

OK, and what would you call your second statement, which accuses EVERY hypothesis other than yours of being nonsensical spiritualism??? Scientific? :rage:

Why should humans be sentient and bacteria not? But I could give you an answer: because rocks are not autopoietic systems. https://en.wikipedia.org/wiki/Autopoiesis Bacteria and humans are.

Oh, if you like to, please go on. After accusing me of naturalistic fallacy and of not being transhuman, it could not become much worse. If it serves any purpose, I could be a witch… :rolling_eyes:

maybe, on a deeper level, it does not function that differently from the brain. have you clicked on the link about the plants? if transferring information suffices for sentience, it could be just electrical energy, light or whatever. another question could be: why can amoebas move, “sense” food and react to it, and why are some of them predators? to sense something and not be sentient makes no sense! :wink:
these might all be speculations, but what is worse is that all other theories that make sentience depend on a brain, a nervous system, or (for humans) entertaining and interesting behaviour are not much more than speculations without proof. so if you don’t mind i will keep my distance from all those speculations and remain at my position: that we have no proof of whether a zygote is sentient or not.

if you complain that you are unfree because you cannot swim like a whale and fly like an eagle, somebody else could complain about having a body at all. maybe he wants to be immaterial, with no limitations from gravity, earth or even space, and he doesn’t want to walk on the ground, nor swim in the water, nor fly in the air. isn’t that legitimate as well? to consider one’s own body as a limitation and not want to have a body at all? in that situation no upload would help, because you would be uploading your consciousness into something material. and there are already scientific indications that something lives on after death. so imagine different transhuman cults in the future, and one of them collects information about life after death, and once they are all convinced that they will live on, they kill themselves. wouldn’t a scenario like that be plausible or legitimate? they would just be doing what they want with their bodies, using science, information and technology to become something they believe will be better. based only on the “maximum liberty” to do what you want with your body, we would have to tolerate even a step like suicide. but would it be rational? ethical? and would somebody be unethical who tries to prevent their suicide?

i tried to question “happiness” as a universal value. if people cringe about transhumanism, they might not be very happy about it. should we impose our values of morphological freedom on them, even if it makes them unhappy? it would be a contradiction to take your “happiness” value and at the same time show disrespect when people cringe. but amon twyman’s “exit principle” could be a solution for it: everybody who dislikes transhumanism could leave transhuman communities and build something different that he likes more.
but what about drug addicts? we would have to respect them, because they will also claim to be happy as long as they have a sufficient supply of their chemical substances.

probably you are. how could we be sure that our deliberations are free from emotions? so many ideas and cultures in people’s minds claim to be the truth, wisdom, the good and the right… it is not easy. and we always stick to an approach that FEELS promising, appealing, right…
i want people to do what they want and to be what they want. i want them to have the chance to become more than they are. but i cannot be sure whether this is just a wish based on my feelings.[quote=“Joao_Luz, post:25, topic:1118”]
In other words, I think it would be legitimate for a parent to say “I like blue eyes, so my child will have blue eyes”, but it would not be legitimate for him to say “I like blue eyes, so every child on the planet will have blue eyes”.
[/quote]

it is legitimate to say “i like blue eyes and i want some for me”, but not “i like blue eyes, so i want someone else to have blue eyes”. what you say sounds as if parents possess their child and therefore could do what they want with it. could that be right?

yes, i think you understood negative liberty differently than i do. to me it does not need a distinction between “natural” and “unnatural”, but a distinction between human-induced unfreedom and unfreedom from other causes. if humans have built a road and you want to walk on it, but a human prevents you from arriving at your destination, he has violated your negative liberty, because without the intervention of this human you could have done what you are able to do on your own: walk on this road. and if humans invented technologies to enhance humans and demand a payment to enhance you, and you have the wish and the money to make the deal (“yes, please, enhance me!”), then every other human being, state, religion, collective, human influence etc. that wants to stop you from changing your body (with laws, rules, prohibitions, punishments, threats…) would violate your negative liberty, because without this human influence you could have made the deal on your own and enhanced your body. and to let you do what you are able to do on your own, and to let you be what you are able to be on your own, could be demanded as a human right.
but what could not be demanded is positive liberty. for example: you walk and your way is blocked by a mountain. you want to get to the other side without climbing the mountain. and now you are yelling at your fellow human beings that they should build a tunnel for you, because you feel unfree because of the mountain blocking your way. this positive liberty could not be demanded as a human right, because you would need others to build the tunnel; you are not able to do what you want to do on your own. and it cannot be right to compel other human beings to work for you to fulfill your personal wishes. in the case of positive liberty we could imagine a transhuman dictator who uses a collective, resources and a human workforce to do research to enhance him with a pair of wings. for the sake of the dictator’s positive liberty, other people have to invest their time and energy to fulfill his personal wish, because he is not able to enhance himself with a functional pair of wings on his own. a different case would be if a community decided to work on a pair of wings for humans. but a single human could not demand that this happen and coerce others to help him.
a mountain blocking your way and the lack of wings are not man-made causes of unfreedom. no other human being is responsible for your feeling unfree because of these circumstances, so it would not be right to demand that others change them.
but if you could make a deal with other human beings to enhance your body, nobody else should stop you. and everybody who tries to is responsible for your feeling unfree. so it has to be your human right to demand that no government, no religion, no other human being stop you from doing with your body what you want to do and are able to do on your own.
and this is how i understand the quotation:

“The free man is the man who is not in irons, nor imprisoned in a gaol, nor terrorized like a slave by the fear of punishment … it is not lack of freedom, not to fly like an eagle or swim like a whale.”

to be in irons, or imprisoned in a gaol, or terrorized like a slave by the fear of punishment means suffering from an unfreedom that other humans are responsible for. ethics is about human actions; we want to find out what the right way to act is. so if others make us suffer because they made us unfree when they imprisoned us etc., we can say that they acted unethically.

but to suffer because we cannot swim like a whale or fly like an eagle does not lie in the realm of ethics. no other human being is responsible for that kind of suffering; nobody acted unethically to cause it.


#27

but maybe we should sort out, what we have:


(Michael Hrenka) #28

Let me coin the phrase libertarian consequentialism (LC) here, whose maxim is the greatest increase of freedom. There are actually good reasons to take that as the most balanced overall position, rather than maximizing happiness, or merely protecting people’s negative liberty.

Consequences

First of all, LC is a form of consequentialism, so we are arguing with consequences and ends, rather than causes and means. There is certainly some value in considering the latter, but doing so mostly has strategic and heuristic value. There’s a useful generic thought experiment that makes the distinction clear: Imagine there is a button that magically gave you what you want. Would you press it? Would you press it even if it had the side-effect of killing a thousand people at random? If your answer to the second question is definitely no, you probably aren’t a consequentialist, or you can’t imagine anything that’s worth more than the deaths of a thousand random persons (what about world peace; the end of all diseases, ageing, and involuntary death; or a deus ex machina device that granted you any reasonable wish?).

Anyway, a big problem of consequentialism is that humans are rather bad at predicting the (direct or indirect) consequences of their actions – especially when it comes to emotions like happiness. Shifting the focus to something more objective than emotions, namely freedom, should enable us to make better predictions (even if freedom is arguably slightly less important than happiness).

Could we end up in a scenario in which people optimized their freedom, but ended up being totally unhappy? On the surface, that seems possible. Let’s first consider the case in which people decrease their happiness because they don’t value happiness: if they don’t value happiness, it is acceptable to them to become unhappy. So suppose instead that people do value happiness, and they end up in a situation with high freedom and low happiness. Then they could just undo the previous actions that made them unhappy, or alternatively find a path towards greater happiness. Since they are rather free, such paths almost certainly exist, even if they can’t simply retrace their steps to get back to the situation they were in before.

Restrictions of negative liberty must always be worth it

If we accept the maxim of striving for the greatest increase of freedom, it may sometimes be beneficial to allow local decreases of freedom in order to reach greater global freedom – this abstract line of reasoning is a possible basis for accepting the rule of law, for example. Justifying legal penalties simply on the basis of protecting negative liberty gets you into contradictions much more easily: You violate the negative liberty of those who violated the negative liberty of others? Huh.

A clear direction

Maximizing freedom provides a more resilient direction than maximizing happiness, or simply not diminishing negative freedom. People should become more powerful and more capable if they strive towards greater freedom. That’s very much in line with the general sentiments of transhumanism. Whether the obstacles to our freedom are other people or natural causes doesn’t matter. We shouldn’t accept any imposition we cannot fix with technology and intelligence.


System V factions
(João Luz) #29

I have answers to most of your points, but I think we should leave that question for a topic of its own. Trying to determine which beings are conscious and which are not is something that should never be discussed as a side issue of anything.

So, let’s now focus on the issue of freedom.

As I said, I don’t recognize any validity in the concept of “negative liberty”.

Yes, it is, why shouldn’t it be? I’ve seen a few transhumanists expressing similar desires, and I frankly don’t see anything wrong with them.

OK, first, you’re gonna have to explain to me what you mean by “something”, and second, you’re gonna have to show me that evidence or I’ll remain intensely skeptical.

But, let’s assume you’re right. Let’s assume that there is “life after death”. What does that imply?

Well, if death isn’t really the end of human existence, and the self actually survives the cessation of all biological functions, then why should we be so afraid of it?

Why wouldn’t it be legitimate? If those people really had evidence that they would become some sort of immaterial beings after death and they knew that was what they wanted, I don’t think they would be doing anything wrong if they “killed” themselves.

Since “death”, in this scenario, implies nothing more than leaving one’s body behind, it wouldn’t be any less rational or ethical than uploading. In fact, we could describe it not as “death” but as “uploading to a non-material substrate”.

Do you guys know anyone who doesn’t value happiness? If you do, I think that person is probably suffering from some sort of mental disease.

There are, in fact, some people who are selfless to the point of considering their personal happiness as secondary to the happiness of others or to a cause they deem “superior” to themselves (probably because they think that cause will generate great happiness to many people). That is a very noble attitude in my opinion, but it doesn’t mean in any way that they don’t value happiness.

I think that happiness is clearly a basic human value. If the purpose of ethics is not to make as many people as possible happy, then what is it? Is there anything else we can all agree on?

Their happiness is not more important than ours. Society should be organized in a way that gives people as much freedom as possible to find happiness, provided that they don’t step on other people’s freedom to find their own happiness.

Yeah, I agree with that. They could be like the Amish. That sort of social organization would definitely uphold basic human values. As long as people were allowed to enter or leave those communities whenever they wanted to, provided that they followed the rules, I think it would be a positive thing.

Drug addiction is a mental disorder, and it’s sure to decrease a person’s happiness in the long term. A person who has been deemed clinically incapable of rational thought shouldn’t be allowed to make their own decisions.

It’s true that you can’t know for sure why exactly you have that desire in your mind, but you can always rationally derive that desire from basic human values and find out that it is in fact the right thing to do.

As the child wouldn’t have the chance to decide anyway, I don’t see any basic human values being violated.

The parents don’t own their child, but they are in charge of making all important decisions for it until the child becomes capable of taking over that role.

I do think that people would have the duty of helping the dictator get wings, but the dictator would also have the duty to help them get whatever made them happy. It wouldn’t be right for him to make others miserable just so that he could be happy himself; on the contrary, he should strive to make others happy just as much as they should strive to make him happy.

Ethics should encourage people to help each other, not just to keep out of each other’s way. That would be more in the interest of upholding basic human values.

While I can’t demand that other people satisfy my every whim, I do think that, when they have the possibility of helping me without sacrificing too much of their freedom and happiness, they have a moral duty to do so.

No human would be the cause for that suffering, but every human on earth should help one to break free from that suffering.

Yes, I totally agree with that. Ethics is about doing what is right, not just not doing what is wrong. If nature is the thing that is stopping us from becoming happier, then we must fight nature with all we have!


#30

it is in the wrong order. and you use the word “nature” again and open up the useless dichotomy again. everything is nature. if you have a civil war at your door and you are constantly in fear that somebody will kill you or your relatives simply while walking in the streets, you might wish that your body were ironclad and that your hands were firearms. you might want to be enhanced like robocop or the borg to feel safe again. but in a case like that, the war is the problem and not the fact that your body is vulnerable.
transhumanism could go in any direction, depending on culture and societal problems. in areas where people are constantly in fear of starving to death but have the most insolation, people might want to be enhanced with photosynthetic plant cells in their skin. people who are often raped in their lives might want to change their sex or get rid of it altogether. drug addicts might want to get brain enhancements that make the use of substances obsolete. people with hyperphagia might want to be enhanced with a digestive system that could eliminate calories. people who feel lonely might want to change into a barbie- or ken-like body to become more popular and attract more friends. parents who are ashamed of their children might want to have a new one they consider more presentable. the long-term unemployed might want to be enhanced with multiple talents to end the suffering of feeling worthless.[quote=“Joao_Luz, post:29, topic:1118”]
Why wouldn’t it be legitimate? If those people really had evidence that they would become some sort of immaterial beings after death and they knew that was what they wanted, I don’t think they would be doing anything wrong if they “killed” themselves.
[/quote]

yes, it is legitimate! and all the other wishes i listed above are legitimate as well, because they would all increase individual happiness.

here we have a really big new problem. why should we declare somebody who doesn’t want to experience his life limited to endogenous opioid neuropeptides (nature?), but instead wants to experience the effects of drugs, as mentally ill and take away his freedom to pursue his own happiness? wouldn’t we have a double standard here? why should our dictator be allowed to experience the bliss of a pair of wings, but our drug addict should not be allowed to experience the bliss of his drugs? what is more rational about the one thing somebody wants to experience in his life than the other? and how could we justify that kind of paternalism?

“don’t step on other people’s freedom” is the core of negative liberty. you fought against it the whole time, but now it seems that you have introduced it yourself again.
what i wanted to show with my list of legitimate individual wishes to pursue one’s happiness is that transhumanism is not only about enhancements, and it wouldn’t suffice to just implement maximum freedom for everybody to enhance himself. it is about solving problems and avoiding dystopias as well. we would not solve the problem of war by enhancing people in the style of robocop or the borg, because the opponents would only develop better weapons. we would not solve the problem of starvation with photosynthetic body cells, nor all the human problems we have caused with our violence against each other.

ethics is about not doing what is wrong as well. and it is a concern of humanity, not only of the small group of transhumanists. and transhumanists won’t stand a chance of bettering humanity if most of us go on doing what is wrong.

so what could be a single universal standard for what is better, the value every human being could reach on his own, provided he thinks rationally?

i told you my opinion in this thread:

generally, i think that technology has to be explored by humanity and not be restricted. what is more important is that we work together and that our motives are good ones, so that one day we can trust our fellow human beings and not fear them.

and this is what is intrinsic to human psychology: we all have a longing to trust each other, but we can’t, because we have all experienced that we are violent towards each other. all our technologies and inventions to secure us, to give us more power and to make us less vulnerable – from the Great Wall of China to modern drones – exist because we fear the individuals of our own species. our own violence is the main problem of humanity. violence is what we have been doing wrong constantly throughout history, and if you ask people to deliberate about the basic human value they want realized in the world, most of them would say something that points in the direction of abolishing violence, like global peace. and the purpose of ethics is to shape our behaviour in such a way that we become less violent and more trustworthy to each other. this is why all our laws are based on ethical deliberations.


(Michael Hrenka) #31

Good question! While the concept of happiness is quite intuitive, it’s hard to define in a very clear and objective way. Perhaps there are concepts of superior clarity that mean about the same, but would be considered perhaps even more meaningful than happiness. It seems that we as humans have difficulties grasping such concepts. This might be related to our inability to deeply comprehend phenomena like consciousness and qualia. My hope is that we will make progress in grasping such concepts and phenomena as we enhance ourselves and attain deeper wisdom. And that’s one of the many reasons why I currently prefer focusing on wisdom, rather than direct happiness maximization.

Also, the maximization of happiness seems to lead us to strange conclusions, such as that we would probably be relatively well off, if we decided to wirehead at least most of our time. Filling the cosmos with a potential “utilitronium”, the densest possible form of happiness-“generating” matter, would also seem like a logical conclusion of the maximization of happiness. Would that be bad? I don’t know. Perhaps these strange cases evaporate, or are sufficiently refined to become rather flawless and universally agreeable, once we become wiser and get a deeper understanding of the world and our minds.
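The wireheading worry can be made concrete with a tiny toy model (my own sketch, not anything proposed in this thread; the activities and happiness scores below are invented for illustration). An agent that ranks actions purely by immediate happiness will always pick the direct-stimulation option, no matter what else is on the menu:

```python
# Hypothetical options with made-up immediate "happiness" scores.
options = {
    "write a novel": 6,
    "meet friends": 7,
    "learn a skill": 5,
    "wirehead": 10,  # direct stimulation: maximal immediate happiness
}

def naive_happiness_maximizer(options):
    """Return the option with the highest immediate happiness score."""
    return max(options, key=options.get)

print(naive_happiness_maximizer(options))  # -> wirehead
```

The toy agent degenerates to wireheading simply because nothing in its objective distinguishes happiness obtained through living from happiness generated directly, which is exactly the strange conclusion described above.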

If anything, I would argue to be careful about premature optimization for happiness. Humans aren’t very good at increasing their happiness by pursuing it directly. It’s often better to pursue other goals in the meanwhile.

I think that there’s a lot of truth to what you are saying there. My personal hypothesis is that humans (and other animals) are actually quite unhealthy in general. Via various kinds of biological enhancements, we could reach levels of health that might allow us to feel much better than we would now consider possible. Add to that additional direct enhancements of our happiness set points, and of our peak experiences, and we will soon reach robust superhappiness.

That’s because you aren’t considering the holistic implications of maximizing freedom on a large scale. Why are wars and violence bad? Because they severely diminish the freedom of their victims – and often also of the perpetrators. A world in which freedom was actually maximized would long since have abolished violence and wars. Maximizing freedom also requires the wisdom to know what actually increases freedom for all, and what doesn’t.


(João Luz) #32

It doesn’t need to be in a specific order. Saying that not doing evil should come before doing good would ultimately leave us too afraid of doing evil to actually do any good at all.

I keep using the word “nature” because I don’t see any significant difference between it, the way it’s generally understood, and what you describe as “freedom from human interference”. If we go by the rigorous definition of the word, then yes, human actions are as natural as any other causes of events, but if we do that the word becomes redundant. I don’t see any problem with understanding “nature” as most people actually do, as long as we don’t recognize it as a moral value, but all right, I’ll stop using the word for the sake of linguistic correctness.

The reason why drug addicts are and must be classified as mentally ill persons is not that they want to experience certain types of sensations, but that they have become dependent on substances that are known to cause them severe physical and neurological damage, as well as to greatly de-functionalize their lives.

We are not taking any freedom away from them because, as long as their behaviour is directed mostly towards the satisfaction of a harmful physical dependence, they don’t have any.

Now, this situation may change when we consider the issue of “soft” vs “hard” drugs. Maybe consuming marijuana or LSD is not harmful enough to cause that much clinical concern, but we definitely can’t say the same of cocaine and heroin.

Anyway, this is a medical issue, not exactly a philosophical one. I think that we both agree that mentally ill people should be treated and not encouraged to perpetuate their conditions.

The difference is that I define Liberty as “the ability to do what one wants”, while you separate it into two concepts, “positive” and “negative” liberty, the second being defined as the “absence of human intervention”.

The reason why the dictator can’t monopolize the resources of his entire country or the labour of his entire people is that, by doing so, he would be diminishing the people’s ability to do what they want in order to find happiness (their freedom, neither positive nor negative, just freedom), which would violate basic human values.

I don’t agree with this. As a matter of fact, I see most of our modern technologies as ways to protect us against “natural” (non-human) threats, as well as ways to supplant the limitations that “nature” has placed on us. You seem to be focusing far too much on military technology.

Once again, I don’t think that way. The number of people that die and suffer from non-human causes is far greater than the number of people who die and suffer because of human causes.

Your position seems subjective to me.

Yeah, that sounds like a good long-term strategy.

I do think that happiness is the best thing we have by now, but we must recognize that our ethics won’t be perfect so soon.


#33

great reasoning![quote=“Joao_Luz, post:32, topic:1118”]
The difference is that I define Liberty as “the ability to do what one wants”, while you separate in two concepts of “positive” and “negative” liberty, being that the second is defined as the “absence of human intervention”.

The reason why the dictator can’t monopolyze the resources of his entire country nor the labour of his entire people is because, by doing so, he would be diminishing the people’s ability to do what they want in order to find happiness (their freedom, not positive nor negative, just freedom), which would violate basic human values.
[/quote]

great reasoning, as well!

there is probably no proof for that assumption. i separate liberty and you separate causes, and we talk about it as if we were something like jedi and sith. i want to eliminate causes of suffering and unfreedom, and you want the same.
now radivis and you have convinced me that freedom (without a distinction between negative and positive) and “to do what you want” suffice as our ethical basis.
but you have brought up a new problem:

it is not always clear WHEN somebody wants something out of freedom (and rationality?) and when somebody desperately wants something (e.g. drugs) out of mental illness. you had some ideas for distinguishing the two cases:

  • somebody becomes dependent (a contradiction to freedom!)
  • severe physical damage (here i see a problem, because in an earlier post you admitted that suicide has to be considered legitimate, although suicide could be considered the maximum of physical damage)
  • “greatly de-functionalize their life” <-- when we step further from happiness and now all agree on freedom as the basic human value, what is the function of life? everybody who agrees on “happiness” as the basic human value has to come to the conclusion that this must be the function as well, because[quote=“Joao_Luz, post:16, topic:1118”]
    basic human values . These values are those that are so intrinsic to human psychology, that our existance as sentient and rational beings would stop making sense to us if we, for some reason, rejected them.
    [/quote]

so for advocates of happiness as the basic human value, a functioning life has to be a happy one, and this could be pursued, as radivis showed, with something similar to drugs:[quote=“Radivis, post:31, topic:1118”]
Also, the maximization of happiness seems to lead us to strange conclusions, such as that we would probably be relatively well off, if we decided to wirehead at least most of our time. Filling the cosmos with a potential “utilitronium”, the densest possible form of happiness-“generating” matter, would also seem like a logical conclusion of the maximization of happiness. Would that be bad? I don’t know.
[/quote]
so from the happiness point of view the life of a drug addict has to be considered highly functional. but what about freedom? it seems that, although somebody claims to want something (your definition of freedom), he could at the same time be dependent (unfree).
from a subjective perspective it may be impossible to comprehend and distinguish whether you are free or unfree (when you are a drug addict). from the more “objective” perspective of medical doctors and others it is easier to see that a drug addict, at least, is more unfree than free when he wants his drug. the problem i have now is that, from my inner perspective, even if i consider myself free to want something, i could never be sure that i am not dependent and in the same situation as a drug addict. and from an outer perspective i have nothing more than a person who tells me he wants something, and respecting that wish could be a mistake, as the case of the drug addict shows.[quote=“Joao_Luz, post:32, topic:1118”]
The difference is that I define Liberty as “the ability to do what one wants”
[/quote]

i suppose we could include “the ability to get what one wants” as well? and probably “to be what one wants to be”…?

what is the function of life with freedom as the basic human value?
how could we know, from the inner perspective, that whatever we want is out of freedom and not out of dependency?
how could we know, when somebody claims to want something, that he is not a drug addict / mentally ill?

should i make a new thread for that topic or will you?


(Michael Hrenka) #34

This kind of distinction seems relevant when one considers “freedom” in the common-sense interpretation as the ability to do what one wants. Instead, I argue for a more formal definition of freedom as the number of possibilities one has. You start with the causes, but my perspective begins with the consequences: What are the consequences of taking certain drugs, for example? If they decrease my functionality and performance, that means I will have fewer realistic options for action when I take them. Thus, that would be a bad decision, but I still should be able to choose it, because denying me the possibility to take any drug would reduce my options!

This seems contradictory, but it isn’t: People should be free to do what they want, but they should do what maximizes freedom (options). The first part is a moral prescription against limiting interventionism, while the second part is a moral prescription for individual human behaviour.

People must be able to make bad decisions, otherwise they would have fewer options, and therefore would be less free. Anyway, what is a bad decision and what isn’t is not so clear after all. Experts might think they know the answers, but they might be horribly wrong. Some experts will certainly have a high probability of being right, but that doesn’t preclude the possibility that they might still be wrong.

The advantage of this approach is that it already comes with an answer to what people should want: to widen their options, or the options of others. When you argue from the basis of “common-sense freedom”, there is no clear orientation for what people should want, so almost anything seems to be fine. That is not the case. Some decisions are actually better than others with respect to the criterion of maximizing freedom. Unfortunately, it’s quite hard to verify that some decisions are better than others.
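The formal notion of freedom as the number of possibilities one has can be sketched as a toy search over a state graph (my own illustration; the states, transitions and names below are invented, not part of the discussion). The agent counts how many future states each action leaves reachable and prefers the one that keeps the most options open; note that the drug option remains choosable, it just scores lower:

```python
# Hypothetical state graph: keys are states, values are the states
# reachable from them in one step. "take_drug" leads to a state with
# few onward options; "stay_sober" keeps several options open.
GRAPH = {
    "start": ["take_drug", "stay_sober"],
    "take_drug": ["recover"],
    "stay_sober": ["work", "travel", "study"],
    "recover": [],
    "work": [], "travel": [], "study": [],
}

def reachable(state, graph):
    """Return the set of all states reachable from `state` (simple graph search)."""
    seen, frontier = set(), [state]
    while frontier:
        s = frontier.pop()
        for nxt in graph.get(s, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def freest_choice(state, graph):
    """Pick the successor state from which the most states remain reachable."""
    return max(graph[state], key=lambda s: len(reachable(s, graph)))

print(freest_choice("start", GRAPH))  # -> stay_sober
```

This captures both halves of the prescription above: nothing removes "take_drug" from the graph (no limiting interventionism), but an agent steering by option count will not pick it.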


#35

although this sounds logical in a way, it appears to me as a simplification without much practical use.[quote=“Radivis, post:34, topic:1118”]
This seems contradictory, but it isn’t: People should be free to do what they want, but they should do that what maximizes freedom (options). The first part is a moral prescription against limiting interventionism, while the second part is a moral prescription for individual human behaviour.
[/quote]

if it is “or” and not “and”, the individual motivation might be to rule the world, because one could consider that a position of maximum freedom, and so everybody who strives to rule the world could be considered ethical in their motivation.

[quote=“Radivis, post:34, topic:1118”]
You start with the causes, but my perspective begins with the consequences
[/quote]

what consequences? immediate ones or long-term ones? if i have the long-term goal to rule the world, i could argue that all consequences of the actions i need to take to achieve my goal are negligible once i finally succeed in gaining my position of maximum freedom.

you should consider short-term as well as long-term consequences. one short-term consequence of taking drugs is a special experience, and with that, additional information for you. some people say that particular drugs could change your consciousness and raise your iq. so taking a drug could have both short-term and long-term consequences that maximize your freedom. and while you don't have the freedom to function during the drug experience, and your performance for many tasks decreases, the input of new information increases.
you are right when you say that prohibitions lower your freedom. but just to say that drugs don't maximize your freedom is not right. i want to split up your idea of maximized freedom into more detailed parts:

  1. prohibitions don't maximize freedom, so everybody should have the option to decide for himself.
  2. new input of information always maximizes your personal freedom
  3. dependency on and addiction to anything (including drugs) decreases your freedom

with that i could say: nobody is allowed to forbid anything ; )
it is everybody's personal freedom to choose to take drugs. as long as people don't get addicted, the use of drugs cannot be considered a decrease of freedom.

what do you think of these details ?

Abolitionism requires as a premise that emotions have a physically manipulable, not spiritual, source, such that by altering the human brain we can fundamentally change the way that humans experience life. http://hpluspedia.org/wiki/Talk:Abolitionism/archive-from-wikipedia

People who are less fearful tend to pursue their goals and generally don’t feel as held back as people with fear do. Most fear was accidentally removed in a 44-year-old man who underwent brain surgery.

is it - with the idea of freedom in mind - legitimate to judge human experiences, to divide them into “good” and “bad”, and to impose this kind of morality onto others? i want the freedom to experience the whole spectrum of emotions my body is capable of. and if technology could give me more to experience and widen my senses, it would maximize my freedom. if i were able to experience what it's like to see with compound eyes, or what it's like to be a bat, or how it feels to be connected with another mind, i would consider this more freedom. if people could give me this, it would be great.
but if they were to eliminate my capability to feel fear, sorrow and pain, i would consider this a violation and a loss, because it would diminish the spectrum of my emotions and my capability to experience life. i don't trust people who are so judgmental and arrogant as to believe that they know what is best for others.
this is how it all began:

i feel this is one of the most judgmental and arrogant (in a negative sense) statements i have read here.
the way humanity is at the moment, how would we judge a powerful, strange, highly intelligent alien (ok, maybe i watched too much star trek, but i would recommend to anybody to watch at least TNG, DS9, VOY and ENT) who criticises our way of living and thinking in a disturbing and fundamental way? are we prepared to accept such an intelligence? (e.g. someone like Q) is it socially acceptable for us to be confronted with attacks on our old belief-systems and worldviews that could contribute to a better world?
we might someday have the chance to objectify “intelligence”, “health” and “strength” (what kind of strength?) by examining our genetic code. but then it will be a medical approach to provide people with more physical abilities, not fewer. (and even this discussion is not easy.) but psychological stability (is Q psychologically stable?), beauty (how would we ugly white skins judge indigenous australians concerning beauty?) and social acceptance… i hope humanity will never find another method to mold helpless fellows in that way!


(Michael Hrenka) #36

Well, so far the focus has been the maximization of individual freedom. This turns out to be a simultaneous local optimization problem for billions of people, and such problems often fail to produce a globally optimal solution through local optimization alone. So, perhaps the right perspective is to perform a global optimization of aggregate individual freedom. To formalize this, let’s assume that we can quantify individual freedom via “Freedom Quanta”, or FQ.

The situation you proposed is an extreme outcome of local optimization of the FQ of a single person at the cost of the FQ of everyone else. In a just society, people should aim to increase their FQ, but only if they don’t impose an FQ cost on others that outweighs their individual gain, because that would mean that the global aggregate FQ decreases. We would need to apply utilitarian-like reasoning to the question of whether an action increases or decreases aggregate freedom. So, we always need to ask ourselves at least two questions:

  • Does something increase the freedom of a specific individual?
  • Does it increase or decrease the freedom of others?

Those questions should lie at the heart of any rational politics, in my opinion.
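The two questions can be combined into a single utilitarian-like test. Here is a minimal Python sketch, assuming freedom can actually be scored as numeric Freedom Quanta; the helper names and all numbers are invented for illustration:

```python
# Toy "Freedom Quanta" (FQ) accounting: an action is acceptable only if
# the actor's FQ gain is not outweighed by the FQ costs it imposes on
# others, i.e. only if aggregate FQ does not decrease.
# All numbers below are invented for illustration.

def aggregate_fq_change(actor_gain, costs_to_others):
    """Net change in aggregate FQ caused by one action."""
    return actor_gain - sum(costs_to_others)

def is_acceptable(actor_gain, costs_to_others):
    """Answers both questions at once: the action may increase the
    actor's freedom, but must not cause a net loss of freedom overall."""
    return aggregate_fq_change(actor_gain, costs_to_others) >= 0

# Taking a drug at home: small personal gain, negligible cost to others.
print(is_acceptable(actor_gain=2, costs_to_others=[0, 0]))         # True

# "Ruling the world": huge personal gain, but it strips a little FQ
# from a vast number of other people, so aggregate FQ drops.
print(is_acceptable(actor_gain=1000, costs_to_others=[1] * 7000))  # False
```

This also answers the earlier “rule the world” objection: even an enormous local gain in one person's FQ fails the test once the small costs imposed on everyone else are summed up.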

And by the way, discussing aggregate individual freedom opens up a new path for defining a fractal society: a fractal society is a society in which aggregate individual freedom is optimal (given the current state of technology, the economy, and the resource base).

So, can we tie this definition back to the previous discussion about evolution and eugenics? I think the main point should be to optimize individual morphological freedom for everyone. It’s not too far-fetched to envision a future in which adult gene therapies become ubiquitous, so that the genome a person starts with is less of an issue. That’s not to say that it doesn’t matter what genome a person is born with, but in the big scheme of things, it’s more important what a person can evolve into, thanks to highly developed human enhancement technologies. Even if people’s genomes were homogenized in the mid-term future by some standardized gene optimization, whether driven by private interests, government regulation, medical insights, or even economic pressures, in the long-term future people will still be free to re-shape their own bodies and minds in the ways they intend to. And why not? With the technology of the long-term future, those changes should be reversible and rather low-risk. Morphology becomes an expression of personal identity, rather than an imposition by natural or societal forces.

What remains critical in this vision is that the initial morphology and upbringing of an individual can shape their preferences about their future morphology. That’s not to say that these preferences will necessarily be easy to predict, or that they could easily be shaped toward any specific socially imposed norm.