We need a solid basis for judging what should count as “better” and what should not. This is by no means an easy task, and there are several contradictory extreme positions, each of which holds its own appeal:
1. There is a single universal standard for what is better, and we need to find it (this is what I hope)
2. There is a single universal standard for what is better, and it is identical to my own standard (this position could be called cultural chauvinism)
3. There are only individual standards for what is better (this may be termed “subjectivism”, or cultural relativism, though the latter comes with a lot of philosophical baggage)
4. There are no standards (nihilism)
We could discuss the implications of each of these (essentially meta-ethical) positions, but we cannot expect to arrive at a clear conclusion any time soon. Nevertheless, it’s helpful to keep these different positions in mind, because the basic position one assumes shapes the further line of reasoning. Technically, I should call myself an agnostic in this regard, because I do not know which of these basic positions (if any) is true, even though I personally hope that position 1 is true. As an agnostic I cannot base my reasoning fully on any one of them, so I need to pursue a different approach, one that is robust regardless of which basic position turns out to be true in the end.
What kind of approach could that be? Unfortunately, it’s not an elegant solution but a pragmatic concession to my own ignorance: an approach that doesn’t rely on any ultimate philosophical foundation, because I do not have access to whatever constitutes the most fundamental philosophical truth. At the same time, I want to be as rational as possible and to avoid fallacies. Fallacious arguments can and should be attacked, so a position that relies on fallacious reasoning is not very robust.
What about human values? If we want to build an ethical framework on human values, we would need to define and figure out what human values actually are. Since philosophers can’t even agree on what “humans” are, this doesn’t seem to be a promising approach. Instead, if anything, we should be looking for universal values. If we are lucky, “human values” will turn out to be nearly the same as “universal values”, but there’s no guarantee of that. After all, it could be the case that position 1 is true, but that human values do not meet “universal quality standards”. My own expectation is that human values, once sufficiently abstracted and refined, are probably pretty close to truly universal values, but the devil is in the details of what “sufficiently abstracted and refined” is supposed to mean exactly.
Anyway, I’m currently settling for something of a compromise between hypothetical universal values and actual human values. Let’s consider three candidates that I stumble upon again and again (perhaps because they are sufficiently abstracted and refined?):
- Well-being
- Freedom
- Wisdom
Well-being
For a long time I took happiness or well-being as the foundational value with which to judge “goodness”. That’s basically a utilitarian position, and I actually identified strongly as a utilitarian. I’ve criticized this position in my central thread.
It’s probably not the worst philosophical position to take, but it comes with severe practical complications, which make it less than optimal for real world applications:
- How do we actually define well-being?
- How can we measure well-being properly?
- Are we able to predict what will actually (or at least probably) increase our well-being?
- Whose well-being do we need to take into account, and with what weight?
So, what would happen if we used the criterion of well-being to direct our own evolution? It might turn out wonderfully, or we might mess things up horribly, in ways that we might currently not even be able to comprehend, simply because we were too stupid (that’s a risk with any basis, but my intuition tells me it’s more likely to happen when optimizing for well-being rather than for freedom or wisdom).
Actually, if you were inclined to optimize only for happiness, you would merely need to focus on the happiness set point and raise it as much as possible, since any other intervention plays little role in overall happiness. The result would be humans who are pretty much the same as us, just with more subjective happiness – from the inside that would be a tremendous difference, even though from the outside the difference might hardly register.
Wisdom
While I really think that we should focus on increasing our wisdom, this criterion seems to be the least practical when considering what conclusions one could draw from it in the realm of genetic enhancement. What makes a person wiser?
- More experiences? Yes, pretty much, so we should aim for increased lifespans, right?
- Higher intelligence? Quite likely.
- More empathy? I guess so.
- A higher happiness set point? Perhaps not, if that makes people more comfortable and less likely to pursue novel experiences (though that is a questionable assumption).
- Better health? What if we gain a lot of wisdom from dealing with the challenges posed by ill health?
So, we can’t easily draw clear prescriptions from the basic value of wisdom. The possible conclusions are just too contradictory and speculative.
Freedom
Freedom is the value I retreat to because of the issues with the previously considered options of well-being and wisdom. Compared to those two, freedom can be defined relatively easily, as the number of options one can choose from (though this confronts us with the issue of “free will” and related questions).
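To make that definition a bit more tangible, here is a toy sketch in Python (purely my own illustration, with made-up parameters, not anyone’s established method): it measures “freedom” as the number of distinct states an agent can still reach within a fixed number of steps, and shows how restricted mobility, standing in for disease or disability, shrinks that number.

```python
from collections import deque

def reachable_states(start, moves, max_steps, blocked, size):
    """Count distinct grid cells reachable from `start` within `max_steps` moves.

    A crude proxy for "freedom as the number of options one can choose":
    the more states an agent can still reach, the freer it is by this measure.
    """
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        if dist == max_steps:
            continue
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return len(seen)

if __name__ == "__main__":
    SIZE, STEPS = 11, 5
    WALL = {(5, y) for y in range(3, 8)}                 # an environmental constraint
    FULL_MOBILITY = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # unimpaired agent
    RESTRICTED = [(1, 0), (0, 1)]                        # agent with restricted mobility

    print("options, unimpaired:", reachable_states((2, 5), FULL_MOBILITY, STEPS, WALL, SIZE))
    print("options, restricted:", reachable_states((2, 5), RESTRICTED, STEPS, WALL, SIZE))
```

Of course, real freedom is vastly harder to quantify than cells on a grid, but the counting intuition is the same.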
Freedom is intimately related to intelligence, if intelligence is indeed seen as the capability to increase one’s freedom, as suggested by the Wissner-Gross equation for intelligence (also see an insightful comment on that formula).
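For reference, the core formula from Wissner-Gross and Freer’s “Causal Entropic Forces” paper, as I understand it, says that an intelligent system behaves as if driven by a force toward states from which the greatest number of future paths remains open:

$$
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau)\,\big|_{X = X_0}
$$

Here $S_c(X, \tau)$ is the entropy over all paths the system could take from state $X$ within the time horizon $\tau$, and $T_c$ is a constant setting the strength of the force. In other words: act so as to maximize your future freedom of action, which is essentially the “number of options” notion of freedom from above.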
Anyway, if we accept the amount of freedom as a basis for judging “goodness”, then we can actually draw some relatively strong conclusions for genetic enhancement:
- Diseases (including ageing) and disabilities should be avoided as much as possible, because they decrease one’s freedom
- Intelligence should be enhanced, because that usually increases one’s freedom
- One should not optimize too strongly for specific environmental adaptations, because that would reduce one’s freedom to live in as many different environments as possible (one of the core strengths of human beings)
Still, there are many open questions with regard to freedom as a basis for “evolutionary / eugenic fitness”:
- Whose freedom do we want to improve? That of human individuals? What about the freedom of groups? What about non-human persons? Could we even define a holistic “cosmic freedom”? How does the freedom of individuals add up?
- How do we define freedom in the face of determinism or quantum randomness?
- Doesn’t more freedom allow people to do more stupid things? Freedom without wisdom may not be a good idea, but on the other hand wisdom may not be achievable without having freedom in the first place.
- Can we really expect to be better at predicting what will increase our freedom than what will increase our well-being or wisdom?
- Are there perhaps multiple different kinds of freedoms that contradict one another?
Overall, I think that freedom is the best abstract basis for dealing with complex questions such as the quality of genetic enhancements. Do you agree or disagree with me?