What constitutes the value of human beings?

Discussing the “value” of human beings is always a difficult and controversial topic. There are several extreme positions on what that value might be:

You just Kant!

Kant sees each and every human being as infinitely valuable, so that you can’t sacrifice one person for the sake of another. While there is by now a solid mathematical theory for comparing infinite quantities (transfinite cardinal and ordinal arithmetic), Kant’s sense of infinity is absolute and admits no comparison, which matches the conventional use of the infinity symbol in arithmetic.

Human value is zero

This is the attitude of a strong proponent of cognitive therapy: David D. Burns. The reasoning behind it is that people should stop comparing themselves with one another, which is usually a pretty useless and psychologically dangerous activity. You can’t feel inferior if you see others as having no value. On the other hand, you can’t feel superior, because your own value is 0, too. This line of thinking doesn’t imply that humans should be seen as completely disposable. It’s more a mindset aimed at specific psychological applications like defusing depression and low self-esteem.

Economists think in monetary terms

Economists sometimes try to assign monetary value to human beings, usually by estimating earning capacity and life expectancy. That’s a very pragmatic way of approaching the issue of human value. On this view, a person’s value depends merely on their capacity to contribute to economic activity. So, it’s a rather narrow view of human value, but it’s at least a definition with which we can start to seriously discuss the underlying issues. To make those issues clearer, let’s consider the following scenario.
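To make this narrow notion concrete: a human-capital style estimate values a person roughly at the present value of their expected future earnings. The formula and the numbers below are illustrative textbook-style assumptions, not figures from this discussion:

$$V \;\approx\; \sum_{t=1}^{T} \frac{E_t}{(1+r)^t}$$

where $E_t$ is the expected earnings in year $t$, $r$ is a discount rate, and $T$ is the number of remaining working years, bounded by life expectancy. For example, constant earnings of 40,000 per year over 30 remaining years, discounted at 3%, yield a “value” of roughly 784,000.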

The zero marginal cost human

Imagine we lived in a sufficiently advanced and rather fantastic future in which we could create matter and energy from nothing, and in which we could create perfect copies of any human being instantly, simply by pressing a button (or merely thinking a “copy” command). The economic cost of creating a copy of a human being is defined as zero in this scenario.

Given this crazy scenario, what would constitute the value of a human being? Would humans be worth nothing? Well, additional unneeded* copies would probably be valued at zero, because they are both free and not needed (they might even have a slightly negative value, since people would complain about “human copy spam”). It seems that value attaches not to any individual copy, but rather to the equivalence class of that human, constituted by the set of identical copies. Removing all but one of those copies doesn’t really diminish the value of that class, because additional copies can be created instantly at will when they are actually needed. It’s only when you eliminate all of the copies (including the “original”) that you actually destroy real value.

But is that really true? What if you could store the information of that human in a passive data storage device from which you could reconstruct that person at will? From this, it seems that the value of a human being is concentrated in the information that defines that human being; a small toy-model sketch after the footnote illustrates this. At least this seems to be true in this rather outlandish scenario in which energy and matter cost nothing and we can easily create copies of anything and anyone.

* unneeded for any conceivable kind of economic activity, given the probably unrealistic assumption that economic activity could somehow be “maxed out”, so that the value of any additional unit of work could not be greater than zero.
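To make the intuition that value attaches to the equivalence class, rather than to any individual copy, more tangible, here is a minimal toy model in Python. The class name, the all-or-nothing valuation, and the numbers are all my own illustrative assumptions, not part of the scenario itself:

```python
from dataclasses import dataclass

@dataclass
class HumanClass:
    """All identical copies of one person, plus passive backups
    of the information that defines that person."""
    live_copies: int = 1
    stored_backups: int = 0

    def value(self) -> float:
        # Hypothetical all-or-nothing valuation: value resides in the
        # defining information, which survives as long as at least one
        # copy or backup exists (new copies cost nothing to create).
        return 1.0 if (self.live_copies + self.stored_backups) > 0 else 0.0

alice = HumanClass(live_copies=5)
print(alice.value())                            # 1.0 -- five live copies
alice.live_copies = 1                           # remove all but one copy
print(alice.value())                            # 1.0 -- class value unchanged
alice.live_copies, alice.stored_backups = 0, 1
print(alice.value())                            # 1.0 -- only a passive backup remains
alice.stored_backups = 0
print(alice.value())                            # 0.0 -- the information, and with it the value, is gone
```

The point of the sketch is only that `value()` never counts copies beyond the first surviving instance, which mirrors the claim that removing redundant copies destroys nothing.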

Knowledge value of humans

Now, let’s go a bit deeper into this scenario and assume that we actually create a number of copies of a human being. Over time, the different copies would acquire different information by doing and learning different things. The additional information and knowledge a copy gains is usually not present in any of the other copies, and this additional knowledge is what defines the value of that copy. If that knowledge could somehow be extracted and integrated into the other copies, the whole equivalence class might become more valuable, but the copy’s knowledge advantage would be gone, so the “knowledge value” of that copy would be reduced to zero again.

What if the technology is also sufficiently good to allow instantaneous integration of knowledge across all copies of a human being? Then the knowledge value would again be concentrated in the class of all of those copies. Copies would merely be “knowledge generation devices”. In other words: in this scenario, humans are generally only valuable as knowledge generation devices.
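In the same toy-model style, one could represent each copy’s knowledge as a set, and its “knowledge value” as whatever it knows beyond the shared pool of the class; instantaneous integration then moves that value into the pool and resets every copy’s marginal value to zero. Again, all names and facts below are hypothetical illustrations:

```python
# Shared knowledge of the whole equivalence class (hypothetical facts).
shared_pool = {"fact_a", "fact_b"}
copy_1 = shared_pool | {"fact_c"}            # this copy learned one new thing
copy_2 = shared_pool | {"fact_d", "fact_e"}  # this copy learned two

def knowledge_value(copy: set, pool: set) -> int:
    """Marginal knowledge value: items this copy knows beyond the pool."""
    return len(copy - pool)

print(knowledge_value(copy_1, shared_pool))  # 1
print(knowledge_value(copy_2, shared_pool))  # 2

# Instantaneous integration: the pool absorbs everything the copies know ...
shared_pool |= copy_1 | copy_2
# ... and each copy's marginal knowledge value drops back to zero.
print(knowledge_value(copy_1, shared_pool))  # 0
print(knowledge_value(copy_2, shared_pool))  # 0
```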

Back to reality

Now, does this tell us anything about the value of humans in our current non-fantastic reality? I think so. This thought experiment suggests that a significant portion of the value of humans is their ability to collect information, process it, turn it into knowledge, and then eventually apply or share that knowledge. At the moment, we can’t easily extract and share human knowledge, because we can’t access all the data stored in our nervous systems directly. Part of the tragedy of a person dying is the knowledge that is lost with that death. If we could somehow back up all that knowledge, a death would be at least somewhat less tragic – especially if we could “reinstantiate” that person with an emulating AGI / robot copy.

One interesting conclusion in particular is that humans would accumulate value over time, as they learn more. Children would be less valuable than adults. Furthermore, humans who can acquire knowledge faster would be more valuable than those who are slow at gaining knowledge. This contradicts the idea that all human beings are equally valuable. Given that we started with a purely economic definition of human value, this isn’t very surprising, however.

Hedonistic values of human beings

Another possibility is to define the value of human beings through their capacity to experience pleasure and to elicit pleasure in others. Such an approach is in line with hedonistic utilitarianism. Given the assumption that humans have positive value, it would make sense to aim for maximizing the number of humans, or other “pleasure generation devices”, in existence. In the fantastic scenario from before, this would mean that we should press the human copy button as often as possible – or rather create optimal “pleasure generation devices” first, and then copy them as often as possible. This is certainly a very outlandish conclusion, but it is at least quite consistent.

Scary implications for the future of humans?

If we accept either the knowledge value or the hedonistic value of human beings as a guideline for our actions, we are confronted with some unsettling prospects, if we assume that technology will eventually allow the creation of more effective and efficient “knowledge generation devices” or “pleasure generation devices” than humans. Wouldn’t the valuation of knowledge and pleasure compel us, or our successors, to replace humans with those more functional “devices”? Mostly, yes. However, there may be an exception to that rule: while humans might not be the most functional general knowledge generation devices possible, they might still be the best at generating human-related knowledge (for what it’s worth). As long as anyone is interested in humans, there will be an incentive to keep them around.

Are there other meaningful ways of defining the value of human beings?

i think there are plenty. you can use one interpretation of quantum mechanics to conclude that the function of every human is to be an observer. and as everyone observes differently, the variety of observers could be declared “valuable” for the universe. maybe what we call “progress” serves a purpose we don’t know yet (maximizing entropy, finding a theory of everything, spreading information into parallel universes, making simulators happy?). or you could leave the universe or any human collective aside and declare your personal experience with life and your personal growth as valuable. would it be bad for the collective of humans or the universe if every human just valued their personal growth? what about people who meditate and contemplate most of their lifetime in search of wisdom and enlightenment? i don’t think that AIs could ever do their job, because that would be contradictory: the hard work to gain personal growth, wisdom and enlightenment is something people egoistically do for themselves and not for any (economic) purpose. but probably this kind of work is the most (ethically) valuable of all.

but what remains is the question: “what constitutes our idea of ‘value’?” a value for whom or what…

The observations a person makes are a part of their knowledge about the universe, and thus subsumed in the “knowledge value” of that person.

You could certainly define the value of a person by their contribution to progress, assuming you already had a clear definition of “progress”.

Sure, you can do that for yourself, but then how do you think about the value of other humans? Do they only possess instrumental value for you, as means to your own personal growth?

I think that would generally be a rather good philosophy, if people aren’t completely reckless about pursuing personal growth at the cost of others.

Wisdom and enlightenment could be seen as specific aspects of personal growth in general. A lot of spiritual people seem to focus a lot on “enlightenment”, but I think most of them haven’t gone far enough in their search for wisdom if they haven’t become transhumanists yet. After all, no matter what you value, becoming more than human increases your capability of pursuing it (unless what you value is being “human”, in which case you’re stuck).

If AI is only created for economic purposes, you might be right. Unless we can establish a more “enlightened” economy in which personal growth does constitute some serious value, even if it’s not directly useful for a narrow definition of the economy. Note: Are there any good attempts to reconcile economics with humanism?

Yes, there is indeed an ambiguity here: The “value” of a human being could refer to:

  1. The value a person sees for herself
  2. The value a person sees in another person
  3. The value of a person as seen by some societal institution (be that governments or insurances or the general public)

It would certainly be interesting to analyse all of these separately, but they are usually pretty much entangled with one another, so the idea of a “general value” of a person does seem to make sense to some degree.


no. i think it is part of personal growth to give up our way of thinking about “instrumental value”. even if it is not easy, because we grew up with the idea of instrumentalizing everything (which is maybe the core of our economic thinking), the challenge is to shift our view from a judging and dividing perspective, with which we create hierarchies and try to understand the world, to a perspective of value in itself and for itself.
and here you could apply kant again:

Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.

— Immanuel Kant, Grounding for the Metaphysics of Morals

it would be interesting to find out if that is really possible. “reckless” seems to imply an attitude that i can’t connect to the picture of somebody meditating and doing self-therapy, or a buddhist… and “reckless” doesn’t seem to fit my image of an enlightened person. but it is a difficult question. “at the cost of others” seems to fit perfectly into the realm of “economic thinking”…

yes, defining “progress” is a challenge. maybe we could try to compare ourselves to people who are not connected to our way of living, our economy, or our technology, and who have lived in the same way for thousands of years.

why don’t they change? we could define progress as the change of our way of living, but whether that is good or bad, valuable or not… i don’t know. and what about their value? they don’t have economic value (probably, from some economic perspectives, a negative value, if they are to be protected although we want to exploit their territory), they have hedonistic value, and if you count the observer as part of the knowledge value, then this value as well.
but AI could never be considered a better observer, as long as the diversity of observations is what is valuable.

this sounds plausible. but it requires understanding what “human” means, so that we know what it is we need to overcome to become transhuman.

What does that actually mean? Don’t you necessarily start judging as soon as you think about any kind of value? Isn’t at least the kind of judging necessary that tells you whether something is in accordance with a value or not? If we removed even that from our thinking, on what basis would we make decisions?

Because they don’t have to change, because their environment is relatively stable, I suppose. Our global environment is definitely not stable, since there is global warming, mass extinction, environmental degradation and so on (see also https://xkcd.com/1338/ for a shocking display of how much we have already changed the biosphere). Therefore, our global civilization needs to change.

Unfortunately, this little excursion doesn’t answer the question of what constitutes progress. When people need to adapt, they adapt or perish. Not every kind of change can be counted as progress. What really counts as progress depends on your value system. In one value system some change could count as progress, while in another it would be seen as regression. So, the question about progress breaks down to the question of what kind of value system we should adopt. Of course, we have discussed this topic already, but a definite answer is still out of reach, which implies that we can’t really be sure what progress is.

Why?

Humans like to use the following approach to define what is human: Take all the positive values in your value system and define humans as the creatures who possess these values (whether they really possess them or only possess them potentially or in imagination is not so important at this point).

That’s how we end up with definitions of “human” using values like empathy, intelligence, creativity, cooperation, communication, and so on. A transhuman, then, is a hypothetical being that resembles humans, but incorporates those values to a higher degree than humans do. There is no clear delineation between “human” and “transhuman”, but the idea is simply that the transhuman has capabilities that are “beyond human”.

Is this a good way to define “human”? Well, at least it makes it easier to suggest what a “transhuman” is supposed to be, but I don’t think it’s a generally good definition.

Personally, I would define “human” through neuromorphic similarity: if it thinks like a human, it is a human. The subsequent question of what human thinking is can be answered through introspection – if you are a human.

How would an AI define humans in that framework? It would compare the behaviour of an entity with the behaviour of the entities that were labeled as “humans”. If it behaves similarly to them, then it’s human, too.

This kind of reasoning also seems to be the reason why women and black people didn’t count as fully human in earlier stages of our history: their behaviour and thinking processes seemed just too alien to count as “properly human”. Attitudes have changed a lot since then, and the definition of “human” has been expanded to all genders and human phenotypes. Will it be expanded again to encompass certain kinds of cyborgs, AIs, or uplifted animals? I think that’s possible, especially when applying the neuromorphic definition as a basis.


good question. so you imply that it is impossible to think about value without judging, and you are probably right.
then the question might remain whether thinking without values and judging is possible at all.
and you imply another thing: that we need values and judging to make decisions. is it possible to exist as a human being without permanently deciding and judging in one way or the other?
because we could say that every action is the result of a decision.

that means that progress is something that is induced by a kind of pressure. we have to change to survive. could it be that a stable environment, where humans perfectly fit in without the need to change and adapt permanently, is somehow experienced as paradisical? then progress might be a kind of torture…
unless there are other causes for progress. what if they (the uncontacted people) know that they could progress like us, but they don’t want to? i know this is just guessing, but as long as we talk about human beings who are capable of thinking and wisdom, we should consider the possibility that “living without progress” could be a conscious decision.

true. so progress could be regress depending on the perspective of the person who judges. the only solution to that would be the discovery of universal values or the one and only universal value.

the last result we arrived at here, as far as i know, was “freedom”.

this is one of the greatest and wisest approaches to philosophy i have read in this forum. considering the changes in the interpretation of “know thyself” throughout human history (maybe the value system of our ancestors influenced their interpretation: https://de.wikipedia.org/wiki/Gnothi_seauton#Unterschiedliche_Bedeutungen), do you think it is possible that this is what is meant by it? that “know thyself” is the advice to introspect? and that it is possible to find a transhuman interpretation of this?
