Discussing the “value” of human beings is always a difficult and controversial topic. There are several extreme positions when it comes to possible answers:
You just Kant!
Kant sees each and every human being as infinitely valuable, so that you can’t sacrifice one being for the sake of another. While there is by now a solid mathematical theory for comparing infinite values, Kant’s sense of infinity is an absolute one that doesn’t allow comparisons, which fits the conventional use of the infinity symbol in arithmetic.
Human value is zero
This is the attitude of a strong proponent of cognitive therapy: David D. Burns. The reasoning behind it is that people should stop comparing one another, which is usually a pretty useless and psychologically dangerous activity. You can’t feel inferior if you see others as having no value. On the other hand, you can’t feel superior, because your own value is 0, too. This line of thinking doesn’t imply that humans should be seen as completely disposable. It’s more a mindset aimed at specific psychological applications like defusing depression and low self-esteem.
Economists think in monetary terms
Economists sometimes try to assign a monetary value to human beings, usually by estimating earning capacity and life expectancy. That’s a very pragmatic way of approaching the issue of human value. It depends solely on the capacity of humans to contribute to economic activity. So, it’s a rather narrow view of human value, but it’s at least a definition with which we can start to seriously discuss the underlying issues.
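As a rough illustration of this economic approach, here is a minimal sketch in the spirit of “human capital” accounting: a person is valued as the discounted sum of their expected future earnings. The helper function, the numbers, and the discount rate are illustrative assumptions, not figures from any particular economist.

```python
# Hedged sketch: value a human "economically" as the present value of expected
# future earnings over the remaining working years. Purely illustrative.

def discounted_lifetime_earnings(annual_earnings: float,
                                 remaining_working_years: int,
                                 discount_rate: float = 0.03) -> float:
    """Present value of a constant earnings stream, discounted year by year."""
    return sum(annual_earnings / (1 + discount_rate) ** year
               for year in range(1, remaining_working_years + 1))

# Hypothetical person: 40,000 per year, 30 working years left, 3% discount rate.
print(round(discounted_lifetime_earnings(40_000, 30)))  # roughly 784,000
```

To make those issues clearer, let’s consider the following scenario.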
The zero marginal cost human
Imagine a sufficiently advanced and rather fantastic future in which we can create matter and energy from nothing, and in which we can create perfect copies of any human being instantly, simply by pressing a button (or merely thinking a “copy” command). The economic cost of creating a copy of a human being is defined as zero in this scenario.
Given this crazy scenario, what would constitute the value of a human being? Would humans be worth nothing? Well, additional unneeded* copies would probably be valued at zero, because they are both free and not needed (they might even have a slightly negative value, since people would complain about “human copy spam”). It seems that the value of a human rather belongs to the equivalence class of that human, which consists of all of its identical copies. Removing all but one of those copies doesn’t really diminish the value of that class, because additional copies can be created instantly at will when they are actually needed. It’s only when you eliminate all of the copies (including the “original”) that you actually destroy real value. But is that really true? What if you could store the information defining that human in a passive data storage device from which you could reconstruct that person at will? From this, it seems that the value of a human being is concentrated in the information that defines that human being. At least this seems to be true in this rather outlandish scenario in which energy and matter cost nothing and we can easily create copies of anything and anyone.
* unneeded for any conceivable kind of economic activity, given the probably unrealistic assumption that economic activity could somehow be “maxed out”, so that the value of any additional unit of work could not be greater than zero.
Knowledge value of humans
Now, let’s go a bit deeper into this scenario and assume that we actually create a number of copies of a human being. Over time, the different copies would acquire different information by doing and learning different things. The additional information and knowledge a copy gains is usually not present in any of the other copies, and the value of that copy would be defined by exactly this additional knowledge. If that knowledge could somehow be extracted and integrated into the other copies, the whole equivalence class might become more valuable, but the copy’s knowledge would no longer be unique, so the “knowledge value” of that copy would be reduced to zero again.
What if the technology is also good enough to allow instantaneous integration of knowledge across all copies of a human being? Then the knowledge value would again be concentrated in the class of all of those copies. Individual copies would merely be “knowledge generation devices”. In other words: in this scenario humans are generally only valuable as knowledge generation devices.
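To make the argument a bit more concrete, here is a toy model in which knowledge is treated as a set of discrete items. The names and items are purely illustrative assumptions; the point is only that a copy’s individual value is whatever it knows that the rest of the class doesn’t.

```python
# Toy model of "knowledge value": each copy of a person holds a set of knowledge
# items, and a copy's marginal value is what it alone knows.

def marginal_knowledge(copy: set, other_copies: list) -> set:
    """Knowledge held by this copy and by none of the other copies."""
    shared = set().union(*other_copies) if other_copies else set()
    return copy - shared

# Three copies of one person diverge over time (made-up items):
copy_a = {"baseline memories", "learned welding"}
copy_b = {"baseline memories", "learned French"}
copy_c = {"baseline memories"}

class_knowledge = copy_a | copy_b | copy_c            # value of the equivalence class
print(marginal_knowledge(copy_a, [copy_b, copy_c]))   # {'learned welding'}

# Instantaneous integration: every copy now holds the union of all knowledge,
# so each copy's marginal knowledge value drops back to the empty set.
copy_a = copy_b = copy_c = class_knowledge
print(marginal_knowledge(copy_a, [copy_b, copy_c]))   # set()
```

In this toy picture, destroying a fully integrated copy destroys no knowledge value at all; only the class as a whole carries it.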
Back to reality
Now, does this tell us anything about the value of humans in our current, non-fantastic reality? I think so. This thought experiment suggests that a significant portion of the value of humans lies in their ability to collect information, process it, turn it into knowledge, and then eventually apply or share that knowledge. At the moment, we can’t easily extract and share human knowledge, because we can’t access all the data stored in our nervous systems directly. Part of the tragedy of a person dying consists in the knowledge that is lost with that death. If we could somehow back up all that knowledge, a death would be at least somewhat less tragic, especially if we could “reinstantiate” that person with an emulating AGI / robot copy.
One interesting conclusion in particular is that humans would accumulate value over time as they learn more. Children would be less valuable than adults. Furthermore, humans who can acquire knowledge faster would be more valuable than those who are slow at gaining it. This contradicts the idea that all human beings are equally valuable. Given that we started with a purely economic definition of human value, however, this isn’t very surprising.
Hedonistic value of human beings
Another possibility is to define the value of human beings through their capacity to experience pleasure and to elicit pleasure in others. Such an approach is in line with hedonistic utilitarianism. Given the assumption that humans have positive value, it would make sense to aim for maximizing the number of humans, or other “pleasure generation devices”, in existence. In the fantastic scenario from before, this would mean that we should press the human copy button as often as possible, or rather create optimal “pleasure generation devices” first and then copy them as often as possible. This is certainly a very outlandish scenario, but the reasoning behind it is quite consistent.
Scary implications for the future of humans?
If we accepted either the knowledge value or the hedonistic value of human beings as a guideline for our actions, we are confronted with some unsettling prospects, assuming that technology will eventually allow the creation of more effective and efficient “knowledge generation devices” or “pleasure generation devices” than humans. Wouldn’t the valuation of knowledge and pleasure compel us, or our successors, to replace humans with those more functional “devices”? Mostly, yes. However, there may be an exception to that rule: while humans might not be the most effective general knowledge generation devices possible, they might still be the best at generating human-related knowledge (for what it’s worth). As long as anyone is interested in humans, there will be an incentive to keep them around.
Are there other meaningful ways of defining the value of human beings?