Civil Rights in a Posthuman Future

Since we have pretty much resolved the issues of racial and ethnic discrimination in Western societies, civil rights is a rather simple thing nowadays: if you’re human, you have all the rights; if you’re not, you have none.

The only reason civilization can function that way is that we are the unquestioned rulers of planet Earth, but if things turn out as we expect them to, we won’t hold that title for long.
Current trends suggest that in a few decades (a century tops) there will be several other species (and I employ the word “species” in a very broad way, so that it includes not only biological life but also all the other possible forms of intelligence and life-like behaviour) with intelligence levels that match or even surpass our own. In addition, the very concept of “human” will have to be revised, since our species will evidently diversify and undergo considerable modification.

If we want to peacefully coexist with our intellectual equals and superiors while still keeping our rights and duties we will have to come up with a functional legal and moral system that applies to a diverse multi-species society. So, what should that system be based on?

As I said before, it can’t be humanity. I also don’t think it can be intelligence (as much as we insist that we are “rational animals”, we have to admit that a super-intelligence will probably think of us the way we think of mice).

A popular solution (which I’ve seen presented several times on this forum) is that we grant basic rights to all “sentient beings”. This may be a viable solution, since it would lead super-intelligences to respect us. We would also have to change our treatment of so-called “irrational animals”, since most studies suggest that they fit our traditional definition of sentience, but that will be a lot easier once we have vat-meat and stuff like that. The one real problem with this hypothesis is that sentience is actually a pretty hard thing to define, and it’s even harder to determine who or what can be classified as sentient.

Wikipedia defines sentience as “the ability to feel, perceive, or experience subjectively”. Given that definition, I think it’s pretty clear that most animals (if not all of them) are sentient. But this may be harder to judge with different kinds of creatures.

Have any of you read Peter Watts’ Blindsight? It’s a science-fiction novel that describes the first contact between humans and the “scramblers”, a race of alien beings that is intelligent but not sentient. They are smarter than us but don’t have the so-called “subjective experience”; however, they try to deceive us into thinking that they do.

This brings us to the classic Chinese Room argument. If an entity acts like a sentient being, is it safe to assume that it is sentient? Watts clearly answers that question with a no, and I agreed with him when I finished the novel.
But then I read the arguments of some philosophers and neuroscientists, and they made me think.

Do we really have a reason to believe that our so-called “subjective experience” is actually subjective? Couldn’t it just be the result of an immensely complex algorithmic process?

Philosopher Daniel Dennett goes as far as to claim that consciousness, in its standard definition, is nothing but an illusion, and that our brain does nothing but process information and generate appropriate responses. He supports his position by pointing out that we have no empirical evidence for the existence of qualia. Though I’m a little divided on this, I think he’s got a point.
If he is right, don’t we have to redefine the concept of sentience?

I would like to hear your opinions on this subject. I know I’ve drifted a little away from the theme, but that’s because I think these things are important to debate. Maybe I’ll write more on the issue of civil rights another day.

Only the experience of “our” own qualia, because that is what it is: subjective experience. “I” could never know if anybody else but “I” has qualia. From that moment on, it is the problem of solipsism.

So, in a way, it doesn’t really matter whether we call “qualia” an illusion or not.
We could never prove it.
So what I think is that ethics always has to assume something additional in favour of non-violence towards all that might have qualia.

Thanks for raising this very interesting and important topic!

I basically agree that we need to allocate basic rights to sentient entities, but with the distinction that only beings with valence qualia have relevant rights. Valence is the positive or negative value of a subjective experience. Subjective experience can be neutral, too – in which case it isn’t very relevant.

Unfortunately, this doesn’t make the basic problem of measuring sentience any easier.

Subjective experience is probably what certain immensely complex algorithmic processes feel like from the “inside”. There’s a recent theory about this, called integrated information theory, that offers an approach to actually quantifying consciousness and qualia. I think it’s currently the best theory we have, and I also think it is quite close to the truth.
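
To make the “quantifying” part a bit less abstract, here is a minimal toy sketch in the spirit of integrated information theory. It is emphatically not a real Φ computation (real IIT works with cause-effect repertoires and normalized minimum-information partitions); it only measures how much predictive information a tiny boolean network loses when cut into parts. The update rule and all details are hypothetical illustrations:

```python
import itertools
from collections import Counter
from math import log2

# Toy 3-node boolean network. This is NOT real IIT Phi -- just a crude
# "whole minus parts" information measure in the same spirit.

def step(state):
    a, b, c = state
    return (b ^ c, a, a & c)  # arbitrary, hypothetical update rule

STATES = list(itertools.product([0, 1], repeat=3))

def mutual_info(pairs):
    """Mutual information I(X;Y) from a list of equally likely (x, y) pairs."""
    n = len(pairs)
    px, py, pxy = Counter(), Counter(), Counter()
    for x, y in pairs:
        px[x] += 1; py[y] += 1; pxy[(x, y)] += 1
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def project(state, idxs):
    return tuple(state[i] for i in idxs)

# Information the whole system's current state carries about its next state.
whole = mutual_info([(s, step(s)) for s in STATES])

# Information retained when the system is cut into two isolated parts;
# we take the bipartition that preserves the most information.
best_parts = max(
    sum(mutual_info([(project(s, idxs), project(step(s), idxs))
                     for s in STATES]) for idxs in partition)
    for partition in [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
)

phi_toy = whole - best_parts  # crude "integration": whole minus best-cut parts
print(f"whole = {whole:.2f} bits, parts = {best_parts:.2f} bits, "
      f"phi_toy = {phi_toy:.2f} bits")
```

The point of the toy: if the network decomposes cleanly into independent parts, phi_toy drops to zero; the more the parts constrain each other, the more information a cut destroys.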

We definitely need a deeper understanding of sentience in order to evaluate the degree of sentience of any complex and integrated system.

I haven’t read Peter Watts’ Blindsight, and I don’t necessarily disagree with the hypothesis that there could be “zombies” which display intelligent behaviour without actually being sentient. However, I believe that these zombies would be quite difficult to create. I call the technological process that allows this “ultraautomation”, and I’m not even sure how feasible it is. It feels plausible to assume that the more you want to reduce the sentience of an intelligent system, the more effort you have to put into the creation and configuration process of that system. Also, such ultraautomated systems might be brittle and inflexible when confronted with problems they were not designed for.

There’s also the theory of practopoiesis, which suggests this should be the case. The following video is quite an intriguing introduction to that theory:

With regards to Daniel Dennett, I think he clearly demonstrates that applying objective ontology and epistemology to a fundamentally subjective issue leads to all kinds of philosophical bullshit. I’d interpret his philosophy as an elaborate error message that results from this approach.

I had never heard of that theory, so thanks for mentioning it. I’ve just finished reading the Wikipedia article about it, and I’ll probably read it again tomorrow, since I didn’t understand everything. If we assume that subjective experience is in fact a super-complex algorithmic process, quantifying consciousness could be an interesting way of measuring sentience. The only problem is that I don’t really understand how we would do that (behaviour tests?), but that’s probably just the result of not understanding the theory.

Good question. Behaviour tests wouldn’t actually do much, since we need to get down below behaviour and understand what makes behaviour emerge. How does the mind in question work, exactly? We would need to know that in detail – ideally, understand its operation in perfect detail. Then we could start making serious estimates about its quantitative consciousness parameters.

In other words: This will probably require technology that would at the same time allow uploading and radically enhancing human minds. Before that, all we can actually do is make educated guesses.

OK. Thanks for the clarification. I’ve been doing some research and I think I understand the theory better now (although I still struggle with some of the mathematical aspects).

Another thing I think is important to discuss is the question of government. We can give basic rights to all sentient beings (including “irrational animals”), but we obviously can’t give them the right to participate in government.

Maybe the only viable way of organizing posthuman societies will be accepting the rule of the smartest (which means we will have to let AIs tell us what to do once they become smarter than us). But this doesn’t necessarily mean that baseline humans need to be completely powerless, since they can always get away from their societies and colonize some uninhabited asteroid/moon/planet where there is no one more intelligent than them.

You make some good and important points here, Joao. I have thought about these issues a bit, but I think it’s important to dig deeper over time.

The core theme is participation in decision making processes. Our current mindset is mostly anchored to the contemporary forms of democracy with one vote per citizen. When we talk about a radically posthuman future, we need to realize that such a form of decision making most probably won’t be the most appropriate one. Instead, we need to develop new, more appropriate forms of decision making and participatory governance.

Here are some general approaches:

  • We could take our current form of democracy as a basis and think about which aspects we would have to change in order to get a good posthuman governance system.
  • We could look to nature, analyse how insect hives or the human brain come up with decisions, and try to incorporate those mechanisms into a new form of collective governance.
  • We could collect all the forms of government that are out there and try to form a synthesis that is adapted to a posthuman setting.
  • We could assume a scenario in which technology exists that allows for direct 1-on-1 telepathy and empathy, as well as group- and hive-minds, and then deduce what an effective form of governance based on these technologies would look like.

Then there’s also the question of how much centralization is appropriate, if any at all. I don’t believe we should decentralize everything that can be decentralized; rather, I think we need a fair and appropriate balance of centralized and decentralized elements for optimal governance.

Leaving all decision making to the smartest minds is actually a form of centralized governance. It’s probably not the most appropriate one, since less smart minds still have sufficient knowledge to at least solve certain problems they are directly faced with. Smarter minds should only intervene when less complex minds fail to find a sufficiently good solution on their own.

So, I disagree with the notion that we “obviously” can’t let “lesser” sentient beings participate in government, or at least some decision making system. The question is rather how to deal with their input in the most appropriate way. Any good government should at least consider the needs and wishes of all sentient stakeholders.

You’re right, we definitely should dig deeper.

I’ll address each one of your possible approaches.

As you said yourself, the individual vote (which is the basis of all democratic systems) will probably become impractical in the future. If things go as we expect them to, one day we will have the technology to freely clone, fork, and merge minds as many times as we want. This will certainly make it very difficult (if not impossible) to decide who/what gets the right to vote.

In addition, if we want to (as you say) let all sentient beings participate in government in some way, voting would not be a good way to do it. Dogs can’t vote, and super-human intelligences may also have difficulty expressing their immensely complex desires in that form.

I’m skeptical about the idea that we can find inspiration for a form of posthuman government in insect hives. It’s a fact that insects can organize pretty well, but we also have to consider that their behaviour is radically different from ours.
They have a low level of sentience (if we can call them sentient at all), and their communities are pretty uniform and immutable. I find it very difficult to imagine how their ways of social organization would apply to a diverse community of posthuman beings with complex desires and lifestyles.

As regards the possibility of some kind of brain-government, I don’t think it would differ much from the idea of absolute AI rule, since we would probably have a single brain-like super-intelligence making all decisions. But it’s still an option.

As I said before, traditional democratic systems would be inapplicable to a posthuman society. Dictatorships would function, but they would have the same problems as always, and maybe even more.

I like this approach. If we could develop some form of “telepathy”, we could accurately determine the desires of all sentient beings and make decisions accordingly.

I wouldn’t like us to become a hive-mind, since I’m afraid we might lose our individuality. I would prefer being governed by a board of super-intelligent beings who made their decisions based on an analysis of all sentient beings’ desires. We could even make decision-making automatic in some less complex situations, so that no one has to actually govern.

You raise a very important point here: the ability to easily clone, fork, and merge minds. That’s a really, really big issue. It’s so big that there would be immense (and justified) pressure to thoroughly regulate such acts. After all, those who first have access to this technology could use it for their own advantage and flood the world with minds made in their own image. In the case of such a mind flooding, those minds would probably be seen as an existential threat by almost everyone else, and they would be hunted down as far as possible. Of course, talking about “rights” in such a scenario would be rather pointless, because the dystopian flooding scenario would probably result in the creation of a totally authoritarian and rights-ignoring regime, created either by the copies themselves or by the counter-copy alliance.

A more positive scenario would actually depend on thoroughly regulated cloning/forking, at least once that action becomes sufficiently cheap. Otherwise, it would be hardly conceivable that the clones/forks would be granted actual rights. If cloning/forking is strictly regulated, then it becomes more likely that political power would be allocated in proportion to mental characteristics like the size of the mind, or its degree of consciousness. For example, voting power could be allocated in proportion to the number of neuron- or synapse-equivalents within one mind. This approach would also work for group-minds.
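
As a toy illustration of that allocation rule, here is a minimal sketch that assigns vote weights in proportion to synapse-equivalents. The mind names and counts are made-up placeholders; the baseline human figure is only a rough order-of-magnitude guess:

```python
# Hypothetical registry of minds and their synapse-equivalents.
minds = {
    "baseline_human":   1.5e14,  # rough order of magnitude for a human brain
    "enhanced_human":   5.0e14,
    "small_group_mind": 3.0e15,  # e.g. a merged collective of several minds
    "uploaded_fork":    1.5e14,  # a fork counts by its own substrate
}

total = sum(minds.values())
weights = {name: synapses / total for name, synapses in minds.items()}

for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} vote weight = {w:.4f}")
```

One nice property of the proportional rule is that it covers group-minds automatically: a merged mind simply brings its combined substrate to the table.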

It all comes down to the mathematical models that are used to describe collective decision-making. When it comes to the complexity of these models, democratic election systems aren’t really more sophisticated than, for example, quorum sensing in bacteria or insect colonies. Your rebuttal would imply that something as primitive as democracy should be totally unacceptable for our highly developed human society. Yet we depend very much on democratic mechanisms. Isn’t that strange?

I was thinking more in terms of humans acting in a neuromorphic government the way neurons do in the human brain. Each neuron does have some impact on the processes of the brain at large. There are also very different kinds of neurons, and their interactions can be quite complex. Still, I don’t have any really good idea of how such hypothetical neuromorphic governance systems should work. Nevertheless, the brain is a system that can run really well, so it’s not too far-fetched to seek inspiration in it for creating new forms of (radically distributed) governance.

Interesting. What are the characteristic problems of dictatorships in your view? And how do democratic systems try to avoid those?

Yes, the basic idea seems to be clear. Doing that in reality would nevertheless be hugely complex and the devil would be in the details.

Would you go as far as preventing those who want to form a hive-mind and give up their individuality from actually doing so?

The degree to which this sounds like a good idea is the degree to which those super-intelligent beings appear trustworthy to everyone else. Establishing trustworthiness is certainly a huge general problem.

Humans usually don’t want to hand over power to automatic decision-making algorithms unless they have a really good incentive for doing so. Money and convenience are pretty popular incentives that could make them do that. I’m not sure this would be a good idea in general. In some cases it may be, but that would need to be examined on a case-by-case basis. And even then, I would demand the inclusion of a manual override system for special circumstances.

I have to admit I don’t know much about quorum sensing. How would a government based on it function?

You mean, like a hive-mind? As I said, I’m not a big fan of those things.

The traditional problems of dictatorship are the suppression of individual rights (which will probably become easier in the future because of improvements in surveillance technologies and mind control) and the repression of minorities (which will greatly diversify once humanity starts radically altering itself).

We would need a full understanding of how the mind works, which would also be necessary to develop most of the technologies that would create the problems that we are discussing.

No, I believe everyone should have the right to do whatever they want with their minds and bodies, as long as they don’t harm anyone else.

No matter how much we try to decentralize government and distribute political power, there will always have to be someone who runs things. In my opinion, that someone should be a group of people who have proved themselves smart and benevolent.

Of course trustworthiness will be a problem; it always was and always will be.

Since quorum sensing is a rather general concept, that question is about as hard to answer as the question “How would a government based on voting function?” The answer is: it depends! It depends on how voting/quorum sensing is implemented and what it’s actually used for.
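
Just to make one concrete variant tangible, here is a minimal, hypothetical quorum-sensing-style decision rule, loosely inspired by how bacteria trigger collective behaviour once a signalling-molecule concentration crosses a threshold. The population size, the 0.3 threshold, and the preference distribution are all arbitrary illustrations:

```python
import random

random.seed(42)  # reproducible toy run

NUM_AGENTS = 1000
QUORUM_THRESHOLD = 0.3  # fraction of signalling agents needed to trigger action

def agent_signals(preference_strength):
    """Each agent emits a signal with probability equal to how strongly it
    favours the proposal (a stand-in for local, noisy sensing)."""
    return random.random() < preference_strength

def quorum_decision(preferences):
    signalling = sum(agent_signals(p) for p in preferences)
    concentration = signalling / len(preferences)
    return concentration >= QUORUM_THRESHOLD, concentration

# A lukewarm population: each agent mildly in favour to varying degrees.
preferences = [random.uniform(0.0, 0.6) for _ in range(NUM_AGENTS)]
act, conc = quorum_decision(preferences)
print(f"signal concentration = {conc:.2f}, collective action triggered: {act}")
```

This makes the “it depends” visible: the same population produces different collective decisions depending on where the threshold sits and how signals aggregate.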

Neuromorphic governance with humans as neurons doesn’t imply the existence of a hive-mind. Participating in a large group mind eliminates your individuality about as much as participating in an election does. In other words: the neuromorphic global mind is just another tool for collective decision making, not something that would necessarily change human nature.

Perhaps you are just polled about certain topics at seemingly arbitrary intervals. These topics would be within your fields of interest, or you would be affected by the outcome of the decision that is going to be made by the global mind. And maybe there are also certain “neuron people” who have different policy-related tasks, like thinking about which options the polls should include, or even how the whole mind should function. And there would be many feedback loops and stuff. All really complicated, like regular politics, but with a better overarching structure and higher collective intelligence.
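
Here is a very small sketch of just the topic-matched polling part of that idea: only citizens whose declared interests (or affectedness) match a topic get polled on it. The names and data structures are invented for illustration:

```python
# Hypothetical citizen registry with declared fields of interest.
citizens = [
    {"name": "Ada",  "interests": {"transport", "energy"}},
    {"name": "Ben",  "interests": {"education"}},
    {"name": "Cleo", "interests": {"energy", "education"}},
]

def poll_targets(topic, citizens):
    """Select the subset of citizens whose interests match the topic."""
    return [c["name"] for c in citizens if topic in c["interests"]]

print(poll_targets("energy", citizens))     # ['Ada', 'Cleo']
print(poll_targets("education", citizens))  # ['Ben', 'Cleo']
```

The feedback loops and the “neuron people” who design the polls would of course sit on top of something like this, which is where the real complexity lives.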

These problems can also occur in democratic governments. And apart from that, these issues do not necessarily translate into inferior outcomes for the state applying dictatorial control. It is quite conceivable that some dictatorships work better than some badly functioning democracies. They might even outperform democracies with respect to the problems you mentioned: intelligent dictatorships could respect the rights of individuals more and protect minorities better. I’m not saying that this is really likely, but it could very well happen.

Good. Then you also need to accept that people are quite diverse and some would be willing to give up their individuality in favour of joining a hive-mind. And these hive-minds might actually work better than regular groups of individuals. There’s certainly some potential for conflict between those different factions, but it can also be hoped that they cooperate peacefully and ideally form something like a symbiosis.

Some years ago I would have agreed with you on this point, but now I’m not so sure. Political power might actually be distributed so effectively that no coherent collective could accumulate a significantly large amount of it. Of course, this is very speculative territory.

You mean some kind of direct democracy where only certain people would be polled about certain things?
Sounds interesting, but it would be very hard to implement. It’s an option, anyway.

I never said that there could be no hive-minds; I just think they can’t be considered the sole solution to the problem of government in a posthuman future, since a lot of people (like me) don’t want to give up their individuality.

I’m all for political experimentation. We can certainly look for better ways of distributing power and improving popular participation in government, but I sincerely doubt we’ll ever get to the idyllic distribution of power that you talk about. There will always have to be some kind of governmental body.

Very hard to implement? Probably. But I think this may be the best general direction to go in, because we need a powerful system that can meaningfully use the swarm intelligence that potentially exists within the whole population, or even biosphere, without overwhelming individual actors with information and decision overload. There’s a chapter in the book “Anticipating Tomorrow’s Politics” about an interesting system that mixes meritocracy and direct democracy in an intelligent and dynamic way. It should be available online on the Transpolitica homepage soon.

Right. Then we seem to be on the same page.

The governmental body could be extremely decentralized and distributed nevertheless. It could be some kind of “GovNet” thing that permeates everything and makes decisions based on local or global desire and opinion densities or something like that.

That seems interesting. I’ll make sure to check it out.

Forgive me for bringing bad news, but it’s rising; even Madonna said so:
http://www.theguardian.com/society/2014/aug/07/antisemitism-rise-europe-worst-since-nazis

Beautiful question you got there.

OK, wait a second, this is different: this asks how to evaluate something by means of either our subjective views or a different, unknown method.

And a beautiful answer right here.

Yes, a solution to uncertain problems is always approached via Occam’s razor, the rule of simplicity: “assume the most likely”.
But I think this rule from old Occam is incomplete. So how do we resolve uncertainty?

I thought the “Chinese Room” proved this is not the case.

Got to check this… Yes, it is an incredible way of optimizing results, but I don’t see the spontaneous generation of consciousness in it yet. There can be, just as with life, a first spontaneous generation, but the theory doesn’t yet account for its creation.

How do we define this rule? And how can we expect to follow it, if not by understanding everything necessary to actually be among the smartest? This paradox is appropriately difficult to cope with, but it lets us know we can always question rules we don’t understand. The alternative, to follow blindly, seems inappropriate.

And I agree with you, Radivis. If one can learn, one can become smarter, and one can decide one’s own fate.

I hope it is not necessary to go there, but an internet with the capacity to automatically know how to operate in society, in favour of one’s own fate and the society’s, is possible. I hear “eudaimonia” realized in this.

It is an intrinsic value of our beings, and it doesn’t mean it contains the whole of democracy, but there is something in it that we must understand, a kind of natural law.

Me neither, as for myself at least. I might tolerate other individuals in it, though.

To be more precise: people with more power than other people. That’s not necessarily bad, unless they are taking power away from someone with genuine merit to wield such power, or the subject with more power is a psycho-/sociopath.

I agree, but only if there are countermeasures against deceit and “emotional overtaking”, which expose subjects to mental subversion and would mean an exponential way to exploit people’s minds.

That was my hope, before I realized that intelligence is measured in a million different ways, many of which could refute this assertion of “benevolent rule”. Because, do we measure success by how well one individual exploits another?

Yes, this idea comes up a lot in this forum. I will leave this cauldron as it is for the moment, before clarifying where I think we are heading.

That depends on what we call “government”.

PS: Forgive my long absence; my computer was hacked (even my network seemed to suffer).

Not really. The Chinese Room experiment hasn’t been done in real life. Perhaps some kind of conscious understanding, on a rather unusual level, would emerge if the rules of the system were complex enough and the operator intelligent enough to recognize their patterns. In some sense, the structure of the rules would have to match the structure of the use of the Chinese language outside of the room. That might make it possible to learn a kind of structural Chinese from within the room.

What are you talking about?

I think there’s a great danger in applying a linear model of “smartness” to this issue. In reality, you can be the “smartest” in one area while being the “dumbest” in another. Even the smartest humans do not understand everything. And in particular, most people don’t really understand law, or even know a significant fraction of all the laws that exist. Yet obedience to the law is demanded of us. My guess is that this system only works because law only goes into action once things go wrong to a sufficiently high degree to actually invoke it. Law is not the usual practical guiding principle of people; social expectations are.

And yes, it should be possible for rules to be understood by those with only modest cognitive capabilities. Theoretically, lawyers should play the role of “explainers” of the rules we call laws. So, I propose lawyer AIs for everyone.

I suspect you mean that democracy is necessary, because humans have an innate need for autonomy. Respecting autonomy is very important, but it may be questionable whether it’s “democracy” which is the best mechanism for doing so. Ideas like anti-authoritarianism, libertarianism, and anarchism could be more suitable for that purpose.

Please don’t conflate the terms “intelligence”, “success”, and “exploitation”. That just creates a mess. And don’t confuse “intelligent dictatorships” (as in “intelligently designed dictatorships”) with “dictatorships with an intelligent dictator”. By the first, I mean the intelligence of a system; by the latter, the intelligence of a person. It’s not as if all democracies or dictatorships were the same systems. In reality, both can be quite complex systems with many different internal structures and subsystems.

By “spontaneous generation of consciousness” I mean the creation of a minimal value of consciousness. We still need standards for measuring consciousness, or we would grant the merit of consciousness to rocks and inanimate bodies. Not that I am completely opposed to that, but explaining it is rather another discussion.
By “optimizing results” I mean that it shows promising uses in other areas. Indeed, it could be used in consciousness studies, but it has not yet been proven to generate consciousness; at the least, the theory is not complete.

I hope you are serious, because that is such a great idea!

Something in it, not necessarily the whole, but it could be the whole. It depends on what we call democracy. We should strive for positive liberty, but does that necessarily imply eliminating all forms of democracy? Is there something we should keep from democracy? Is there any proto-idea that forms the core of democracy while at the same time melding with human nature itself? Eudaimonia is an old and not often used concept, but it might fit. It might.

There are records to prove your point.

And this is precisely why there is danger in speaking of intelligent dictatorships: our language fails to convey the same idea on this topic. I suggest the use of “enlightened despotism”; it is a classical idea through which we can share a common language, and it doesn’t invite misreading so easily.


On the existence of enlightened despotism: we would enter into a game of chances, where the despot or, most probably, the oligarchy can and probably would try to use deceit to appear benevolent. Would we play such a hazardous game of chances?
You can tell from now on that if this is the destiny, there would be far too many contenders in this race, and there will be conflict. Human nature doesn’t restrain itself in the face of despotism so easily.

If one is to take integrated information theory seriously, then one has to realize that it represents a kind of panpsychist theory that does ascribe a very modest degree of consciousness to rocks and other “inanimate” bodies. It’s just that this basic, all-enveloping consciousness is astronomically lower than that of highly sentient beings. Nevertheless, there is no clear cut-off where the “inanimate” or unconscious world ends and the “animate” or conscious world begins. Constructing such a boundary would be a tough challenge for philosophers and mathematicians. I know that theories like these can feel unsettling, inconvenient, and even disturbing, but I think they are closer to the truth than more conventional theories and ideas about consciousness.

And yes, the theory doesn’t really explain how exactly consciousness or qualia or valence arises, or what determines its actual quality. It’s a first effort at a quantitative approach that’s not subtle and complex enough to yield any qualitative explanations.

This is a nice suggestion, but I fear it carries too much historical baggage with it. There are actually a few transhumanists, the so-called neo-reactionaries, who want some kind of enlightened monarchy – most notably Michael Anissimov, who recently released a book in which he criticises democracy and promotes enlightened monarchy. Of course, his views are heavily contested. I certainly don’t agree with Anissimov, but I think there are some aspects in which he confronts us with valid points.

The main point that I am making is just that we shouldn’t fixate ourselves on demanding that future governments should be 100% democratic, unless we have really good reasons for doing so.

I’ve tried to come up with highly functional modes of future government involving artificial superintelligence (ASI). In contrast to others, I do not think that ASI rule would be a perfect solution, even if that ASI were “friendly” in some sense. Even with ASI, we would need to set up good governance structures to make sure the system works in everyone’s best interest. One of the latest concepts I have thought about involves a distributed ASI whose character, goals, and behaviour change in a continuous adaptation process. If the people are unhappy with the ASI, then the adaptation process changes the ASI until the people are happy again. This cannot be done with normal persons, because we cannot technically or morally intrude so deeply into the mind and character of a human.

The contemporary system that would most closely resemble this governance structure would be that of a democratically elected monarch, but with continuous polling of the population about contentment with the monarch’s policies. Once the poll ratings dropped below a certain threshold, the next best monarch candidate would take over the office.
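
For concreteness, here is a toy simulation of that replacement rule. Everything in it (the approval threshold, the candidates, their hidden “competence” values, the simple rotation to the next candidate in line) is an invented illustration of the mechanism, not a worked-out proposal:

```python
import random

random.seed(1)  # reproducible toy run

APPROVAL_THRESHOLD = 0.4

candidates = ["Candidate A", "Candidate B", "Candidate C"]
competence = {"Candidate A": 0.45, "Candidate B": 0.60, "Candidate C": 0.50}
current = 0  # index of the monarch currently in office

def poll_approval(true_competence):
    """Continuous polling: approval drifts noisily around the officeholder's
    (hidden) competence, clamped to [0, 1]."""
    return max(0.0, min(1.0, true_competence + random.uniform(-0.2, 0.2)))

for month in range(1, 25):
    monarch = candidates[current]
    approval = poll_approval(competence[monarch])
    print(f"month {month:2d}: {monarch} approval = {approval:.2f}")
    if approval < APPROVAL_THRESHOLD:
        current = (current + 1) % len(candidates)
        print(f"  -> below threshold, {candidates[current]} takes office")
```

Even this trivial version exposes one failure mode: a noisy polling signal can depose a decent officeholder after one bad month, so a real system would need smoothing or repeated confirmation before triggering a handover.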

Anyway, the system would have to be quite sophisticated and balanced, because there are still so many ways in which this could go wrong.

In the future we could use brain scanning in order to verify the integrity of our rulers. In some sense, this is a kind of inverse 1984 in which the leading classes are subjected to constant and extreme sousveillance, but hey, that’s much more appropriate than the stupid forms of surveillance and government corruption we have today.

Brilliant!

The ASI could evaluate the desires of the population through “telepathy” (since we will probably have that technology by the time we are able to create a superintelligence); that way, we could allow all sentient life to have a “say” in government.

Yes, that sounds really good. It’s quite similar to what I’ve had in mind. I guess the most serious problem is finding out how to best fulfil all the different desires of all sentient beings simultaneously. There will certainly be some tough trade-offs to be made. And much complaining about how that is done.