
The Need for Privacy is a Symptom of Immaturity!

I wrote the following as a response to a Singularity Weblog post about a video on the importance of privacy:

I see the need for privacy as a symptom of an underlying disease: human and societal immaturity. What creates the need for privacy is the fear of being judged. This fear arises because we live in judgemental societies that punish deviant behaviour in various ways, even where there are no fundamentally good reasons for doing so.

If we lived in a world in which you could do what you wanted, unless you caused significant harm to others (without their consent), then there would not be a justifiable need for privacy! In such a world, every action that you would be ashamed of would be ethically problematic. It might still be justified to take such problematic actions if they further a higher good, but in a more mature society this possibility should be discussed openly, and the necessary sacrifices would have to be accepted for reaching higher goals. Thus, if all good actions can be done openly and effectively, then the actions that still require privacy are those that are not really good. Universal transparency would be fully justified in that situation of high ethical maturity.

The problem is that our societies are not ethically mature. And by promoting the value of privacy you are just strengthening that suboptimal status quo, rather than confronting society and humanity with the highly unpleasant fact that we really aren’t as mature as we think we are.

Our true ideal should be the transition to full transparency, even if the transition towards that ideal is a valley full of shame, guilt, pain, embarrassment, despair, and existential crises! Growing up is not an easy process. It can feel devastating. But in the end, it will be worth it, because we can only reach full flourishing, our full potential, if we use maximal synergy, improve ourselves together, and don’t waste our precious resources in petty conflicts and zero-sum games.

I understand that it’s highly revolutionary to set such an extreme ideal, but I believe that it is possible to achieve. It will require an immense amount of effort, and an immense amount of psychological resilience. So, why will it be worth it? Because the rewards will be full maturity, full resilience, full freedom, full flourishing, and full common wealth. Any other option would be worse than that. If we want the best future we can imagine, we must work on ourselves, endure the transitional pains, and emerge victorious in an iridescent world that is wonderful to live in, but would shock any previous generation to death, just as our current world would shock those generations that have come centuries and millennia before us.

If you argue for privacy, you are essentially arguing in favour of poverty. Only true transparency can grant us real abundance!


The video of Glenn Greenwald is very good, thank you. I don’t have the impression that he is immature or argues for poverty; on the contrary. There are major aspects you missed: human dignity, and “the other side of the window” <— the person who violates the privacy of others. What is her motive?
Another is that information is power, and people can use power in a beneficial way or in a violent manner. Sadly, we experience more violence from those who have power than help.
I don’t know whether Glenn Greenwald is right with the idea that privacy is human nature… maybe he is not. But a lifelong experience with that type of violence is enough to make one seek shelter in privacy.

This is indeed a complicated topic. Its roots go very deep. The core problem is that humans do harm to each other depending on whether certain conditions are met. In general, the pattern is this:

  1. If person P knows information I about person R, then P does action A which will harm person R.
  2. Assuming that person R knows about the previous fact, then person R has an incentive to keep person P from attaining the information I, in order to avoid the harmful action A.

Wherein lies the problem? The core problem is not 2, because 2 is just the rational reaction of person R to 1. It’s 1 that is the real problem. We use information about others as an excuse for doing harmful actions. If all of us stopped doing that, then there would be no reason for privacy. If what others know about you won’t make them treat you worse, then what reason would you have to hide anything? It’s this mature state that I want the world to reach. But it would require the majority to behave in a mature way and refrain from harming others on the basis of information they have about them. This may seem completely utopian, but there’s no reason for that state to be impossible.
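The incentive structure described above can be sketched as a toy payoff model. All payoff numbers here are made-up illustrations, not data; the point is only that once others stop harming you based on what they know (the "mature" case), hiding information loses its rational benefit.

```python
# Minimal payoff sketch of the privacy incentive pattern described above.
# All payoff numbers are illustrative assumptions.

def payoff_R(shares_info: bool, P_harms_on_info: bool) -> float:
    """Payoff to person R. Sharing openly has some assumed intrinsic value
    (e.g. synergy, convenience), but if P uses the shared information to
    perform the harmful action A, that harm dominates."""
    value_of_openness = 1.0   # assumed benefit of not having to hide
    harm = 5.0                # assumed cost to R of P's harmful action A
    if shares_info and P_harms_on_info:
        return value_of_openness - harm
    if shares_info:
        return value_of_openness
    return 0.0  # hiding: no openness benefit, but safe from action A

# Immature society: P harms on information -> R's best response is hiding.
assert payoff_R(False, True) > payoff_R(True, True)

# Mature society: P never harms on information -> hiding loses its point.
assert payoff_R(True, False) > payoff_R(False, False)
```

With these assumed numbers, hiding is rational exactly when P harms conditionally on the information, which matches the claim that problem 1, not problem 2, is the root cause.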

Anyway, the mature state is deeply incompatible with our current societal systems, which are rooted in the immature state. What we would need to do in order to achieve the mature state is to abolish punishment completely, and to become so sensitive that we can predict which of our actions are interpreted as punishment by others.

That’s why the following title would also be appropriate for this topic:
The existence of punishment is a symptom of immaturity!

Could a world without punishment work at all? Maybe not completely. But at least the world might resort to more reasonable punishments like the loss of power and wealth, rather than the loss of freedom. Perhaps we will be there in 100 years.

I think there are many motives to hide information from others, just as there are many motives to get private information about others… but in a way, all of them could be subsumed under “power” and “fear of violence”. For example:

Getting information:

  • to outsmart others
  • to get money
  • to rob
  • to cheat
  • to feel superior and mighty
  • to punish …

Hiding information, because of the fear:

  • of being misunderstood, and being punished for that
  • of losing something: money, property, reputation, influence, job, relationships, freedom…

How about summarizing all the different motives for hiding information as “Avoiding harm or reward reduction caused by others after knowing that information”? This seems to be the core of it, imho. The motives for getting information on the other hand seem to be far more diverse. Beyond what you have written there are:

  • Curiosity. Furthering your understanding of people and the world at large. Scientific interest.
  • Hypothesis testing. Scientific analysis.
  • Desire to find and experience novelty.
  • Revealing corruption in politics, economics, and other areas.
  • Getting information about crimes. Ideally in order to avoid crimes altogether.
  • Distinguishing the guilty from the innocent clearly. Though this may be a misguided motive when viewed through a more systemic lens on human behaviour.
  • Gaining more information on a specific subject or art in order to acquire mastery. Self-improvement.

I think it’s fair to say that the motives for gathering information outnumber the motives for hiding information. In that sense, openness would be a more “natural” state than privacy. Nevertheless, the desire for privacy seems to be strong. Even non-human animals cheat and lie. So, a totally transparent society would be “unnatural”, too. But that doesn’t mean that it would necessarily be a bad thing. It just requires a more enlightened culture and a high degree of personal resilience. After all, it’s tough to accept that everyone else may know everything about you. The other side of the coin is that, in turn, you know everything about them. The countermeasure for others hurting you via the information they have about you is hurting them with the information you have about them. Don’t be a victim. Fight back. If everyone behaved that way, there would be a large disincentive to hurt others. One that may only be tolerated if it’s for a higher purpose, like preventing even greater harm caused by individuals or groups.

If the world you outline here is actually possible, that means that even now there exists a subset of people who could form such a community between themselves and live without hiding anything from any other member of the group. If you could find these people, have them form a community, and have that community grow without losing this defining feature, that would both prove it possible and work as a tool for making such a world a reality.

What kind of infrastructure would such a community need? What would it look like from the outside? What would its members think about the people outside of the group? What would convince them to stay in the group? How would they decide which people from outside could be allowed into their group? Would they ever need to kick someone out of the group?

While I can’t confidently say whether such a group is possible or not, I do know that I wouldn’t be ready, myself, to actually be part of such a group at this point. The first order of business would be to find people who can be convinced to join such a group.

There’s also a grave security aspect to such a group. Even if you can trust every single member enough to go without privacy, being a member means that if any member of the group bungles their data security, the whole group is screwed. Perhaps we’ll eventually have good enough privacy technology that this ceases to be a problem.

That’s an interesting train of thought, but I’m not sure whether it could work out that way. This privacy-free group might need to be isolated from the rest of the world, because the rest of the world would still be a significant threat to the well-being of the group (as your last post strongly suggests). And any real isolation from the world at large would probably have some dystopian characteristics. So, this group approach is something that seems to scale badly.

But that insight puts into question whether a privacy-free world can be reached at large without dystopian side-effects. Perhaps there needs to be a gradual, uniform transition, with each individual changing at about the same time towards caring less about privacy. When everyone brings their shields down at the same time and becomes potentially more vulnerable, then a privacy-free system might work. Those who bring their privacy shields up again would make themselves very suspicious. It would look like they had something very terrible to hide. In a reputation economy, they might be ruined for that.

The end result may be some kind of society in which privacy is punished worse than almost any actual crime. With that kind of incentive situation, a privacy-free world would probably work fine. Sure, it would be very inconvenient not to have the ability to hide your own faults, quirks, imperfections, and bad deeds, but that’s a price we might have to pay to live in a mature society.

Don’t forget that there most likely are ways to have privacy about having privacy. That’s somewhat of a problem for the model where people are punished for having privacy.

However, it sounds likely that the reasons for having privacy will be conquered society-wide, one by one. I mean, that’s already visibly happening. Much to the dismay of the people who are the reason privacy might be needed.

That being said, I suspect we’ll get closer to a fully transparent world faster through technologies that build direct mind-to-mind links between individuals. 5-6 such links per person would mean the whole world is interconnected in a network that resembles telepathy. I believe we already have several usable mechanisms for the input part of the links, perhaps outputs as well. It just needs someone to implement it and perform long-term testing on such a link.
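The intuition that a handful of links per person interconnects everyone resembles small-world network results. Here is a rough sketch under illustrative assumptions (10,000 people, each forming 5 random two-way links): a breadth-first search then checks how many people are reachable from one person, and in how many hops.

```python
# Sketch of the "5-6 links per person connects the whole world" claim,
# modeled as a random graph. The numbers (n, links_per_person) are
# illustrative assumptions, not data about real social networks.
import random
from collections import deque

random.seed(42)
n, links_per_person = 10_000, 5
graph = {i: set() for i in range(n)}
for person in range(n):
    for other in random.sample(range(n), links_per_person):
        if other != person:
            graph[person].add(other)
            graph[other].add(person)  # links are two-way, like telepathy

# Breadth-first search from person 0: who is reachable, in how many hops?
dist = {0: 0}
queue = deque([0])
while queue:
    node = queue.popleft()
    for neighbour in graph[node]:
        if neighbour not in dist:
            dist[neighbour] = dist[node] + 1
            queue.append(neighbour)

print(f"reachable: {len(dist)}/{n}, max hops from person 0: {max(dist.values())}")
```

In random graphs with this average degree, essentially everyone ends up reachable within a few hops, which is the small-world effect the "5-6 links" figure gestures at.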

We’re perfectly capable of learning to interpret our current senses differently, so sensory substitution would be a workable non-invasive way of inputting data:

In other words, just hook 2 or more people up to each other and see what they learn to do with the connection.


That’s a very good and important point that definitely needs to be considered in all future scenarios about privacy and transparency!

Yes, I’ve thought about telepathic networks like that. There have been a few articles on that topic, too, but fewer than I think are warranted for such an important subject. What would change through such networks is that people would get a better understanding of how others think and feel. That will be a big game changer. However, it will only happen if people open themselves up to see the world from the POV of others. Many people would shy away from doing that, for various reasons. So, such a network can’t be a real panacea. It’s helpful for those who accept that technology and are willing to use it to improve themselves and their understanding of other people, but those are exactly the people who are pretty “progressive” and open already.

This might cause a divide between “telepaths” and “non-telepaths”. The non-telepaths would probably cause a lot of trouble to the telepaths. Tensions might build up and eventually lead to violent conflicts. I’m not sure how to resolve this potential issue. Perhaps it won’t be as bad as I imagine here. After all, we didn’t get violent conflicts between “social networkers” and those who don’t use social networks. What do you think? Is there serious socially disruptive (in the bad sense) potential in this kind of technology?

I’d expect a community of people linked telepathically like that would understand the world and human psychology much better than your average unlinked individual can. In addition, I see no reason why members of a telepathic network couldn’t keep their membership in the network a secret. So, whatever form a conflict between telepaths and non-telepaths takes, the telepaths would have an enormous advantage.

Although, I also suspect that most members of the telepathic network would not, at least in the beginning, be able to consciously use the link. Most of the communication would be direct communication between their subconscious minds. It’s entirely possible that the network as a whole would develop its own consciousness that no single member is completely aware of. Another kind of potential superintelligence.

People would just find themselves acting out bits and pieces of the superintelligence’s will and watch in awe at the combined results of all of their actions. It could look like massive numbers of people just felt like doing something a little unusual, and something extraordinary resulted from it. It could even be that very few of those people ever realized they were making it happen. Kind of a scary thought, no?

Then again, perhaps that’s already happening, even without direct links.

That is a very fascinating scenario. But I think most people would be totally scared about being connected to a network that read their subconscious thoughts. On the other hand, it would turn the “collective unconscious” theory of Carl Jung into a reality.

In awe, or in sheer horror. :astonished:

Yes, and that’s why this unconscious telepathic connection + emergent superintelligence might pretty much remain a fringe thing, if it happens at all.

Adam Smith calls it the invisible hand of the market! :smiley: