
Privacy in a technologically mature society

What will the eventual future of privacy look like? How will privacy work? What will it cost? I’ve already started a thread on privacy.

My argument for why we desire privacy is that we want to prevent others from using certain information to harm us. We don’t yet know how hard it will be to maintain privacy in a future that is increasingly saturated with technology. It would seem that, as our world becomes ever more hyperconnected and more and more sensors are deployed in the environment, our homes, and even our bodies, maintaining a high degree of privacy will become an increasingly hard prospect.

The Leviathan sees everything

Can that outlook somehow be avoided? What if we prohibit spying on others in general? Well, as sensors, spy drones, and their like become ever smaller, smarter, and cheaper, it will become increasingly difficult to establish that a particular person spied on someone else. In 30 years, what stops the average Joe from printing out thousands of robotic spy flies that gather information on everything in sight? Either we spy on Joe ourselves, or we don’t know who released all those spy flies, which probably won’t even be noticed. This should give us a clue: a privileged organisation, a secret service with the authority to spy on people, but with protection from being spied upon, could be used to prevent unauthorized breaches of privacy. In the end, we’d have to trust some kind of “privacy Leviathan” to protect our privacy from everyone else. But who protects us from that Leviathan? This is a troublesome question.

We could try to control that Leviathan somehow, but such efforts could threaten our privacy again. If we see what the Leviathan sees, and the Leviathan sees everything, then we see everything, and privacy is lost again. We could try establishing a network of organisations that keep each other in check, so that none of them exploits its information on us in any undue way, and we would like to have some democratic or economic leverage over those organisations. That’s probably the best we can hope for, under usual conditions.

Unusual conditions would be a Leviathan that is completely trustworthy, for some interesting reason. Perhaps it’s a kind of friendly AI that is incapable of betraying us. Or perhaps it’s provably aligned with our interests. In that case, we’d have what I call a “perfect Leviathan”. Otherwise, we’d have to deal with imperfect Leviathans with checks and balances.

Strategies of privacy fanatics

Of course, privacy fanatics wouldn’t want even a Leviathan to have access to their private information. But without a Leviathan, protection against intrusions of privacy by third parties will become increasingly difficult and costly. Yes, you can protect your own home with lasers that shoot down spy drones. Even if there are nanobots that spy on you, you could lace your home with a nanobot immune system that kills off intruders. The walls of your house may be listened in on with lasers that pick up the vibrations caused by voices inside, but you might want to opt for encrypted tight-beam laser communication that carries the signals of a telepathic local area network anyway. Outside of your home, maintaining your privacy will be almost impossible. You might hide your identity by smearing your face with paint that throws off face recognition, and wear stilts that make gait analysis harder, but no normal human would recognize you anyway, and the chance that an AI recognizes you by your skeletal features is so high that no real privacy will be possible outside of your home. And of course the entrance to your home is likely under constant surveillance, so you won’t be able to hide whom you are together with in your home.

With the arrival of the Internet of Things, you would have to opt for “dumb stuff” that isn’t connected to the internet all the time. If you have devices with cameras in your home, you might want to cover them in case someone gains access to them (which isn’t terribly unlikely). And if you don’t resort to computers without internet access entirely, you should at the very least use Tor all the time to remain anonymous online.

You don’t tell anyone anything about yourself that isn’t already publicly known – unless you can trust those people not to propagate that information.

Yeah, hard times for privacy enthusiasts are upon us. Only the wealthy and paranoid will be able to maintain a relatively high level of privacy.

Everyone knows almost everything

So, why not get rid of privacy entirely? That way, at least nobody has an unfair advantage by knowing more about you than you know about them. Well, sure, that’s an option, but we are discussing the possibilities of privacy here, so this option only deserves a brief mention.

How does privacy work after all?

Ok, so privacy consists in the idea that you can prevent person X from knowing information J about you. You obviously know information J yourself, otherwise you wouldn’t deem it worth protecting. There are certain strategies for maintaining your privacy:

  1. Spread containment strategy: the basic strategy for privacy is preventing others from gaining access to J, by creating obstacles to the spread of information J. Non-disclosure agreements and digital rights management are examples of such obstacles.
  2. Disinformation strategy: You make X unsure about the truth of J by spreading conflicting (untrue) information K.
  3. Access denial strategy: You prevent X from accessing information J.
  4. Prevention strategy: You prevent X from using J to your disadvantage.
  5. Distraction strategy: You make X focus on something else, and thus not think about J.

Let’s take a look at the different strategies in turn:

#1 Spread containment strategy

Ideally, information J is a secret that only you know about. In that case, everything is fine. Unless you need to let others know about J in order to make full use of that information – for example, if it’s some kind of invention that you need help implementing properly. Once you let others know J, it becomes a shared secret. The difficulty lies in preventing a further spread of J to people you don’t trust.

One method of dealing with that problem is punishing those who share J with unauthorized people. That’s basically what NDAs help with in the worst case. A more extreme variant of this is suing a person for treason for spreading the state secret J. In a very wide sense, copyright belongs to this class of strategies, too. Information J that is protected by copyright is intended to be sold for profit, so uncontrolled spread of J would perhaps diminish profits (though that’s not necessarily the case).

A complementary strategy to copyright is digital rights management, which makes it harder to copy J freely. This strategy isn’t based on punishments (at least not directly), but on creating additional barriers that have to be overcome before one is able to distribute J to other people. In reality, DRM basically makes copying J more time-consuming, but usually that’s good enough.

#2 Disinformation strategy

This strategy works even if J has spread to people who aren’t supposed to know about it. You invent some (usually false) information K that contradicts J and spread it to as many people as possible. Then you go on to discredit those who still believe J to be true as gullible fools. The world may be split between those who believe J and those who believe K – divide and conquer. The situation has stopped being about knowledge and has become about beliefs.

Disinformation is more about damage control than about preventing harm in the first place, though it’s conceivable to use this strategy in a preventative way. It would even make a lot of sense to spread the false information K before J even has a chance to get leaked. By priming people with K, any alternative J may look like a ridiculous rumor. Repeat a lie often enough, and it becomes a widely accepted fact.

#3 Access denial strategy

Now, let’s come to some seriously advanced and futuristic strategies. Let’s assume that X has access to J, and knows that it’s true. With a sufficient ease of spreading information, and access to a comprehensive sensor network, this will sooner or later become a nearly inescapable situation. You would like to remove J from X’s mind, but that won’t help for long, since J is rather ubiquitous by now. What could be done, however, is denying X access to the information J, even though X has it in her mind.

How does that work? You insert a malware I term a denial veil into X’s mind. The effect of the denial veil is that X will always deny the truth of J whenever J pops up in her mind. Denial of the truth of J becomes of overriding priority. In every situation, X will find a way to rationalize why J must be false. There’s no alternative – because the denial veil doesn’t allow the existence of an alternative. The falseness of J becomes self-evident to X. To X, of course, this isn’t because she suffers from a denial veil, but because J is obviously false. Everyone else must be a crazy conspiracy theorist, completely deluded. For most practical purposes, denial veils are equivalent to deeply held convictions, so they aren’t terribly easy to detect – unless one is suspicious of all deeply held convictions.

But why not stick to a seemingly simpler approach and insert a deletion veil into X that deletes J every time it enters X’s mind? This becomes problematic when X is faced with strong evidence in favor of J. Every time the deletion veil becomes active, X spaces out for a moment. The more X is confronted with evidence for J, the more X spaces out. At first, this may appear as an increasing pattern of concentration problems, but later on, it can become increasingly paralyzing. Denial veils avoid this problem, since they don’t cause such apparent neurological impairments, but merely strong convictions.
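The contrast between the two veils can be sketched as a pair of hypothetical thought filters. All class and method names here are illustrative assumptions of mine, not a real API – the point is just the behavioral difference: denial rewrites the conclusion, deletion drops the thought and leaves a blank moment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DenialVeil:
    """Sketch: whenever the protected fact J surfaces, the veil lets the
    thought through but forces its conclusion to be 'J is false'."""
    j: str  # the protected information

    def filter_thought(self, thought: str) -> str:
        if self.j in thought:
            # Denial: the thought remains, but its outcome is overridden.
            return f"Obviously, '{self.j}' is false."
        return thought

@dataclass
class DeletionVeil:
    """Sketch: the veil drops any thought containing J entirely, which
    manifests as a blank moment whenever evidence for J appears."""
    j: str
    blank_moments: int = 0  # accumulating 'spacing out' episodes

    def filter_thought(self, thought: str) -> Optional[str]:
        if self.j in thought:
            self.blank_moments += 1  # the host spaces out
            return None
        return thought
```

Run against a stream of thoughts, the deletion veil’s `blank_moments` counter grows with the amount of evidence encountered – exactly the escalating impairment described above – while the denial veil produces only ever-firmer convictions.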

#4 Prevention strategy

An even more advanced concept than a denial veil is a conditional inhibitor. A conditional inhibitor prevents X from using J as justification to harm you. This is a bit complicated. What the conditional inhibitor needs to do is compare two scenarios. First, it predicts the consequences of X knowing J during a specific action of X. Then, it predicts the consequences of X not knowing J during that action. If the results of the first scenario are worse for you than those of the second, the conditional inhibitor prevents X from proceeding with the action in question. The actions that remain for X are those that are neutral with regard to the knowledge or truth of J, or those that actually lead to a better result for you, given the knowledge of J.
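The two-scenario comparison can be written down as a simple decision rule. Everything here – the function names and the toy payoff functions – is an illustrative assumption; the hard (and currently impossible) part would of course be obtaining the counterfactual predictions themselves.

```python
from typing import Callable

def conditional_inhibitor(
    action: str,
    outcome_with_j: Callable[[str], float],
    outcome_without_j: Callable[[str], float],
) -> bool:
    """Decision rule from the text: predict the action's consequences for
    you in two counterfactual worlds (X knows J vs. X doesn't know J).
    Higher scores mean better outcomes for you. Returns True iff the
    action must be inhibited, i.e. knowing J makes things worse for you."""
    return outcome_with_j(action) < outcome_without_j(action)

# Illustrative (purely assumed) payoff functions:
def with_j(action: str) -> float:
    # Knowing J lets X blackmail you, which is bad for you.
    return -1.0 if action == "blackmail" else 0.0

def without_j(action: str) -> float:
    # Without J, the same actions are harmless to you.
    return 0.0
```

Under these toy payoffs, `conditional_inhibitor("blackmail", with_j, without_j)` inhibits the action, while a J-neutral action like `"greet"` passes through – matching the text’s claim that only actions made worse for you by the knowledge of J are blocked.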

Conditional inhibitors can even work for information that isn’t very secret, after all. Assume for a moment that you could insert conditional inhibitors into X that prevent X from discriminating against you for being a woman, black, a criminal, or completely incompetent. That’s pretty crazy stuff. What’s scary about conditional inhibitors is that they seem to be more benign than veils, so the temptation to actually use them on people becomes stronger. Governments may feel tempted to prescribe the use of certain “sensible” conditional inhibitors on the whole population.

#5: Distraction strategy

Ok, so this last strategy can be used with low-tech tools as well as high-tech tools. Let’s start with the low-tech approach. So, X has finally found out about J, but, so what – you tell X about L. Woah! L, yes, L is much more important than J. Yes, why care about J, if L is so fascinating, and urgent, and captivating? L, think about L! Everybody is talking about L! We must react to L, no time for anything else! What? Has there ever been something else? No, of course not, we’ve all been preoccupied with L since the dawn of time!

And once distraction with L stops working, you throw M into the mix. And then N, and so on…

In the future we won’t even need to come up with things like L, M, or N. We simply insert a distraction veil for J into X. What the distraction veil does is simple: once X thinks about J, it redirects the attention of X towards anything else that happens to occupy X’s mind.

In many ways, distraction veils are the perfect tool for keeping J secret in plain sight. Even if everyone knows about J, as long as everyone is also under a distraction veil on J, it simply doesn’t matter. J becomes a perfect taboo. It’s not that people don’t know about J. Of course people know about J, but nobody is able to think about J, or care about J.
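The redirect rule just described can be sketched as a tiny hypothetical function. The name `distraction_veil` and the attention-queue model of the mind are my illustrative assumptions, nothing more:

```python
def distraction_veil(j: str, attention_queue: list) -> str:
    """If the current focus of attention concerns J, jump to the first
    competing item that doesn't involve J; otherwise keep the focus.
    The queue models 'whatever else happens to occupy the mind'."""
    focus = attention_queue[0]
    if j in focus:
        for item in attention_queue[1:]:
            if j not in item:
                return item  # attention redirected away from J
    return focus  # J-free focus passes through unchanged
```

So a mind focused on `"breaking news about J"` with `"lunch plans"` queued behind it ends up thinking about lunch; if nothing but J occupies the queue, the sketch degenerates – which mirrors the deletion-veil problem of having nowhere to redirect to.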

A great demonstration of this idea of a distraction veil is found in the “Night Watch” book series by Sergei Lukyanenko. In the contemporary fantasy setting of the series, magicians and other magical creatures walk among us. But normal people are under a universal spell that makes them not care about all the strange things happening around them. They may see magical events, but they are magically forced to ignore them, and afterwards they quickly forget them.

The technical implementation of veils and inhibitors

In our glorious cybernized future, the veils and inhibitors are artificial intelligences that interface with our minds. Given a scenario in which it has become very easy for nearly everyone to observe everything and everyone, they are the remaining tools that can maintain privacy (or collective ignorance, which is basically the same).

As with all technologies that affect the mind directly, the potential for abuse is incredibly high. Using those tools wisely will be a huge challenge for humanity, and its potential successors.

You forgot to mention the abuse of such technology by those in power. What if we come to live in a surveillance state like 1984’s Oceania, only far more efficient (and therefore far worse)? :wink:

The mind-malware (or “denial veil”) that forces people to forget specific information is not something you have the right to force on others, and not something others should be willing to have installed either. You decided to confide in them, and from that point onwards, it is their right to know and remember it. Imagine you tell something life-changing to another person, inducing great personality changes in them (say, you inform them that you killed their lover, after which they undergo a change in worldview or similarly drastic changes). After that, you have no right to delete or restrict that information, since it is now part of who they are, and of their past.

You can always decide whom you share information with by exchanging encrypted messages. Of course, you should be certain the person you are telling a secret to won’t stab you in the back, but that possibility is always there in life – just as a wife might murder her husband, a trusted person might turn on you (in the original context, that would be: spread your secrets). Demanding protection from unauthorised information spreading is like demanding protection that makes it impossible for others to physically harm you, like a little bomb in their brain that explodes once they are about to commit a violent act. This would imply a complete submission of humankind to those technologies. Humans should use tools to further their goals, not be used or restricted by them.

Another risk of the denial veil would be that it could delete more than it was originally intended to and could be used for extreme brainwashing. It might even implant false memories in your brain and you could be used as a cheap laborer who personally received a divine command from god to work himself to death.

Your conditional inhibitor would imply that the whole brain of X is known to the inhibitor system at any time and can easily be simulated in fast-forward. Although it is not as drastic as putting a bomb in X’s head, it is still a severe restriction of X’s freedom, since X’s actions are pre-selected by testing ahead-of-time versions of X. You should always consider that a system might be used as maliciously as possible, so that you are aware of potential dangers. I think it is ethically incorrect to use ahead-of-time simulations of a person to restrict them; for example, a mighty corporation with illegal dealings could use these inhibitors on investigators, preventing them from ever uncovering its crimes. Nobody should be subjected to such technology, as in the worst case, all of humanity could be eternally enslaved by it. Every person attempting to break the enslavement would forget about it or be prevented from destroying it by the conditional inhibitor. Also, your examples of using them on other people wantonly are strange, since you don’t have the right to form the world to your liking by becoming something akin to a god with absolute control over others.

My alternative to your inhibitor would be something like protective suits against violence, or the ability to create backup vessels of oneself (via whole brain scans) that are activated in the event of death. However, I would rather take the risk of secrets told to others being exposed at any time, if I keep my freedom of thought in exchange. Also think of all the whistleblowers.

Privacy is an important good for sure, but I weigh freedom over oppression-enforced privacy.

Sorry for the messy post, I’m not used to writing prose anymore.

I like that you jumped right to pointing out the possibilities of abuse of the systems I’ve explained in the original post, @Idomenio and @RmbRT. That’s a quite reasonable reaction, since the technologies involved could equally be used for complete mind-control. Those who control the mind-control technology control the world. So, there will surely be a conflict about who may wield this kind of power, if anyone at all.

Yet, I’ve framed the debate by claiming that these technologies are the only way to maintain privacy in a technologically mature future. If you don’t want these technologies to be used, the logical conclusion is that you also forgo privacy. That’s fine. If you are willing to pay that price, great. If not … well, you could of course claim that my arguments are flawed. Or you really start thinking about how to use mind-manipulating technologies in an ethically responsible way.

What if you enter a contract that involves the use of these technologies under certain conditions? What if you want to enter a virtual world that’s so great that you want to forget the existence of the outside world, and don’t even want to consider the possibility that there was an outside world? A denial veil or a distraction veil would keep you in that virtual world until the exit condition holds true – which could be soon, or never.

Abolishing the use of these technologies also means restricting the freedom of people to use them in a completely voluntary way. Is that justified? If yes, why?

Well, I don’t really think you have to completely abandon privacy; what you would abandon would be totally secure privacy. So you always have a certain chance that your privacy is compromised, although it is not that likely – just as you have a certain chance to stumble while walking down the stairs. On a side note, one could build windows that are set to vibrate at an adaptive noise frequency, dampening sound waves from inside the room and scrambling the left-over sounds so that they are harder to reconstruct via laser scans. I don’t know how feasible this is, though. I took the concept from noise-cancelling headphones.
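The principle borrowed from noise-cancelling headphones is destructive interference: emit a phase-inverted copy of the sound so that the two waves sum to (near) zero. A minimal, idealized sketch, ignoring all the real-world problems (latency, broadband noise, imperfect inversion):

```python
import math

def cancel(signal, anti):
    """Superpose the original wave and its counter-wave sample by sample."""
    return [s + a for s, a in zip(signal, anti)]

# A pure 440 Hz tone sampled at 8 kHz, as picked up at the window...
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
# ...and its phase-inverted copy, emitted by the vibrating pane.
anti_tone = [-s for s in tone]

residual = cancel(tone, anti_tone)  # → all zeros in this idealized case
```

In reality the pane could only approximate the inverse wave, so the residual would be nonzero – hence the poster’s second idea of additionally scrambling whatever leaks through.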

Well, I think that these technologies are at least as dangerous as nuclear weapons, because, as you also pointed out, they can be used for complete mind-control, and can cause at least as much destruction (by forcing people to be soldiers, for example).

Using these on oneself might be agreeable, since one should have freedom over one’s body. This would also include voluntarily letting others use it on oneself, although, to me, it seems like going into a mental prison voluntarily, where one cannot leave on one’s own. I cannot imagine a person willing to do such a thing, but if they are, then let them, I guess. It is important, however, that one can look at the ‘source code’ of the veil before it is implanted into one’s mind.

Using these on others forcefully, however, no matter whether you are a person or a government, is totally out of the question to me, since it would probably be malicious (otherwise, there would be no need to force it on others, since they would accept it voluntarily).

To me, based on my opinion that knowledge and sanity are the highest goods one can possess, a distraction veil seems like giving in to mental weakness, since one wishes to escape the factual world one lives in. A technology that helps one accept something rationally, instead of distracting one from it, would seem far better, as it helps one mature mentally and keeps one’s worldview in sync with reality (even if that reality is a virtual world, I think one should not hide from unpleasant parts of it, but face it in its entirety).


i see here a fundamental problem of all living things, and astonishing biological correspondences to the cited strategies.

all beings live in a network with each other. and they are all caught in an information dilemma:

how to send information to the intended receivers, and how to avoid sending information to pernicious receivers.

this struggle between the predator-prey contest and the covenant of allies is the driving force behind evolution (apart from environmental influences, of course); all the mentioned strategies have been developed in the biological world since the beginning, in many sophisticated ways (i will not waste your time by citing examples)

so i propose an a priori problem, which imho must precede your discussion, even if it may lead to a paradox: how to distinguish between a potential ally and a potential enemy without exchanging critical information.

a question prompted by current events: what will be your option when evolution itself becomes your enemy?

You don’t distinguish between them, because they are the same. Everyone has the potential to become an enemy or an ally. It all depends on the degree of abundance that’s present in the environment.

  • In environments with extreme scarcity (like extinction levels of scarcity), individuals almost inevitably have to fight for their own survival by competing with everyone else.
  • In a situation of extreme abundance, everyone can have everything, so there’s no point in assuming either the role of an enemy or an ally, because both are pointless.
  • Only a situation of moderate scarcity provides both motivation and opportunity to make allies. Typically, teams of allies will fight competing teams of allies.

And even in that last situation allies can turn into enemies and vice versa. That may be less likely to happen than not, but it’s still an option that needs to be considered.

Depending on whether you focus on risks or opportunities, you should treat everyone as a potential enemy, or as a potential ally. The first case would imply maximizing privacy; the second would mean that it might be advantageous to disregard privacy completely, in the hope that this helps you find allies. Something like Facebook comes to mind easily.

yeah :slight_smile:
but that is not what i am driving at.
my thoughts were inspired by the current debate about cambridge analytica and the media manipulation of public opinion.
and especially, in its own right, by the discussion about artificial intelligence.

i see at the moment a schism between the supporters and the opponents of this upcoming event - most of them without a basic understanding, i.e. without conclusive information.

in addition, many of those people don’t have a basic understanding of themselves.

a tricky situation *smile

and because i will not be devoured by the basilisk, i am looking for a way not to lose terrain to those who will “pull the plug”.