This is supposed to be a series of rather spontaneous short stories about a vision of the future.
Edit: It’s ok to post comments in here. I will republish the story cleanly on other platforms anyway.
Netec 2045
I’m not sure for whom I am writing this, but I feel the need to reflect on all of the changes that happened this century, especially during the last 15 years. It seems to be characteristic of our time that half of all people complain that our current largest problem is that things are changing too fast, while the other half complains that things are not changing fast enough. It is not an accident that I’m writing this in the year that Ray Kurzweil picked as the date for the Singularity. Was Kurzweil right with his predictions? Yes and no. In some areas he turned out to be slightly too optimistic, but in other areas he was definitely too conservative.
So, it seems that I have to summarize the world I live in right now. What should I say? We managed to solve some of the greatest problems of humanity: Poverty, ageing, disease, and the worst forms of involuntary suffering. On the other hand, many people are scared about the future, since they fear that the AIs will soon take over, and then this short golden age for humanity will have passed. This is certainly not the fear of the Upgraders. They believe that we can upgrade ourselves to become as good as the AIs and eventually merge with them completely. The Upgraders are opposed by the Aurelians who claim that today’s nootech is good enough for all practical purposes and that we shouldn’t allow any individuals to become very much more intelligent than the average human.
Note that I’ve written “individuals”. There has been talk about Superintelligences going all the way back to the 2010s, and even before. But now we actually have them, and they are not individuals – they are the nets! Collective Intelligence (CI) is the thing that enabled the exorbitantly rapid progress of the last 10 or 15 years. Few futurists saw that coming, and none of them could have predicted how things actually turned out, because reality sometimes turns out to be crazier than any fiction. Today, the nets reign supreme, and it’s the best thing that could have happened to us!
What surprises me and many of my friends is not only that this incredibly fast change is happening, but that the human mind actually has the capability to adapt to it. It’s like an animal that releases its full potential when it’s cornered and threatened. The last 15 years have seen more and deeper disruption of everything we know than perhaps the last 15000 years. Perhaps that’s an exaggeration, but it’s only a slight one. When it looked like humanity had settled into its social status quo, the netec revolutions seemingly came out of nowhere and really changed everything! Governments crumbled, corporations were wiped away. Those who understood netec adapted to it, and those who didn’t, or even dared to oppose it, were banished to meaninglessness.
Who would have anticipated that there is now not a single government left in the world? They have all been replaced by govnets during the netec revolutions. This caught us all by surprise, but it’s what we wanted – no, it’s what we needed! How can the govnets be described? CI-driven cyber-anarchodemocracies? At least, that’s how the govnets might have been described in the 2020s. Now, most of the constituent terms feel more like relics of a long past age. Actually, the situation is getting more complex every day now that the peacenets are gaining more traction and are about to finally banish wars and crimes to the realm of history. It may seem weird that we relinquished accountability in governance during this process, but it turned out that the govnets work so well that this isn’t even a real issue.
Equally surprising was the relatively abrupt end of capitalism and its replacement by netec. Sure, we could call netec capitalism 2.0 or socialism 2.0, but that would make about as much sense as calling it feudalism 3.0, or mercantilism 3.0: Almost none. The Network Economy just doesn’t play by the old rules, and it’s important to understand this point, otherwise you are lost! The dynamism in netec is just incredible. What took years and decades under capitalism is achieved within months under netec, which of course uses the most advanced technology available worldwide to achieve its purposes. Netec has overcome centralism: Single points of failure, bottlenecks, and inefficiencies of scale have all ceased to exist. At the same time, netec is decentralized, distributed, human, and superhuman. Our CI-driven economic system solves problems much more efficiently than any single human or AI could. This is why we live in an age of universal prosperity. Compared to the poorest Africans of today, even the wealthiest persons in the USA of 2030 lived lives of abject destitution and misery. This may be a bit of a stretch in some respects, but it’s still mostly true.
The weirdest thing is how netec both empowered and humiliated us at the same time. With basincs and repincs we lost our heavy dependence on governments and employers. This liberated us, but afterwards we realized that we were soon working for entities which were vastly superior to us singular humans: The nets. Although many of us had been dimly aware that we have always been parts of systems that were greater than the sums of their parts, the shocking revelation of the superiority of the nets has been a great blow to our perceived dignity as humans. We felt like we had been degraded to mere ants. While feeling individually empowered, we actually started to operate for superorganisms that trumped our individual intelligences in nearly all respects. Today we take pride in being “like ants”, but the transition to this new form of pride was very challenging and even painful – especially for us “previous Westerners”.
The futurists of the early 21st century would be mad at this: Today we have artificial general intelligence, but compared to the nets it’s not a big deal! Once the true superiority of the nets had sunk in, we stopped fearing the quickly emerging AI singleton that could take over the world and turn all of us into paperclips, or alternatively simply turn our world into a utopia for humans. At the very least, the peacenets of today make such an outcome absolutely unrealistic. So, what people actually fear now is not extinction at the hands of AI, but rather the obsolescence of what the Aurelians see as “humanity”. The Upgraders want to become gods, while the Aurelians feel that being human is the best there is – especially in our time.
Hugo de Garis predicted a conflict like this. In his scenario there would be so-called Cosmists who wanted to create machines named artilects – artificial intellects – that were billions of times as intelligent as humans. The Terrans would be absolutely opposed to such “machines” being created, and would eventually wage a war against the Cosmists before they had a real chance of actually building an artilect. Later on, the Cyborgians were added to the mix, which were humans who wanted to become artilects. It’s fair to compare the Cyborgians to the Upgraders who are very vocal today. The Aurelians can be roughly identified with the Terrans. However, the world has become almost astonishingly peaceful, so that no big violent conflict between the two parties has erupted, yet. Instead, the whole issue turned into a political and philosophical debate in which each party wants to convince the other with grander visions and higher moral integrity. To vent some steam, there is certainly no lack of brutal violence between the two factions within virtual worlds, but that’s more fun than an expression of serious animosity.
Perhaps the most astonishing change of the last 15 years is how quickly we have matured, both as individuals, and as global civilization. Optimistic science fiction writers who deemed such a peaceful world possible expected humanity to require centuries or millennia to reach it. But no, we did it within 15 years. The netec revolutions forced us to grow up faster than any of us was comfortable with. It started with the rampant surveillance that became apparent in the 2010s – big brother was watching us. Then came the backlash with ubiquitous transparency and sousveillance in the 2020s – little brother was watching back. Afterwards the netec revolutions really shook us up in the 2030s – especially with the emergence of the nounet, which allowed us to feel each other, instead of just watching each other. And now in the 2040s the peacenets watch over all of us – unrelenting, tireless, and without fail.
Yes, the nounet deserves most of the blame for our recent maturation. Perceiving the feelings and thoughts of other humans directly was a frightening, surprising, terrifying, terrorizing, and even traumatizing experience. It was the pinnacle of technological and social future shock and the core catalyst for the decade of amazement: The 2040s we are currently living through. We couldn’t just go on as we did. While the first netec revolutions were political and economic, the nounet unleashed the second wave of netec revolutions, which were social, moral, and philosophical in nature – especially once we started to embed nounet nanos in the brains of our pets and even our surrounding biosphere.
You need to have a very disciplined mind in order to even make full use of the “single player” parts of nounet. Dealing with other minds reflected in your own pushes you to the brink of madness. Without the help of neural regulation augmentations the world would have collapsed in a collective burnout around 2040 – perhaps only sparing the most hardcore monks and philosophers. That we mastered this crisis is a testament not only to the strength of the human spirit, but also to our ingenuity. Many people learned more within a few years by dealing with nounet than they could have learned in multiple lifetimes before the netec revolutions! We have become stronger, wiser, more compassionate, understanding, and rational than we ever deemed necessary – but back then we were so naïve. Humanity has finally reached its full flourishing and glory. And that is one of the strongest arguments of the Aurelians for not augmenting us much further. On the other hand, the Upgraders project the incredible changes of the last 15 years another 15 years into the future, in which we might achieve a truly divine level of development.
How could it be that even the more conservative parts of society joined in this grand transformation? They were pushed by the netec revolutions, of course. The old conservative elites quickly lost ground during the revolutions and their power was reduced to nothingness when they refused to embrace netec. Conservative thinking could safely be completely ignored in the 2040s. Netec has become our new god and it didn’t care a bit about old-fashioned social norms. What counted was how to solve the big problems of the world and not how many pathetic status symbols you could accumulate, or with what kind of entity you had (nou)sex.
Netec is at the same time more human and more ruthless than any other economic system that mankind has ever used. It’s more human, because it emerges from the distributed human activity that is driven by human wants and coordinated and optimized by the nets. And it’s ruthless, because it has absolutely no respect for any human follies which reduce the effectiveness and efficiency of the economy. For example, accumulating private property might be an indirect expression of human wants, but if anyone today has the idea not to use that private property for the public good, he is punished by netec so immediately and thoroughly that he changes his mind very quickly. This is why netec is so ruthlessly efficient, and allows humans to flourish in socially compatible ways like never before. Humanity co-educated itself and raised the bar in all areas simultaneously.
The only elites that remained were those who excelled by their ability to pierce through the big problems of humanity and lead the world with their refined characters. Mistakes were punished swiftly and dramatically. This forced people to improve themselves as much and as quickly as possible. Human augmentation was in great demand and quickly became the norm. Its adoption was economically mandated and supported. This is still one of the strongest forces in favour of the Universal Upgrade, which the Upgraders want to see completed. We were already forced to become more than human, but the pressure to move on further is still very strong. The Aurelians claim that it’s enough, that we have reached the pinnacle of what can be called humanity. They think that we can solve all remaining problems at the level we have reached by now. Perhaps they are right about that, but I’m not so sure.
You see, there are serious problems that actually emerge from our recently developed technology. There’s the issue of the so-called automodders who use the nanotech in their brain to make certain experimental changes. Changes that could improve their minds – or cause lethal or gruesome complications. Now, the interesting issue is that a healthy and sufficiently trained human brain is already very good, especially when it’s interconnected with the nounet and interacts with other nets. How will you notice a real improvement? When you are more successful in your endeavours? Oh, but how will you know that your success is due to your modifications and not the result of a streak of luck? Yes, you will have to apply statistical analysis and run lots of experiments. This takes a lot of time, which is why we haven’t seen a real intelligence explosion, yet.
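As a present-day aside, the verification problem described above can be sketched as a toy simulation. The success rates and trial counts below are purely my own illustrative assumptions, not anything from the story:

```python
import random

# Toy sketch of the automodder's verification problem: telling a genuine
# improvement apart from a streak of luck. Rates and trial counts are
# illustrative assumptions only.

random.seed(42)  # make the sketch reproducible

def observed_rate(true_rate: float, trials: int) -> float:
    """Fraction of successes over a number of independent endeavours."""
    return sum(random.random() < true_rate for _ in range(trials)) / trials

# With only a handful of endeavours, an unmodified 50% baseline can easily
# masquerade as a modest improvement (or as a decline).
few_trials = observed_rate(0.50, 20)

# Only after thousands of endeavours does the estimate settle close enough
# to the true rate to reveal a small genuine gain, e.g. 55% over 50%.
many_trials = observed_rate(0.55, 5000)

print(f"20 trials at a true 50% rate:   {few_trials:.2f}")
print(f"5000 trials at a true 55% rate: {many_trials:.3f}")
```

With a standard error of roughly sqrt(0.55 · 0.45 / 5000) ≈ 0.007 in the long run, only the second estimate is precise enough to separate a 55% rate from a 50% one, which is exactly why such verification eats up so much time.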
The other reason why we haven’t seen an intelligence explosion is that photonic circuitry has only recently surpassed the human brain in performance, flexibility, adaptability, and energy efficiency. Some of the most daring Upgraders are eager to replace their current electrochemical brains with new photonic cyberbrains. Sure, let them do that, we need the data that they will produce: Will they behave as before? Will they become jerks? Will they transform into benevolent demigods? We are mostly beyond the debate about whether uploads of human minds are still the same persons, and whether they still possess consciousness and qualia. Outspoken deniers are usually blacklisted from civilized meetings. Uploading has been an option for many years, but it has always come with real problems, until now.
Aurelians are opposed to switching to photonic brains, at least if their performance is better than that of conventional human brains, which wasn’t the case up until recently. Make no mistake: Aurelians are cyborgs, too, but they use human augmentation technologies to become as human as humanly possible, not to become more than human. They argue that the social rift between people will become too large once the differences in the performance of their hardware become severe. This would force everyone to adopt the latest hardware or be left behind in the dust. But Aurelians don’t want to run on a treadmill perpetually, under the constant threat of falling down. Aurelians want to stay as they are, without any new disruptive “upgrades”.
As you might have already guessed, the debate between Upgraders and Aurelians dominates our current age. Many are still undecided and don’t really pay allegiance to either faction. I am one of those undecided. Both factions have their own appeal and tell grand and enchanting narratives.
The Upgraders have their Universal Upgrade, which is a total transformation of everything: A better you, a better society, a better world. In an Upgraded Civilization there would be maximum freedom for everyone, no injustice, and incredible superabundance. Everything would be bigger and better than it is today, and the big problems would not only be solved for humans, but also other animals and artificial minds. And by the way: All of them will converge into Upgraded Beings, so there won’t be a real difference between them anyway.
That’s obviously too much for the Aurelians who see their Aurification as completed already: Humanity has already achieved the technology and social innovation needed to reach its current pinnacle. There will be further progress, but no real disruption. Our world is already pretty nice. We are repairing the damage we inflicted on nature, we are all pretty wealthy already, we don’t need to die, people are generally nice, and we are even starting to build new habitats for ourselves in space. So, why risk all that we have worked so hard for in the search for a higher level of existence? From the pinnacle we are standing on, all roads inevitably lead to dehumanization.
Now that nets have enabled us to solve so many of the great problems that AI enthusiasts expected to be solved by AI singletons, there seems to be less need to actually create generally superintelligent individuals, be they AIs or augmented humans. So, what reasons are there for striving for more intelligence anyway?
First of all, there is the subjective experience of personal insufficiency: For all of us there are challenges we feel would be quite doable, if we were just slightly more intelligent. Alternatively, we might put years of effort into these challenges and eventually fail to rise up to them regardless.
Secondly, there are great challenges left that even the nets couldn’t solve, yet. In particular the great ethical and political challenges of our time: Which entities should have which rights? Is there a deeper moral truth that could unify the whole world in peace forever? And if not, can we live in peace, and universal abundance, anyway? How can we safely modify ourselves and our social systems to achieve sustainable optimal well-being? There are certainly a lot of well-grounded opinions on these questions, but no clear consensus of highly intelligent minds. Many of the Upgraders hope that we will find unanimous answers, if we raise our intelligence to a sufficiently high level that lies well above our current level.
Thirdly, we are curious beings. We want to know what is still out there that we haven’t discovered, yet. The realm of superhuman intelligence is one of the last frontiers that await exploration.
If you paid attention, you should have noticed that I tend to be slightly in favour of the Upgraders. They are the ones who yearn for the answers, while the Aurelians don’t seem to be moved by such a deep longing. However, the Aurelians are the ones who opt for us living the best lives we can live as humans – and they make that case quite compellingly.
There are two kinds of Aurelians, actually: The tolerant Aurelians who are not opposed to the grand visions of the Upgraders, but simply opt to remain “merely human”. And then there are the militant Aurelians who want to prevent the emergence of superintelligent individuals universally – without any exception. For that purpose, the latter have created one of the most aggressive and intrusive peacenets: AICON, the Augmented Intelligence Confinement Operations Network.
So far, AICON merely scans for signs of individual superintelligence. There were some signs which it classified as “problematic” or even “highly problematic”. Yet, AICON hasn’t moved on to actually suppress these potential superintelligences, because there was no decisive need to do that, yet. It is often argued that this is merely an excuse for its reluctance to provoke competing peacenets by forms of open aggression. Especially SHIELD, the SuperHuman Intelligence Existential Layer of Defence peacenet poses a strong obstacle to the rule of AICON.
SHIELD strives to protect the safety and freedom of all kinds of intelligent beings simultaneously. It would most likely retaliate against aggression from AICON, at least within the limits of possibility and reason. What makes our current situation somewhat stable is that we maneuvered ourselves into quite a mess with all of these multiple overlapping peacenets. You see, peacenets protect their members primarily from other members of the same peacenet. But it turned out that people desired peace so much that they joined multiple peacenets at once – which seemed to be a perfectly reasonable thing to do. So, what would an open conflict between AICON and SHIELD look like? AICON could only act against SHIELD members who are not also AICON members. And SHIELD cannot act against AICON members, if they are also SHIELD members. It would actually be comparable to the situation during the Cold War in which the USA and the Soviet Union could not realistically attack each other directly, but still were able to engage in proxy wars.
Edit: I must have made a mistake here. I need to think more about peacenets and how they actually operate. It doesn’t seem unreasonable for them to be picky about who they accept as members. This would untangle the messy situation that is described above.
Edit (2015-10-10): Ok, on second thought, what I’ve written does make a lot of sense, since it’s people who select which peacenets they want to join, and not peacenets who select which people they accept. Peacenets need to grow as large as possible to fulfil their function properly. Actually, people submit themselves to peacenets. Joining one is easy, leaving it is really hard. The “messy” situation above should actually be an expected result of that dynamic.
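The double-membership standoff described above can be sketched in a few lines of set logic. The rule and the member names here are my own illustrative assumptions about how a peacenet might decide whom it may act against:

```python
# Minimal model of the peacenet logic: a peacenet may never act against
# its own members. Names and rosters are hypothetical.
def can_act_against(peacenet_members: set, person: str) -> bool:
    return person not in peacenet_members

# Hypothetical rosters; people freely join several peacenets at once.
aicon = {"alice", "bob", "carol"}
shield = {"bob", "carol", "dave"}

# AICON may only act against SHIELD members who are not also AICON members.
aicon_targets = {p for p in shield if can_act_against(aicon, p)}
# Likewise, SHIELD may only act against AICON members outside SHIELD.
shield_targets = {p for p in aicon if can_act_against(shield, p)}

# The overlap ("bob" and "carol") is untouchable for both sides, leaving
# each peacenet with only the other's exclusive members as possible targets.
print(aicon_targets, shield_targets)
```

The larger the overlap between the rosters grows, the smaller both target sets become – which is exactly the “messy” but stabilizing dynamic described in the paragraphs above.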
It would have helped if you had asked more specific questions. By “working for” I do mean all kinds of work, like traditional jobs, or freelance jobs, or just helping out somewhere. People are free to choose which nets they want to work for. Contracts, if they exist, are typically very lean and flexible. There are no taxes on work-related incomes! Regulation is minimal, if not non-existent. The nets operate in a very crypto-anarcho-libertarian-meritocratic way, but many also have a social component, like providing basic incomes or reputation incomes. Self-organization reigns supreme. Net-internal “democracy” (as in decision making via polls) is kept at a necessary minimum.
The nets in this story are a combination, evolution, and convergence of many ideas and trends, including:
Also a fair bit of intelligently integrated supportive AI
Later on, also AGI (Artificial General Intelligence) / strong AI
Also later on, group-minds connected via technological telepathy and empathy
An ad-hoc / anarchist mindset in general
The basic idea is that these trends will become stronger and more refined in the 2020s which leads to the emergence of what are simply called nets in the context of Netec 2045. They become so powerful in the 2030s that they effectively wipe out all competing structures during the Netec Revolutions. That’s why people work for the nets. People work for nets, nets work for people. It’s a symbiotic relationship, actually.
I was imagining that robots and A.Is did all the important stuff, leaving humans 100% free to do whatever they feel like. What useful work could the humans do?
So I was expecting something like:
This liberated us, but afterwards we realized that we had zero power over the world and were just being looked after like pets.
Why do the humans think of themselves as ants rather than pet cats?
Let me rearrange this:
“What useful work could the humans do?” “whatever they feel like”, including “all the important stuff”
You need to consider that the more power you have, the more types of meaningful work there are. While robots and AIs will take some work from humans, the amount of work there is to do will increase nevertheless, because more possibilities are opened up. In other words, I no longer believe in “technological unemployment”. There is always enough work to do. It just depends on the economic circumstances of the system whether it makes sense to allocate certain types of work to humans. When humans are basically free to do what they want, they will choose very cool types of work, if they want to feel fulfilled.
I believe there is more than enough meaningful work lying around for humans, even with the support of robots and AIs. At least for quite a while.
The intrinsic motivation that comes with these better and more meaningful kinds of work makes people more motivated to work, rather than less. Thus, in the future of Netec 2045 people feel busy like ants, but very much fulfilled at the same time.
[Our recent progress that made our world so prosperous was perhaps the main driver behind the wide success of the peacenets. We were so glad about what we had gained that we were terrified about the prospect of losing all of that to some stupid conflict, disaster, or accident. The Aurelians profit a lot from spreading the fear of us having to share our prosperity with vastly more intelligent individuals that would “naturally” emerge with further progressing technology. That is, if they don’t play with the fear of our being annihilated completely by our superintelligent successors. Their strategies have borne fruit: AICON is now one of the most successful and powerful entities of all time. Granted: Both AICON and SHIELD emerged as reactions to perceived existential threats, and both peacenets claim to have prevented humanity from taking a very destructive route.
Most Upgraders try being diplomatic in the face of Aurelian scaremongering and propose that everyone should be upgraded at about the same time to higher levels of intelligence. Given our recent technological and societal breakthroughs, that strategy does seem realistically feasible. Of course, the Aurelians aren’t happy with that answer, because for them it implies losing our “humanity” at some point during the upgrading process. However, there aren’t very clear boundaries between being “human” and not being human anymore. Today most Aurelians claim that they are still human, even if they are significantly augmented already. Likewise, people have claimed throughout history that wearing clothes makes people more human, instead of less human!
Clearly, defining what it means to be “human” is one of the greatest challenges of our time. There have always been debates around that question, but now they are getting really acute and serious. It is already possible to uplift animals to human intelligence levels with the use of advanced exocortices. Along with human-like AIs these uplifted animals belong to a newly defined category of “anthropoids”. Anthropoid rights have been introduced in many govnets, but the overall situation remains very complex and inconsistent. While some Aurelians see the non-human anthropoids as a threat to human supremacy, others see them as allies who are also interested in protecting themselves from the emergence of superintelligences. Upgraders typically agree that those beings who possess a certain level of sentience and intelligence should all have about the same rights, but they don’t agree on who should be upgraded. Egalitarian Upgraders want to upgrade as much life as possible to vastly increased levels of intelligence, while anthropocentric Upgraders only desire to upgrade humans, unless there are very good reasons to upgrade other entities.]
[Although our current situation is mostly stable, it is very tense. The Aurelians rely on AICON to prevent any dangerous, or even unreasonably disruptive, superintelligence from appearing and breaking loose, but most of them realize that they can’t stop the technologically fuelled evolution of intelligence forever. Their most realistic hope is to slow it down so much that it becomes totally non-disruptive and everything that is valuable about being “human” is preserved indefinitely.
The Upgraders, on the other hand, see AICON as a despotic inhibitor of further progress. They fear that the era of rapid progress will halt very soon, once AICON starts to actually impose severe limitations on the self-improvement of individuals. So far, AICON is closely monitoring many anthropoids with severe modifications that are classified as experimental and “potentially threatening”. The reason AICON can be lenient and allow anthropoids to modify themselves rather freely is its own position of clear superiority. Actually, the superiority of AICON is quite terrifying, and it’s the most effective system for suppressing any internal opposition that has ever been designed.
Before the development of the peacenets, people were worried about “unfriendly AI” going “FOOM”, meaning that it would improve its own intelligence explosively. Unfriendly AI would then go ahead to do things which are “unfriendly” to humans. This motivated many people to attempt to invent a “friendly AI”. So far, the efforts in this direction have turned out to be rather futile. Instead, the peacenets were accepted as a solution to the “control problem”, so that even if powerful AIs turned out to be unfriendly, we would still have the power to control them and prevent them from doing any real harm. Today, the ideas of a “friendly AI” are mostly seen as old-fashioned science-fiction nonsense. They have been replaced by the cold and efficient protection provided by the peacenets.]
[There is little doubt that the peacenets are highly effective. They have successfully thwarted all attempts at disabling their inhibition systems. Their reach is global and pervasive. AICON’s surveillance system monitors nearly every cubic centimeter of the planet and beyond. This approach is certainly criticised by many as too extreme, but it was a measure that has found widespread popular agreement – in particular thanks to the scaremongering of the Aurelians.
We are probably experiencing the calm before the storm. Sooner or later, AICON will inhibit anthropoids from modifying themselves in certain ways, and then the Upgraders will become very unhappy. The conflict between Upgraders and Aurelians will heat up, because the Upgraders will see themselves as a suppressed minority. Some predict this conflict will unleash a severe global economic crisis. Given our current level of economic development that would be quite survivable, but it would inconvenience everyone a lot. Anticipating those consequences is probably one reason why AICON has stayed passive for so long, but it won’t stay passive forever. Any moment, an anthropoid could upgrade itself “too much”, and then things will become ugly. How the upcoming conflict will end is anyone’s guess.
I would have preferred not to delve into this conflict, but it’s a current political topic that can hardly be avoided. It’s all too human to become occupied by current problems, instead of appreciating all that we can be grateful for now. I’ve tried to break out of that trap for the duration of this “state of the world report”, but I have failed.]