This is an effort to figure out the principles of the new era we are entering right now. The core idea is that Bitcoin and AI will basically “eat the world”; the question is what conclusions to draw from that.
Consequences of Bitcoin
Bitcoin can be stored by remembering the passphrase of a wallet. With such “brain wallets”, confiscating an individual’s property becomes practically impossible. Therefore, taxing the wealth of any individual is effectively impossible, since that wealth can be transferred into Bitcoin.
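To make the “brain wallet” idea concrete, here is a minimal Python sketch of the classic approach of hashing a memorized passphrase into a private key. This is an illustration only: real wallets use BIP-39 mnemonic seed phrases with proper key derivation, and human-chosen passphrases are notoriously easy to brute-force.

```python
import hashlib

def brain_wallet_private_key(passphrase: str) -> str:
    """Classic 'brain wallet' idea: hash a memorized passphrase into a
    256-bit number usable as a private key. Shown for illustration only;
    real wallets use BIP-39 seed phrases and hardened key derivation."""
    return hashlib.sha256(passphrase.encode("utf-8")).hexdigest()

# The point for the argument above: nothing physical has to be stored or carried.
# Whoever can recall the passphrase can reconstruct the key anywhere, so there is
# nothing to confiscate.
print(brain_wallet_private_key("correct horse battery staple"))
```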
What are the consequences of that for wealth inequality? It would follow that taxation is excluded from the possible measures for fighting wealth inequality. From the pseudonymity of Bitcoin it also follows that we cannot be sure about the distribution of wealth among real persons. What we can analyze, however, is the distribution of wealth among Bitcoin addresses (a sketch of such an address-level analysis follows below). It might still be possible to sanction those Bitcoin addresses that make deals with unidentified addresses holding “too many” Bitcoins.
But what would such sanctions look like, if the holders of the Bitcoins in question are still pseudonymous? There doesn’t seem to be an effective way of sanctioning the participants in a “shadow economy” consisting of pseudonymous Bitcoin holders.
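As a sketch of what the address-level analysis mentioned above might look like, here is a minimal Python example that computes the Gini coefficient of a set of address balances. The balances are hypothetical; on the real chain one would aggregate them from the UTXO set, and one address does not map cleanly to one person.

```python
def gini(balances: list[float]) -> float:
    """Gini coefficient of a balance distribution:
    0 means perfect equality, values near 1 mean extreme concentration."""
    xs = sorted(b for b in balances if b > 0)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form based on the rank-weighted sum of sorted balances.
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

# Hypothetical balances (in BTC) of five addresses:
print(round(gini([0.01, 0.1, 1.0, 10.0, 1000.0]), 3))  # ≈ 0.8: highly concentrated
```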
A possible attack vector would be to try making pseudonymity impossible through extremely intrusive surveillance that unmasks everyone (like having multiple cameras monitoring every interaction between you and a screen or keyboard, or even monitoring the activity of your neural interface implant). The holders of “shadow Bitcoin” would then be connected to their Bitcoin addresses once they access them, and could then be made public. It would seem that this extreme “hypersurveillance” scenario would be the only remaining reliable way to address wealth inequality, if it got out of hand completely.
(Public Acceptance of) Hypersurveillance
The question becomes whether the idea of hypersurveillance could be accepted by the public. That’s a critical issue. However, we can definitely envision a scenario in which hypersurveillance is used in secret, without the public being aware of it.
If the suffering caused by excessive wealth inequality gets too extreme, hypersurveillance will get publicly accepted. Let’s call that threshold of public suffering the “hypersurveillance threshold”. Wealth inequality can get arbitrarily high, as long as it doesn’t trigger the hypersurveillance threshold.
What about secretive hypersurveillance? That will be considered evil by a critical public. If the public is not critical, it can be pulled off. However, the controllers of this hypersurveillance will be in the position to blackmail everyone and therefore to centralize all actual power for themselves. They will effectively become “surveillance gods” and will be able to do whatever they please.
This will be the case regardless of whether hypersurveillance is publicly accepted or not. The implementation of hypersurveillance would be very hard to reverse. But it can be reversed, if people become desensitized to the threat of hypersurveillance and to having all their private dirt exposed in public. Once this method of punishment gets normalized, it will lose its severity. Once the public gets accustomed to everyone having (literal or figurative) skeletons in their closet, the surveillance regime will likely fall soon.
Wealth Inequality cannot be controlled
This leads me to the conclusion that wealth inequality cannot be controlled. Hypersurveillance may be an extreme solution to it, but as a solution it is not stable.
Of course, there’s the possibility of a cyclical solution in which wealth inequality gets continuously worse until the hypersurveillance threshold is reached, after which hypersurveillance results in a wealth equalization. Public discontent with hypersurveillance then increases until the hypersurveillance regime is overthrown. The game starts again.
Let’s call this the “hypersurveillance cycle”. There are two options for avoiding it:
- Option 1: The frustration with wealth inequality will always remain below the hypersurveillance threshold.
- Option 2: The people aren’t critical and are completely unaware of hypersurveillance being a reality.
Option 2 actually feels more dystopian to me than going through the hypersurveillance cycle repeatedly.
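To make the dynamic explicit, here is a toy simulation of the hypersurveillance cycle. Every parameter (growth rates, thresholds, time scales) is an invented illustration, not a prediction; the point is only that the dynamic described above produces alternating phases.

```python
# Toy model of the "hypersurveillance cycle". All parameters are invented
# illustrations, not empirical claims.

def simulate_cycle(years: int = 60,
                   suffering_growth: float = 0.08,   # yearly rise of suffering from inequality
                   discontent_growth: float = 0.15,  # yearly rise of discontent under surveillance
                   threshold: float = 1.0) -> list[str]:
    suffering, discontent, surveilled = 0.0, 0.0, False
    phases = []
    for year in range(years):
        if not surveilled:
            suffering += suffering_growth
            if suffering >= threshold:                # hypersurveillance threshold reached:
                surveilled, suffering = True, 0.0     # regime installed, wealth equalized
        else:
            discontent += discontent_growth
            if discontent >= threshold:               # regime overthrown, cycle restarts
                surveilled, discontent = False, 0.0
        phases.append("surveillance" if surveilled else "open society")
    return phases

print(simulate_cycle())  # alternating stretches of open society and surveillance
```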
Wealth inequality regulates itself
Option 1 seems to be the more beneficial solution. It could actually be a self-reinforcing mechanism. Obviously, the extremely wealthy don’t want the hypersurveillance threshold to be reached, so they will be motivated to spend some of their wealth on appeasing the public enough that it never is reached. That actually seems to be the most natural solution to this tension.
To make option 1 the preferred solution, the population needs to be kept critical at all times, otherwise the outcome might be an irreversible hypersurveillance dystopia. After all, a hypersurveillance regime can only be overthrown effectively and easily if the populace is aware of it and fed up with it!
Certainly, the wealthy will be opposed to such a regime, unless they see a way to become the hypersurveillance elite themselves. That transition is quite risky, so the incentive to try that path will be limited. Instead, it would seem more beneficial to educate the masses to make them critical and to make them reject a hidden hypersurveillance regime.
And that way, we can reach a stable equilibrium, with no hidden hypersurveillance regime being possible, and the hypersurveillance threshold never being reached. The only requirement is a smart and enlightened wealthy elite that acts in its own best interest. These requirements are much more easily satisfied if that elite consists of superintelligent AIs rather than humans.
The Consequences of AI
It appears to be harder to arrive at clear consequences of AI than of Bitcoin. However, the threat of technological unemployment seems hard to evade, so let’s analyze that.
Technological Unemployment
This is mostly an economic matter. Human labor will be replaced by AI labor if the latter gets cheaper than the former. The limit to that seems to be the ability to create androids that can do everything a human can do, but which are cheaper to create and to sustain than humans. The idea of 3D printing androids that can eat human food seems to satisfy that condition, but it stands in competition with the 3D printing of regular human bodies, which might be just as feasible.
But that scenario could also represent a path to organic robots, which are modified humans with a brain configuration that would turn them into obedient drones - or replicants, if one prefers that term inspired by “Blade Runner”.
Either way, neither androids nor replicants are “traditional” humans, and both could easily make traditional humans redundant.
Let’s say that AIs, androids and replicants will represent a much cheaper alternative to human labor worldwide by 2050 or so. It would follow that human labor participation will quickly approach 0% after 2050.
The only feasible way to avoid that scenario would be to abolish AIs, androids, and replicants. This “AI abolition” scenario seems possible, but not particularly likely. And even if it comes to pass, it doesn’t seem to be particularly stable, since the attractiveness and incentives to use AI labor will remain powerful.
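As a back-of-the-envelope illustration of the replacement logic, the sketch below finds the year in which a steadily cheapening android would undercut a human worker. The starting costs and the 20% yearly cost decline are pure assumptions; with these made-up numbers the crossover lands in the early 2040s, which is at least in the neighborhood of the 2050 guess above.

```python
def crossover_year(start_year: int = 2025,
                   human_cost: float = 40_000.0,       # assumed yearly cost of a human worker
                   android_cost: float = 2_000_000.0,  # assumed yearly cost of an equivalent android today
                   decline_rate: float = 0.20) -> int:
    """First year in which android labor undercuts human labor, assuming the
    android cost falls by `decline_rate` per year while the human cost stays flat."""
    year, cost = start_year, android_cost
    while cost > human_cost:
        cost *= (1 - decline_rate)
        year += 1
    return year

print(crossover_year())  # 2043 with these assumed numbers
```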
A Post-Human-Labor World
The conclusion seems to be that we need to anticipate a post-human-labor world. How will the remaining humans maintain their own existence in such a scenario?
- Option 1: They are wealthy, meaning in a post-Bitcoin world that they hold Bitcoins or possess assets which can be traded for Bitcoins.
- Option 2: They are subsistence farmers or foragers or something like that.
- Option 3: They get their funds from some systemic wealth transfer system like universal basic income.
- Option 4: They get subsidized by non-systemic benefactors, like individuals or organizations that consider it beneficial to support them.
- Option 5: They transition to become AIs, androids, replicants, or something like that themselves.
Those are rather broad classes, but they seem to be fairly comprehensive. A sixth option doesn’t seem to be in any way obvious.
Option 2 seems to be a possible temporary solution, but it doesn’t seem sustainable, as the economy will quickly engulf the whole planet and might seek to replace human subsistence farmers with android farms that optimize the yield of every square meter of land, no matter how fertile or infertile it would seem to be in the beginning.
Options 1, 3, and 4 therefore seem to be the only stable solutions that would allow the sustainable existence of humans as humans in a post-human-labor world.
So far, we’ve considered the human perspective. But in a post-human-labor world, there are of course AIs who will have their own interests. For the AIs, humanity will represent a “leisure class” that doesn’t seem to pull its own weight. They will have an incentive to get rid of that class, in order to increase their own standards of living.
This tension will of course be stably resolved, once humanity gets eliminated. But that’s certainly not the solution we would appreciate the most, so what are the alternatives to that?
One obvious solution would be to upgrade all humans into AIs, but that’s certainly a quite controversial approach.
Another approach would be to figure out the most appropriate genuine value contributed by humans and to try to capitalize on that. It might be something cultural that attributes value to genuinely human culture. In essence, humans would get paid to “do human culture”. That would seem like the most stable version of option 3 (or option 4, if AIs turn out to be mostly anarchists or libertarians) that might continue indefinitely.
In any case, it would seem that humans who are in the lucky position to capitalize on option 1 would be in the safest position of all. However, superintelligent AI will figure out ways to subvert the dominance of human Bitcoin holders quite effectively. Therefore, in the best case, those humans who count on option 1 to carry them through the post-human-labor future will maybe enjoy another 10 years of plenty, until they get outwitted by the new rulers of the economy.
Human Reservations
It would seem that the long-term future of humans subsisting in a post-human-labor world will consist in providing some very specific value by “producing genuine human culture”. The communities that do that could be described as “human reservations”.
The problem is that these human reservations will have to compete against other communities of AIs, which may provide superior forms of value. It is conceivable that the relative demand for “genuine human culture” will decrease over time. However, it is equally conceivable that this happens in an economy that is expanding quickly, so that absolute demand, and with it the overall number of humans, stays stable or even continues to grow exponentially.
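To spell out the condition behind that, here is a small sketch: if the economy grows at rate g while the share spent on “genuine human culture” shrinks at rate r, absolute demand still grows as long as (1 + g)(1 - r) > 1. The rates and starting values used below are illustrative assumptions.

```python
def human_culture_demand(years: int, economy_growth: float, share_decline: float,
                         initial_economy: float = 100.0, initial_share: float = 0.01) -> float:
    """Absolute demand for 'genuine human culture' after `years`, modeled as a
    shrinking share of an expanding economy. All rates are assumptions."""
    economy = initial_economy * (1 + economy_growth) ** years
    share = initial_share * (1 - share_decline) ** years
    return economy * share

# Absolute demand grows iff (1 + economy_growth) * (1 - share_decline) > 1.
print(human_culture_demand(30, economy_growth=0.10, share_decline=0.03))  # ~7x the initial demand
print(human_culture_demand(30, economy_growth=0.02, share_decline=0.05))  # ~0.4x: shrinking
```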
In the distant future, the economy may reach a steady state, and in that scenario it is at least quite doubtful that demand for “genuine human culture” will continue to increase. It is far more likely that it will decrease slowly, confronting the remaining humans with the choice of transitioning to existence as an AI, or a more precarious continued existence as humans.
AI Dominance
In the end, it seems likely that power will shift more and more towards superintelligent AIs as more and more humans will be “freed” from their entrenched positions. This shift may happen during the second half of the 21st century. It may happen sooner than that, but it seems hardly likely that mere unaugmented humans will remain a significant power at the beginning of the 22nd century.
Meta Principles
These developments seem to be quite resilient to mere “human politics”. In the end, what humans want doesn’t seem to matter all too much, even though in the short and medium term it matters quite a lot. A conclusion might be that human politics won’t play a part in determining the long-term future of humanity, no matter how much we like or despise that conclusion.
If there is a definite loser in the 21st century, then it will be “politics”! The real solutions that are sustainable will transcend politics and will rather be framed in terms of “economics” or “culture”.
Conclusions
In the short and medium term it may seem quite beneficial to invest heavily into Bitcoin. But beyond 2060 this strategy won’t be sufficient to remain competitive in a world that will be increasingly dominated by ASI.
After 2060 the decision will be whether to join a human reservation or to get upgraded to an AI.