This is my first post here. As someone who’s recently been reading a lot of philosophy, especially philosophy of mind and epistemology, I’m surprised at how many singularitarians tend to be unphilosophical and talk about how their view is a purely science-based view, while completely ignoring what science is and what it isn’t. Even H+Pedia doesn’t seem to take philosophy as a field seriously. Have there been any people who take the idea of the singularity seriously in philosophy? I really haven’t met anyone on r/philosophy or r/askphilosophy, or even on the other philosophy forums.
Welcome to the F3, Consciousobserver. I think what you’ve experienced is a result of a cultural dichotomy. Singularitarianism can be seen as an extreme version of modernism with its very optimistic vision of progress. It can also be seen as a very optimistic version of transhumanism, which is itself a spin-off of humanism. The problem is that transhumanist philosophies are marginalized in the communities of academic philosophers. Why is that so? I’d like to have a clear answer on that, too. Perhaps people with a good understanding of mathematics, physics, computer science, and other sciences have the necessary basis for understanding the technological foundations of transhumanism, while it looks like science fiction to others who don’t.
It’s not that academic philosophers don’t know about transhumanism. In my opinion, it’s seen as a potentially promising but “forbidden” realm of philosophy. There’s no real track record of transhumanist philosophers having been greatly successful in the realm of academic philosophy, so academic philosophers are wary of pursuing that direction, because they fear it might hurt their career prospects. Transhumanism has the status of an unwelcome newcomer in the area of reputable philosophies, at best. This reaction, in turn, makes transhumanists wary of engaging too deeply with “traditional” forms of philosophy.
When it comes to philosophers, in a wider sense, who take the idea of a Singularity seriously, some names come to mind for me:
- David Pearce (the guy who wants to abolish suffering)
- Max More
- Max Tegmark
- Anders Sandberg
- Eliezer Yudkowsky
- Ray Kurzweil
- Nikola “Socrates” Danaylov
There’s also me, but I’m not widely known and don’t have a track record of publications outside of this forum.
Hello, thank you very much! I have to say I’m a bit refreshed to see some people taking criticisms from philosophy seriously. Most of the communities I’ve visited seem to be really the average “STEM or nothing” crowd. I’ve also found a lot of criticisms of LessWrong-style rationalism from other, genuinely rationalist communities, often in forums like SneerClub and badphilosophy on Reddit or some other local philosophy forum.
I think transhumanists don’t realise that their ideology itself is a philosophy, not to mention how much the LessWrong crowd tends to stigmatise philosophy and not take philosophical problems seriously. As for science communities, I think transhumanist hypotheses aren’t exactly empirically testable as of right now.
And I’ll look into the rest of the people you mentioned :D BUT I tend to have a real dislike of Yudkowsky because of how arrogant he is and how he doesn’t take well-justified critiques seriously. Then again, I also acknowledge that a lot of these people aren’t really working towards actually debating philosophers, and are simply helping raise awareness.
But I do think we will need some form of argumentation and rigour on our part to be taken seriously. Simply raising awareness through simplistic terminology and sensationalism isn’t really a strong approach, because academics can easily “burst the bubble” of impressionable people with better and more detailed arguments against the stances of many of the proponents and advocates of our view.
As for modernism: I think there are definitely positions that directly address postmodernism, like Enrique Dussel’s transmodernism, but those seem like minority positions. I’m not sure why transhumanism specifically would be hypermodernist?
what a lot of bullshit.
I would add Nick Bostrom, although I am not really a fan. But he is definitely a philosopher with profound thoughts about the singularity.
I think raising awareness is the most important task at this stage in history. So, I think what these transhumanists have done was the right thing, but it’s not enough by far. What kind of important philosophical work do you think is needed for advancing a theory of transhumanism?
Is that a theoretical concern or do you really see that happening? I mean, I see that’s a valid concern for provocateurs like Zoltan Istvan, but I don’t see David Pearce or Nick Bostrom easily dismantled by detailed arguments.
Because of the overarching narrative that (societal) progress is generated through science and technology. Modernism and transhumanism have that in common, but transhumanism goes a step further in the extent to which science and technology are applied, by including modifications of the human body and mind.
Anyway, I feel that transhumanism is in a crisis at the moment. There aren’t many influential proponents of liberal transhumanism, and their philosophies get discredited by the boogeyman of anti-liberal transhumanism that is painted with increasing frequency by ever more popular conspiracy narratives. That problem isn’t going to disappear all by itself any time soon. In a hostile environment like that, liberal transhumanism cannot thrive. I’d put more hope in humanism adopting more and more elements of transhumanism without calling them that.
I’m not really sure about Nick Bostrom, and I haven’t read much of David Pearce, but I think Pearce praising Armstrong’s book about how awesome Zoltan is kinda shows how things have been going for Pearce lately.
On the other hand, honestly, the whole premise is flawed with transhumanists like Kurzweil, who don’t know much about fields other than their own. It gives a bad impression to actual scientists and philosophers. The same goes for, say, Michio Kaku, who is also frequently dissed in both communities.
What premise exactly? What’s your criticism?
I’m not sure, to be honest, but I believe that transhumanism really attracts a lot of naive liberals and anarcho-capitalists, and people who don’t realise the consequences of technologies. I don’t think technology is good or bad; it has the potential for both. But a lot of people have the idea that it is an end in itself, which distracts attention from more important ongoing problems in the world and breeds a privileged attitude towards the world. And as if that weren’t enough, we lack technical arguments for the viability of many transhumanist goals.
But I do have to say that you’re doing a great job by actively acknowledging this! And I’m happy people like you realise this and want to work on it.
A little off topic, but I really recommend reading the SEP entry on transhumanism.
Edit: I’d like to add that transhumanists show a disdain for religion and tend to think of it as irrational, but after reading authors like WLC and other theist philosophers, I believe that we overlap a lot with them. Many philosophers of religion are still arguing for natural theology, and the view that reason and faith are opposed and can’t supplement each other isn’t popular among philosophers of religion. Yet people diss religions and theism while anticipating the singularity.
There is a really big problem with intrinsic motivation.
Humans are always motivated to solve problems and better their lives. And as a collective culture, we tend to use science and technology exclusively for that purpose and call this progress.
But what is the definition of transhumanism? Is a human being who enhances his physical abilities with the help of technology transhuman? Then we might call the first human using binoculars transhuman, and the history of transhumanism began a long time ago.
And the goals were always the same: we want to heal illnesses and damage to our bodies, we want to extend our abilities, we want to accumulate knowledge, and we want to travel faster (although this one could be subsumed under “extend our abilities” as well). Did I miss something? The development of surrogates for humans, like robots and AI, is nothing but an extension of our abilities.
When somebody leaves all technology behind, lives a simple life, and becomes enlightened, we usually do not consider this progress, although it might very well be possible that this person becomes much healthier, extends her abilities, and gains huge new knowledge. We usually would not call such a person transhuman, because transhumanism is exclusively meant for technological progress of the human condition. But this reduction is flawed and illogical.
Back to intrinsic motivation:
I don’t believe that there is any human being who is opposed to progress. But the definition of progress can be disputed in many ways. The goal of progress should be improvement from less good to better, not decay from good to worse.
Some years ago I stumbled upon a video of Zoltan Istvan in which he said that he would want his children to be physically enhanced so they can compete in a more and more competitive world.
In this way transhumanism is again reduced. He did not question the frame that mankind once developed: our world and our economy are competitive. Period. We should acknowledge this without question. But this is transhumanism for fools. If we are not able to question every society, culture, and economy humans ever developed, we will never become real transhumanists. So the first question in this example should be: Do we want a competitive society? The next questions might be: Is this progress? Is this improvement? Is this necessary, or could we change this frame we have all been living in for a long time: a competitive society? What about a cooperative society? Would it be progress if we could change a competitive society into a cooperative society?
The motivation to enhance one’s body to become more competitive is extrinsic, not intrinsic, because the frame comes from the outside.
So one of the main questions we should ask is: What is the problem we want to solve, the thing we want to improve? Is a competitive society less good than a cooperative society? If the answer is “yes”, we should solve the problem with society and refrain from enhancing people to fit into a flawed frame.
With the belief in propaganda, nearly every motivation becomes extrinsic. Propaganda tells you what you should want. We might witness a triumph of transhumanism in the near future, but if this triumph is propelled by power-hungry entities from Silicon Valley or the World Economic Forum, it will be a dystopian nightmare like 1984. And this will be regress, not progress, although every human will be enhanced and will use advanced technologies. I would never call such a development “transhumanism”. But what if many popular proponents of transhumanism are part of this dark transhumanist agenda? Such people would not want philosophers in transhumanism. They would want to hype technological advancement, and they would need many useful idiots to cheer them on and to follow them wherever THEY want to go. Transhumanists without an interest in philosophy are useful idiots who just follow propaganda, or worse: they are bad actors.
Philosophy and especially ethics is necessary to create a utopian, transhuman society. Many tough questions should be asked, like:
Do we want to create AI to control and manipulate humans or to help them?
Should strong AI become a slave or will we grant it human rights?
The development of strong AI might be the key to our consciousness and therefore the key to immortality. Do we want to become immortal?
Should technologies be available to everyone? Should they be imposed on everyone? And many more…
There is a very thin red line between a utopia and a dystopia when it comes to transhumanism and without philosophers it will inevitably be the latter.
For this idea about the dystopian version of transhumanism, we desperately need philosophers and talks about ethics: (great redpill)
It always strikes me that most futurist techno-optimists are negative utilitarians, which is mind-boggling given that negative utilitarianism is overwhelmingly rejected in contemporary philosophy, and even classical utilitarianism faces an overwhelming amount of criticism.
There’s a certain logic to them being negative utilitarians. If techno-optimists are determinists about technological progress continuing indefinitely and fixing most problems in the process, then the only things left to care about are extinction risk and the suffering accumulated along the way to techno heaven. Their reasoning is along the lines of: if we can avoid existential risks and set up systems to minimize suffering, then we will inevitably create the best possible world.
That kind of reasoning can certainly be alluring. The promise of a technological utopia that we will reach inevitably, if we just avoid the greatest pitfalls is quite attractive. Actually, this idea might not be too bad as a starting point, after all. It just needs certain refinements that include questioning the reliability of its premises:
- Technological progress is not guaranteed. There may be strong forces working towards greater technological progress, but there are also forces working against it (such as increased fragility created by an ever larger technology stack that civilization depends on; or just plain systemic corruption).
- There are more outcomes than techno heaven and extinction. Avoiding 1984-like techno dystopias might be harder than expected by the existential risk community. Such political risks seem to be dramatically underemphasized in their work.
- The result of a lack of a coherent positive vision will more likely be chaos, conflict, and balkanization of the cosmos than techno heaven. Classical utilitarianism could be a more solid foundation than negative utilitarianism, if it were possible to provide a canonical definition of utility, which is something I’m not expecting to happen any time soon.
The problems with clearly defining utility eventually led me to place myself in the consequentialist camp. While a one-dimensional definition of utility might not be appropriate, dealing with multiple dimensions of utility (for example happiness, freedom, knowledge, and personal growth) might be a more robust approach, even though we are then stuck with the problem of making trade-offs between those dimensions of value.