
Transhumanist thinking


(Alan Grimes) #1

om

Though I’m usually pretty mindless, once in a blue mun (waay too much Kerbal Space Program) my brain produces something resembling a thought, which I share in these meditation posts, always punctuated with “om”.

This is the first time I’ve written one of these for a forum, and I’m not sure I will even be able to complete it without my browser crashing, as it is wont to do these days. It typically takes between several hours and a week and a half to produce one. My computer is busy recompiling itself due to a compiler upgrade, and I have a programming challenge for my dream job that I need to get cracking on (and there are a number of technical issues, beginning with the fact that LibreOffice is not working well enough right now to open the text of the challenge!!!)

Anyway, the topic I want to get into is one at the very core of transhumanism, but one that people don’t seem to grasp, or that is so repulsive that they turn tail as soon as they catch a glimpse of it. What I am talking about is the engineering mindset, the ontological framework that science and technology demand one adopt, and how it has no room for the modern pseudo-religions seen in medicine, law, and the way we think about ourselves and each other.

om

For one of my clients, I was tasked with producing working devices from a pile of broken ones. These devices were twenty years old and parts were scarce. So what I would do was disassemble three of them, put the serviceable parts in a pile, find enough parts to build one complete unit, and then get a second to 95% completion. At that point I would contact the manufacturer and ask for information about the missing piece so that I might order a new one. They would then ask for the unit’s serial number so they could look it up. Sure, I could read the number off the chassis I was assembling with random parts, but on a deeper level, the question was quite meaningless.

True story!
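The salvage arithmetic in that story can be sketched as a toy script. Everything here is invented for illustration (the part names, the bill of materials, and which parts failed in each unit): pool the serviceable parts from the disassembled units, count how many complete devices the pool yields, and see what the next, partially assembled unit still lacks.

```python
from collections import Counter

# Hypothetical bill of materials for one working device.
REQUIRED = Counter({"chassis": 1, "psu": 1, "board": 1, "display": 1})

# Serviceable parts recovered from each disassembled unit
# (failure patterns are invented for illustration).
broken_units = [
    ["chassis", "psu", "board"],      # display is dead
    ["chassis", "board", "display"],  # psu is dead
    ["psu"],                          # only the psu survived
]

# Pool all serviceable parts from the disassembled units.
pool = Counter()
for unit in broken_units:
    pool.update(unit)

# The number of buildable devices is limited by the scarcest part.
buildable = min(pool[p] // n for p, n in REQUIRED.items())

# Whatever the next, partially assembled unit is still missing.
remaining = pool - Counter({p: n * buildable for p, n in REQUIRED.items()})
missing = [p for p, n in REQUIRED.items() if remaining[p] < n]

print(buildable)  # 1
print(missing)    # ['display']
```

Notice that none of the assembled units corresponds to any particular serial number anymore, which is exactly why the manufacturer’s question was meaningless.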

By proposing that we use technology to change the human condition, we must necessarily bring humanness into the realm of things that can be discussed in engineering terms. Instead of people, we are talking about physical systems which possess certain capacities, some of these are simple physical properties, others are more complex such as the ability to fulfill a social role, or even a specific social role.

When considering transhumanism crossed with social roles, one must contemplate the range of social adaptability of the individual that is to be created. (We can also contemplate post-individualism, but I will return to that below.) If we assert that the transhuman must have greater capacity than the human, then we implicitly specify that the transhuman being have the capacity to fill a greater variety of social roles than a human can, and therefore enable a broader diversity of viable societal organizations.

Transhumanism, by definition, is a process of becoming what you want to be next. Over protracted time, it will, probably inevitably, become iterative. Indeed, it is almost a sure bet that a personal-scale singularity will take place, where your next transformation will be incomprehensible to the person you were before your previous one.

So the two essential issues are what you want to become (and the answer could very well not be a mind upload!!!)
and how it is you propose to get there (and the answer could very well not be destructive brain scanning!!!)

While I want both of these questions to be as open-ended as possible, the answers must fit within the strict constraints of engineering. Engineering talks only of physical systems and of processes involving physical systems; everything else is meaningless fantasy.

So when you apply engineering principles (as you must), the problem of upgrading (or simply changing) the body must be translated into scientifically measurable terms. There is no language for speaking about feelings, or religious notions of transcendence, or even abstract notions such as identity. The only things that exist are qualities, such as capabilities or characteristics, and physical processes which affect those characteristics. The act of engineering is to design physical processes and physical systems that have a desirable result.

So we can talk about uploading in this paradigm, and its absurdity becomes blatant. The proposal is to take a functioning human brain, kill it, then distill from it some holy essence called information that, alone, has no capabilities, no practical purpose, and is barely even an artifact. It has no purpose and cannot exist independently of the system that is said to “run” it. The upload itself is therefore entirely uninteresting from an engineering standpoint; the system which runs the upload is what matters. Now, that system may or may not have interesting properties, which may or may not be desirable as a transformation target. But in that case, one is contemplating a transformation into a computer-based system rather than being emulated by one, the former being infinitely preferable. It is difficult to comment on the feasibility of such a transformation, only that brain emulation is an exceptionally poor way to go about it. =|

I might as well throw in a paragraph about post-individualism. First, I want to address computer-mediated telepathy. People seem to want a form of it that involves a high degree of sharing while still retaining individuality. I think this MIGHT be possible, but it will require extensive research. I point out that the natural state of the several parts of the brain is to produce a unified consciousness. Simply building bridges between more parts of more brains will tend to produce a unified consciousness… (which could have some very useful practical applications, intelligence augmentation being one when considering an artificial substrate on the other end, among others). It is an interesting open question what will happen on a societal level when any of the several ways one might imagine merging happening becomes possible.

om

Anyway, you have your thinking laid out for you…


Mind Merging, Definitions of Identity, and their Societal Consequences
(Michael Hrenka) #2

Thank you for sharing your thoughts. You are addressing some very important issues that are rarely mentioned, if at all!

You have my compassion here. :confused: I’m using LibreOffice myself and I find it unacceptable that it doesn’t seem to be sufficient for everything! :confounded: What about asking a friend to lend you a Windows laptop or something? :sweat:

I generally agree, but I wonder whether “engineering” is the best term here. After all, we are talking about biology. What do we call the intersection of engineering and biology, and perhaps also systems theory? Synthetic biology? Biomimetics? Biocybernetics (I personally favour this one)? Bioengineering? Biohacking? Hubris? :wink:

We also need to consider that the “area of control” of a person will be vastly expanded beyond one’s own body once people start to control avatars via neural interfaces and radio waves (or lasers or whatever). It’s not clear how that will affect the self-image of persons. That will most likely depend on how often a person uses an avatar. Things will become very interesting when those avatars are not exactly “anthropomorphic”. How would it feel to control a spider-bot or a glorified quadcopter as an avatar? My guess is that people will prefer avatars that can convey emotions via gesture and facial expression. Wolf furry avatar: OK. Amorphous blob without a face: fail.

That is indeed a very thought-provoking paragraph, if only because I’m not sure what exactly you mean by that. What you’ve written here sounds plausible and even like a profound insight, but I wonder whether it’s necessarily true. I mean, shouldn’t people be able to choose the breadth of their social adaptability? What if someone prefers a rather isolated lifestyle in a small subculture? Shouldn’t we strive for the range of diversity that we find in nature or even surpass that? Isn’t that what a fractal society should eventually amount to: Maximum diversity, while there’s still peace between different communities, and the possibility to choose one’s lifestyle and community?

Entities which can be “used” in any context could be seen as mere followers or tools. Maximum adaptability, compatibility, and flexibility are not necessarily the ideals we should strive for – whether we call ourselves transhumanists or not!

About that broader diversity of viable societal organizations: Now that’s an interesting thought. One could imagine that people decide to change their character so that they become fully compatible with radical ideologies like

  • Communism
  • Anarchism
  • Free market capitalism
  • Various religious ideologies :innocent: :smiling_imp:

Many would probably denounce such modifications as dehumanization – perhaps even rightly so. Should morphological freedom really encompass all possible mental configurations?

And shouldn’t we rather try to fit our systems to the people, instead of people to the systems? Or would it actually be the best to try both approaches at once?

As a next step, I would really like to be perfectly healthy and to increase my productivity and my happiness set point. Also, I’d like to be wiser and have a better memory. Everything beyond that can wait.

The question is whether people really want to go so far that their next modification will make them incomprehensibly different. It might well be that social forces drive them to do exactly that, even if they would prefer a more conservative, incremental approach. This is a huge unsolved issue!

Also, it reminds me of what I’ve written about “delta minds” in my blog post Paradigms And Classification Of Upgraded Minds. Changing oneself radically comes with great risks. You are actually the person who has drawn my attention to that issue!

Finally, there’s also the great philosophical issue with identity, which we need to consider when we want to discuss self-modification seriously. There has been a very good recent blog post about that topic:

I think that all of the different views portrayed in that blog post have their justification, but I don’t subscribe to the view that any of them represents the actual truth. Instead, I think they represent different models, or aspects of the self. See my post

It seems to be your signature move to point out that you are not an “uploader”, no matter how much or little other people actually talk or think about “uploading”. I accept your position. I’m not offended by the proposition that some people choose to have a classical human brain forever.

So, you are saying that we should take a “mechanistic” outside view of our future bodies when we go ahead and modify them? For purely physical modifications that makes perfect sense, but what if I want to change the parameters of my mind, how my emotions are configured, what my preferences are, and so on? I imagine that something so radical would require a language resembling a mixture of psychology, neuroscience, computer science, and systems theory rather than what we usually think of as “engineering”.

Well, there are people who talk about these “substrate independent minds”, but I have become sceptical about the possibility of full “substrate independence”. If the mind is really substrate independent, then why bother changing the substrate at all, if everything is supposed to stay as it was on the old substrate? I think that the “thinking substrate” does have a big impact on the mind, no matter what. This effectively means: other substrate => other mind. The degree of difference may be limited, so it depends on the definition of a “person” whether there’s some kind of continuity between both versions. So, I effectively agree with you here.

I’m not sure about that. Despite all its shortcomings, the human brain is still the most astonishing thinking substrate we have. Perhaps brain emulation is really the most intelligent way to create some kind of “AGI”? It wouldn’t surprise me if some minor tweaks to the brain, or its emulation, would make it really awesome.

That’s certainly a very fascinating subject. It will certainly depend on what kind of connections you establish between different minds. Do you only connect to other minds via a “mental voice” interface, or do you transmit “deeper” data? I think individuality would be pretty much unaffected if you had mental voices in your mind that you could actually mute at will. Partially unified consciousness would probably emerge once you have high-bandwidth and very deep interconnections between minds. I think it is also quite possible that some people would opt to change their brain architecture in order to experience deeper group mind or hive mind states. More power to them! :smile:

Yes, that’s a topic that deserves its own discussion thread, if anyone is willing to discuss this in detail! :smile: