Neural Interfaces and On-the-Fly Applications

How could humans interact with technology around 2040? Assume they have some kind of neural implants that enable them to connect rather directly to computing, the internet, and other minds. How would that work? Let me display that schematically:

Data | Application | Interface | Mind

Those are different “layers” which need to interact in a harmonious way to augment the function of the mind. People who are familiar with the Model View Controller (MVC) pattern will notice some clear similarities here.

  • The data layer consists of data on local or cloud storage. It could be stored in files, databases, blockchains, or any other kind of data storage technology.
  • The application layer is an intermediary between the data layer and the interface layer. The application grabs data and operates on it according to commands coming in from the interface. Then the refined data is returned to the interface for human interpretation.
  • The interface layer allows the human and the application to interact with each other. Human intentions must be shaped into machine-interpretable commands which the application can work with. The interface layer takes over the job of turning raw human intentions into well-formatted commands for the application. At what level are intentions read by the interface when we have deep neural implants? It could be words we think about and which somehow appear in an overlay of our visual field (whether our eyes are open or closed). These words probably need to be validated somehow (perhaps by mentally moving them into the correct field and mentally clicking an “ok” button), because forwarding your whole stream of consciousness to an application would end in a bloody mess.
  • The mind layer is difficult to understand, both from a computer science perspective and from a neuroscience perspective. It may or may not be a useful simplification to say that it’s some kind of very complex “neural network”.
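
To make the division of labor between these layers a bit more concrete, here is a minimal sketch of them as Python interfaces. Everything in it (Intention, Command, DataLayer, Application, Interface, and their methods) is a hypothetical illustration of the roles described above, not an existing API.

```python
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass
class Intention:
    """Raw content read from the mind, e.g. words appearing in the visual overlay."""
    raw_text: str
    confirmed: bool = False  # set to True once the user mentally clicks "ok"


@dataclass
class Command:
    """A well-formatted, machine-interpretable command for the application layer."""
    action: str
    arguments: dict[str, Any]


class DataLayer(Protocol):
    """Files, databases, blockchains, or any other storage technology."""
    def fetch(self, query: str) -> list[dict]: ...
    def store(self, record: dict) -> None: ...


class Application(Protocol):
    """Grabs data, operates on it according to commands, returns refined data."""
    def execute(self, command: Command, data: DataLayer) -> Any: ...


class Interface(Protocol):
    """Sits between the mind layer and the application layer."""
    def read_intention(self) -> Intention: ...
    def confirm(self, intention: Intention) -> bool: ...   # the mental "ok" gate
    def to_command(self, intention: Intention) -> Command: ...
    def present(self, refined_data: Any) -> None: ...      # render back into the overlay
```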

Let’s assume that the data and mind layers won’t change very much between now and 2040. This may be a wrong assumption, and I would be glad to read about alternative scenarios, but let’s focus mainly on the assumed scenario. What would the application and interface layers have to look like to enable humans to deal with data in the best way?

I think there are basically two approaches to this question:

  1. Start thinking from the data side
  2. Start thinking from the mind side

With option 1 we look at the data and think about how it could be processed in a meaningful way. We would create conventional applications (CApps) that do interesting things with data and then build interfaces around those apps to enable people to use them.

With option 2, however, we start with human intentions to do something, which may not be clearly formalized yet. We would try to create an interface that makes sense of those intentions. Once possibly meaningful intentions are isolated from the mind, the interface would go on to create an application on the fly for the specific purpose of executing them. These on-the-fly applications (OTFApps – this definitely needs a better acronym ;)) would then fetch and process the data they need to execute the isolated intentions of the mind that started the process.

Obviously, the interfaces would need to be extremely sophisticated and intelligent to create applications on the fly. Perhaps they wouldn’t do that on their own, but rather employ the services of an A(G)I whose purpose it is to create OTFApps just in time. We might assume that this is quite possible by 2040.
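
As a rough illustration, here is a sketch of how the interface layer might delegate just-in-time app synthesis to such an A(G)I. The names AGIService, OTFApp, and OTFInterface, and the choice to represent a generated app as a single callable pipeline, are my own assumptions for the sake of the example.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class OTFApp:
    """An application generated on the fly for a single isolated intention."""
    purpose: str
    pipeline: Callable[[Any], Any]  # fetch -> process -> refine, fused into one callable

    def run(self, data_layer: Any) -> Any:
        return self.pipeline(data_layer)


class AGIService:
    """Stand-in for the A(G)I whose job is to create OTFApps just in time."""

    def synthesize(self, intention_text: str) -> OTFApp:
        # A real service would perform full program synthesis here, within
        # milliseconds; this placeholder just wires up a trivial pipeline.
        def pipeline(data_layer: Any) -> dict:
            results = data_layer.fetch(intention_text)
            return {"purpose": intention_text, "results": results}

        return OTFApp(purpose=intention_text, pipeline=pipeline)


class OTFInterface:
    """Option 2: build the app around the intention, instead of asking the
    human to pick and learn a conventional app (option 1)."""

    def __init__(self, agi: AGIService):
        self.agi = agi

    def handle(self, intention_text: str, data_layer: Any) -> Any:
        app = self.agi.synthesize(intention_text)  # created per intention, then discarded
        return app.run(data_layer)
```

The point the sketch tries to capture is that nothing here is installed in advance: the “application” only exists for as long as the intention it serves does.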

Interacting with OTFApps would probably feel much more natural and intuitive than dealing with relatively “static” and “fixed” conventional applications. Currently, humans need to wrap their minds around applications. In the future, the situation will be reversed: apps and interfaces will be wrapped around human minds.

The process might work like this: A human thinks “I want to know X about Y”. An interface isolates that desire and presents it to the human mind as an option to approve, or even goes ahead and acts on the intention automatically, without explicit approval. The interface would then create an OTFApp tailored to finding out X about Y by sifting through the global data layer, curating the results, and visualizing them in a way that the human can interpret best.
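
Spelled out step by step, under the same assumptions as above, that flow might look roughly like this; every function name (isolate_intention, approve, run_query_otfapp) is just a placeholder for whatever the real interface and OTFApp would do.

```python
def isolate_intention(mind_stream: list[str]) -> str:
    """Pick one actionable desire out of the stream of consciousness."""
    return next(s for s in mind_stream if s.startswith("I want to know"))


def mentally_click_ok(intention: str) -> bool:
    # Stand-in for the user's explicit confirmation gesture in the overlay.
    return True


def approve(intention: str, auto: bool = False) -> bool:
    """Present the intention for approval, or act on it automatically."""
    return True if auto else mentally_click_ok(intention)


def run_query_otfapp(intention: str, data_layer) -> dict:
    """The generated OTFApp: sift the global data layer, curate, visualize."""
    raw = data_layer.fetch(intention)                                              # sift
    curated = sorted(raw, key=lambda r: r.get("relevance", 0), reverse=True)[:10]  # curate
    return {"intention": intention, "view": curated}       # "visualize" as a plain structure


def handle(mind_stream: list[str], data_layer) -> dict | None:
    intention = isolate_intention(mind_stream)
    if approve(intention, auto=False):
        return run_query_otfapp(intention, data_layer)
    return None
```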

Human programmers seem to be out of the loop in this case, but that merely hides the fact that all the deep complexity now resides in the interface layer, which has the task of creating a whole application within milliseconds! Creating such advanced interfaces probably requires a huge amount of intelligence, and may occupy hordes of humans and AGIs.

What do you think about this idea? Is it feasible? Is it realistic? Is it even desirable? Could things be done in an even better way? And what should OTFApps really be called?

Interesting question. I just now went back in time two years and drew up a diagram of how my proposed neural interface will generally work: https://docs.google.com/drawings/d/1iqonNwcj90HmeGGyodgzCcP99dC1tj56WTaK7HQcdyQ/edit

That looks interesting. But I guess the real complexity lies in what

Distributed heterogeneous cloud of fault tolerant mind platforms

actually is and how it works. I think this might bear a strong connection to the presentation of the last speaker at the Terasem AI Colloquium:

It seems that there is a common theme of the mind interfacing with some kind of layer in the cloud.

Anyway, what exactly are “Avatar Mind(s)”, @AlonzoTG?

Well, I am not an uploader. So what you do is create whatever AI mind your avatar requires and then use the neural interface (and some intermediate systems) to do a computer-mediated mind meld with it. In theory, this should provide the additional bandwidth required to operate not just the avatar but also to continue operating your “primary” concurrently. There is still a point-of-view problem, but it may be accessible to a hacker’s approach.

So, you are talking about something like an AI-driven exocortex? Extending your mind with machine intelligence that is directly (or indirectly) linked to your brain?

What I am wondering about is neuroplasticity. Shouldn’t it suffice to provide the brain with direct connections to an avatar control system, since the brain could figure out how to operate it on its own? After all, the brain learned how to operate the human body on its own (there may be some “pre-programming” or “predisposition” present, but I don’t think it amounts to very much). Why shouldn’t it be able to control more than the human body (well, yeah, probably not too much at once)?

The point-of-view problem is interesting. I imagine it’s similar to having two eyes, each of which provides a 2d picture. Together they are interpreted in a way that generates a 3d impression of the space around you. Having multiple viewpoints in different realities would be quite irritating, but I guess once you got used to it, it would feel as if you had a more complete view of the world at large.