How could humans interact with technology in 2040 or so? Assume they have some kind of neural implants that enable them to connect rather directly to computing, the internet, and other minds. How would that work? Let me display it schematically:
Data | Application | Interface | Mind
Those are different “layers” which need to interact harmoniously to augment the function of the mind. People who are familiar with the Model View Controller (MVC) pattern will notice some clear similarities here.
- The data layer consists of data on local or cloud storage. It could be stored in files, databases, blockchains, or any other kind of data storage technology.
- The application layer is an intermediary between the data layer and the interface layer. The application grabs data and operates on it according to commands coming in from the interface. The refined data is then returned to the interface for human interpretation.
- The interface layer allows the human and the application to interact with each other. Human intentions must be shaped into machine-interpretable commands which the application can work with. The interface layer takes on the job of turning loosely formed human intentions into well-formatted commands for the application. At what level are intentions read by the interface once we have deep neural implants? It could be words we think about, which somehow appear in an overlay of our visual field (whether our eyes are open or closed). These words would probably need to be validated somehow (perhaps by mentally moving them into the correct field and mentally clicking an “ok” button), because forwarding your whole stream of consciousness to an application would end in a bloody mess.
- The mind layer is difficult to understand from a computer science perspective, as well as from a neuroscience perspective. It may or may not be a useful simplification to say that it’s some kind of very complex “neural network”.
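To make the layering concrete, here is a minimal sketch in Python. Every class, method, and key below is my own invention for illustration (nothing refers to a real API); the point is only the direction of the flow: mind → interface → application → data, and back again.

```python
# Hypothetical sketch of the four layers. All names are invented;
# the point is the direction of the flow:
# mind -> interface -> application -> data, and back.

class DataLayer:
    """Stands in for files, databases, or any other data storage."""
    def __init__(self):
        self._store = {"weather:berlin": "sunny, 23 °C"}

    def fetch(self, key):
        return self._store.get(key, "no data")


class Application:
    """Grabs data and operates on it according to incoming commands."""
    def __init__(self, data):
        self.data = data

    def execute(self, command):
        # "Refine" the raw data before returning it to the interface.
        return {"query": command, "result": self.data.fetch(command)}


class Interface:
    """Shapes human intentions into machine-interpretable commands."""
    def __init__(self, app):
        self.app = app

    def handle(self, intention, confirmed):
        # The validation step from above: the mind must mentally
        # "click ok" before a raw intention becomes a command.
        if not confirmed:
            return None
        return self.app.execute(intention)


# The mind layer issues an intention and confirms it.
ui = Interface(Application(DataLayer()))
print(ui.handle("weather:berlin", confirmed=True))
```

Note that the confirmation flag stands in for the mental “ok” button: without it, the stream of consciousness never reaches the application.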
Let’s assume that the data and mind layers won’t change very much between now and 2040 – this may be a wrong assumption, and I would be glad to read about alternative scenarios, but let’s focus mainly on the assumed scenario. What would the application and interface layers have to look like to enable humans to deal with data in the best way?
I think there are basically two approaches to this question:
- Start thinking from the data side
- Start thinking from the mind side
With option 1, we look at the data and think about how it could be processed in a meaningful way. We would create conventional applications (CApps) that do interesting things with data, and then create interfaces around those apps to enable people to use them.
With option 2, however, we start with human intentions to do something. Those may not be clearly formalized yet. Still, we would try to create an interface which makes sense of human intentions. Once possibly meaningful intentions are isolated from the mind, the interface would go on to create an application on the fly for the specific purpose of executing those intentions. These on-the-fly applications (OTFApps – this definitely needs a better acronym ;)) would then fetch and process the data they need to execute the isolated intentions of the mind that started the process.
Obviously, the interfaces would need to be extremely sophisticated and intelligent to create applications on the fly. Perhaps they wouldn’t do that on their own, but rather employ the services of an A(G)I whose purpose is to create OTFApps just in time. We might assume that this is quite possible by 2040.
Interacting with OTFApps would probably feel much more natural and intuitive than dealing with relatively “static” and “fixed” conventional applications. Currently, humans need to wrap their minds around applications. In the future, the situation will be reversed: apps and interfaces will be wrapped around the minds of humans.
The process might work like this: a human thinks “I want to know X about Y”. The interface then isolates that desire and presents it as some kind of option for the human mind to approve, or even goes ahead and acts on the intention automatically without explicit approval. The interface would create an OTFApp tailored to finding out X about Y by sifting through the global data layer, curating the results, and visualizing them in a way that the human can interpret best.
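As a toy illustration of that pipeline, here is a sketch where a trivial string pattern stands in for real intention recognition, and a closure stands in for a generated application. Every name and the tiny “global data layer” are hypothetical placeholders, not a claim about how an actual AGI-built interface would work.

```python
# Toy OTFApp pipeline: isolate an intention, then generate an
# application on the fly that fetches, curates, and presents
# exactly the data needed for that one intention.

# A stand-in for the global data layer.
GLOBAL_DATA = {
    ("boiling point", "water"): "100 °C at sea level",
    ("capital", "France"): "Paris",
}


def isolate_intention(thought):
    """Crude pattern match standing in for intention recognition."""
    prefix = "I want to know the "
    if thought.startswith(prefix):
        x, sep, y = thought[len(prefix):].partition(" of ")
        if sep:
            return (x, y)
    return None


def build_otf_app(intention):
    """The interface layer generating an application just in time."""
    x, y = intention

    def otf_app():
        result = GLOBAL_DATA.get((x, y), "no data found")
        return f"{x} of {y}: {result}"  # curate + visualize

    return otf_app


thought = "I want to know the capital of France"
intention = isolate_intention(thought)
if intention is not None:          # explicit approval would go here
    app = build_otf_app(intention)
    print(app())                   # prints "capital of France: Paris"
```

The generated function is thrown away after use – the “application” exists only for the lifetime of the single intention that produced it.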
Human programmers seem to be out of the loop in this case, but that merely hides the fact that all the deep complexity now resides in the interface layer, which has the task of creating a whole application within milliseconds! Creating such advanced interfaces probably requires a huge amount of intelligence, and may occupy hordes of humans and AGIs.
What do you think about this idea? Is it feasible? Is it realistic? Is it even desirable? Could things be done in an even better way? What should OTFApps really be called?