How AI is changing our relationship with computers, and how it will change society

Javier Rodriguez
3 min read · Jun 27, 2020

We are witnessing a change in our relationship with computer systems that we are not fully aware of. We talk to virtual assistants (Siri, Sherpa, Ok Google, Alexa) and we are used to them doing what we say, as computers have always done. We start typing in our favorite search engine and are surprised if it does not auto-complete the search, and we assume that if we don't find what we are looking for in the first two pages, it does not exist. We assume that the answers we get from "a computer" are "THE answer".

Our notion of computer systems has always been deterministic: we tell them what to do, and if they don't do it, it's bad programming. This has been a constant point of friction between programmers and users, because interpersonal communication during the requirements-definition process is not easy. What formal education have we had in managing our interpersonal communication? And what capacity do we have to manage "interpersonal" communication with computers, given that we unconsciously treat them as people [1]?

The new AI systems largely lack this deterministic component: the probability and uncertainty modeling of our friends Bayes and Markov are very present in them. We design algorithms, we feed them with data, and they make decisions and return a prediction, a prediction we will have to know how to contrast and weigh against our preconceived ideas.
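The contrast can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not anything from a real system: a deterministic routine always returns the same fixed answer, while a Bayesian one returns a degree of belief that we still have to weigh against our own judgment.

```python
def deterministic_lookup(word: str) -> bool:
    """Classic software: a hard-coded rule. Right, or 'badly programmed'."""
    return word in {"winner", "prize"}

def bayes_posterior(p_word_given_spam: float,
                    p_word_given_ham: float,
                    p_spam: float = 0.5) -> float:
    """Bayes' rule: P(spam | word) = P(word|spam) * P(spam) / P(word)."""
    p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
    return p_word_given_spam * p_spam / p_word

# The deterministic system gives one fixed answer:
print(deterministic_lookup("prize"))        # True, every single time

# The probabilistic system gives a prediction with uncertainty attached,
# using hypothetical likelihoods (80% in spam, 10% in legitimate mail):
print(round(bayes_posterior(0.8, 0.1), 3))  # 0.889 — likely spam, not certain
```

The second output is the kind of answer the article is talking about: not "THE answer", but a probability we must interpret.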

Due to the widespread implementation of AI in all areas (we have it in our search engines, on our phones, on our televisions, in our online purchases), there is some urgency in having a plan to improve our relationship with AI systems.

AI is here to stay, to help us, and to complement us, not to replace us. This pushes us to review our educational systems: we have to learn to question our own decisions and to accept that a system able to analyze an immense amount of data can help us make better ones, using it as a cognitive prosthesis. The decision-making processes that exist today in the management of many companies will have to evolve, so that they contrast all the nuances that human experience provides with the large amounts of data that AI systems can take into account in their analysis.

This decision-making capacity of AI is leading us to ask ethical questions about its implementation, in order to define a regulatory framework, but we may be a little late for this debate. Social networks became widespread roughly ten years ago and, without forgetting their positive side, we have collided with their negative impact on our society, on our democratic systems, and on our capacity (or lack of it) to accept ideas contrary to our own, because of the use they make of our "likes" to feed us information we already agree with.

This change caught us by surprise. In many areas the blame is placed on "fake news", but we must not forget that in our social networks there is an AI algorithm deciding which news we see and which we do not. We need an educational system that delves into Socratic doubt and maieutics, one that teaches us to question both the conclusions we receive and our own, so that our decision-making is enriched by (i) a good education in how to manage our interaction with Artificial Intelligence systems, and (ii) a correct implementation of those systems so that they augment our cognitive and decision-making capacity.

[1] Byron Reeves and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People.



Working to help businesses implement AI products with a human-centered approach.