Technology, and in particular Artificial Intelligence (which is coming, or is already here, Ed.), will not only, in some cases, replace humans by performing actions more efficiently and at lower cost; it will also limit their ability to make independent, reasoned, weighted choices based on personal data and experience.
This is the concrete risk that, perhaps without realizing it, we face, and which is well described in the book “Artificial Insanity” by Luca Bolognini, one of the leading European experts on privacy and data law, a lawyer and president of the Italian Institute for Privacy and Data Valorization (IIP). It is not a catastrophic or apocalyptic vision of the future, but an attempt to reason aloud in search of a balance between opposing extremes, which range from uncontrolled enthusiasm for innovation at all costs to “stop everything, I want to get off”.
What is the right balance between man and technology?
“In this case, the pursuit of balance is a dynamic exercise, a constant search, a bit like the one that drives us toward our ideal weight. There is no solution that suits everyone: each of us will try to weigh risks and opportunities, will transfer data and “parts of ourselves” to the extent we believe we can, and will become more possessive when we think the game is not worth the candle. It’s a sort of ‘accepted imperfection’. One of the most positive new elements introduced by the GDPR, for example, lies precisely in this constant search for effective safeguards for people’s rights, to be implemented through measures proportionate to the risk, constantly evolving and subject to ongoing evaluation”.
Is it possible to intervene, and in what way, on the powers of the big IT players?
“After becoming aware of the power that the big players hold over our lives (an exercise not always practiced), there should be discussion at the national, European and international level about possible amendments to Charters of Rights and Conventions that could regulate and limit the excessive technological power of certain actors. For example, the principle of the Rule of Human Law should be introduced into treaties and constitutions, adding the word “human” when describing the requirements for members of parliaments.
The principle of “Rule of Human Law by Default” should require computer systems, all the more so if they are intelligent, to be designed so that the last word always belongs to human super-admins. Nothing of the sort exists today, and nothing protects us from being governed by machines in the future, even if only implicitly. Already today, when we think about it, when it comes to protecting workers, unions are often forced to deal more with the algorithms, apps and automated controls that dictate the rules of work than with human employers. In practice, we must defend humans from machines”.
How much sensitivity and awareness is there about the artificial insanity we are living in?
“Not much awareness, unfortunately. The average human being does not give much weight to what he cannot clearly perceive: everyone thinks that, whatever scenarios are drawn, in the end nothing will happen. This is the same mechanism that, in previous decades, led us to pollute the environment, thinking there would be no serious consequences because “nature” was, in any case, vast and could absorb everything. It was not so, and today the seas and the beaches are full of plastic: only now do we realize it.
Moreover, the big players who use our data do so in a “silent”, imperceptible way. This is part of the mechanism of “digital circumvention”, which I discuss in my book and which, in some cases, is also used for good purposes. We realize the gravity of the situation only when we have a problem, suffer an injustice and feel our powerlessness. Just think of how we feel when a social network blocks our account and we try to communicate with a distant and sometimes indifferent entity”.
How can people be made aware of the importance of their data?
“I like to draw a parallel: we do not defend ourselves enough from microbes because we do not see them and do not perceive them as dangerous. Yet we learn to wash our hands because someone explains to us from an early age that only through hygiene can we defend ourselves against illnesses, some of them serious. It could be the same with the digital world: through prevention in the form of “digital hygiene” courses”.
GDPR: what are the defects and what are the virtues?
“Starting with the virtues, I appreciate at least two: on the one hand, the concept of “accepted imperfection” I mentioned earlier, which leads to continuous verification of the technical and organizational measures for data protection; on the other, the heavy penalties that can now be applied, which finally hold accountable companies and bodies that previously did not take these obligations seriously.
The biggest defect, however, lies in having built the GDPR while still binding it to ex-ante logics (privacy notices, consents) that over the years have proven not to work. Privacy notices should be given not only ex ante, when nobody reads them, but above all after data collection: when presenting personalized content, the result of profiling, I should tell the consumer-user how I came to suggest that thing, using which data, obtained from whom. A bit like reading the ingredients on the label of the snack we are about to eat. This would alert people and make them more conscious of the concrete consequences of handing over their data. Another major defect is that the GDPR is not adapted to the IoT, because it is not a regulation capable of making intelligent objects and “controllers of non-human processing” responsible”.
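The “ingredient label” idea for profiled content can be sketched in code. This is a minimal, hypothetical Python example of an ex-post disclosure record attached to a personalized suggestion; every class and field name here is invented for illustration and is not part of the GDPR or of any real system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataIngredient:
    """One piece of data used to produce a personalized suggestion."""
    name: str     # e.g. "browsing history"
    source: str   # who the data was obtained from
    purpose: str  # why it was used

@dataclass
class ProfilingDisclosure:
    """An ex-post 'ingredient label' shown with personalized content."""
    content_id: str
    ingredients: List[DataIngredient] = field(default_factory=list)

    def label(self) -> str:
        """Render a human-readable label, like ingredients on a snack."""
        lines = [f"Why you are seeing '{self.content_id}':"]
        for ing in self.ingredients:
            lines.append(f"- {ing.name} (from {ing.source}, used for {ing.purpose})")
        return "\n".join(lines)

# Example: disclosing how a personalized ad was assembled.
disclosure = ProfilingDisclosure(
    content_id="running-shoes-ad",
    ingredients=[
        DataIngredient("browsing history", "our website", "interest profiling"),
        DataIngredient("location", "mobile app", "local offers"),
    ],
)
print(disclosure.label())
```

The point of the sketch is the timing: the label is generated when the content is shown, not buried in a notice the user accepted long before.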
If it is true that data are a gold mine, how will consumers defend themselves from the assault on the fort?
“I dream of a world with digital intermediaries (though from my observatory I know they already exist and are about to invade the market) able to let each user genuinely decide who may profile or contact them for marketing purposes and, once they have chosen the settings they like, set a price so as to earn something from it.
Because it does not seem right to me that, in the profitable game of processing data considered so precious, the user is the only one who gets nothing out of it. A true third-generation privacy right will be precisely the right to monetize one’s own data”.
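The intermediary Bolognini describes can be sketched as a simple preference-and-pricing model. A hypothetical Python example (all names and the pricing scheme are invented assumptions, not a description of any actual intermediary service):

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ProfilingPreference:
    """A user's choice for one purpose: allowed or not, and at what price."""
    allowed: bool
    price_eur: float  # compensation the user asks for, per use

class IntermediaryAccount:
    """Sketch of a user account at a hypothetical data intermediary."""

    def __init__(self) -> None:
        self.preferences: Dict[str, ProfilingPreference] = {}
        self.earnings_eur = 0.0

    def set_preference(self, purpose: str, allowed: bool, price_eur: float = 0.0) -> None:
        """The user decides, per purpose, whether profiling is allowed and its price."""
        self.preferences[purpose] = ProfilingPreference(allowed, price_eur)

    def request_use(self, purpose: str) -> bool:
        """A company asks to use the data; the user is paid if they allowed it."""
        pref = self.preferences.get(purpose)
        if pref is None or not pref.allowed:
            return False  # default: no profiling without an explicit choice
        self.earnings_eur += pref.price_eur
        return True

# The user sets their terms once; requests are then granted or refused automatically.
account = IntermediaryAccount()
account.set_preference("marketing", allowed=True, price_eur=0.05)
account.set_preference("political ads", allowed=False)
print(account.request_use("marketing"), account.earnings_eur)
```

The design choice worth noting is the default: a purpose with no recorded preference is refused, so the user earns only from uses they have explicitly priced and permitted.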
Sonia Montegiove