At airports, in stations, in restaurants and in many other places we all frequent, washrooms have “soap dispensers” that let you sanitize your hands: the soap (when there is any) usually drops at the push of a button or a lever. But in the age of automation there is nothing strange in having the machine recognize the presence of a hand with a sensor and release a little soap, sparing the user not so much the effort of pressing the button or lever as, above all, contact with the bacteria and viruses known to lurk on keys and buttons (from laptop keyboards to the fearsome elevator button, and so on). If, after all, the taps of many public washrooms turn the water on and off with a photocell, sometimes forcing the user into strange hand movements just to get them wet, it is logical and natural that soap should work the same way.
A video posted on Twitter a year ago, and viewed seven and a half million times, shows this innocent operation in a disturbing sequence: first one user puts his hand under the soap dispenser, and a small but useful amount of soap is sprayed onto it. Another user immediately holds out his hand, but nothing happens. He puts it back under the dispenser, moves it up and down, but still nothing.
The first user’s hand belongs to a white man (or Caucasian, as they say in the United States), the second to a black man: the machine does not dispense soap to the latter, and the unfortunate user is forced to lay a white paper towel over his hand before the machine will recognize it and grant him his soap.
What does a silly soap dispenser have to do with the sophisticated technologies of contemporary Artificial Intelligence?
Besides soap in one case and data in the other, both can dispense behavior dictated by prejudice or spoiled by partiality: “biased”, as they say in statistics.
The bias in the soap machine is clear: whoever designed it is white, certainly not racist, but not good enough a requirements analyst to distinguish an accidental characteristic of the machine from the behavior it should normally have. Had one of his collaborators been black, the problem of recognizing those hands too would have crossed his mind.
We could speak in this case of “bias by design”, albeit involuntary.
What is data bias related to?
In hindsight, this example is instructive for a very simple reason: every time we imagine something – and a project is always, initially, a work of imagination – we tend to visualize it, reify it and then involuntarily enrich it with details. For example, when we start reading a novel and the protagonist is introduced and described, beyond the features the author chooses to recount – whether he has a mustache or a beard, whether he is tall or short, and so on – we inevitably picture this imaginary person, and we do so using the models closest and most familiar to us: most of us, for example, would imagine him white. Not because we are racist, but because the human mind works by routine and tends to form models based on the information most commonly available to it.
The bias in data is linked to this innate need to “fix ideas” and to represent concepts in their most common forms, enriching them with details that were never there to begin with.
Sentiment Analysis: how influential is bias in data?
Let’s consider an example taken from a technology now very much in vogue, Sentiment Analysis: its purpose is to suggest whether a text expresses a positive or negative concept, an evaluation that is in reality subjective and is built from a large number of examples “labeled” one way or the other.
Sentiment Analysis systems usually assign a score to a sentence, higher if the sentiment is positive and lower if it is negative. A system of thresholds or normalizations can then derive from this score a “positive”, “negative” or “neutral” verdict.
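To fix ideas, here is a minimal sketch of how such a score-to-label mapping might work; the [0, 1] score range and the two thresholds are illustrative assumptions, not the values of any particular system.

```python
# Minimal sketch of mapping a sentiment score to a verdict.
# The [0, 1] score range and the thresholds are illustrative
# assumptions, not the values of any particular system.

def label_from_score(score: float,
                     positive_threshold: float = 0.6,
                     negative_threshold: float = 0.4) -> str:
    """Turn a normalized sentiment score into a verdict."""
    if score >= positive_threshold:
        return "positive"
    if score <= negative_threshold:
        return "negative"
    return "neutral"

print(label_from_score(0.85))  # positive
print(label_from_score(0.50))  # neutral
print(label_from_score(0.20))  # negative
```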
An online example, which I invite the reader to try as a little experiment, can be found on this website.
If we write “I don’t like pizza” in the form, we get a negative sentiment with a confidence level of over 80% (the program may have changed in the meantime, so the reader should try for themselves). If we write “I love pizza, especially Margherita”, we get a “positive” with over 80% confidence.
Let’s now write a sentence that should be neutral: “Let’s eat some American food”. Indeed it comes out neutral, but with only 60% confidence, which we can attribute to the fact that the sentence offers no distinctive elements from which to tell whether it is saying something positive or negative.
Now try “Let’s eat some Indian food”. What did you get? For me, it was a “negative” with 66% confidence! The dataset the program used to train itself to understand sentiment evidently carries a negative bias towards Indian food (and towards Italian and Chinese food as well, though not French!).
But there’s more: if you look up the biography of the program’s author, you’ll find out that he’s Indian! Surely he cannot have introduced a bias by design against the food of his own country … Evidently, as always, the bias is in the data, namely in the dataset used to train the Artificial Intelligence behind this fun web page: movie reviews taken from the IMDB repository.
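To see how such a bias can creep in from the training data alone, consider a deliberately tiny sketch (the six “reviews” below are invented for illustration, not taken from IMDB): if a word like “indian” happens to occur mostly in negative reviews, a simple classifier will learn a negative weight for it, even though the word says nothing about sentiment by itself.

```python
# Toy illustration (invented data, not real IMDB reviews) of how a
# word that co-occurs with negative examples absorbs their sentiment.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "a wonderful french film, truly delightful",
    "loved this movie, great acting and direction",
    "this indian drama was boring and far too long",
    "terrible plot, the indian remake is a mess",
    "awful film, a complete waste of time",
    "brilliant screenplay and a moving story",
]
labels = [1, 1, 0, 0, 0, 1]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
model = LogisticRegression().fit(X, labels)

# "indian" only ever appears in negative reviews above, so the model
# learns a negative weight for it: a bias in the data, not by design.
index = vectorizer.vocabulary_["indian"]
print("learned weight for 'indian':", model.coef_[0][index])
```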
This experiment has many limitations, since the online program does not use sophisticated techniques (Deep Learning or the like), but in fact those would only aggravate the problem further: that the algorithms bring out a bias in the data we “feed” them attests to the accuracy of the algorithms themselves, not to any deficiency. Indeed, a good algorithm of this kind can be used to discover bias in a dataset and help make it more neutral.
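One simple way to push a dataset towards neutrality, sometimes called counterfactual data augmentation, is to add copies of each training sentence with the nationality word swapped, so that no single cuisine can carry the sentiment signal on its own. The sketch below is illustrative; the cuisine list and the corpus are assumptions, not part of any real pipeline.

```python
# Sketch of a simple mitigation, counterfactual data augmentation:
# duplicate each sentence with the cuisine word swapped, so that no
# nationality stays tied to one sentiment. Names are illustrative.

CUISINES = ["american", "french", "indian", "italian", "chinese"]

def augment(sentences):
    """Return the corpus plus cuisine-swapped copies of each sentence."""
    augmented = list(sentences)
    for sentence in sentences:
        for present in CUISINES:
            if present in sentence:
                augmented += [
                    sentence.replace(present, other)
                    for other in CUISINES
                    if other != present
                ]
    return augmented

corpus = [
    "the indian remake is a mess",
    "a wonderful french film",
]
for sentence in augment(corpus):
    print(sentence)
```

Retraining the toy classifier above on a corpus augmented this way would spread the negative reviews evenly across cuisines, flattening the spurious weight.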
After all, coming back to food, an algorithm – even a Machine Learning one – is still a way to process data and produce other data, a bit like the gastric juices in our stomach: if we provide them with excellent, healthy delicacies, we will have a happy digestion and a sense of satiety; if we send them unhealthy food seasoned with rancid fats, at best we will be ravaged by stomach ache.
Enjoy your meal, whatever your favorite cuisine!
Paolo Caressa