SOCIETY | May 15, 2019

Alexa listens to you. And someone takes notes

What is the link between data collection and the automated behaviours offered by Artificial Intelligence?

Bloomberg’s headline on 11 April 2019: Amazon workers are listening to what you tell Alexa. The article, also picked up by some Italian newspapers, describes the work of thousands of Amazon employees who review and improve the transcription of the commands given to the famous voice assistant, which serves thousands of users around the world. The sensationalist tone of the coverage shows how little we know about the tools we use.

Machine Learning and annotations

Actually, if a piece of software can identify pedestrians within an image, it is because, in the past, someone identified people in hundreds of thousands of images. If it can correctly transcribe a spoken word, it is because someone, in the past, associated letters and numbers with specific sounds (in practice, providing a transcription). If it can identify the sentiment of a document, it is because someone previously associated an emotional state (anxiety, happiness, etc.) with multitudes of texts. Software based on supervised Machine Learning learns to generalize these associations created by human users, which in technical jargon are known as annotations.
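To make this concrete, here is a minimal sketch of the idea, with entirely hypothetical texts and labels: humans provide the annotations (text paired with an emotional state), and a tiny word-frequency model generalizes those associations to unseen text. Real systems are vastly more sophisticated; this only illustrates the principle.

```python
# A minimal sketch of supervised learning from annotations. The texts
# and labels below are illustrative assumptions, not real training data.
from collections import Counter

# Human "annotations": each text has been labelled by a person.
annotated = [
    ("what a happy wonderful day", "happiness"),
    ("I feel happy and glad", "happiness"),
    ("this is worrying and stressful", "anxiety"),
    ("so anxious and worried today", "anxiety"),
]

# "Training": count which words appear under each label.
word_counts = {}
for text, label in annotated:
    counts = word_counts.setdefault(label, Counter())
    counts.update(text.split())

def predict(text):
    """Score an unseen text against each label's word profile."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("a glad and happy mood"))   # leans toward "happiness"
print(predict("worried about tomorrow"))  # leans toward "anxiety"
```

The point is that the model itself contains nothing but the humans' associations, generalized: change the annotations and you change what it "knows".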

Raw data and Artificial Intelligence

For the purposes of Machine Learning, therefore, it is not the raw data that establishes the value of a database, but the quality of the annotations, which the software learns and generalizes. This is a problem, because companies working in the field of Artificial Intelligence generally consume data which they do not produce and which come without annotations. For example, to provide valuable services, Amazon uses the voice of its users without knowing in advance the transcription on the basis of which the Alexa software will execute a command.

There is another aggravating factor: some algorithms need thousands, if not millions, of annotations to work properly. Collecting annotations can therefore become a lengthy and extremely expensive process. Not all companies operating in the field of Machine Learning have the financial means to hire hundreds of specialists dedicated to reading and annotating data.

Collaborative annotations

How do we get out of this impasse? In different ways. One of the most interesting is “crowdsourced” (loosely, “collaborative”) annotation: in essence, a user base voluntarily contributes by supplying data in a format suitable for training the machine. The approach is extremely versatile and has also become a form of outsourced, paid work, widely promoted by the web giants (Amazon, Microsoft, Google, etc.) through the emergence of crowdsourcing-based labeling platforms (the most famous is probably Figure Eight).

How do Crowdsourcing platforms work?

These platforms are essentially “intermediary” portals between those seeking labor (requesters) and platform users (workers). Requesters (companies, researchers, institutions) create tasks which workers scattered all over the world answer. Among the various tasks is precisely the annotation of data provided by requesters: sometimes the worker is asked to express an opinion on a text, at other times to verify the presence of objects in images and locate their position, at still others to contribute to training a chatbot.

Simply put, the requester creates a questionnaire, sets a rate and a time limit, and submits it to the portal. The portal, in turn, submits the task to its workers, paying for their time at the agreed rate. Thanks to its ability to profile its workers, the portal should guarantee the quality requirements set by the requesters. The challenge, clearly, is to obtain high-quality data for training algorithms at competitive prices.
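The flow just described can be sketched in a few lines of code. Everything here (class names, fields, figures) is illustrative and does not correspond to any real platform’s API: a requester posts a task with a rate and a time limit, the portal dispatches it to workers and pays per completed answer.

```python
# A hedged sketch of the requester/portal/worker flow described above.
# All names, fields and figures are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    question: str
    rate_usd: float          # fixed fee paid per completed answer
    time_limit_min: int      # maximum time allowed per worker
    answers: list = field(default_factory=list)

class Portal:
    """Intermediary between requesters and workers."""
    def __init__(self):
        self.tasks = []

    def submit(self, task):                   # requester side
        self.tasks.append(task)
        return task

    def answer(self, task, worker, label):    # worker side
        task.answers.append((worker, label))
        return task.rate_usd                  # worker is paid the agreed rate

portal = Portal()
task = portal.submit(Task("Is there a pedestrian in image_042?", 0.02, 15))
earned = portal.answer(task, "worker_17", "yes")
print(earned, task.answers)
```

The real platforms add the hard parts this sketch omits: worker profiling, redundancy (asking several workers the same question), and quality control on the answers.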

Amazon MTurk

To get an idea of the strategic importance of this phenomenon, just look at the market players. It is no surprise that among the big names is Amazon, with its Amazon MTurk platform, whose motto is “Access a global, on-demand, 24×7 workforce” and which is even integrated with the Amazon Web Services development tools. From the requesters’ point of view, crowdsourcing platforms make it possible to exploit data for what are, all in all, modest investments, leveraging the ability of web platforms to reach “suitable” people (for a questionnaire lasting at most 15 minutes, the fixed fee ranges between 1-2 cents and a few dollars, depending on the task). To learn the worker’s point of view, you can read the article My Experience as an Amazon Mechanical Turk (MTurk) Worker, which provides interesting insights.
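A back-of-the-envelope calculation, using the low end of the fee range mentioned above (the campaign size is an illustrative assumption), shows why even “cheap” crowdsourcing is a real budget line:

```python
# Rough cost of a crowdsourced annotation campaign. The number of
# annotations is a hypothetical example; the fee is the low end of the
# range cited above.
annotations_needed = 1_000_000
fee_per_annotation = 0.02   # dollars per completed annotation

cost = annotations_needed * fee_per_annotation
print(f"${cost:,.0f}")  # $20,000
```

And that is before redundancy: if each item must be annotated by three different workers for quality control, the bill triples.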

The alternatives

From a technical point of view, where possible, the trend is to prefer, rather than massive annotation, solutions in which the person and the machine interact iteratively, allowing the software to contribute directly to improving itself (Active Learning) and reducing the workload on the human being. For large platforms this is possible by developing engagement or reward systems from which the user derives the motivation to contribute (think of systems like Google Rewards or Google Maps questions). However, this path is not feasible for many companies, as it requires dedicated business models that are not always easy to achieve.

The ethical question

While it is appealing to think that a farmer from Minnesota, an Indian accountant or a student at the Sorbonne can contribute to the same goals, “commercial” collaborative annotation also opens up clear ethical questions: what is the right reward for a stranger who, on the other side of the world, transfers his or her knowledge? Can there be a relationship between requester and worker? And again: is it legitimate for third parties to know the content of my conversations, albeit with the aim of improving a machine’s training? There are no easy answers to these questions.

In mass culture the subject of annotation is always treated as collateral to the great questions of privacy. The concern is legitimate, but it is also an enormous simplification. Annotations represent the link between the elegant automatisms of the world we would like and the inevitable limits of human nature, and today they are still the pillar on which the Artificial Intelligence industry rests. They are not dust to be swept under the rug, but the price we pay for the efficiency of the world in which we live. Annotations remind us that behind a datum there is always a human being.

Michele Gabusi