Anyone who has held positions of responsibility and has found themselves guiding, coordinating or simply overseeing the activity of other people will recognize that one of the most appreciated qualities in a collaborator is autonomy. Whatever task one decides to entrust to a collaborator, the dream of every coordinator, manager or leader is not to have to think about that task again until it has been completed and accounted for.
Autonomy is also an appreciated feature in machines: once the washing program has been set, for example, we all want to sleep peacefully while the washing machine takes care of washing, soaping, rinsing and perhaps even drying our clothes. Obviously, in this case autonomy is guided and deterministic: we are sure that the washing machine will perform its task without deviating from the established program, unless a failure, and the consequent flooding of the house, disturbs our sleep. At first sight, nothing would appear more natural and desirable than making even the most sophisticated machines – such as those already available today, capable of carrying out surprisingly complex tasks – autonomous.
However, when it comes to the autonomy of automata (forgive the play on words), the tune seems to change, and the tranquility of our dreams drowns in dystopian and apocalyptic nightmares.
Since its beginnings, science fiction has evoked scenarios of devastation, homogenization and enslavement of mankind linked to the supremacy of machines over humans. So much so that one of the genre's most widely read and loved authors, Isaac Asimov, also went down in history for having coined, in the stories collected in I, Robot, the Three Laws of Robotics, the first of which reads: "A robot may not injure a human being or, through inaction, allow a human being to come to harm".
The ethics of machines
Asimov was therefore one of the first to deal with the ethics of machines: indeed, any entity capable of behavior possesses, etymologically speaking, an ethic. However, the Asimovian ethic is not the only one possible: a machine could, for example, be programmed to pursue the good of humanity, let us say the fight against terrorism, and in that case it could decide to kill terrorists to fulfill its goals (think of HAL 9000 in 2001: A Space Odyssey or the Skynet network in Terminator).
Today technology has surpassed science fiction: weapons, drones and automobiles are autonomous in the full sense of the word; that is, they not only complete a pre-established task but are able to perform complex activities that involve decisions, even crucial ones, as in the case of weapons that can decide whether or not to fire at a human target.
One example concerns the automatic weapons deployed by South Korea along the DMZ (a term since borrowed by computer networking), which for 65 years has been a "buffer" between the two Koreas: the Korean War has never formally ended, and the two armies deployed on either side of the DMZ may shoot on sight at anyone trying to cross this border.
In particular, the Samsung SGR-A1 model is an "automatic sentinel": a weapon capable of firing at a target independently and of using voice recognition for armed surveillance functions. The machine is said to be capable of distinguishing a human target from animals or other moving objects, but it cannot, for example, distinguish a fugitive from a saboteur. The company that manufactured it, in collaboration with a South Korean university, claims that the SGR-A1 cannot decide to shoot without human intervention, although the features of the system are precisely those of decision-making autonomy with respect to the targets to be fired on.
Beyond the specific case, however, there is no doubt that machines today can be programmed to decide to harm human beings (or animals, or other machines), and thus expressly violate the first law of robotics. Today, however, it is no longer just science fiction writers who deal with these issues, but also philosophers.
Philosophy and automata
The role of philosophy in the contemporary cognitive sciences is unquestionable, and in the case of autonomous machines, and in particular of autonomous weapons, it permits the analysis of problems in an objective and general manner. For example, philosophers have identified two possible behavioral "algorithms" for an autonomous machine facing an ethical dilemma, such as a car that must choose between running into two pedestrians and jeopardizing the safety of its passenger by swerving sharply: the "consequentialist" approach and the "deontological" approach.
In the first case the machine is programmed to decide in a way that is neutral with respect to individuals, operating rationally to minimize harm and therefore choosing not to run into the group of pedestrians, even at the cost of endangering the single passenger.
The deontological approach, on the other hand, is not neutral with regard to individuals, and recognizes their right not to behave "heroically", that is, not to be forced to sacrifice themselves for others. More generally, deontological theory deems it detrimental to human dignity to entrust machines with any decision that could harm a human being, substantially in line with Asimov's first law.
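To make the contrast concrete, the two approaches can be sketched as toy decision rules. This is a purely illustrative sketch: the function names and the numeric "harm" scores are assumptions for the sake of the example, not a description of any real vehicle's software.

```python
# Illustrative sketch of two behavioral "algorithms" for the swerve dilemma.
# Harm values are hypothetical scores (e.g. number of people endangered).

def consequentialist_choice(straight_harm: int, swerve_harm: int) -> str:
    """Neutral with respect to individuals: pick the action that
    minimizes total expected harm."""
    return "swerve" if swerve_harm < straight_harm else "straight"

def deontological_choice(swerve_sacrifices_passenger: bool,
                         straight_harm: int, swerve_harm: int) -> str:
    """Not neutral: respects the passenger's right not to be forced
    into 'heroic' self-sacrifice. Never deliberately sacrifices the
    passenger, even when doing so would minimize total harm."""
    if swerve_sacrifices_passenger:
        return "straight"
    return consequentialist_choice(straight_harm, swerve_harm)

# The dilemma from the text: going straight harms two pedestrians,
# swerving endangers the single passenger.
print(consequentialist_choice(straight_harm=2, swerve_harm=1))
print(deontological_choice(True, straight_harm=2, swerve_harm=1))
```

Under these toy numbers the consequentialist rule swerves (one person endangered rather than two), while the deontological rule goes straight, because it refuses to impose the passenger's sacrifice.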
Even if a deontological approach rules out automatic weapons, it is nevertheless a fact that they continue to be made and that they will become increasingly sophisticated and increasingly able to decide on the safety and destiny of human beings.
In any case, the most recent North Korean defector who managed to cross the DMZ and take refuge in South Korea was seriously injured by North Korean guards, not by automatic weapons: for the time being we must continue to fear not so much the intelligence of machines as the stupidity of humans.