ROME – “In the field of search and rescue in combat zones, AI will bring great improvements because it will have superhuman perceptive capabilities, for example in image analysis. And in the more distant future, unmanned aircraft guided by AI could bring relief by reducing the number of rescuers needed.” These non-science-fiction examples of how the face of the Air Force will change were described yesterday by Guglielmo Tamburrini, professor of machine ethics at the University of Naples Federico II, on the occasion of the armed force’s 100th anniversary.
But it is not all smooth sailing. On matters such as the principle of non-discrimination, AI “inherits”, the professor explains, human choices that may already have been tainted by bias, and so “it will repeat the same injustices”. All of this demands the vigilance of “human judgment”, a process now under strain. The “prediction and suggestion of operational responses” represents an opportunity, but will the decision-maker always be a human being? “We want that,” says Tamburrini, “but we cannot take it for granted. There are many pitfalls in this human-machine relationship. We know from history that the automation of weapons systems has produced episodes of friendly fire,” he recalled.
“AI has been used to identify targets,” Tamburrini notes, underlining that at the head of the process there are always human specialists who govern the action of the machines according to the principles of just war. “That this team exists is fundamental, but it is not sufficient to guarantee human control. AI operates at such a relentless pace that it can cognitively overwhelm this team’s ability to make informed decisions. And if there is psychological pressure for productivity, the risk is that the operators are reduced to cogs in the machine, to a rubber-stamp factory.” “Time is a double-edged factor,” he warned, with the risk, in the cyber domain, of “unintentional escalation”.
There are many doubts about the future and this inexorable change. “What will happen to fighter aircraft,” Tamburrini asks, “when they are guided by an AI and not a pilot? If this ever happens, if we choose to allow it…”. The biggest pitfall lies precisely in the relationship between AI and responsibility for military decisions, the professor explains, “which is also the dignity of the military: identifying a target and attacking it. Can we delegate this to a machine?” The Red Cross has put forward an “important regulatory proposal” on the matter. And finally, one more shadow: “Artificial intelligence has great processing capacity, but it cannot account for its decisions.” In short, it lacks transparency. It will not answer all our questions.