Six years ago, this Brief imagined the ethical challenge that killer robots, possessing artificial intelligence and presumably looking like the Terminator, would pose. The issue was also the focus of the European Parliament in 2018 and 2019, when MEPs called for an international ban on AI weaponry, stressing that “machines cannot make human-like decisions” and that humans should remain accountable for decisions taken during a war.

On 22 December 2023, 152 countries voted in favour of the General Assembly resolution on the dangers of lethal autonomous weapons systems, while four voted against and 11 abstained. Israel was among those that abstained.

Things have evolved in the meantime: killing machines now take the abstract form of an algorithm rather than the movie-friendly shape of a humanoid robot.

UN Secretary-General António Guterres voiced serious concern on Friday (5 April) over reports that Israel was using artificial intelligence to identify targets in Gaza. According to an investigative report published two days earlier by the website +972 and Local Call, the Israeli army has developed an artificial intelligence-based programme known as “Lavender”. Six Israeli intelligence officers, who all served in the army during the current war on the Gaza Strip and had first-hand involvement in the use of AI to generate targets for assassination, said Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war.