Originally published in January 2024 in the University of Cambridge newsletter “The Good Robot Podcast”
According to NPR, Israeli forces struck more than 22,000 targets in Gaza leading into 2024, strikes President Joe Biden has characterized as “indiscriminate bombing” (AP News). The Israel Defense Forces maintain that all strikes are carried out with precision, a claim they increasingly support by pointing to their use of AI systems for targeting.
“The Gospel” is the primary AI system used by the Israeli military. It takes in enormous quantities of surveillance data, crunches it together, and makes recommendations about where the military should strike. That data includes drone footage and satellite images, as well as people’s cell phone conversations and locations.
A growing concern among researchers is that while such an AI system is good at sorting through surveillance data, there is no consensus that it can deliver accurate targeting results. The Gospel may be efficient and may even outperform human analysts, but the central danger is that the system was reportedly trained only on examples of correct targets, not on counterexamples such as the civilians living in Gaza. NPR spoke with Heidi Khlaaf, an AI expert who runs Trail of Bits, who is “very critical of the idea of putting AI in charge of targeting, in part because she thinks the training data just won’t be good enough to tell a Hamas fighter from a civilian.” As Khlaaf put it, “You’re ultimately looking at biased and imprecise targeting automation that’s really not far from indiscriminate targeting.”
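The danger of training only on positive examples can be illustrated with a toy sketch (this is purely hypothetical and has nothing to do with how the Gospel actually works): a nearest-neighbor classifier whose training set contains only one label can never predict anything else, no matter how dissimilar a new input is.

```python
# Illustrative sketch only -- NOT the actual system. A 1-nearest-neighbor
# classifier trained exclusively on "target" examples has no concept of
# a civilian, so it labels everything as a target.

def nearest_label(point, training_data):
    """Return the label of the training example closest to `point`
    (squared Euclidean distance)."""
    return min(
        training_data,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)),
    )[1]

# Hypothetical feature vectors; every training example carries the
# same label because no negative examples were collected.
training_data = [
    ((0.90, 0.80), "target"),
    ((0.70, 0.95), "target"),
    ((0.85, 0.60), "target"),
]

# A point that looks nothing like any training example is still
# classified as a target, since no other label exists in the data.
print(nearest_label((0.01, 0.02), training_data))  # prints "target"
```

The point of the sketch is narrow: without negative examples, the model's output distribution collapses onto the positive class, which is one way "biased and imprecise targeting automation" can arise.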
Because the AI system trains itself, it also threatens to obscure intention and accountability for targeting decisions made by the Gospel. “It then becomes impossible to trace decisions to specific design points that can hold individuals or the military accountable for the AI’s actions,” said Khlaaf.
Furthermore, a report by Amnesty International titled “Automated Apartheid” found that Israel has been deploying facial recognition software, known as “Red Wolf,” to monitor Palestinians’ locations and restrict their movement. This is an early example of a government or institution using AI-enabled surveillance against an ethnic group.
As 2024 unfolds, watch for the growing use of AI surveillance systems in defense targeting, in discriminatory movement restrictions, and in recognition software deployed at large public gatherings, notably protests.
Further reading: