New Delhi: The Israel Defense Forces (IDF) are reportedly using an artificial intelligence (AI) system called Habsora (Hebrew for “The Gospel”) to select targets in the war on Hamas in Gaza and to estimate likely numbers of civilian deaths in advance, according to an article in The Conversation.
AI is already altering the character of war, the article notes.
Militaries use remote and autonomous systems as “force multipliers” to increase the impact of their troops and protect their soldiers’ lives. AI systems can make soldiers more efficient, and are likely to enhance the speed and lethality of warfare – even as humans become less visible on the battlefield, instead gathering intelligence and targeting from afar, it added.
The article goes on to ask a moot question. When militaries can kill at will, with little risk to their own soldiers, will the current ethical thinking about war prevail? Or will the increasing use of AI also increase the dehumanisation of adversaries and the disconnect between wars and the societies in whose names they are fought?
AI has the potential to reshape the character of war, making it easier to enter into a conflict. Because such systems are ‘dehumanised’, they may also make it more difficult to signal one’s intentions, or to interpret those of an adversary, in the context of an escalating conflict, thereby contributing to mis- or disinformation and creating and amplifying dangerous misunderstandings in times of war. The boundaries of an AI system that interacts with other technologies and with people may not be clear, and there may be no way to know who or what has authored its outputs, no matter how objective and rational they may seem, the article adds.
Perhaps one of the most basic and important changes we are likely to see driven by AI is an increase in the speed of warfare.
According to a former head of the IDF, human intelligence analysts might produce 50 bombing targets in Gaza each year, but the Habsora system can produce 100 targets a day, along with real-time recommendations, based on the probabilistic reasoning of machine learning algorithms, for which ones to attack.
Estimating likely numbers of civilian deaths in advance, as the Habsora system does, is a quantitative exercise that tells us little about the qualitative dimensions of targeting. In isolation, systems like Habsora cannot really tell us whether a strike would be ethical or legal (that is, whether it is proportionate, discriminate and necessary, among other considerations).
AI should support democratic ideals, not undermine them. Trust in governments, institutions and militaries needs to be restored if we plan to apply AI across a range of military practices. We need to deploy critical ethical and political analysis to interrogate emerging technologies and their effects, so that any form of military violence remains a last resort.
Until then, machine learning algorithms are best kept separate from targeting practices, the article concludes.