AI-supported targeting: enhancing precision or increasing civilian casualties?
Despite the risks outlined above, AI-based Decision Support Systems (AI DSS) are already deployed in modern warfare, playing a role in target identification, selection, and engagement. This inevitably raises the risk of exacerbating civilian suffering, especially where human judgment is insufficiently involved. These systems rely heavily on predictions based on pattern recognition and classification, generalizing from training data to recommend targets. This approach carries inherent legal concerns: AI systems are not infallible and are prone to failure, particularly in the complex and unpredictable environment of a battlefield. Even with developers' best efforts, unforeseen scenarios within the "fog of war" are unavoidable.
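To make this concrete, here is a minimal, purely illustrative sketch of the kind of statistical classification described: a toy nearest-centroid classifier that generalizes from a handful of labeled training profiles to score new individuals. Every feature name, label, number, and threshold below is a hypothetical assumption, not a description of any real system.

```python
import numpy as np

# Hypothetical feature vectors: [comms volume, co-location score, network proximity].
# Labels are hypothetical training annotations: 1 = "associated", 0 = "not associated".
X_train = np.array([
    [0.9, 0.8, 0.7],   # annotated as associated
    [0.8, 0.9, 0.6],   # annotated as associated
    [0.1, 0.2, 0.1],   # annotated as not associated
    [0.2, 0.1, 0.3],   # annotated as not associated
])
y_train = np.array([1, 1, 0, 0])

# Nearest-centroid classifier: the system "generalizes" by comparing a new
# individual's features to the average profile of each training class.
centroids = {label: X_train[y_train == label].mean(axis=0) for label in (0, 1)}

def recommend(features: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the candidate is closer to the 'associated' profile.

    A civilian whose everyday behavior superficially resembles the training
    profile can cross the threshold and be recommended as a target:
    a false positive produced by pattern matching, not proof.
    """
    d0 = np.linalg.norm(features - centroids[0])
    d1 = np.linalg.norm(features - centroids[1])
    score = d0 / (d0 + d1)  # closer to class 1 yields a higher score
    return score > threshold

# An innocent person with a coincidental overlap in observable features:
civilian = np.array([0.7, 0.6, 0.5])
print(recommend(civilian))  # prints True: the pattern matches, the person is innocent
```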
For example, an AI DSS might be trained to identify collaborators of enemy commanders by analyzing data sources such as social media, communications intercepts, and surveillance feeds. The system could then recommend targeting a building where these individuals reside. However, it might misclassify individuals as targets based on perceived, even tenuous, links to adversaries. Innocent people who merely shared a school or a casual acquaintance with a suspected collaborator could be wrongly identified, with tragic consequences. The AI could even infer nonexistent patterns and connections, resulting in the targeting of innocent civilians. Ruben Stewart and Georgia Hinds' research on the algorithms of war highlights this very risk.
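The guilt-by-association failure mode described above can be illustrated with a toy association graph. All names, links, and the two-hop rule below are invented for illustration; real systems are far more complex, but the structural risk is the same: each additional hop sweeps in people whose only "link" may be entirely innocent.

```python
from collections import deque

# Hypothetical association graph: an edge is any recorded link
# (calls, co-location, shared institutions), with no notion of intent.
edges = {
    "commander": ["collaborator"],
    "collaborator": ["commander", "classmate", "neighbor"],
    "classmate": ["collaborator"],   # link: attended the same school
    "neighbor": ["collaborator"],    # link: lives in the same building
}

def flag_within_hops(graph: dict, seed: str, max_hops: int) -> set:
    """Flag everyone within max_hops of the seed node (breadth-first search).

    This mimics naive guilt-by-association expansion: every extra hop
    adds people whose connection to the seed may be entirely innocent.
    """
    flagged, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in graph.get(node, []):
            if nxt not in flagged:
                flagged.add(nxt)
                frontier.append((nxt, depth + 1))
    return flagged

print(flag_within_hops(edges, "commander", max_hops=2))
# Flags all four nodes: the classmate and the neighbor are swept in
# purely by proximity in the graph, not by any evidence of collaboration.
```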
While the final decision to use force rests with humans, AI DSS recommendations significantly shape that decision. Military personnel often prioritize immediate action, especially in time-sensitive situations. This tendency to accept machine output without scrutiny, known as automation bias and described by Bryce Hoffman, can lead to insufficient verification of AI-generated recommendations. AI's speed and scalability enable unprecedented "mass production targeting," further amplifying this risk and eroding meaningful human control. The result can be a "push of a button" decision-making process that diminishes human oversight and ethical deliberation.
The potential impact on civilian lives is substantial, possibly exceeding that of autonomous weapons systems, especially in densely populated areas. Militaries may attempt to justify civilian casualties as collateral damage, claiming reliance on the system's reliability and thereby deflecting accountability. However, reported success or failure rates can be misleading: the underlying data are dynamic, and some AI systems continue to learn and change after they have been evaluated. Even Article 36 of Additional Protocol I, which mandates legal review of new weapons, may be insufficient to address the complexities and potential IHL violations of these technologies, particularly given the lack of pre-deployment testing and the practice of effectively testing systems in the field, during live operations.
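Why reported rates can mislead is easy to show with invented numbers. In the sketch below, a toy threshold model scores highly on historical data, but once the score distributions drift (new behaviors, adversary adaptation), the headline figure no longer describes how the system behaves in the field. All distributions and parameters here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(scores: np.ndarray) -> np.ndarray:
    """A toy 'model': flag any candidate whose risk score exceeds 0.5."""
    return scores > 0.5

# Historical evaluation data: the two classes are well separated.
neg_old = rng.normal(0.3, 0.1, 1000)   # scores of people who are not targets
pos_old = rng.normal(0.7, 0.1, 1000)   # scores of genuine targets

# After conditions change, the score distributions drift toward the boundary.
neg_new = rng.normal(0.45, 0.1, 1000)
pos_new = rng.normal(0.55, 0.1, 1000)

def accuracy(neg: np.ndarray, pos: np.ndarray) -> float:
    """Balanced accuracy: correct rejections plus correct detections."""
    return (np.mean(~model(neg)) + np.mean(model(pos))) / 2

print(f"Reported (historical) accuracy: {accuracy(neg_old, pos_old):.1%}")
print(f"Accuracy after drift:           {accuracy(neg_new, pos_new):.1%}")
# The pre-deployment figure (~98%) says little about the drifted case (~69%).
```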
The lack of transparency surrounding military AI systems, often justified on national-security grounds, further complicates the matter and is itself a significant obstacle to oversight and accountability. Even a seemingly low margin of error in an AI system's reliability could, at the scale these systems operate, still produce a vast number of civilian casualties, raising profound ethical questions about the acceptability of losses caused by technological failure or malfunction.
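The scale of this problem follows from simple arithmetic. Using purely hypothetical figures, even a "seemingly low" error rate, multiplied by machine-scale output, implies a large absolute number of people wrongly identified:

```python
# Illustrative arithmetic only: both figures below are invented assumptions.
recommendations = 10_000  # hypothetical volume of machine-generated recommendations
for error_rate in (0.001, 0.01, 0.05):
    wrongly_flagged = error_rate * recommendations
    print(f"{error_rate:.1%} error rate -> ~{wrongly_flagged:.0f} people wrongly identified")
```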