AI in Warfare: Precision Targeting or Civilian Casualties?

AI-powered targeting systems promise increased precision in warfare, but at what cost? This article explores the ethical and legal implications of using AI in military targeting, examining the potential for increased civilian casualties.

AI-supported targeting: enhancing precision or increasing civilian casualties?


Despite the risks outlined above, AI-based Decision Support Systems (AI DSS) are already being deployed in modern warfare, playing a role in target identification, selection, and engagement. This inevitably increases the risk of exacerbating civilian suffering, especially when human judgment is insufficiently involved. These systems rely heavily on prediction through pattern recognition and classification, generalising from training data to recommend targets. This approach carries inherent legal concerns: AI systems are not infallible and are prone to failure, particularly in the complex and unpredictable environment of a battlefield. Even with developers' best efforts, unforeseen scenarios within the "fog of war" are unavoidable.


For example, an AI DSS might be trained to identify collaborators of enemy commanders by analysing data sources such as social media, communications intercepts, and surveillance feeds. The system could then recommend targeting a building where these individuals reside. However, it might misclassify individuals as targets based on perceived, even tenuous, links to adversaries. Innocent people who merely share a school or a casual acquaintance with a suspected collaborator could be wrongly identified, with tragic consequences. The AI could even infer nonexistent patterns and connections, resulting in the targeting of innocent civilians. Ruben Stewart and Georgia Hinds' research on the algorithms of war highlights this very risk.
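The guilt-by-association failure described above can be made concrete with a deliberately simplified sketch. Everything here (the names, the graph, the two-hop rule) is an invented illustration, not a description of any real system: a crude rule that flags everyone within two hops of a commander in an association graph sweeps in a neighbour and an old classmate alongside the actual collaborator.

```python
# Hypothetical sketch: association-based target nomination.
# All data, names, and thresholds are illustrative inventions.

from collections import deque

# Edges in an "association graph" built from intercepts, social media, etc.
edges = {
    "commander": ["collaborator_a"],
    "collaborator_a": ["commander", "neighbor_b", "old_classmate_c"],
    "neighbor_b": ["collaborator_a"],       # merely shares a building
    "old_classmate_c": ["collaborator_a"],  # tenuous school link
}

def flag_within_hops(graph, seed, max_hops):
    """Flag every node within max_hops of the seed -- a crude proxy
    for the pattern-based generalisation described above."""
    flagged, frontier = {seed: 0}, deque([seed])
    while frontier:
        node = frontier.popleft()
        if flagged[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for nbr in graph.get(node, []):
            if nbr not in flagged:
                flagged[nbr] = flagged[node] + 1
                frontier.append(nbr)
    return flagged

flagged = flag_within_hops(edges, "commander", max_hops=2)
# A 2-hop rule sweeps in innocent contacts alongside the collaborator:
print(sorted(flagged))
```

The point of the sketch is that nothing in the flagged set distinguishes the genuine collaborator from people whose only "link" is proximity; the distinction has to come from human judgment, which automation pressure tends to erode.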


While the final decision to use force rests with humans, AI DSS recommendations significantly influence that decision. Military personnel often prioritise immediate action, especially in time-sensitive situations. This tendency, known as automation bias, as described by Bryce Hoffman, can lead to insufficient verification of AI-generated recommendations. AI's speed and scalability enable unprecedented "mass-production targeting", further compounding this risk and reducing meaningful human control. The result can be a "push of a button" decision-making process that diminishes human oversight and ethical deliberation.


The potential impact on civilian lives is substantial, possibly exceeding that of autonomous weapons systems, especially in densely populated areas. Militaries may attempt to justify civilian casualties as collateral damage, invoking the system's claimed reliability and thereby deflecting accountability. However, reported success or failure rates can be misleading, given the dynamic nature of the data and the rapid learning capabilities of some AI systems. Even Article 36 of Additional Protocol I, which mandates legal review of new weapons, may be insufficient to address the complexities and potential IHL violations of these technologies, particularly given limited pre-deployment testing and the practice of effectively field-testing systems during live operations.


The lack of transparency surrounding military AI systems, often justified on national-security grounds, further complicates the matter and poses a significant challenge to oversight and accountability. Even a seemingly low margin of error in an AI system's reliability and predictability could translate into a vast number of civilian casualties, raising profound ethical questions about the acceptability of losses caused by technological failure or malfunction.
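The base-rate arithmetic behind the "seemingly low margin of error" point can be made explicit. The figures below are purely illustrative assumptions, not estimates for any real system: even a 1% false-positive rate, applied at scale to a population in which genuine targets are rare, produces a flagged list dominated by innocent people.

```python
# Illustrative arithmetic only: all numbers are assumptions,
# not estimates for any real system.

population = 1_000_000        # people the system screens
true_targets = 100            # actual lawful targets among them
false_positive_rate = 0.01    # a "seemingly low" 1% error margin
true_positive_rate = 0.99

false_positives = (population - true_targets) * false_positive_rate
true_positives = true_targets * true_positive_rate

# Probability that a flagged person is actually a target (precision):
precision = true_positives / (true_positives + false_positives)

print(round(false_positives))   # innocents wrongly flagged: ~9,999
print(round(precision, 4))      # under 1% of flags are correct
```

Because genuine targets are vastly outnumbered by civilians, a small error rate multiplied across a large population yields thousands of wrongful flags, which is exactly why aggregate "reliability" figures tell us little about the acceptability of the resulting harm.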


Q&A

AI in military: key challenges?

AI in military applications faces ethical dilemmas, legal ambiguities, and technical limitations like bias and unreliability, demanding careful consideration.

