AI-Supported Targeting: Precision or Civilian Casualties?

AI-powered targeting systems promise increased precision, but their potential for causing civilian casualties raises serious ethical and practical concerns.


Despite claims of enhanced precision, the deployment of AI decision-support systems (AI-DSS) on modern battlefields, including their use in target identification, selection, and engagement, risks exacerbating civilian suffering. This stems from the fundamental limitations of AI and the potential for catastrophic misapplication.


AI-DSS rely on predictions based on pattern recognition and classification, generalizing from training data to recommend targets. Because of this reliance on past data, the system's ability to identify targets depends on its capacity to find similarities between available information about a given population and the data it was trained with, as the short sketch below illustrates. Research by MIT highlights the inherent limitations of this approach.
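To make this concrete, here is a minimal, purely illustrative sketch of similarity-based scoring; it assumes nothing about any fielded system, and every feature name in it is invented. A new profile is judged solely by how much its features overlap with a profile the system was trained on.

    # Toy illustration of similarity-based target scoring (all data invented).
    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity: size of intersection over size of union."""
        return len(a & b) / len(a | b)

    # Hypothetical training example: features of a previously confirmed target.
    known_target = {"contact:cmdr_A", "location:sector_9", "school:north_hs"}

    # A new individual is scored purely by resemblance to that past data.
    new_profile = {"school:north_hs", "location:sector_9", "job:teacher"}
    print(jaccard(new_profile, known_target))  # 0.5 -- high, despite no enemy contact

The score says nothing about why the features overlap: shared geography and a shared school count exactly as much as a genuine enemy contact.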


The problem is compounded by the imperfections of AI itself. These systems are never perfect and are prone to "failure," particularly in the complex and unpredictable environment of a real-life battlefield. Even with meticulous development, the countless variables involved in the “fog of war” are impossible to perfectly pre-program. For example, an AI-DSS trained to identify collaborators of enemy commanders through various means (social media, communication intercepts, etc.) could misclassify individuals as targets based on perceived, yet tenuous, linkages with the enemy. Innocent civilians could be targeted merely for having attended the same school or for casual acquaintances. Furthermore, AI might even “invent” nonexistent patterns, leading to the targeting of innocent civilians, as the toy example below shows. Studies on autonomous weapons emphasize the ethical ramifications of these inaccuracies.
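This “invented pattern” failure is easy to reproduce in miniature. In the hedged sketch below, where every record is fabricated for illustration, a naive rule miner examines six labeled profiles and concludes that owning a red car is as predictive of being a target as contact with an enemy commander, simply because the two coincide in a tiny sample.

    # Toy pattern miner over a fabricated six-person training set.
    records = [
        # (features, labeled_as_target)
        ({"contact:cmdr_A", "car:red"}, True),
        ({"contact:cmdr_A", "car:red"}, True),
        ({"location:sector_9", "car:red"}, True),
        ({"school:north_hs", "car:blue"}, False),
        ({"job:teacher", "car:blue"}, False),
        ({"job:farmer", "car:white"}, False),
    ]

    def mine_rules(data):
        """Return features that only ever co-occur with the 'target' label."""
        target_feats = set().union(*(f for f, y in data if y))
        benign_feats = set().union(*(f for f, y in data if not y))
        return target_feats - benign_feats

    print(mine_rules(records))
    # {'car:red', 'contact:cmdr_A', 'location:sector_9'}
    # "car:red" looks as predictive as a genuine enemy contact -- a pattern
    # the miner has effectively invented from coincidence.

At the scale of real intelligence feeds and far more complex models, coincidences of this kind are much harder for human reviewers to spot.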


While human operators retain the ultimate authority to use force, AI-DSS recommendations significantly influence their decision-making. Studies show that military personnel often prioritize action over inaction in time-sensitive situations, a tendency that feeds "automation bias": operators accept AI recommendations without sufficient verification, reducing meaningful human control. The speed and scale of AI-powered targeting amplify this risk, increasing the potential for mass targeting and shrinking the window for careful assessment.


The potential impact on civilian lives is devastating. Research suggests that AI's use in targeting decisions could lead to far greater casualties, particularly in urban environments. The label of “collateral damage” often serves as a justification for civilian casualties; military forces may claim they trusted their system, but the question of accountability remains a significant challenge. Human Rights Watch reports highlight the absence of accountability in such situations.


Even if a system boasts a seemingly low error rate, that figure can be deceptive. Because the data flowing through AI systems changes constantly, any reported success or failure rate fluctuates with it. Furthermore, the legal review mechanisms envisioned in Article 36 of Additional Protocol I may not be sufficient to prevent violations of international humanitarian law (IHL). The lack of transparency regarding the specifics of the AI systems used by armed forces hinders monitoring and oversight. And even with a low purported margin of error, the sheer volume of targets identified by AI systems still places countless civilian lives at risk, as the back-of-the-envelope calculation below illustrates. Can we accept such casualties as the result of a simple "glitch" or "hallucination"?
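Every figure in the calculation below is an assumption chosen purely for illustration, not a measured statistic. Even a system with a “1% error rate,” applied to a large screened population in which genuine targets are rare, flags far more civilians than combatants.

    # Illustrative base-rate arithmetic (every figure is an assumption).
    population     = 1_000_000  # people screened by the system
    true_targets   = 1_000      # actual combatants among them
    recall         = 0.99       # fraction of real targets correctly flagged
    false_pos_rate = 0.01       # fraction of civilians wrongly flagged ("1% error")

    civilians    = population - true_targets
    hits         = recall * true_targets        # 990 combatants flagged
    false_alarms = false_pos_rate * civilians   # 9,990 civilians flagged
    civilian_share = false_alarms / (hits + false_alarms)

    print(f"civilians wrongly flagged: {false_alarms:,.0f}")              # 9,990
    print(f"share of all flags that are civilians: {civilian_share:.0%}") # 91%

Under these assumptions, roughly nine out of every ten people the system flags would be civilians, even though the advertised error rate sounds negligible.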


