The Algorithmic Battlefield: An Evidence-Based Dossier on AI & Robotics in Modern Warfare and Security
Introduction
The rapid integration of Artificial Intelligence (AI) and robotics into defense, law enforcement, and national security is creating a new technological paradigm characterized by software-defined warfare, algorithmic decision-making, and the democratization of advanced capabilities. This shift presents both unprecedented opportunities for operational advantage and profound strategic, ethical, and legal challenges that current frameworks are ill-equipped to address. This dossier provides an exhaustive, evidence-based analysis of this transformation, examining the key corporate and state actors, pivotal technologies, and the emerging doctrines shaping this new era.
Key findings reveal a multifaceted and often contradictory landscape. In the United States, a new generation of defense-tech companies like Anduril Industries and Palantir Technologies is disrupting the traditional procurement model. By prioritizing agile, software-first platforms, they are delivering AI-driven capabilities to the battlefield and domestic security agencies at a pace legacy contractors cannot match. Anduril's autonomous surveillance systems are now programs of record on the U.S. border, while Palantir's data-fusion software has become a central nervous system for military intelligence, from Project Maven's automated target recognition to the front lines of the war in Ukraine.
The conflict in Ukraine serves as a brutal, real-world laboratory for these technologies. It has demonstrated the devastating effectiveness of low-cost, attritable drones, with Russia's ZALA Lancet loitering munition proving capable of neutralizing high-value, Western-supplied military assets. This has forced a tactical evolution towards a drone-centric model of attrition warfare. Concurrently, the conflict has exposed a critical strategic vulnerability: the global dependence on China's manufacturing base. China has become the indispensable supplier of both complete military drones to nations outside the Western orbit and the essential dual-use components that fuel the drone industries of both Russia and Ukraine, granting Beijing significant and deniable geopolitical leverage.
Domestically, the adoption of AI in law enforcement, spearheaded by companies like Axon Enterprise, promises greater efficiency but raises acute civil liberties concerns. AI-powered tools for report writing, real-time crime centers, and drone-as-first-responder programs are being rapidly adopted. However, the controversy surrounding Axon's proposal for a Taser-equipped drone for schools, which led to the mass resignation of its AI Ethics Board, highlights a profound societal unease with the weaponization of autonomous systems for domestic use and a dangerous gap between technological ambition and ethical governance.
This "governance gap" is a central theme of the report. The rapid, de-regulated pace of technological development is far outstripping the slow, consensus-based evolution of legal and ethical norms. International bodies like the ICRC are calling for new, legally binding treaties to regulate autonomous weapons, but progress is slow. This creates a high-risk environment where technology is deployed far ahead of the frameworks meant to control it, increasing the potential for unintended escalation, erosion of human rights, and a loss of meaningful human control over the use of force.
The strategic outlook is defined by a dangerous tension. On one hand, the pursuit of advanced human-machine teaming, exemplified by the U.S. Air Force's Collaborative Combat Aircraft program, promises a new level of operational dominance. On the other hand, the foundational technologies underpinning this vision are increasingly vulnerable to attack through GPS spoofing, adversarial AI, and compromised supply chains. The future of the algorithmic battlefield will be determined not only by who has the most advanced AI, but by who can best secure their systems against these pervasive and evolving threats.
Part I: The New Defense-Tech Ecosystem: U.S. Innovators
The 21st-century defense landscape is being reshaped not by traditional prime contractors alone, but by a new cadre of technology companies born from the ethos of Silicon Valley. These firms champion a "software-first" paradigm, prioritizing agile development, open architectures, and AI-driven operating systems over the hardware-centric, decades-long procurement cycles of the past. This section analyzes two of the most prominent pioneers of this new ecosystem: Anduril Industries and Palantir Technologies. Their rapid ascent demonstrates a fundamental shift in how military and national security capabilities are designed, procured, and deployed, challenging the established order and introducing both disruptive innovation and complex new questions of governance and control.
Chapter 1: Anduril Industries - Autonomy at the Edge
Anduril Industries has emerged as a formidable disruptor in the defense sector by applying a venture-backed, software-centric business model to national security problems. The company's core strategy revolves around its Lattice AI software platform, which functions as an intelligent operating system for a diverse and growing family of interoperable hardware systems. This approach fundamentally inverts the traditional defense model; instead of building bespoke, stove-piped systems for single-mission requirements, Anduril develops a common software backbone that enables rapid integration of new sensors and effectors, including from third parties. By focusing on autonomy at the tactical edge, Anduril aims to reduce the cognitive load on warfighters and accelerate the decision-making cycle in contested environments.
Case Study 1.1: Lattice OS - The Software-First Approach to C4ISR
The foundation of Anduril's entire product ecosystem is Lattice OS, an AI-powered software platform that serves as the command-and-control (C2) and data-fusion engine for its hardware and integrated third-party systems.1 Lattice is designed to function as a comprehensive operating system for military and security missions, ingesting data from a wide array of sensors—such as radar, electro-optical/infrared cameras, and signals intelligence collectors—and using AI, computer vision, and sensor fusion algorithms to create a single, unified operating picture.1 Its core function is to autonomously detect, classify, and track objects of interest, presenting human operators with decision points rather than raw streams of data.3
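The mechanics of such a fusion engine can be made concrete with a deliberately simplified sketch. The Python fragment below is not Lattice code; the Detection and Track types, the smoothing constants, and the greedy nearest-neighbour association are all assumptions chosen for brevity. Its purpose is to show the general pattern: many raw detections enter, and a handful of corroborated "decision points" come out.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Detection:
    sensor: str       # e.g. "radar", "eo_ir", "sigint"
    x: float          # local east coordinate, metres
    y: float          # local north coordinate, metres
    label: str        # the sensor's best-guess class
    confidence: float

@dataclass
class Track:
    x: float
    y: float
    votes: dict = field(default_factory=dict)

    def update(self, det: Detection) -> None:
        # Smooth the position estimate and accumulate per-class confidence.
        self.x = 0.7 * self.x + 0.3 * det.x
        self.y = 0.7 * self.y + 0.3 * det.y
        self.votes[det.label] = self.votes.get(det.label, 0.0) + det.confidence

def fuse(detections: list, gate_m: float = 50.0) -> list:
    """Greedy nearest-neighbour association: a detection within the gate
    distance of an existing track updates it; otherwise it opens a new track."""
    tracks = []
    for det in detections:
        nearest = min(tracks, default=None,
                      key=lambda t: math.hypot(t.x - det.x, t.y - det.y))
        if nearest and math.hypot(nearest.x - det.x, nearest.y - det.y) < gate_m:
            nearest.update(det)
        else:
            track = Track(det.x, det.y)
            track.update(det)
            tracks.append(track)
    return tracks

# Only corroborated, confidently classified tracks surface as decision points.
dets = [Detection("radar", 100, 200, "vehicle", 0.6),
        Detection("eo_ir", 110, 195, "vehicle", 0.8),
        Detection("eo_ir", 900, 40, "person", 0.3)]
for trk in fuse(dets):
    label, score = max(trk.votes.items(), key=lambda kv: kv[1])
    if score > 1.0:   # seen by more than one sensor, or at very high confidence
        print(f"DECISION POINT: {label} near ({trk.x:.0f}, {trk.y:.0f}) m")
```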
This software-first approach is a deliberate departure from the traditional defense industry model. Anduril's strategy is to build an open platform that encourages a broader development ecosystem through the use of Software Development Kits (SDKs) and Application Programming Interfaces (APIs).4 This allows partners and even competitors to build autonomous systems that can integrate with the Lattice tactical data mesh, fostering innovation at a pace that closed, proprietary systems cannot match. By creating its own vertically integrated hardware products, such as drones and autonomous submarines, Anduril demonstrates the tangible outcomes of its platform, bridging the gap between abstract software potential and real-world mission success.4 This model directly aligns with the U.S. Department of Defense's (DoD) explicit strategic goal, outlined in its 2023 AI Adoption Strategy, to pursue an "agile approach to adoption that prioritizes speed of delivery".5 Anduril's success is therefore not purely technological; it is a direct exploitation of the Pentagon's recognized need to break free from slow, requirements-heavy acquisition processes. The company is not just selling products, but a new, faster procurement pathway that the DoD is actively seeking.
Case Study 1.2: Sentry Tower - Securing the U.S. Southern Border
The Sentry Tower is a prime example of Anduril's model in action, demonstrating the rapid deployment and adoption of AI-powered surveillance for a critical national security mission. Sentry Towers are autonomous, persistent surveillance platforms designed to provide awareness across land, sea, and air.3 They are engineered for austere environments, featuring solar panels for power, on-board edge processing to reduce data backhaul, and a modular design that allows for rapid deployment by a small team in under three hours.3
The system's AI performs sensor fusion and object detection at the edge, meaning it analyzes data locally to autonomously identify and classify objects like people, vehicles, or drones, and only transmits relevant alerts to operators.3 This distinguishes it from systems that simply stream raw video, significantly reducing the cognitive burden on personnel. U.S. Customs and Border Protection (CBP) began testing the towers in 2018 and, impressed by their performance, moved to declare the system a "program of record" in 2020, with plans to field 200 towers along the Southwest border by 2022.7 This rapid transition from a startup's pilot project to an official government program is a significant departure from typical defense procurement timelines.
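This filter-at-the-source pattern is simple to express in code. The sketch below is illustrative only, with a stand-in detect() function and invented thresholds in place of Anduril's actual models: frames are analyzed locally, and only compact alerts are transmitted.

```python
CLASSES_OF_INTEREST = {"person", "vehicle", "drone"}

def detect(frame: dict) -> list:
    """Stand-in for the on-board object detector; returns (label, confidence)."""
    return frame.get("detections", [])

def edge_loop(frames, send_alert) -> None:
    for frame in frames:
        hits = [(label, conf) for label, conf in detect(frame)
                if label in CLASSES_OF_INTEREST and conf >= 0.75]
        if hits:
            # Only a compact alert leaves the tower; raw video stays local,
            # which is what keeps backhaul bandwidth and operator load low.
            send_alert({"time": frame["time"], "detections": hits})

frames = [{"time": 0, "detections": [("bird", 0.90)]},     # filtered out
          {"time": 1, "detections": [("vehicle", 0.88)]}]  # alerted
edge_loop(frames, send_alert=print)
```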
Border Patrol officials have lauded the technology as a "force multiplier" that gives agents a "significant leg up against the criminal networks".3 It is crucial to note that Anduril and CBP officials have clarified that the system is designed for wide-area object detection and tracking, not for facial recognition or collecting personally identifiable information.7 The Sentry system has multiple variants tailored to specific threats, including a Standard Range tower for ground targets, a Long Range version with powerful radar for counter-drone missions (detecting Group 1-3 threats up to 15 km away), a Maritime variant for surface vessels, and a Cold Weather version for harsh climates.3 This case study showcases Anduril's ability to quickly deliver a tailored, AI-driven solution that meets a pressing government need, securing a major contract in a market historically dominated by legacy contractors.
Case Study 1.3: Ghost & Ghost-X UAS - AI-Enabled ISR for the UK Royal Marines
Anduril's Ghost and its expanded-capability variant, Ghost-X, are modular, man-portable unmanned aerial systems (UAS) that bring the AI capabilities of the Lattice platform to the tactical edge.2 Unlike traditional drones that are remotely piloted and serve as simple flying cameras, the Ghost is designed as an autonomous intelligence, surveillance, and reconnaissance (ISR) asset. Powered by Lattice, it features AI-driven capabilities such as point-and-click mission planning, where operators can task the drone to survey an area or track an object with minimal input, and "intelligent teaming," which allows a single operator to control multiple collaborating drones for complex missions.1
The system's on-board AI leverages computer vision and sensor fusion to autonomously detect, classify, and track objects of interest even in low-bandwidth or contested environments, reducing reliance on a constant data link to a ground station.2 The Ghost-X variant offers enhanced performance with a longer endurance of 75 minutes, a greater range of 25 km, and a higher payload capacity of 9 kg.2
The Ghost platform has been notably adopted by elite military units. A member of the UK Royal Marines' 40 Commando described the system's key differentiator: "What's different with Ghost is that it's built for soldiering purposes. It's always searching, it's constantly looking. This isn't just a drone with a camera, it's AI".2 The Chief Technology Officer of the UK Royal Navy has called Anduril's AI and ISR systems "battle-winning technologies for our Future Commando Force".2 The U.S. Army's Experimentation Force has also praised the system's ease of use, noting that soldiers were able to operate it effectively after just a morning of instruction.2 Both Ghost and Ghost-X are approved on the Blue UAS Cleared List, certifying them for use by the U.S. government.2 This case illustrates the tactical value of pushing AI to the edge, empowering small units with advanced autonomous capabilities that enhance situational awareness and reduce risk.
Case Study 1.4: Menace-T/X - Deployable C4 for Austere Environments
The Menace family of systems addresses a critical challenge in modern warfare: maintaining advanced command, control, communications, and computing (C4) capabilities in remote, austere locations with limited or non-existent infrastructure. Menace-T is a compact, two-case C4 system that can be deployed by a single operator and made operational in minutes.8 It is designed to provide resilient and secure connectivity, running Anduril's Lattice Mesh software to create a local, self-healing network at the tactical edge.8
A key capability of Menace-T is its ability to host not only Anduril's software but also any third-party software stack, including computationally intensive edge AI inferencing and machine learning models.8 This allows a small, forward-deployed team to bring its entire technology stack with it, running mission-critical applications without dependence on a satellite uplink or a distant command post. Anduril describes a scenario where a reconnaissance team, previously hampered by legacy gear and unreliable satellite links, can use Menace-T to immediately establish a network, run intelligence collection software, and relay real-time targeting data to a joint operations center hundreds of miles away, turning a failed mission into a success.8
The system has already been proven in real-world deployments around the world, including in ground vehicles and on maritime vessels, enabling real-time targeting data relay in joint and coalition operations.8 The more advanced Menace-X variant has demonstrated its AI-enabled rapid targeting solutions during a U.S. Marine Corps exercise in December 2023.8 Menace represents the physical embodiment of Anduril's "autonomy at the edge" philosophy, providing the necessary compute power and connectivity to make AI-driven warfare a reality on the front lines, independent of traditional infrastructure.
Chapter 2: Palantir Technologies - The Central Nervous System of Data
Palantir Technologies occupies a unique and often controversial position in the defense and security landscape. Rather than building hardware, Palantir specializes in creating the software backbone for data fusion and AI-powered analytics. Its core products, Gotham and Foundry, are designed to ingest massive, disparate datasets from hundreds of sources and organize them into a coherent, queryable "ontology" that allows human analysts to uncover hidden patterns, relationships, and insights. Palantir has become deeply embedded in the U.S. intelligence community, military, and law enforcement agencies, functioning as a central nervous system for data-driven operations. This deep integration has made it an indispensable tool in modern warfare and counter-terrorism, but has also placed it at the center of fierce debates over surveillance, predictive policing, and civil liberties.
Case Study 2.1: Project Maven - AI for Automated Target Recognition
Project Maven, officially the Algorithmic Warfare Cross-Functional Team, is a flagship DoD initiative launched in 2017 to accelerate the military's adoption of machine learning for intelligence analysis.9 The project's primary goal is to address the overwhelming volume of full-motion video and other sensor data collected by surveillance platforms like drones and satellites, a problem often described as "drowning in data".10 Maven employs AI algorithms, particularly object recognition models, to automatically scan imagery and identify, classify, and track objects of interest such as vehicles, buildings, or individuals.9
Palantir became a central contractor for Project Maven, providing the critical data integration and visualization software that underpins the effort.9 This role became more prominent after Google, an initial partner, withdrew from the project in 2018 following internal protests from employees who objected to the company's involvement in "the business of war".11 Palantir, alongside other tech firms like Amazon Web Services and Anduril, stepped in to fill the void.9
The Maven Smart System (MSS) user interface allows analysts to view fused data from multiple sources, with AI-flagged potential targets highlighted for human review.9 The system is designed to be a "human-in-the-loop" decision support tool, capable of performing four of the six steps in the military's "kill chain": find, fix, track, and target (the final "engage" and "assess" steps remain under human authority).9 In 2024, Project Maven was operationally deployed to support U.S. airstrikes in Iraq, Syria, and Yemen, and was used to locate Houthi rocket launchers and surface vessels in the Red Sea.9 The project represents a critical step in operationalizing AI for core warfighting functions, aiming to dramatically increase the speed and scale of targeting operations.
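The human-in-the-loop boundary can be expressed as a simple software gate. The sketch below is purely structural and assumes a hypothetical Nomination type: the automated side may find, fix, track, and target, but nothing proceeds to engagement without an explicit human approval.

```python
from dataclasses import dataclass

AI_STEPS = ("find", "fix", "track", "target")   # automated decision support
HUMAN_STEPS = ("engage", "assess")              # reserved for human authority

@dataclass
class Nomination:
    track_id: int
    label: str
    confidence: float

def nominate(tracks: list, threshold: float = 0.9) -> list:
    """AI side of the chain: flag high-confidence tracks for human review."""
    return [t for t in tracks if t.confidence >= threshold]

def engage_queue(candidates: list, human_decision) -> list:
    """Every engagement passes through a human reviewer; nothing in this
    sketch can move past 'target' without an explicit approval."""
    return [c for c in candidates if human_decision(c) == "approve"]

tracks = [Nomination(1, "rocket launcher", 0.96), Nomination(2, "truck", 0.55)]
approved = engage_queue(nominate(tracks), human_decision=lambda c: "approve")
print([c.track_id for c in approved])   # -> [1]
```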
Case Study 2.2: Gotham in Ukraine - Battlefield Intelligence and Targeting
The 2022 Russian invasion of Ukraine has served as a large-scale, high-intensity proving ground for Palantir's technology. The Ukrainian military is a confirmed user of the Palantir Gotham platform, which has been described as a "pillar of Ukraine's defense against Russian aggression".13 Palantir's software integrates and analyzes vast streams of battlefield data—from commercial satellite imagery and drone feeds to signals intelligence and on-the-ground reports—to provide Ukrainian commanders with a unified, real-time picture of the battlespace.14
This capability has been instrumental in several key areas. For targeting, the platform helps identify and prioritize Russian military assets, enabling more precise and effective strikes with Ukraine's limited long-range munitions.14 The conflict has been described as an "AI war lab," with Palantir's technology at the center of this transformation, turning data into actionable intelligence at a speed and scale that would be impossible for human analysts alone.14 Beyond targeting, the platform is also used for humanitarian and logistical purposes, such as collecting and organizing evidence of Russian war crimes and assisting in the complex task of identifying and clearing landmines.14 This deployment is a powerful real-world demonstration of how a sophisticated AI data-fusion platform can provide a decisive intelligence advantage against a larger, conventional military force.
Case Study 2.3: Predictive Policing - The LAPD and New Orleans Deployments
While Palantir's military applications receive significant attention, its deployment in domestic law enforcement has generated intense controversy. The Gotham platform has been used by several police departments as a "predictive policing" system, which uses historical crime data and other inputs to forecast where and when future crimes are likely to occur, and in some cases, who is likely to be involved.11
The Los Angeles Police Department (LAPD) used Palantir software as part of a program to target "chronic offenders".15 The system employed a points-based formula to identify "probable offenders," whom analysts were then required to place under ongoing surveillance.15 Civil liberties groups, most notably the Stop LAPD Spying Coalition, heavily criticized the practice, labeling it a "racist feedback loop".15 Their argument is that because the historical arrest and crime data fed into the algorithm is a product of past policing practices—which have disproportionately targeted minority communities—the AI system will inevitably learn and amplify these biases, leading to a self-reinforcing cycle of surveillance and enforcement in those same communities.
A similar program was secretly tested in New Orleans, where Palantir's technology was used to analyze personal data, including criminal records, social media activity, and known gang affiliations, to identify individuals at risk of being involved in violent crime.16 The program was operated without public knowledge or consent, raising significant ethical concerns about mass surveillance, algorithmic fairness, and the lack of transparency and accountability in how law enforcement uses such powerful tools.16 These cases are central to the public debate over AI in policing, highlighting the profound risks of embedding algorithmic bias into law enforcement and eroding privacy rights.
Case Study 2.4: ICE Investigative Case Management - Data-Driven Immigration Enforcement
Palantir's technology is also deeply integrated into U.S. federal immigration enforcement. The company provides U.S. Immigration and Customs Enforcement (ICE) with its Gotham-based Investigative Case Management (ICM) system, which serves as a central data analysis and case management tool for the agency's various divisions.11 Palantir holds a long-term, multi-million-dollar contract for the ongoing operation and maintenance of the ICM system, which was recently renewed for a potential value of $97.7 million.11
The use of the ICM system has been linked to some of the most controversial immigration policies in recent years. Reports indicate that the system was used during operations in 2017 to target and arrest family members of unaccompanied immigrant children who had crossed the border.11 The platform was used to connect disparate data points and build cases against these individuals, leading to the arrest of 443 people in one operation.11 This deployment places Palantir's technology at the nexus of data analytics and highly contentious domestic policy, raising profound ethical questions about the role of technology companies in facilitating immigration raids and family separations. The company's work with ICE has been a focal point for protests from both human rights activists and some of Palantir's own employees.
The varied applications of the Gotham platform reveal a core element of Palantir's business strategy. The same fundamental technology is marketed for distinct, and politically charged, purposes. When used to support Ukraine's defense, it is framed as a tool for national liberation.14 When used by the LAPD or ICE, it is framed by critics as a tool for biased surveillance and oppression.11 This "dual-use" ambiguity allows the company to maximize its government contracts across a wide spectrum of agencies. However, it also demonstrates that the technology itself is agnostic; its ethical and societal impact is determined entirely by the mission and data of the user. This creates a strategic challenge for Palantir, which leverages its laudable military applications to build its brand while its more controversial domestic contracts generate significant revenue alongside considerable ethical and reputational risk.
Part II: State-Level Proliferation and Battlefield Adaptation
The proliferation of AI and robotics is not confined to the innovations of Western technology firms. State actors are rapidly developing, adapting, and deploying their own unmanned systems, fundamentally altering the dynamics of regional conflicts and global strategic competition. This section analyzes two pivotal state actors: the Russian Federation, which has demonstrated a remarkable capacity for battlefield adaptation in its use of drones in Ukraine, and the People's Republic of China, which has strategically positioned itself as the indispensable global supplier of both military-grade drones and the dual-use components that fuel the entire ecosystem.
Chapter 3: The Russian Federation - Asymmetric Drone Warfare
The war in Ukraine has served as a crucible for Russian military doctrine, forcing a rapid evolution in its use of unmanned systems. After suffering heavy losses of conventional armored and air assets in the initial phases of the invasion, Russian forces have increasingly pivoted to a drone-centric strategy of attrition. This approach leverages domestically produced, relatively low-cost loitering munitions and reconnaissance UAVs to hunt and destroy high-value Ukrainian assets, many of which are more technologically advanced and expensive Western-supplied systems. This demonstrates a sophisticated battlefield "learning loop," where tactical realities drive rapid technological iteration and doctrinal change.
Case Study 3.1: ZALA Lancet - The High-Value Target Hunter in Ukraine
The ZALA Lancet loitering munition has emerged as one of Russia's most effective new capabilities in the Ukraine conflict.17 Developed by a subsidiary of the Kalashnikov Concern, the Lancet is a "kamikaze" drone designed for precision strikes on battlefield targets.17 With a range of 40 km and the ability to carry high-explosive or anti-tank warheads, it is typically employed against high-value military targets that have been located by a separate reconnaissance drone.17
Its impact on the battlefield has been significant. Open-source intelligence trackers have documented a steadily growing number of Lancet strikes since mid-2022; as of February 2024, one tracker had recorded 1,163 strikes that destroyed or damaged hundreds of Ukrainian targets, with a particular focus on artillery systems.17 The Lancet has proven effective against a wide range of high-value equipment, including British-supplied Stormer HVM air defense systems, Ukrainian Tor SAM systems, and various self-propelled howitzers.17 In one notable instance, a Lancet strike was claimed against a Ukrainian MiG-29 fighter jet parked at an airfield approximately 70 km from the front line, suggesting that newer variants possess a greater range than initially reported.17
The British Ministry of Defence assessed in late 2023 that the Lancet was "highly likely to be one of the most effective new capabilities deployed by Russia" over the previous year.17 The drone's success exemplifies the power of asymmetric warfare: a system with an estimated export cost of around $35,000 can be used to destroy a self-propelled howitzer or air defense system worth millions of dollars.17 This forces Ukraine to expend limited and expensive air defense munitions to counter a cheap and plentiful threat. In response to Ukrainian countermeasures, Russia has already begun fielding upgraded Lancet variants with longer flight times, larger warheads, and improved resistance to electronic warfare, demonstrating a continuous cycle of battlefield adaptation.18
Case Study 3.2: Orlan-10 - The Reconnaissance-EW-Strike Triad
The Orlan-10 is the workhorse of Russia's tactical reconnaissance drone fleet. It is a relatively simple and inexpensive UAV, but its effectiveness comes from its use in a sophisticated, integrated doctrine.19 Russian forces typically deploy Orlan-10s in groups of two or three. The first drone operates at an altitude of 1-1.5 km to conduct visual reconnaissance and identify targets. The second drone often carries an electronic warfare (EW) payload to jam Ukrainian communications or GPS signals. The third acts as a data relay, transmitting intelligence from the other drones back to a ground control station.19
This "triad" forms a highly effective find-fix-track-target loop. The Orlan-10's reconnaissance capabilities are used to spot Ukrainian forces, particularly artillery positions, which are then targeted either by Russian artillery or by loitering munitions like the Lancet.19 The integrated EW capability can suppress Ukrainian defenses and communications, increasing the survivability of the drones and the effectiveness of the subsequent strike. The Orlan-10 has been a persistent feature of the conflict since the initial war in Donbas in 2014, and its widespread use in the full-scale invasion highlights Russia's mature doctrine for integrated unmanned operations at the tactical level.19 The system's resistance to jamming has been noted as a particular challenge for Ukrainian forces.19 The success of this low-cost, high-volume system, when combined with lethal effectors, underscores Russia's shift toward a drone-centric model of attrition warfare that has proven difficult and costly to counter.
Chapter 4: The People's Republic of China - The Global Drone Superstore
The People's Republic of China has executed a sophisticated, multi-layered strategy to achieve a dominant position in the global drone ecosystem. This strategy operates on two parallel tracks. First, China exports advanced, military-grade Unmanned Combat Aerial Vehicles (UCAVs), such as the Wing Loong and CH-4B Rainbow, to a growing list of countries, particularly those in the Middle East, Africa, and Asia that face restrictions on purchasing Western systems. This serves as a tool of defense diplomacy and geopolitical influence. Second, and perhaps more strategically significant, China leverages its unparalleled commercial manufacturing dominance to control the global supply chain for dual-use drone components, creating a critical dependency for nations around the world—including both combatants in the Ukraine war.
Case Study 4.1: AVIC Wing Loong II - Combat Deployments in the Middle East & Africa
The Wing Loong II, developed by the Aviation Industry Corporation of China (AVIC), is a Medium-Altitude Long-Endurance (MALE) UCAV that is visually and functionally comparable to the U.S. MQ-9 Reaper.21 It is capable of carrying a 480 kg payload of bombs and missiles and has an endurance of up to 32 hours.21 The drone has seen significant export success, with confirmed operators including the United Arab Emirates (UAE), Saudi Arabia, Pakistan, Egypt, Nigeria, and Morocco.21
The Wing Loong II has a documented combat record. The UAE, the drone's launch customer, deployed the system extensively during the Libyan civil war to conduct airstrikes in support of the Libyan National Army (LNA).21 These strikes were reportedly responsible for destroying a significant number of Turkish-supplied Bayraktar TB2 drones operated by the opposing Government of National Accord (GNA).21 The Royal Saudi Air Force has also heavily utilized its Wing Loong II fleet for both ISR and precision strike missions in the Yemeni Civil War, logging over 5,000 flight hours as of May 2024.22 The drone's proliferation illustrates China's success in filling a market vacuum created by strict U.S. export controls on armed drones, allowing Beijing to build defense partnerships and project influence in strategically vital regions.
Case Study 4.2: CASC CH-4B Rainbow - Proliferation and Operational Record
The CH-4B Rainbow, produced by the China Aerospace Science and Technology Corporation (CASC), is another Reaper-like UCAV that has been widely exported.23 It can carry a payload of up to 345 kg and is capable of firing air-to-ground missiles from an altitude of 5,000 meters, keeping it outside the range of many short-range air defense systems.23 Its list of operators includes Iraq, Jordan, Saudi Arabia, the UAE, Nigeria, and most recently, the Democratic Republic of Congo (DRC), which acquired the drones to combat M23 rebels.25
The CH-4B's operational history highlights both its appeal and its potential drawbacks. The Iraqi Air Force used its fleet to conduct over 260 strikes against ISIS targets with a reported success rate of nearly 100 percent between 2015 and 2018.27 This demonstrates the drone's effectiveness, particularly in counter-insurgency operations. However, the Iraqi fleet has been plagued by severe maintenance and reliability issues. A 2019 U.S. government report stated that only one of Iraq's "more than 10" CH-4B drones was fully mission-capable, severely hampering their ISR capacity.27 Similarly, the Royal Jordanian Air Force was reportedly so dissatisfied with the drone's performance that it put its entire fleet up for sale in 2019.27 The CH-4B case study underscores the trade-offs inherent in Chinese military hardware: it is affordable and readily available, but questions remain regarding its long-term reliability, sustainment, and logistical support.
Case Study 4.3: DJI - The Dual-Use Dilemma and "Chinese Military Company" Designation
SZ DJI Technology is the world's largest manufacturer of commercial drones, dominating the consumer and prosumer market.29 Despite the company's official stance that it does not produce military equipment and actively discourages the combat use of its products, its drones have become ubiquitous on the battlefield.30 Both Ukrainian and Russian forces have extensively used off-the-shelf DJI drones, such as the Mavic series, for tactical reconnaissance, artillery spotting, and even as makeshift bombers by dropping small munitions.31
This widespread military use has placed DJI at the center of a complex geopolitical and regulatory dilemma. The U.S. Department of Defense has officially designated DJI as a "Chinese Military Company," citing its status as a "civil-military fusion contributor" under Chinese national strategy.31 While DJI vehemently disputes this designation and is appealing the decision, the label has led to restrictions on the procurement and use of its products by U.S. federal agencies.31 DJI epitomizes the challenge of dual-use technology in the modern era. A commercially available product has proven to be a highly effective and transformative tool of war, blurring the lines between civilian and military technology and creating immense difficulties for governments attempting to regulate its use and mitigate potential security risks associated with data collection and foreign control.
Case Study 4.4: The Russia-Ukraine Supply Chain - China as a Decisive Enabler
Perhaps the most significant aspect of China's role in the global drone landscape is its dominance of the component supply chain. While Beijing has officially restricted the export of complete military drone systems to Russia and Ukraine, an investigation of trade data reveals that China has become a decisive enabler of Russia's war effort by supplying a massive volume of critical dual-use components.29
Chinese exports to Russia of key items needed for drone manufacturing—such as fiber-optic cables, lithium-ion batteries, motors, and electrical control panels—have surged since the 2022 invasion.29 This flow of components has been instrumental in allowing Russian drone manufacturers to scale up production of systems like the Lancet and various FPV (first-person view) drones.29 At the same time, Ukraine's own burgeoning domestic drone industry remains highly dependent on Chinese components. A 2024 report found that nearly 89 percent of Ukraine's UAS-related imports by value still originated from China, including critical items like flight controllers, thermal sensors, and batteries for which there are few affordable Western alternatives.33
This situation places Beijing in a uniquely powerful strategic position. By controlling the foundational elements of the drone supply chain, China can indirectly influence the operational capacity of both combatants, all while maintaining a formal position of neutrality. This highlights a critical strategic vulnerability for any nation—including the United States and its allies—that relies on the globalized electronics market, as the supply chain for many critical defense components ultimately traces back to China.
Part III: The AI-Enabled Law Enforcement Paradigm
The integration of AI and robotics is not limited to the battlefield; it is also rapidly transforming domestic law enforcement. Axon Enterprise, the company best known for the TASER energy weapon, has strategically repositioned itself as a technology firm providing a comprehensive, interconnected ecosystem of AI-powered tools for police agencies. This ecosystem includes body-worn cameras, digital evidence management software, real-time crime center platforms, and unmanned aerial systems. While these technologies are marketed as tools to enhance officer efficiency, safety, and transparency, their deployment has ignited fierce debates about surveillance, algorithmic bias, and the appropriate limits of technology in policing, culminating in a high-profile controversy over the proposed use of armed drones.
Chapter 5: Axon Enterprise - The Automation of Policing
Axon's strategy is to create a deeply integrated, proprietary platform that captures, manages, and analyzes data at every stage of a police interaction. From the body camera that records an incident to the cloud software that stores the evidence and the AI that helps write the report, Axon aims to be the indispensable operating system for modern policing. This approach creates powerful efficiencies for law enforcement but also concentrates immense power and data in the hands of a single corporation, raising critical questions about public oversight, accountability, and the potential erosion of civil liberties.
Case Study 5.1: Draft One - Generative AI for Police Report Writing
One of the most time-consuming tasks for patrol officers is writing incident reports, a process that can consume up to 40% of their time on shift.34 To address this, Axon developed Draft One, a generative AI tool designed to automate the initial drafting of these reports.34 The system uses the audio transcript from an officer's Axon body-worn camera to produce a draft narrative of the incident. Axon claims the tool can cut report writing time by more than half, saving officers an average of 6-12 hours per week.35
The company emphasizes that the system is designed with a "human in the loop" safeguard; the AI-generated text is only a draft that must be reviewed, edited, and officially approved by the human officer before it can be submitted.34 This is intended to ensure accuracy and maintain officer accountability. Draft One represents a practical application of large language models to a mundane but critical police function. However, its use raises important questions regarding the accuracy of AI-powered transcription and summarization, the potential for the AI to introduce or omit crucial details that could affect a legal case, and whether it could subtly influence an officer's recollection of an event.
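The general pattern (transcript in, draft out, mandatory sign-off) can be sketched in a few lines. This is an illustration of the architecture as described, not Axon's implementation; the prompt wording, the DraftReport type, and the llm callable are all assumptions.

```python
from dataclasses import dataclass

PROMPT_TEMPLATE = """You are drafting a police incident report.
Use ONLY facts present in the transcript below; do not infer or embellish.
Mark anything unclear as [INAUDIBLE] rather than guessing.

Transcript:
{transcript}
"""

@dataclass
class DraftReport:
    text: str
    approved: bool = False   # stays False until the officer edits and signs

def draft_report(transcript: str, llm) -> DraftReport:
    # `llm` is any text-in, text-out callable; the draft is never auto-filed.
    return DraftReport(text=llm(PROMPT_TEMPLATE.format(transcript=transcript)))

def submit(report: DraftReport, officer_signature: str) -> dict:
    if not report.approved:
        raise PermissionError("Draft must be reviewed and approved by an officer.")
    return {"narrative": report.text, "signed_by": officer_signature}
```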
Case Study 5.2: Axon Air & DFR - Drones as First Responders
Axon Air is the company's integrated solution for deploying drones in public safety, with a particular focus on the "Drone as First Responder" (DFR) model.36 A DFR program enables a police agency to launch a drone from a centralized location—often the police station rooftop—the moment a 911 call is received. The drone can then fly to the incident location, often arriving before ground units, and stream live video back to the command center and to the mobile devices of responding officers.36
The stated goal is to provide critical, real-time situational awareness that can help de-escalate situations, improve officer safety, and lead to better decision-making. For example, a drone arriving at a potential active shooter scene can provide information on the suspect's location and armament before officers make entry. Axon's platform integrates the drone's video evidence directly into its Axon Evidence cloud system to maintain a chain of custody and includes a public-facing "Transparency Dashboard" to share flight logs and other data with the community.36 While DFR programs offer clear potential benefits for emergency response, they also trigger significant privacy concerns about the potential for persistent, widespread aerial surveillance of communities.
Case Study 5.3: Fusus & ALPR - Real-Time Crime Centers and AI Surveillance
Axon Fusus is a platform that enables the creation of "real-time crime centers" by using AI to fuse and analyze data from a vast network of disparate sources.34 Fusus can integrate video feeds from public-owned cameras, private business security systems, and even residential doorbell cameras into a single operational picture for law enforcement. This allows police to track suspects or monitor events across a wide area in real time.
This data fusion is often combined with other AI-powered surveillance tools, such as Axon's Fleet 3 in-car camera system, which includes high-speed Automatic License Plate Recognition (ALPR).34 The AI-driven ALPR can scan thousands of license plates per shift across multiple lanes of traffic, automatically checking them against law enforcement hotlists for stolen vehicles or wanted persons.34 While these technologies can be powerful investigative tools, their combination in a real-time crime center raises profound Fourth Amendment questions. The aggregation of ALPR data can create detailed logs of a person's movements over time, and the integration of private camera networks effectively deputizes privately owned surveillance equipment for government use, often without a warrant or public knowledge.
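The hotlist-matching step itself is almost trivially simple, which is part of what makes the scale of the practice notable. The sketch below uses invented plate data and a naive OCR cleanup rather than any Axon interface:

```python
HOTLIST = {"8ABC123": "stolen vehicle", "5XYZ789": "wanted person"}

def normalize(plate_text: str) -> str:
    # OCR reads are noisy: strip spacing/punctuation, then apply a naive
    # O/0 disambiguation purely for illustration.
    cleaned = "".join(ch for ch in plate_text.upper() if ch.isalnum())
    return cleaned.replace("O", "0")

def check_plate(plate_text: str):
    return HOTLIST.get(normalize(plate_text))

for read in ["8abc 123", "7QRS456"]:
    print(read, "->", check_plate(read) or "no hit")
```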
Case Study 5.4: The Taser Drone Controversy - A Failure of Corporate Governance
In June 2022, in the aftermath of the tragic school shooting in Uvalde, Texas, Axon CEO Rick Smith publicly announced a radical proposal: the development of a Taser-equipped drone system that could be pre-positioned in schools and other public venues to remotely incapacitate active shooters.37 The announcement, which included a graphic novel depicting the concept, envisioned an AI-powered system that could detect the sound of gunfire, alert authorities, and allow a human operator to deploy and fire the drone.40
The proposal immediately blindsided and horrified members of Axon's own independent AI Ethics Board.37 For over a year, the board had been cautiously deliberating a much narrower proposal, known as Project Ion, for a Taser drone to be used only by police in specific, high-risk scenarios to avoid using lethal force.38 The board had ultimately voted 8-4 to recommend that Axon not proceed even with that limited pilot, citing significant concerns that the technology would increase the overall use of force, would be prone to abuse, and would dehumanize the act of using force by making it feel like a video game.38
Smith's unilateral decision to announce a far more expansive and controversial version of the project—deploying armed drones in schools—without consulting the board was seen as a profound breach of trust. Nine of the board's twelve members, including its facilitators from the Policing Project at NYU School of Law, resigned in protest.37 In their public resignation letter, they stated that the school drone proposal would require "AI-powered persistent surveillance" and had "no realistic chance of solving the problem of mass shootings," while warning that it would disproportionately harm communities of color.37 The immense public and internal backlash forced Axon to "pause" the project.40 This case stands as a landmark failure of corporate governance and ethical oversight, starkly illustrating the societal dangers of "tech-solutionism" and the deep-seated public resistance to the deployment of weaponized autonomous systems in domestic settings.
The suite of products offered by Axon points to a deliberate strategy of creating a closed, fully integrated ecosystem for law enforcement. While tools like Draft One or Axon Air are marketed as standalone solutions for efficiency or safety, their true power—and the source of the greatest civil liberties risk—is realized through their integration. The Taser drone concept revealed the ultimate vision: a seamless system where an AI sensor (gunshot detection) triggers an autonomous platform (a DFR drone) armed with a company weapon (a Taser), with the entire event streamed to a centralized command center (Fusus) and the data stored on a proprietary cloud (Axon Evidence). This "operating system for policing" creates powerful vendor lock-in for police agencies and concentrates an unprecedented amount of surveillance capability and public data under the control of a single corporation, making robust, independent public oversight more critical and more difficult than ever.
Part IV: Strategic & Technological Frontiers
Beyond the systems currently deployed, military planners and technologists are actively developing the next generation of AI-powered capabilities that will define the future of warfare. This frontier is characterized by a push towards greater autonomy and human-machine teaming, as seen in ambitious air combat programs. Simultaneously, the increasing reliance on these complex, networked systems creates new vulnerabilities. This section explores these cutting-edge developments, from the U.S. Air Force's vision for "loyal wingman" drones to the emerging technological battlegrounds of counter-drone warfare, electronic attacks on navigation systems, adversarial AI, and the strategic weaknesses in the global defense supply chain.
Chapter 6: The Future of Air Combat
The paradigm of air superiority is shifting from a focus on the performance of individual manned aircraft to the collective capability of networked teams of manned and unmanned systems. This evolution is driven by advances in AI, autonomous flight control, and data links, enabling a new doctrine of distributed and collaborative airpower.
Case Study 6.1: The Collaborative Combat Aircraft (CCA) Program
The Collaborative Combat Aircraft (CCA) program, colloquially known as the "loyal wingman" project, is a cornerstone of the U.S. Air Force's Next Generation Air Dominance (NGAD) initiative.43 The program aims to develop relatively low-cost, autonomous unmanned combat aerial vehicles (UCAVs) designed to operate in close collaboration with manned sixth-generation fighters and bombers like the B-21 Raider.43 These AI-piloted aircraft are intended to be "attritable"—affordable enough to be risked in high-threat environments where losing a multi-million-dollar manned fighter would be unacceptable.
The roles envisioned for CCAs are diverse. They can act as forward sensors, extending the sensor range of the manned aircraft; as weapons carriers, launching munitions under the direction of the human pilot; or as electronic warfare platforms, jamming enemy radars and communications.43 This human-machine teaming concept elevates the human pilot to the role of a mission commander or "quarterback," orchestrating the actions of their autonomous wingmen rather than directly flying a single aircraft.44 The USAF plans to invest heavily in this vision, with a projected spend of over $8.9 billion on CCA programs from fiscal years 2025 to 2029.43 A significant technical and doctrinal challenge remains in developing the human-machine interface and the processes for training and "debriefing" the AI agents after missions to ensure continuous learning and improvement.44 The CCA program represents a fundamental commitment to human-machine teaming as the central pillar of future air combat.
Case Study 6.2: MQ-9 Reaper with Gorgon Stare - Wide-Area Persistent Surveillance
While the CCA represents the future, the Gorgon Stare system, deployed on the existing MQ-9 Reaper drone, represents the maturation of the data collection capabilities that will fuel future AI.45 Designated by the Air Force as a "wide-area surveillance sensor system," Gorgon Stare is a pod-mounted spherical array of cameras designed to overcome the narrow "soda straw" view of traditional drone sensors.45
The second increment of the system uses an array of 368 individual cameras to capture motion imagery of an entire city-sized area—up to 100 square kilometers—from an altitude of 25,000 feet.45 This creates a massive, persistent dataset that allows analysts to monitor all activity within the area of regard. If an event of interest occurs, such as an IED explosion, analysts can effectively "go back in time" to see who entered the area beforehand and track where they came from or where they went afterward. The sheer volume of data generated—several terabytes per minute—makes manual analysis impossible.45 Consequently, Gorgon Stare is intrinsically linked to AI analysis tools like those developed under Project Maven, which can automatically process the imagery to detect and track objects of interest.45 Gorgon Stare is a powerful, mature example of the "collect everything" approach to ISR, creating the data-rich environment in which future autonomous systems will be trained and operate.
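The retrospective query that enables this "go back in time" analysis is conceptually straightforward once tracks have been extracted and archived. The sketch below uses a hypothetical archive with coordinates in metres:

```python
import math

# Archived track points: (track_id, t_seconds, x_m, y_m). Hypothetical data.
ARCHIVE = [
    (1, 100, 500, 500), (1, 200, 520, 480), (1, 300, 1000, 900),
    (2, 150, 5000, 5000), (2, 250, 5100, 5050),
]

def who_was_there(event_xy, event_t, radius_m=100.0, lookback_s=300.0):
    """Return track IDs seen within radius_m of the event location during
    the lookback window before the event: the retrospective ISR query."""
    ex, ey = event_xy
    return sorted({tid for tid, t, x, y in ARCHIVE
                   if event_t - lookback_s <= t <= event_t
                   and math.hypot(x - ex, y - ey) <= radius_m})

print(who_was_there((510, 490), event_t=320))   # -> [1]
```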
Chapter 7: The Counter-UAS Imperative
The proliferation of small, cheap, and capable drones, starkly demonstrated in the Ukraine conflict, has made Counter-Unmanned Aerial Systems (C-UAS) a critical and rapidly growing sector of the defense industry. The threat ranges from small commercial quadcopters used for reconnaissance to sophisticated loitering munitions. Defending against this diverse threat requires a layered approach that integrates multiple sensor types and both kinetic and non-kinetic effectors.
Case Study 7.1: Dedrone by Axon - AI-Powered Airspace Security
Dedrone, now part of Axon, represents the new generation of C-UAS providers focused on a software-centric, sensor-agnostic approach.47 The core of the Dedrone system is an AI/ML-powered command and control platform that fuses data from a variety of sensors to provide a comprehensive picture of the local airspace.48 The system typically uses passive radio frequency (RF) sensors to detect drone control signals, which can often identify the drone model and locate the pilot.48 This data is then correlated with inputs from radar, acoustic sensors, and electro-optical/infrared cameras to track the drone's flight path.49
The AI software is crucial for distinguishing actual drone threats from other objects (like birds) to minimize false positives.47 Once a threat is confirmed, the system can automatically trigger a range of mitigation measures, depending on the legal and operational environment. These can include precision jamming to disrupt the drone's control link or GPS signal, or cueing a kinetic effector to physically intercept the drone.48 Dedrone's technology is used to protect a wide range of sensitive sites, including military installations, airports, critical infrastructure, and large public events like state fairs.47
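The multi-modality confirmation logic can be caricatured in a few lines. The signature table, frequencies, and thresholds below are invented for illustration and do not describe Dedrone's actual classifiers; the structural point is that an alert requires agreement between sensors:

```python
SIGNATURES = {   # (centre frequency MHz, hop interval ms) -> drone family
    (2437, 7): "quadcopter family A",
    (5745, 2): "FPV family B",
}

def classify_rf(freq_mhz: float, hop_ms: float):
    """Match an observed control-link burst against the signature library."""
    return SIGNATURES.get((round(freq_mhz), round(hop_ms)))

def confirmed_threat(rf_label, radar_track: bool) -> bool:
    # Requiring agreement between modalities suppresses the classic false
    # positives: birds show up radar-only, Wi-Fi gear shows up RF-only.
    return rf_label is not None and radar_track

label = classify_rf(2437.2, 7.3)
print(label, confirmed_threat(label, radar_track=True))
```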
Case Study 7.2: Northrop Grumman's FAAD C2 - An Integrated C-UAS Architecture
Legacy defense prime contractors like Northrop Grumman are also heavily invested in the C-UAS mission, often leveraging their existing, battle-proven systems as the foundation for a modern solution.50 Northrop Grumman's approach is built around the Forward Area Air Defense Command and Control (FAAD C2) system, which has been the U.S. Army's program of record for short-range air defense (SHORAD) for decades.50 The DoD has selected FAAD C2 as its interim system of choice for the C-sUAS mission.
This approach emphasizes a layered, "system of systems" architecture. The FAAD C2 software uses an open architecture to integrate a wide variety of "any-sensor, best-effector" combinations.50 This could include advanced radars like the AN/TPS-80 G/ATOR for detection and tracking, paired with effectors ranging from medium-caliber cannons firing proximity-fuzed ammunition to electronic warfare jammers or directed energy weapons.50 The company also offers mobile, fully integrated solutions like the M-ACE (Mobile - Acquisition, Cueing and Effector) system, which packages sensors and effectors onto a vehicle platform for on-the-move protection.50 This case study illustrates the strategy of adapting a proven, government-certified C2 system to meet the evolving C-UAS threat, contrasting with the more software-native approach of newer companies like Dedrone.
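The "any-sensor, best-effector" pairing reduces, at its core, to a constrained selection problem. The sketch below is a caricature with invented effectors, ranges, and costs, not FAAD C2 logic:

```python
EFFECTORS = [   # (name, max_range_km, cost_per_shot_usd, effective_against)
    ("30mm proximity-fuzed cannon", 2.0, 500, {"group1", "group2"}),
    ("EW jammer", 5.0, 10, {"group1", "group2", "group3"}),
    ("interceptor missile", 15.0, 150_000, {"group2", "group3"}),
]

def best_effector(threat_group: str, range_km: float):
    """Pick the cheapest effector that can reach and affect the threat."""
    options = [e for e in EFFECTORS
               if range_km <= e[1] and threat_group in e[3]]
    return min(options, key=lambda e: e[2], default=None)

print(best_effector("group3", 4.0))    # -> EW jammer
print(best_effector("group3", 12.0))   # -> interceptor missile
```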
Chapter 8: Emerging Threats and Vulnerabilities
The increasing reliance on autonomous and networked systems creates a corresponding increase in the attack surface available to adversaries. The very technologies that enable these new capabilities—GPS for navigation, machine learning for perception, and globalized supply chains for hardware—are themselves sources of critical vulnerabilities. Securing the algorithmic battlefield requires defending not just against physical attacks, but against sophisticated electronic, cyber, and supply chain threats.
Case Study 8.1: GPS Spoofing - Compromising UAV Navigation
The Global Positioning System (GPS) is the primary means of navigation for the vast majority of military and commercial drones. However, the unencrypted and unauthenticated nature of civilian GPS signals makes them highly susceptible to jamming and, more insidiously, spoofing attacks.51 In a spoofing attack, an adversary broadcasts counterfeit GPS signals that are more powerful than the authentic satellite signals. A target UAV's receiver locks onto these fake signals, allowing the attacker to manipulate the drone's understanding of its own position, velocity, and time.51
This can be used to achieve a range of malicious outcomes: an attacker could subtly alter a drone's flight path to guide it into enemy territory for capture, feed it false coordinates to make it crash into terrain, or force it to land by tricking it into believing it has entered a no-fly zone.54 The widespread availability of low-cost software-defined radios (SDRs) has made creating a GPS spoofer relatively simple.51 The conflict in Ukraine has seen extensive use of both GPS jamming and spoofing, making it a real-world battlefield threat. Consequently, a major area of defense research is focused on developing robust detection mechanisms, such as using machine learning to identify anomalies in GPS signals or cross-referencing GPS data with other navigation sources like inertial measurement units (IMUs) or alternative positioning systems.52
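One widely studied defence mentioned above, cross-referencing GPS against inertial dead reckoning, reduces to a divergence test. The sketch below uses invented numbers and a fixed gate; a fielded system would use a Kalman filter with calibrated noise models rather than this toy logic:

```python
import math

def dead_reckon(pos, velocity, dt):
    """Propagate position from IMU-derived velocity (metres, m/s, seconds)."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def spoof_alarm(gps_fixes, imu_velocities, dt=1.0, gate_m=15.0, patience=3):
    """Alarm when GPS departs from the inertial prediction for several epochs."""
    est = gps_fixes[0]
    strikes = 0
    for fix, vel in zip(gps_fixes[1:], imu_velocities):
        est = dead_reckon(est, vel, dt)
        innovation = math.hypot(fix[0] - est[0], fix[1] - est[1])
        strikes = strikes + 1 if innovation > gate_m else 0
        if strikes >= patience:
            return True     # sustained divergence: treat GPS as untrusted
        est = fix           # otherwise fold the fix back into the estimate
    return False

# A spoofer dragging the reported track 30 m per epoch off a 10 m/s course:
imu_vel = [(10.0, 0.0)] * 5
spoofed = [(0, 0), (10, 0), (40, 30), (70, 60), (100, 90), (130, 120)]
print(spoof_alarm(spoofed, imu_vel))   # -> True
```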
Case Study 8.2: Adversarial AI - The Next Frontier of Cyber-Warfare
As machine learning models become integral to critical military functions like target recognition, they also become targets themselves. The field of adversarial machine learning explores how to deliberately fool these models.55 An adversarial attack involves creating a specially crafted input—such as an image with a subtle, human-imperceptible pattern of noise—that causes an AI model to misclassify it with high confidence.57 For example, an adversarial patch placed on top of a tank could cause a target recognition AI, like that used in Project Maven, to classify it as a friendly ambulance.
This vulnerability poses a profound security threat to any military system that relies on AI for perception and decision-making. These attacks can be digital (manipulating sensor data as it is processed) or physical (creating real-world objects or camouflage designed to deceive AI cameras).57 The military and intelligence communities are actively funding research into this area, both to understand the offensive potential and to develop more robust defenses.55 Defensive techniques include "adversarial training," where models are trained on a dataset that includes adversarial examples to make them more resilient.52 DARPA's Assured Neuro-Symbolic Learning and Reasoning (ANSR) program is exploring hybrid AI approaches that combine neural networks with symbolic reasoning to create systems that are more trustworthy and less susceptible to these attacks.58 As AI becomes more autonomous, the ability to conduct or defend against adversarial attacks will become a critical element of electronic and cyber warfare.
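The core attack is easy to demonstrate on a toy model. The numpy sketch below runs the fast gradient sign method (FGSM), the canonical adversarial-example technique, against a logistic-regression stand-in and then applies a single adversarial-training update; it is a pedagogical miniature of the image-model attacks described above, not any fielded system:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                      # weights of a toy linear classifier

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps=0.1):
    """Perturb x by eps in the direction that increases the logistic loss."""
    grad_x = (sigmoid(w @ x) - y) * w        # dLoss/dx for logistic loss
    return x + eps * np.sign(grad_x)

x = rng.normal(size=20)
y = 1.0                                      # true class of this input
x_adv = fgsm(x, y, w)
print("clean score:", round(float(sigmoid(w @ x)), 3),
      "adversarial score:", round(float(sigmoid(w @ x_adv)), 3))

# Adversarial training: update the model on the *perturbed* input so it
# learns to resist the attack (a single illustrative gradient step).
def train_step(w, x, y, lr=0.5, eps=0.1):
    x_hard = fgsm(x, y, w, eps)
    grad_w = (sigmoid(w @ x_hard) - y) * x_hard
    return w - lr * grad_w

w = train_step(w, x, y)
print("score on the old adversarial input after training:",
      round(float(sigmoid(w @ x_adv)), 3))
```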
Case Study 8.3: U.S. Defense Supply Chain - The Risk of Foreign Dependency
The most advanced AI software and robotic systems are useless without the specialized hardware on which they run, particularly microelectronics. This creates a strategic-level vulnerability for the U.S. defense industrial base, which has become heavily reliant on globalized, and often opaque, supply chains.59 A September 2018 DoD report identified this "foreign dependency" as a major risk to national security.60
The risks are twofold. First, an adversarial nation like China, which dominates many areas of electronics manufacturing and rare earth mineral processing, could cut off U.S. access to critical components during a crisis, crippling the production of advanced weapon systems.60 Second, components sourced from adversarial nations could be intentionally compromised with hidden "back doors" or kill switches, allowing for intelligence gathering or sabotage.60 A documented cyberattack on QinetiQ, a U.S. defense contractor that supplies bomb-disposal robots, was attributed by investigators to a unit of the Chinese People's Liberation Army (PLA).62
The DoD's challenge is compounded by a severe lack of visibility into its lower-tier supply chains.60 While a prime contractor may be American, the countless sub-components within a system—from processors to capacitors—may originate from suppliers around the world, including in China. Prioritizing low cost over security and planning for a peacetime environment of free trade have exacerbated these vulnerabilities.59 Addressing this deep-seated risk requires a whole-of-government effort to map critical supply chains, onshore production of key technologies, and build resilient partnerships with trusted allies.
The confluence of these trends reveals a dangerous paradox at the heart of modern military technology. The very drive toward more interconnected, data-driven, and autonomous systems, as exemplified by the CCA program, creates an exponentially larger and more complex attack surface. Every sensor, every data link, every processor, and every line of code becomes a potential point of failure that can be targeted by GPS spoofing, adversarial AI, or a compromised microchip. The immense strategic advantage promised by the algorithmic battlefield is therefore directly contingent on solving these foundational vulnerabilities in navigation, perception, and supply chain security—a challenge that is currently far from resolved.
Part V: The Regulatory, Legal, and Ethical Battlefield
The rapid technological advancements in AI and robotics have far outpaced the development of corresponding legal, ethical, and regulatory frameworks. This has created a "governance gap," where powerful new capabilities are being developed and deployed in a normative vacuum. This section examines the complex and often contentious efforts to apply existing laws and develop new ones to govern these technologies. It analyzes the applicability of International Humanitarian Law (IHL) to autonomous weapons, the policy responses of national and international bodies like the U.S. DoD and NATO, and the stark warnings issued by human rights and civil liberties organizations.
Chapter 9: International Law and Autonomous Weapons
The prospect of weapons that can independently select and engage targets without direct human intervention raises fundamental questions for the laws of war. The international community is grappling with whether existing legal frameworks are sufficient to regulate these systems or if new, specific prohibitions and restrictions are urgently required.
Analysis of International Humanitarian Law (IHL) Applicability
It is undisputed that any weapon system, including an Autonomous Weapon System (AWS), must be capable of being used in accordance with International Humanitarian Law (IHL).63 The core principles of IHL—distinction (distinguishing between combatants and civilians), proportionality (ensuring incidental civilian harm is not excessive in relation to the military advantage gained), and precaution (taking all feasible precautions to avoid civilian harm)—must be upheld in any attack.63
However, AWS pose a fundamental challenge to the application of these principles. IHL is built on the assumption of human judgment. A human commander is responsible for making the complex, context-dependent assessments required by distinction and proportionality.65 With an AWS, the human user is removed in time and space from the specific act of applying force; they activate a system that will later select and engage targets based on a pre-programmed "target profile".64 This creates what is widely termed the "responsibility gap" or "accountability gap": if an AWS conducts an unlawful strike, it is unclear who is legally and morally responsible. Is it the commander who activated it, the programmer who wrote the code, the manufacturer who built it, or the system itself? This lack of clear accountability undermines a core tenet of the legal architecture of armed conflict.65
The Role of the ICRC and the Call for New Legally Binding Rules
The International Committee of the Red Cross (ICRC), as the guardian of IHL, has taken a firm position that existing laws are insufficient to address the unique risks posed by AWS. The ICRC has officially recommended that states adopt new, legally binding international rules to regulate these systems.64
The ICRC's proposal is twofold. First, it calls for a prohibition on certain types of AWS. This includes unpredictable autonomous weapons whose effects cannot be sufficiently understood or explained, and systems that are designed or used to apply force directly against persons.67 The ethical argument here is that decisions over life and death have a moral and human dimension that should never be delegated to a machine.64 Second, for all other types of AWS (e.g., those designed to target military objects), the ICRC recommends strict regulations. These would include limits on the types of targets, the duration and geographical scope of their use, and requirements for human supervision and the ability to intervene and deactivate the system.66 The ICRC is convinced that only a new international treaty can provide the necessary clarity and reinforcement of IHL to prevent an erosion of protections for those affected by armed conflict.64
Article 36 Weapon Reviews: Process and Challenges
A key mechanism within existing IHL is the requirement under Article 36 of Additional Protocol I to the Geneva Conventions for states to conduct a legal review of any "new weapon, means or method of warfare" before it is acquired or deployed.69 This review must determine whether the weapon's use would be prohibited by international law in some or all circumstances.71
However, organizations like the Stockholm International Peace Research Institute (SIPRI) have highlighted the unique challenges that AWS pose to the Article 36 review process.69 Reviewing a conventional weapon involves assessing predictable physical effects. Reviewing an AWS requires assessing the behavior of a complex, software-driven system that may act in unpredictable ways, especially when interacting with a dynamic and unstructured battlefield environment. This demands a much more complex and resource-intensive testing and evaluation process to understand the risks of unintended engagements or loss of control.69 Given that many states lack a formal Article 36 review process, and even those that do may lack the technical expertise to evaluate complex AI, there is a strong case for greater international cooperation and the sharing of best practices to ensure these reviews are meaningful.72
Chapter 10: National Policies and Civil Liberties
As the debate continues at the international level, key military powers and alliances are developing their own policies to guide the development and use of AI. At the same time, civil liberties and human rights organizations are raising alarms about the domestic use of these technologies, particularly in surveillance and law enforcement.
The U.S. DoD AI Adoption Strategy and Ethical Principles
The U.S. Department of Defense has publicly committed to a path of responsible AI adoption. The 2023 DoD Data, Analytics, and AI Adoption Strategy explicitly orients the Department's efforts toward an "AI Hierarchy of Needs," with "Responsible AI" at its apex.5 This framework is meant to ensure that the design, development, and use of AI capabilities are consistent with the DoD's AI Ethical Principles, which include concepts like responsibility, equitability, traceability, reliability, and governability.
Senior defense officials have emphasized this commitment. Deputy Secretary of Defense Kathleen Hicks, in unveiling the strategy, stated, "We've worked tirelessly for over a decade to be a global leader in the... responsible development and use of AI technologies in the military sphere" and stressed that "Safety is critical because unsafe systems are ineffective systems".6 The U.S. has also promoted a political declaration at the international level on the responsible military use of AI to build norms around the technology.6 This policy framework signals a clear intent to integrate ethical considerations into the military's AI development lifecycle, though the practical implementation of these high-level principles in complex, real-world systems remains a significant challenge.
NATO's Approach to AI and Robotics
The North Atlantic Treaty Organization (NATO) recognizes that AI is set to revolutionize warfare and is actively working to develop a coherent Alliance-wide approach.73 A 2024 NATO Parliamentary Assembly report highlights the far-reaching potential of AI in military systems but also acknowledges the significant hurdles.74 Key challenges identified include adjusting procurement processes to accommodate rapidly evolving technology, ensuring interoperability between the systems of 32 different member nations, and navigating the profound ethical and legal questions raised by AI-powered weapons.74 NATO's strategy focuses on supporting an innovation ecosystem to make AI accessible to armed forces, enhancing cooperation with partners like the European Union, and continuing the development of common standards for the ethical use of AI.74
ACLU and Amnesty International's Concerns Regarding Surveillance and Lethality
In stark contrast to the qualified embrace of AI by military organizations, leading human rights and civil liberties groups have issued strong warnings against its use, particularly in domestic contexts. The American Civil Liberties Union (ACLU) has argued that AI should never be used to "supercharge dangerous biometric surveillance" or to make "life-altering decisions" in policing, immigration, or the criminal justice system.75 The ACLU warns that AI has the potential to "fuel inequities and injustice," particularly when trained on biased historical data. They have called for outright prohibitions on certain applications, including predictive policing systems that target individuals, real-time facial recognition for mass surveillance, and emotion detection systems.75
Amnesty International has focused on the human rights implications of Autonomous Weapon Systems. The organization defines AWS as systems that can "select, attack, kill and wound human targets" without effective human control and has raised five key human rights issues for consideration.76 Their position aligns with the broader "Campaign to Stop Killer Robots," which advocates for a pre-emptive ban on such weapons. These perspectives highlight the deep societal concerns that accompany the proliferation of AI in security, framing it not just as a tool of efficiency but as a potential threat to fundamental rights and freedoms.
The divergence between these viewpoints reveals a critical "governance gap." On one side, technology companies like Anduril, Palantir, and Axon are developing and deploying powerful AI systems at a rapid, market-driven pace. On the other, public and international institutions are engaged in a slow, deliberate, consensus-based process to establish legal and ethical norms. The Axon Taser drone controversy serves as a perfect microcosm of this gap: the company moved to deploy a radical new technology far ahead of public consensus, and the only immediate check, its own internal AI Ethics Board, proved insufficient and collapsed. This dynamic suggests a high probability of "policy-making by crisis," where meaningful regulations are only enacted after a significant and foreseeable misuse of an autonomous system has already occurred.
Part VI: Strategic Analysis and Recommendations
The evidence and case studies presented in this dossier paint a clear picture of a global security environment in the midst of a profound technological transformation. The integration of artificial intelligence and robotics is not a future prospect but a present and accelerating reality, reshaping everything from battlefield tactics to domestic policing. This final section synthesizes the key findings of the report and offers a series of strategic recommendations for defense agencies, law enforcement bodies, and policymakers as they navigate the complex and often perilous landscape of the algorithmic battlefield.
Chapter 11: Synthesis of Key Findings
Three overarching conclusions emerge from the analysis of the key actors and systems driving this transformation.
The Irreversibility of AI Integration: The case studies, from Anduril's autonomous border surveillance to Russia's Lancet drones in Ukraine and Axon's AI-powered policing tools, collectively demonstrate that the integration of AI is an irreversible trend. These systems are delivering tangible operational advantages in terms of speed, scale, and efficiency, making their continued adoption a strategic imperative for any state or non-state actor seeking a competitive edge. The question is no longer if AI will be central to warfare and security, but how it will be governed and controlled.
The Widening Governance Gap: A dangerous chasm is opening between the pace of technological development and the pace of legal and ethical norm-setting. Private sector innovation, driven by commercial incentives and agile development, is deploying capabilities far faster than deliberative bodies like the United Nations or national legislatures can regulate them. This governance gap creates a high-risk environment where powerful autonomous and surveillance systems are fielded without sufficient oversight, increasing the risks of unintended escalation, the erosion of human rights, and the loss of meaningful human control over the use of force.
The Geopolitics of Technology: Strategic competition in the 21st century is inextricably linked to technological dominance, particularly in the fields of AI, robotics, and the microelectronics that power them. China's strategic position as the world's dominant manufacturer of both finished drones and critical dual-use components has become a central geopolitical factor. This creates a critical dependency for nearly every nation, granting Beijing significant leverage and exposing a fundamental vulnerability in the defense industrial base of the United States and its allies.
Chapter 12: Recommendations for Stakeholders
In light of these findings, a proactive and multi-faceted response is required from all stakeholders involved in the development, deployment, and oversight of these technologies.
For Defense & Intelligence Agencies:
Prioritize Counter-Autonomy and C-UAS: The proliferation of cheap, effective unmanned systems, as demonstrated by the Lancet in Ukraine, is now a primary tactical threat. Investment in multi-layered C-UAS capabilities, from detection to kinetic and non-kinetic defeat, must be a top budgetary and acquisition priority for all military services. This includes developing defenses against autonomous swarms.
Invest Aggressively in Resilient Systems: The effectiveness of autonomous systems is contingent on the integrity of their underlying technology. Therefore, R&D funding should be urgently directed toward two critical areas:
Resilient Navigation: Develop and field robust alternatives to GPS for navigation, timing, and positioning to counter the pervasive threat of jamming and spoofing.
Robust AI: Accelerate research into defending against adversarial machine learning attacks to ensure that AI-powered perception and targeting systems can be trusted in contested electronic environments.
Mandate Supply Chain Security and Diversification: The strategic risk posed by dependency on foreign, and potentially adversarial, supply chains can no longer be treated as a secondary concern. The DoD must:
Mandate comprehensive supply chain mapping for all critical AI and robotics programs to identify dependencies at every tier.
Implement a deliberate strategy of risk mitigation, including on-shoring critical manufacturing, diversifying suppliers among trusted allies, and stockpiling essential components.
Codify Doctrine for Human-Machine Teaming: For future systems like the Collaborative Combat Aircraft, it is imperative to develop and codify clear doctrine that defines the roles, responsibilities, and rules of engagement for human-machine teams. This doctrine must establish unambiguous standards for ensuring "meaningful human control" over the use of lethal force.
For Law Enforcement Agencies:
Adopt "Transparency by Design": Before any AI-powered system is procured or deployed, agencies must commit to a policy of radical transparency. This should include mandatory public comment periods, clear use policies, and independent, third-party algorithmic audits to assess systems for accuracy and demographic bias.
Establish Clear Red Lines: Certain high-risk applications of AI pose an unacceptable threat to civil liberties and community trust. Agencies should proactively establish clear prohibitions on the use of AI for predictive policing that targets individuals, real-time facial recognition for mass surveillance, and the deployment of any domestically operated autonomous weapons systems.
Prioritize Data Governance: The power of AI data-fusion platforms like Palantir Gotham or Axon Fusus lies in the data they ingest. Before adopting such systems, agencies must first establish robust, publicly vetted data governance and privacy protection frameworks that strictly limit the types of data collected, how it can be used, and how long it can be retained.
For Policymakers & Legislators:
Lead International Efforts for an AWS Treaty: The United States and its allies should proactively engage in international forums, such as the UN Convention on Certain Conventional Weapons, to support the ICRC's call for a new, legally binding treaty on Autonomous Weapon Systems. A treaty that establishes clear prohibitions on the most dangerous types of AWS and strict regulations on all others is essential for global stability and the preservation of IHL.
Enact National Legislation on AI Accountability: Congress and other national legislatures should pass laws that create clear accountability mechanisms for the use of AI in high-stakes government decisions. This should include liability frameworks that close the "responsibility gap" and ensure that a human actor is always legally accountable for the actions of an autonomous system.
Fund Strategic Industrial Policy: Mitigating supply chain vulnerabilities requires a deliberate national industrial strategy. Policymakers should use legislative and financial tools, such as the CHIPS Act and the Defense Production Act, to incentivize the domestic manufacturing and R&D of critical technologies, including microelectronics, advanced batteries, and robotics components, to reduce strategic dependence on geopolitical competitors.
Works cited
Ghost, not just a drone, it's AI - COBBS Industries, accessed October 20, 2025, https://cobbsindustries.com/ghost-not-just-a-drone-its-ai/
Ghost | Anduril, accessed October 20, 2025, https://www.anduril.com/hardware/ghost-autonomous-suas/
Sentry | Anduril, accessed October 20, 2025, https://www.anduril.com/hardware/sentry/
Platforms vs. Applications: Insights from Anduril and Virtru, accessed October 20, 2025, https://www.virtru.com/blog/platforms-vs.-applications-insights-from-anduril-and-virtru
DEPARTMENT OF DEFENSE - 2023 Data, Analytics, and ... - DoD, accessed October 20, 2025, https://media.defense.gov/2024/Oct/25/2003571622/-1/-1/0/2023-11-DOD-DATA-ANALYTICS-AI-ADOPTION-STRATEGY-FACTSHEET_C.PDF
DOD Releases AI Adoption Strategy - Department of Defense, accessed October 20, 2025, https://www.war.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy/
CBP more than doubling autonomous sentry towers along Southwest border - FedScoop, accessed October 20, 2025, https://fedscoop.com/anduril-sentry-towers-cbp/
Anduril Introduces Menace-T: A Compact, Field-Deployable C4 ..., accessed October 20, 2025, https://www.anduril.com/article/anduril-introduces-menace-t-a-compact-field-deployable-c4-system-designed-for-the-tactical-edge/
Project Maven - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/Project_Maven
Projects using artificial intelligence – Project Maven – Automatic target detection from unmanned aerial reconnaissance imagery – GIEST, accessed October 20, 2025, https://www.giest.or.jp/en/contents/briefs/2872/
Palantir Technologies Inc - AFSC Investigate, accessed October 20, 2025, https://investigate.afsc.org/company/palantir
Project Maven reaches Europe: NATO selects Palantir's AI analytics system - The Decoder, accessed October 20, 2025, https://the-decoder.com/project-maven-reaches-europe-nato-selects-palantirs-ai-analytics-system/
Palantir Technologies - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/Palantir_Technologies
Palantir, Seemingly Everywhere All at Once - PassBlue, accessed October 20, 2025, https://passblue.com/2025/10/12/palantir-seemingly-everywhere-all-at-once/
Los Angeles Police Department uses Palantir software to target ..., accessed October 20, 2025, https://privacyinternational.org/examples/2487/los-angeles-police-department-uses-palantir-software-target-chronic-offenders
Palantir's Predictive Policing Technology: A Case of algorithmic Bias and Lack of Transparency - ResearchGate, accessed October 20, 2025, https://www.researchgate.net/publication/385290034_Palantir's_Predictive_Policing_Technology_A_Case_of_algorithmic_Bias_and_Lack_of_Transparency
ZALA Lancet - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/ZALA_Lancet
Ukrainian frontlines face deadlier Lancet drone variant - Defence Blog, accessed October 20, 2025, https://defence-blog.com/ukrainian-frontlines-face-deadlier-lancet-drone-variant/
STC Orlan-10 - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/STC_Orlan-10
Orlan-10 Russian Unmanned Aerial Vehicle (UAV) - ODIN - OE Data Integration Network, accessed October 20, 2025, https://odin.tradoc.army.mil/WEG/Asset/Orlan-10_Russian_Unmanned_Aerial_Vehicle_(UAV)
CAIG Wing Loong II - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/CAIG_Wing_Loong_II
Chinese combat drones log over 5,000 flight hours in Saudi Arabia - Defence Blog, accessed October 20, 2025, https://defence-blog.com/chinese-combat-drones-log-over-5000-flight-hours-in-saudi-arabia/
CASC Rainbow - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/CASC_Rainbow
CH-4B (CH-4B Rainbow) Chinese Unmanned Aerial Vehicle (UAV) - ODIN - OE Data Integration Network, accessed October 20, 2025, https://odin.tradoc.army.mil/WEG/Asset/CH-4B_(CH-4B_Rainbow)_Chinese_Unmanned_Aerial_Vehicle_(UAV)
China delivers three more CH-4 drones to support Democratic Republic of Congo to fight M23 rebels - Army Recognition, accessed October 20, 2025, https://armyrecognition.com/archives/archives-land-defense/land-defense-2024/china-delivers-three-more-ch-4-drones-to-support-democratic-republic-of-congo-to-fight-m23-rebels
Chinese Drones in the Ascendance - EDR Magazine, accessed October 20, 2025, https://www.edrmagazine.eu/chinese-drones-in-the-ascendance
Only One Of Iraq's Chinese CH-4B Drones Is Mission Capable As Other Buyers Give Up On Them - The War Zone, accessed October 20, 2025, https://www.twz.com/29324/only-one-of-iraqs-chinese-ch-4b-drones-is-mission-capable-as-other-buyers-give-up-on-them
How China became the world's leading exporter of combat drones ..., accessed October 20, 2025, https://www.aljazeera.com/news/2023/1/24/how-china-became-the-worlds-leading-exporter-of-combat-drones
Behind Russia's battlefield drone surge in Ukraine? Chinese factories., accessed October 20, 2025, https://www.washingtonpost.com/world/2025/10/13/china-russia-drone-parts-ukraine/
dronelife.com, accessed October 20, 2025, https://dronelife.com/2025/10/14/dji-chinese-military-company-designation-appeal/
DJI Chinese Military Company designation - DRONELIFE, accessed October 20, 2025, https://dronelife.com/2025/10/14/dji-chinese-military-company-designation-appeal/
DJI to Appeal Court Decision Denying Challenge to Chinese Military Company Listing, accessed October 20, 2025, https://exportcompliancedaily.com/news/2025/10/16/DJI-to-Appeal-Court-Decision-Denying-Challenge-to-Chinese-Military-Company-Listing-2510150053
China is a Key Factor in Ukraine's Surging Drone Industry — Beijing's New Export Controls May Ground It - FDD, accessed October 20, 2025, https://www.fdd.org/analysis/2025/10/10/china-is-a-key-factor-in-ukraines-surging-drone-industry-beijings-new-export-controls-may-ground-it/
Axon AI - Axon.com, accessed October 20, 2025, https://www.axon.com/ai
How Axon is using AI responsibly to transform public safety, accessed October 20, 2025, https://www.axon.com/resources/how-axon-is-using-ai-responsibly
Axon Air - Axon.com, accessed October 20, 2025, https://www.axon.com/products/axon-air
Axon ethics board members resign over taser-equipped drone ..., accessed October 20, 2025, https://www.zdnet.com/article/board-members-resign-over-taser-equipped-drone/
Exclusive: Axon still wants to put Taser drones in your kid's school ..., accessed October 20, 2025, https://therecord.media/exclusive-axon-still-wants-to-put-taser-drones-in-your-kids-school
rick smith - Fast Company, accessed October 20, 2025, https://www.fastcompany.com/section/rick-smith
Axon Cancels Plans to Sell a Taser Drone - DeepLearning.AI, accessed October 20, 2025, https://www.deeplearning.ai/the-batch/ethics-team-scuttles-taser-drone/
Axon's plans for Taser drones blindsided its AI ethics board : r/technology - Reddit, accessed October 20, 2025, https://www.reddit.com/r/technology/comments/v5crnp/axons_plans_for_taser_drones_blindsided_its_ai/
Axon AI Ethics Board — The Policing Project, accessed October 20, 2025, https://www.policingproject.org/axon-ethics-board
Manned-unmanned teaming - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/Manned-unmanned_teaming
CCA and Pilots: How Do We Communicate? - Mitchell Institute for Aerospace Studies, accessed October 20, 2025, https://www.mitchellaerospacepower.org/podcast/cca-and-pilots/
Gorgon Stare - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/Gorgon_Stare
General Atomics MQ-9 Reaper - Wikipedia, accessed October 20, 2025, https://en.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper
Dedrone by Axon: Counter-Drone Defense Solutions & Systems, accessed October 20, 2025, https://www.dedrone.com/
Dedrone Counter-Drone Technology | C-UAV | NWS - Network Wireless Solutions, accessed October 20, 2025, https://nwsnext.com/manufacturers/dedrone/
White paper: Counter-Drone: The Comprehensive Guide to Counter-UAS/C-UAS/CUAS - Dedrone, accessed October 20, 2025, https://www.dedrone.com/white-papers/counter-uas
Counter Unmanned Aerial Systems (C-UAS) | Northrop Grumman, accessed October 20, 2025, https://www.northropgrumman.com/what-we-do/mission-solutions/counter-unmanned-aerial-systems-c-uas
GPS-Spoofing Attack Detection Mechanism for UAV Swarms - IHP Microelectronics, accessed October 20, 2025, https://www.ihp-microelectronics.com/php_scripts/publications/manuscript_files/mykytyn-mykytyn-1-18-2023-2024.pdf
Detection of GPS Spoofing Attacks in UAVs Based on Adversarial ..., accessed October 20, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11436244/
Unmanned Aircraft Capture and Control via GPS Spoofing - University of Texas at Austin, accessed October 20, 2025, http://radionavlab.ae.utexas.edu/images/stories/files/papers/unmannedCapture.pdf
Detection of UAV GPS Spoofing Attacks Using a Stacked Ensemble Method - MDPI, accessed October 20, 2025, https://www.mdpi.com/2504-446X/9/1/2
[D] [R] Is adversarial attack common in industry? : r/MachineLearning - Reddit, accessed October 20, 2025, https://www.reddit.com/r/MachineLearning/comments/p9sy3b/d_r_is_adversarial_attack_common_in_industry/
Defenses in Adversarial Machine Learning: A Survey - arXiv, accessed October 20, 2025, https://arxiv.org/pdf/2312.08890
[2303.06302] Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey - arXiv, accessed October 20, 2025, https://arxiv.org/abs/2303.06302
Neuro-Symbolic AI for Military Applications - arXiv, accessed October 20, 2025, https://arxiv.org/html/2408.09224v2
Causes of Vulnerabilities and Key Threats to Defense Supply Chains - Air University, accessed October 20, 2025, https://www.airuniversity.af.edu/Wild-Blue-Yonder/Article-Display/Article/4207857/causes-of-vulnerabilities-and-key-threats-to-defense-supply-chains/
GAO-25-107283, DEFENSE INDUSTRIAL BASE: Actions Needed to ..., accessed October 20, 2025, https://www.gao.gov/assets/gao-25-107283.pdf
Robotics: Geopolitical Risks and Strategy - SpecialEurasia, accessed October 20, 2025, https://www.specialeurasia.com/2025/03/06/robotics-geopolitics-security/
Supply Chain Vulnerabilities from China in U.S. Federal Information and Communications Technology, accessed October 20, 2025, https://www.uscc.gov/sites/default/files/Research/Interos_Supply%20Chain%20Vulnerabilities%20from%20China%20in%20U.S.%20Federal%20ICT_final.pdf
A legal perspective: Autonomous weapon systems under international humanitarian law - ICRC, accessed October 20, 2025, https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf
Autonomous Weapon Systems and International Humanitarian Law ..., accessed October 20, 2025, https://www.icrc.org/en/article/autonomous-weapon-systems-and-international-humanitarian-law-selected-issues
Full article: The ethical legitimacy of autonomous Weapons systems ..., accessed October 20, 2025, https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131
ICRC commentary on the 'Guiding Principles' of the CCW GGE on 'Lethal Autonomous Weapons Systems', accessed October 20, 2025, https://documents.unoda.org/wp-content/uploads/2020/07/20200716-ICRC.pdf
www.icrc.org, accessed October 20, 2025, https://www.icrc.org/en/law-and-policy/autonomous-weapons
Autonomous weapons: The ICRC recommends adopting new rules, accessed October 20, 2025, https://www.icrc.org/en/document/autonomous-weapons-icrc-recommends-new-rules
Implementing Article 36 Weapon Reviews in the Light of Increasing ..., accessed October 20, 2025, https://www.sipri.org/publications/2015/sipri-insights-peace-and-security/implementing-article-36-weapon-reviews-light-increasing-autonomy-weapon-systems
Article 36 reviews: dealing with the challenges posed by emerging technologies - SIPRI, accessed October 20, 2025, https://www.sipri.org/sites/default/files/2017-12/article_36_report_1712.pdf
Article 36 Reviews: Dealing with the Challenges posed by Emerging Technologies | SIPRI, accessed October 20, 2025, https://www.sipri.org/publications/2017/policy-reports/article-36-reviews-dealing-challenges-posed-emerging-technologies
SIPRI Compendium on Article 36 Reviews, accessed October 20, 2025, https://www.sipri.org/publications/2017/sipri-background-papers/sipri-compendium-article-36-reviews
2023 - ROBOTICS AND AUTONOMOUS SYSTEMS - REPORT ..., accessed October 20, 2025, https://www.nato-pa.int/document/2023-robotics-and-autonomous-systems-report-weingarten-034-stctts
2024 - nato and artificial intelligence: navigating the challenges and opportunities, accessed October 20, 2025, https://www.nato-pa.int/document/2024-nato-and-ai-report-clement-058-stc
How Policymakers Can Make Sure AI Protects Civil Rights | ACLU of ..., accessed October 20, 2025, https://www.aclunc.org/ai/policymakers
Autonomous weapons systems: Five key human rights issues for ..., accessed October 20, 2025, https://www.amnestyusa.org/reports/autonomous-weapons-systems-five-key-human-rights-issues-for-consideration/