In March 2025, a CH-47 Chinook completed an autonomous approach and landing at an undisclosed test facility. There was no pilot in the left seat, no co-pilot in the right. Boeing’s A2X system — a retrofit autonomy package — had accumulated over 150 autonomous approaches by that point, integrating sensor fusion, terrain-following algorithms, and flight envelope protection into a platform designed in the early 1960s. A helicopter that entered service when John F. Kennedy was president is now landing itself.
Meanwhile, in the Nevada desert, XQ-58A Valkyrie demonstrators were flying formation with manned F-22s and F-35s, practicing the tactical behaviors that the US Air Force wants from its Collaborative Combat Aircraft program. Anduril Industries’ Fury — a purpose-built autonomous combat aircraft — was advancing through its own test program. And in a DARPA laboratory, engineers were working on drone swarms capable of launching from standard shipping containers, finding targets in GPS-denied environments, and coordinating their attacks without continuous human command input.
All of these programs share one label: “autonomous.” But that word conceals a spectrum so wide it encompasses everything from a thermostat to a fully independent weapon system. Understanding what military autonomy actually means — technically, operationally, and ethically — requires dismantling several layers of marketing language and confronting some genuinely hard engineering problems.
Quick Facts
- Boeing Chinook A2X: 150+ autonomous approaches logged in testing
- CCA program aircraft: XQ-58A Valkyrie (Kratos), Anduril Fury
- MQ-9 Reaper: >27 million flight hours across US/allied fleets
- DARPA drone swarms: Container-launched, GPS-denied capable
- Autonomy levels: SAE Levels 0–5 (adapted from automotive industry)
- Human-on-the-loop: AI acts; human can override but may not
- Human-in-the-loop: Human must authorize each action
- DoD Directive 3000.09: Governs the use of force by autonomous weapon systems
The Autonomy Spectrum: From Remote Control to Independent Action
The US military borrows its autonomy taxonomy from the automotive industry’s SAE International scale, adapted for aviation and weapons systems. Level 0 is a human pilot doing everything manually. Level 1 is flight director guidance — the computer suggests, the human acts. Level 2 is autopilot: the computer manages specific axes (pitch, roll, throttle) while the human monitors and retains authority. Most commercial airliners cruise at Level 2 for the majority of their flights.
Level 3 is where things become interesting. This is conditional automation — the system handles the entire task within defined conditions, but the human must be ready to take over on request. Automotive traffic-jam pilots, which drive the car in congestion but hand control back when conditions change, are the canonical Level 3 example. In aviation, terrain-following at low altitude in a combat helicopter, with the pilot monitoring, fits this category. The Boeing A2X-equipped Chinook operates at approximately Level 3–4: it executes the approach independently, but its designers envisioned a crew member available to intervene if the situation degrades beyond defined parameters.

Level 4 is high automation: the system can complete the entire mission without human intervention, but only within a clearly defined operational design domain. Step outside that domain — unexpected weather, system failure, novel threat — and the aircraft may be required to abort safely. The MQ-9 Reaper operates in this regime for navigation and flight management. A Reaper can take off, fly a pre-programmed surveillance pattern, and return to base without human input. But the human remains firmly in the loop for weapons employment — a legal and ethical requirement formalized in US Department of Defense Directive 3000.09.
Level 5 — full automation, capable of any mission anywhere without human oversight — does not exist in any current operational military aircraft, and may not for decades. The engineering challenges alone are formidable. The ethical and legal frameworks governing such a system do not yet exist.
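The level scale and the operational design domain concept can be sketched as a simple data model. This is purely illustrative: the class names, the ODD parameters, and the abort rule are assumptions for the sketch, not drawn from any military standard or fielded system.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """SAE-derived autonomy scale as described in the text."""
    MANUAL = 0       # human does everything
    ASSISTED = 1     # flight director: computer suggests, human acts
    PARTIAL = 2      # autopilot manages specific axes, human monitors
    CONDITIONAL = 3  # system flies within defined conditions, human stays ready
    HIGH = 4         # full mission, but only inside the design domain
    FULL = 5         # any mission, anywhere -- does not yet exist


@dataclass
class OperationalDesignDomain:
    """Illustrative ODD: the envelope a Level 4 system is cleared for."""
    max_wind_kts: float
    max_altitude_ft: float
    gps_required: bool


def within_odd(odd: OperationalDesignDomain, wind_kts: float,
               altitude_ft: float, gps_available: bool) -> bool:
    """A Level 4 system must abort or hand over when this returns False."""
    return (wind_kts <= odd.max_wind_kts
            and altitude_ft <= odd.max_altitude_ft
            and (gps_available or not odd.gps_required))


# Hypothetical envelope for the sketch; the numbers are invented.
odd = OperationalDesignDomain(max_wind_kts=35, max_altitude_ft=25_000,
                              gps_required=True)
in_domain = within_odd(odd, wind_kts=20, altitude_ft=10_000, gps_available=True)
gps_denied = within_odd(odd, wind_kts=20, altitude_ft=10_000, gps_available=False)
```

The point the sketch makes is the one in the text: Level 4 autonomy is not "can do anything" but "can do everything inside an explicit, testable envelope," and the envelope check is itself part of the system.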
The Crucial Distinction: Human-In versus Human-On the Loop
Military lawyers, ethicists, and engineers have developed a precise vocabulary for describing where the human sits relative to an autonomous system’s decision cycle. The distinction matters enormously, particularly for weapons employment.
Human-in-the-loop means a human must positively authorize each action before the system executes it. The weapon cannot fire without explicit human approval. This is the standard for current unmanned combat aircraft. An MQ-9 Reaper operator in a ground control station at Creech Air Force Base, Nevada, authorizes each Hellfire missile release individually. The aircraft’s autonomy handles navigation, loiter, and sensor management. The kill decision remains human.

Human-on-the-loop is a fundamentally different architecture. The autonomous system acts — it makes targeting decisions and can execute them — but a human supervisor monitors the process and retains the authority to intervene and override. The human is watching the loop, not inside it. This is the model proposed for certain defensive systems (the US Navy’s Phalanx CIWS gun system, which autonomously engages incoming threats, operates on this principle) and is increasingly discussed for Collaborative Combat Aircraft operating in environments where communications may be jammed or latency-degraded.
The gap between these two models is not merely semantic. Human-on-the-loop systems can act faster than human-in-the-loop systems — and in certain tactical situations, particularly missile defense and counter-drone operations, speed is operationally decisive. But human-on-the-loop systems can also act without a human having genuinely reviewed and authorized the specific action. The legal and ethical frameworks governing this distinction are evolving, contested, and nowhere near settled.
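The structural difference between the two models can be shown in a few lines of control flow. This is a deliberately minimal sketch, not any real fire-control logic; the function names and the veto-window mechanism are assumptions made for illustration.

```python
from enum import Enum, auto


class Decision(Enum):
    ENGAGE = auto()
    HOLD = auto()


def human_in_the_loop(target, request_authorization) -> Decision:
    """Weapon acts only on explicit, positive human approval.
    Absent a human decision, the default is HOLD."""
    if request_authorization(target):  # blocks until a human decides
        return Decision.ENGAGE
    return Decision.HOLD


def human_on_the_loop(target, classifier, veto_window_s, check_veto) -> Decision:
    """System decides and acts; a supervisor may override within a window.
    Absent a human decision, the default is ENGAGE."""
    if classifier(target) != "hostile":
        return Decision.HOLD
    if check_veto(target, timeout_s=veto_window_s):  # human may interrupt
        return Decision.HOLD
    return Decision.ENGAGE
```

The asymmetry is the whole argument: in-the-loop fails safe (no human action means no shot), on-the-loop fails active (no human action means the system proceeds). Speed is bought by inverting the default.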
Why It Is Harder Than It Looks
The engineering case for autonomous military aircraft is straightforward: remove the human from the cockpit and you remove the most fragile, most expensive, and most physiologically limited component of the system. Autonomous aircraft do not suffer hypoxia at altitude. They do not experience spatial disorientation in cloud. They do not require ejection seats, pressurized cockpits, or anti-G suits. They can sustain 9G maneuvers far beyond what any human body tolerates. They cannot be captured. In a contested environment with advanced integrated air defenses, an expendable autonomous platform absorbs risk that a manned aircraft cannot.
The operational case for the CCA program — where autonomous wingmen fly alongside manned fighters, extending sensor reach, carrying additional weapons, and presenting the adversary with a targeting problem that multiplies with every additional platform — is compelling on paper. An F-35 pilot commanding three XQ-58 Valkyries is, in theory, four times the tactical problem for an adversary air defense system. The wingmen absorb the first shots. They carry the electronic warfare pods that the manned aircraft would otherwise have to sacrifice weapons stations for. They fly into threat rings that human pilots cannot safely enter.
In practice, the challenges are substantial. Current autonomous systems perform well in structured environments with defined parameters — the Boeing A2X Chinook has 150+ successful autonomous approaches because approaches are a well-defined problem with bounded variables. Combat is the opposite of a bounded problem. Adversaries actively work to introduce novel, unexpected stimuli specifically designed to exceed the decision rules programmed into autonomous systems. GPS jamming, electronic deception, decoys, unusual atmospheric conditions, unexpected civilian presence — the list of variables that can push an autonomous system outside its operational design domain is effectively infinite.
DARPA’s container-launched drone swarms — which represent the far edge of the current autonomous spectrum — address the GPS problem through alternative navigation: terrain correlation, visual odometry, inertial measurement. The swarm coordination algorithms draw on decades of research into distributed computing and emergent behavior. In testing, the results have been promising. In operational conditions against an adversary with a dedicated counter-autonomy capability, the same algorithms become an attack surface. Every autonomous system that communicates with other systems has an electromagnetic signature. Every system that relies on sensors can be deceived by systems designed to feed it false data.
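The core of GPS-denied navigation is captured by a toy version of the idea: integrate inertial velocity to dead-reckon position, which drifts without bound, then blend in an occasional absolute fix from terrain correlation or visual odometry. Everything here is a sketch under invented numbers; it is not any fielded algorithm, and the blend factor is a stand-in for a proper Kalman-style filter.

```python
import math


def dead_reckon(pos, heading_deg, speed_mps, dt_s):
    """Inertial-style dead reckoning: integrate velocity over one time step.
    Needs no GPS, but error accumulates until corrected externally."""
    x, y = pos
    h = math.radians(heading_deg)
    return (x + speed_mps * math.cos(h) * dt_s,
            y + speed_mps * math.sin(h) * dt_s)


def fuse_fix(estimate, fix, trust=0.8):
    """Pull the drifting estimate toward an occasional absolute fix
    (terrain correlation, visual odometry). `trust` is illustrative."""
    ex, ey = estimate
    fx, fy = fix
    return (ex + trust * (fx - ex), ey + trust * (fy - ey))


# One minute of flight at 50 m/s on a constant heading, no GPS.
pos = (0.0, 0.0)
for _ in range(60):
    pos = dead_reckon(pos, heading_deg=0.0, speed_mps=50.0, dt_s=1.0)

# A single hypothetical terrain-correlation fix corrects most of the drift.
pos = fuse_fix(pos, fix=(2980.0, 15.0))
```

This also illustrates the attack surface the text describes: the corrective fix is exactly where deception bites. Feed the fusion step a false fix and the filter dutifully steers the estimate toward it.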
The autonomy revolution in military aviation is real, consequential, and accelerating. What it is not — and what the buzzword-laden press releases from defense contractors rarely acknowledge — is easy. The gap between a CH-47 Chinook landing itself at a test facility and an autonomous combat aircraft making independent targeting decisions in a GPS-denied, electromagnetically contested environment against a peer adversary is not a gap measured in years. It is a gap measured in fundamental unsolved problems in computer science, sensor physics, and moral philosophy. The hardware is advancing. The harder questions are just beginning to be asked seriously.
Sources: DARPA, US Air Force Research Laboratory, Boeing Defense press materials, Kratos Defense XQ-58 program documentation, DoD Directive 3000.09, RAND Corporation autonomous systems research, Air Force Institute of Technology, Congressional Research Service