The Scenario Is Already Operational

Last year, the Department of Defense released details on Project Maven — the AI-assisted targeting program that has been operational in various forms since 2017. The public debate about "AI warfare" treats the technology as hypothetical. It isn't. Autonomous and semi-autonomous systems are already making targeting recommendations, processing sensor data, and accelerating kill chains in ways that have fundamentally altered how the American military conducts operations.

"Operation Epic Fury" is the kind of dramatic label defense analysts use to describe the next-generation version of what's already happening. The name is new. The underlying capability development is a decade old.

The argument for proceeding aggressively with military AI is not complicated: adversaries are already proceeding. China's People's Liberation Army has made AI-enabled warfare a stated strategic priority. Russia, despite its performance in Ukraine, is investing heavily in autonomous systems. Neither Beijing nor Moscow is waiting for a global ethics consensus before deploying these tools. The only question is whether American forces operate with an AI advantage or fight against one.

Where the Real Risk Lives

Critics of military AI tend to focus on two concerns: civilian casualties from autonomous targeting errors, and the destabilizing effect of lowered thresholds for initiating conflict. Both are legitimate. Neither is an argument against development; both are arguments for getting development right, which requires being in the game rather than standing on the sideline calling fouls.

The more serious risk that gets insufficient attention is infrastructure dependency. Systems that rely on satellite uplinks, cloud processing, and networked data pipelines are systems that can be degraded by adversary action. An AI-enhanced military that loses its network connectivity becomes a military operating with 1990s capability at a 2020s cost structure. That is a vulnerability, not a strength, and the Pentagon's acquisition culture has historically been poor at stress-testing systems against realistic degraded-environment scenarios.

I spent time reviewing the Government Accountability Office's 2023 report on autonomous weapon systems, which documented significant gaps in DoD's testing and evaluation frameworks for AI-enabled capabilities. The systems are being fielded faster than the evaluation infrastructure can assess them. That gap is the real operational risk — not the technology itself, but the institutional culture that deploys it before understanding its failure modes.

The Doctrine Problem

America has doctrine for air power, land warfare, and naval operations, developed over decades of operational experience and institutional learning. AI-enabled warfare has no equivalent doctrine. The rules-of-engagement frameworks, the legal authorities, the command accountability structures: all are being improvised in real time, overlaid on weapon systems that operate at speeds human decision cycles can't match.

This is where intellectual rigor matters more than rhetorical caution. The answer to "AI warfare moves too fast for human oversight" is not to slow the AI. It is to build oversight mechanisms that operate at the relevant timescale, which requires investment, imagination, and a willingness to challenge existing institutional arrangements that the Pentagon's bureaucratic culture reflexively defends.

The promise of AI warfare is real. So are the perils. What the debate needs is less hand-wringing about whether to proceed and more hard thinking about how. America's adversaries have already answered the "whether" question. We don't get to opt out of the era; we only get to lead it or lose it.