Alex Kantrowitz · 58m

The Pentagon's AI Plan + Behind the Anthropic Fight — With Under Secretary of War Emil Michael

TL;DR

  • Emil Michael says AI should make war more precise, not more autonomous — he compares Pentagon AI to Tesla FSD and Uber: a tool that augments humans by expanding their “context window,” not a “Skynet” that makes kill decisions on its own.

  • The Pentagon’s current AI stack is mostly orchestration and synthesis, not chatbot-led targeting — Michael describes Palantir-style systems like Maven Smart System as aggregating weather, fuel, collateral-risk, and asset data so humans can act faster and with more information than they could with PowerPoints, Excel, and email.

  • Michael’s biggest fear is adversaries using AI to remove humans from command decisions — he specifically points to China’s military buildup and purges of senior generals as a scenario where regimes that don’t trust people may lean harder on machines than the U.S. would under its constitutional chain of command.

  • Cheap drones are forcing a strategic rethink away from exquisite systems toward “mass attritable” weapons — he highlights programs like Lucas, a roughly $30,000 low-cost drone, and says the Pentagon needs affordable offensive and defensive systems because using million-dollar interceptors against cheap one-way attack drones is a losing equation.

  • The Anthropic fight, in Michael’s telling, was about control over lawful government use cases, not just vibes — he says contract terms originally barred use for planning kinetic actions and weapon-system development, and that the Pentagon ultimately deemed Anthropic a supply-chain risk because a misaligned model vendor could shape or constrain downstream military capabilities.

  • Procurement reform is a quiet but central part of the Pentagon’s AI plan — Michael argues defense contracting shrank from about 50 major contractors in the 1980s to five today, and says the department is pushing more business-style deals where vendors get paid for delivering working systems on time, not just for time spent.

The Breakdown

AI as the Pentagon’s “context window” booster

Michael opens with a deliberately civilian analogy: Uber and Tesla’s full self-driving. His point is that people fear technological change, but the real promise of AI in war is more precision — helping a human distinguish a decoy from a real threat in a drone swarm, or process hundreds of inputs no person could absorb alone.

What Maven actually does — and what it doesn’t

Kantrowitz brings up Cameron Stanley’s public demo of Maven Smart System and its Target Workbench, including the unnerving “left click, right click, left click” workflow for actioning a target. Michael pushes back on the sci-fi framing: what’s happening behind those clicks is the synthesis of weather, fuel, drag, collateral effects, nearby assets, and likely enemy reactions, with the same human approvals still required at the end.

The friction debate: should targeting stay slow?

Kantrowitz raises the obvious discomfort: maybe PowerPoints, spreadsheets, and Word docs are a feature, not a bug, when the consequence is a strike. Michael’s answer is that the friction hasn’t disappeared — the rules of engagement, legal checks, and command approvals are still there — but speed matters, and better information matters more, especially if faster execution means fewer casualties and shorter operations.

Where AI fits today: office work, intelligence, and warfighting

Michael breaks Pentagon AI into three layers: boring enterprise work like memos and PowerPoints, intelligence analysis like anomaly detection across satellite imagery, and warfighting support that accelerates paperwork and modeling around operations. He repeatedly narrows the role of LLMs to summarizing, synthesizing, and surfacing alternatives, insisting they are not directly sitting in the kill chain making final decisions.

Human oversight, adversaries, and the line on agents

The more interesting anxiety for him isn’t the U.S. automating war; it’s other governments doing it because they don’t trust their own people. He contrasts America’s constitutional chain of command with countries like China, arguing the U.S. wants AI to augment humans, while authoritarian systems may use it to eliminate human judgment — and says Pentagon agent experiments are currently limited to mundane enterprise workflows, not battlefield decisions.

Drones changed the economics of war

On Ukraine and Iran, Michael says the lessons are different but equally stark. In Ukraine, drones push humans back from the front line; in Iran, they expose a brutal cost imbalance where cheap one-way attack drones can threaten exquisite assets that require expensive countermeasures, which is why he’s pushing “mass attritable” systems and highlighting low-cost efforts like the Lucas drone at around $30,000 each.

Swarms, cyber, and the new defense problem

He leans into the now-familiar image of Chinese drone light shows and says: imagine those are armed, networked, and reforming against your defenses. The Pentagon, he says, is working both offense and defense — counter-UAS task forces, lasers, electronic warfare — while also watching frontier models become genuinely dangerous in cyber, where they may soon discover and exploit vulnerabilities end-to-end.

The Anthropic rupture and why it escalated

The second half turns into a long defense of the Pentagon’s break with Anthropic. Michael says the issue wasn’t abstract ethics branding but practical control: a vendor that objects to lawful military or scientific use cases, ships new models every three months, and can change guardrails or refusals at will creates an intolerable supply-chain dependency. From his perspective, the government needs partners aligned with its mission, not companies trying to rewrite the rules from outside.

Procurement reform — and one last pizza myth

Michael argues the Pentagon’s deeper structural problem is procurement: the defense industry consolidated from roughly 50 big contractors in the 1980s to five, while supply chains got brittle and incentives shifted toward cost-plus contracts. His pitch is more startup-like dealmaking: build it, deliver it on time, get paid. The conversation ends on a surprisingly earnest note: he genuinely doesn’t buy the Pentagon pizza index, mostly because, he claims, he wouldn’t even know how to order a pizza into the building.