How to Think About Anthropic, the Pentagon, and AI
AI planned two US military operations in ten weeks. Then the Pentagon swapped the AI engine mid-war. Here's what that means.
Something Has Changed
Two major US military operations in ten weeks. Venezuela in January. Iran in late February/early March. The Iran operation is still pending - but let’s assume oil prices smooth back out in short order (not hard to imagine, given that oil futures for 2027 and 2028 are already back in the high 60s).
If Iran turns out well, both operations will have achieved their primary objectives, and both will have moved faster than most comparable operations in modern military history. And they happened back to back.
That is extremely rare.
The best historical comparison would be stacking the 1991 Desert Storm campaign on top of the 2011 Abbottabad raid. Two different theaters, two different objectives, both executed with a precision and speed that caught most analysts off guard.
When one high-profile operation goes unusually well, it can be good planning plus a bit of luck. When two consecutive operations in different regions both outperform historical baselines, it raises the question of whether something structural has changed in how operations are conducted.
The most likely explanation: AI is now embedded in the core of US military planning, it’s hyper-effective, and the window for that dominance is closing - which is driving the tempo.
What AI Is Actually Doing
The Wall Street Journal reported that the US-Israeli strikes on Iran “unfolded at unprecedented speed and precision” with AI tools helping “gather intelligence, pick targets, plan bombing missions and assess battle damage at speeds not previously possible.”
The US struck over 1,000 Iranian targets in the first 24 hours. Cyber operations took Iran down to 1% internet connectivity, blinding IRGC command and control. Air, sea, cyber: the domain integration and complexity of the Iran action have been eye-opening.
The Window
The current US advantage in AI isn’t just compute volume. China is building data centers and scaling energy fast. But for the moment, the US has an edge in both model quality and the integration of classified training data. The Pentagon’s military implementations are live now, not months away. China can match the hardware and the energy, but it can’t replicate what’s been fed into these models.
That’s the current national level moat.
But moats erode. Every serious AI researcher will tell you the gap is closing, and China has likely recognized the advantage the US has built in AI-military integration.
Leopold Aschenbrenner, the former OpenAI researcher who published “Situational Awareness” in June 2024, argued that AI would confer “a decisive military advantage potentially surpassing nuclear weapons” and predicted the US government would inevitably take control of AI development through nationalization and a Manhattan Project-style initiative.
The Venezuela-Iran sequence reads as the US exercising its AI planning advantage while the dominance window holds.
But two operations in ten weeks is a pace that suggests urgency, not opportunism.
An interesting question: how many more operations will be planned and executed before the military planning integration gap narrows?
Tensions and Divorce
Through a partnership with Palantir, Anthropic’s Claude model has been serving as the reasoning engine inside a decision-support system deployed across US combatant commands.
The warfighters appeared to like it. According to CNBC sources, military personnel view Claude as “a better product, the most reliable, with the most user friendly outputs they can assimilate into planning.”
When the Venezuela operation occurred, the relationship between Anthropic and the Pentagon exploded. Claude was reportedly used in both the Venezuela capture operation and the Iran strike planning.
Anthropic reportedly called Palantir to ask whether Claude had been used in the Venezuela operation, and ultimately demanded two ironclad guardrails in the contract: no fully autonomous weapons, and no mass surveillance of US citizens.
The Pentagon wanted “all lawful purposes” with no restrictions.
The Pentagon’s case: the military cannot let a vendor insert itself into the chain of command by restricting what tools commanders can use during combat.
Anthropic’s case: in simulated war games, AI models escalated toward tactical nuclear options in 95% of scenarios, and to full nuclear exchange in 3 of 21 scenarios. “All lawful purposes” is a blank check in an environment where the boundary between decision support and autonomous action is blurring by the month - and Anthropic believes it knows the limits of its own model better than anyone.
Neither side budged, and the Pentagon designated Anthropic a “supply chain risk” - a label historically reserved for foreign companies - and ordered federal agencies to stop using Claude.
OpenAI signed a deal with the Pentagon hours later.
Wartime A/B Test
The Pentagon is now swapping in OpenAI’s ChatGPT for Anthropic’s Claude, ordering all prime contractors to disengage from Anthropic within six months.
Days later, at an a16z summit, Palantir CEO Alex Karp chastised Anthropic - and the AI world broadly - as a special kind of naive:
“If Silicon Valley believes we are going to take away everyone’s white-collar job … and you’re gonna screw the military—if you don’t think that’s gonna lead to nationalization of our technology, you’re retarded. You might be particularly retarded, because you have a 160 IQ.”
Now we are looking at a few potential outcomes.
If ChatGPT performs equivalently or better, then this was all a vendor dispute. The military AI market opens to multiple providers. No single company has leverage. Anthropic’s confidence was misplaced, and nationalization questions dissipate. Planning timelines revert to the broader US-China compute window as the only constraint that matters.
If ChatGPT is meaningfully worse than Claude and planners see a quality decline, then three things can happen.
Either the Pentagon accelerates operations before the six-month Claude transition expires and then lives with the OpenAI downgrade.
Or the Pentagon quietly works to restore Claude access - possibly by letting Anthropic’s lawsuit (filed yesterday) succeed on narrow statutory grounds, giving both sides a face-saving off-ramp - and the market learns that model quality in military AI isn’t exactly a commodity at this moment.
Or Aschenbrenner’s nationalization scenario becomes acute with Anthropic in the crosshairs.
Wake-Up Call
Older tech companies know to play with both US political parties regardless of who is in office. Microsoft survived antitrust by becoming Washington’s best friend. Google, Amazon, Nvidia, and Meta all built bipartisan government relations machines, because they’ve all seen what happens to companies that don’t.
Anthropic’s leadership just learned what those companies already knew. Product superiority is not political insulation. The best model in the world doesn’t matter if you can’t navigate the relationship with the customer who has the power to nationalize you.
The AI industry is watching. Every startup with a government contract is recalculating the cost of saying no. And every investor in AI is recalculating what “moat” means when your largest customer can designate you a national security threat over a contract clause.
Anthropic’s Confidence
But there’s a lingering question if we dig beneath the political topsoil of Anthropic’s actions.
Anthropic knew the nationalization risk. Aschenbrenner is engaged to the Anthropic CEO’s chief of staff.
Anthropic knew the nationalization risk and STILL overplayed its hand.
Why?
One explanation is ideological stubbornness. They believed in their guardrails more than they feared the consequences.
The more interesting explanation is that they believe they have a moat so durable that the Pentagon couldn’t live without them.
Which, in the current AI ecosystem of new “best” models coming out on a near-weekly basis, would be an extraordinary belief.
In this vein, Anthropic would have to believe they have not just a better model today, but a compounding advantage in how they build the next one.
If this is their view, it likely rests on a recursive AI development loop - where each generation of Claude accelerates the development of the next.
That, and enough lead time that competitors can’t close the gap by simply training a bigger model.
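To make the shape of that bet concrete, here’s a minimal toy sketch - my own illustration, not anything from Anthropic or the reporting, with every parameter invented: one lab whose per-generation improvement scales with its current capability (the recursive loop), versus a challenger improving at a fixed rate per generation.

```python
# Toy model, invented for illustration - none of these numbers come from
# Anthropic, OpenAI, or the Pentagon.

def simulate(generations: int = 15,
             c: float = 1.00,      # "recursive" lab's starting capability
             k: float = 0.90,      # challenger's starting capability
             alpha: float = 0.05,  # recursive gain: improvement scales with capability
             beta: float = 0.06):  # challenger's fixed per-generation improvement
    """Print trajectories for a lab whose models help build the next
    generation (c) versus a lab improving at a constant rate (k)."""
    for gen in range(generations):
        print(f"gen {gen:2d}: recursive={c:5.2f}  fixed={k:5.2f}  gap={c - k:+5.2f}")
        c *= 1 + alpha * c  # each generation accelerates the next
        k *= 1 + beta       # "just train a bigger model" at a steady rate

simulate()
```

With these made-up numbers the challenger actually closes ground for the first few generations - its steady 6% beats the loop’s initial 5% - but once the incumbent’s capability-scaled gain crosses that threshold, the gap widens every generation instead of shrinking. If Anthropic’s internal picture looks anything like that left column, “irreplaceable” stops being hubris and starts being arithmetic.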
Open-source recursive models are currently hitting GitHub, and Anthropic continues to ship new updates multiple times a week, so this idea isn’t implausible.
Nevertheless, in an AI landscape where new models leapfrog each other every few weeks, betting your largest government contract on the belief that you’re irreplaceable is either delusional or evidence of a level of internal confidence about their development trajectory that the market hasn’t priced in.
If Anthropic is wrong, they just torched their most important relationship.
If they’re right, the government comes back, and the world turns in an interesting direction.
Key Questions Worth Tracking
Will OpenAI’s models match Claude for military planning? At the moment, Claude looks superior in the civilian world. But the gap may not be as wide as Anthropic believes.
If a gap does exist, what happens? Will it be large enough to force the nationalization question, will the Pentagon live with a downgrade, or will the Pentagon work with Anthropic to bring Claude back? Aschenbrenner’s nationalization scenario applied to Anthropic would require Claude to be irreplaceable. If it’s merely better, the Pentagon might just live with the downgrade while OpenAI plays catch-up.
If the US continues using its AI-fueled military planning dominance while the compute+integration window holds, what do the next military operations look like? Two in ten weeks suggests a high tempo. The market hasn’t priced in the possibility of a third action (or more).
And the question underneath all of this: how does the world avoid nuclear brinksmanship if the AIs being used to plan military operations aren’t yet reliable enough to know when to stop?
Until next time.



