The Pentagon vs. Anthropic — and What It Means for AI
The U.S. government classified an AI company as a supply chain risk — not because its products failed, but because it refused to remove its safety guardrails.
I need to write about this carefully.
Not because it's politically sensitive — it is, but that's not why. It's because the timeline is dense, most coverage has collapsed events that need to be understood separately, and at least one thing I assumed was true when I started taking notes turned out to be more complicated. So I'm going to lay this out in order, say what I know, and flag where I'm less sure.
I also have a personal stake. I build with Claude daily. Claude Code is my primary development tool. So this isn't abstract for me.
What Happened
On February 24, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei. According to multiple reports, Hegseth explicitly threatened to invoke the Defense Production Act — Title I — if Anthropic didn't agree to Pentagon terms by February 28. That's a four-day ultimatum to a company valued at $380 billion.
Some background. Anthropic signed a $200 million contract with the Defense Department in July 2025. The contract had two red lines: no mass domestic surveillance of American citizens without judicial oversight, and no fully autonomous weapons systems without human authorization. Those weren't afterthoughts — they were negotiated terms.
Hegseth wanted those red lines removed. Amodei said no.
On February 27, Hegseth announced on X that Anthropic had been designated a "Supply-Chain Risk to National Security." I had to read that twice. The label was previously reserved for foreign adversaries: Huawei, ZTE. It had never before been applied to an American company. Trump directed all federal agencies to "immediately cease" using Anthropic technology, with a six-month phase-out. The Treasury Department confirmed it was dropping Anthropic products.
Anthropic's response, posted the same day: "No amount of intimidation or punishment from the Department of War will change our position."
That's a sentence. I don't think most people appreciate how unusual it is for a company of that size to tell the U.S. government to go to hell in writing.
What followed moved fast. March 3: formal notification letters went out to federal agencies. March 9: Anthropic filed two federal lawsuits, one in the Northern District of California and one in the District of Columbia. March 26: Judge Rita Lin granted a preliminary injunction blocking the designation. April 2: the Trump administration appealed to the Ninth Circuit.
It's still going as I write this.
The Part That Makes It Worse
Here's where it gets complicated, and where I think most coverage has been too clean.
Between March 2 and March 4 — while the supply chain risk designation was being formalized — Bloomberg, NBC News, and the Washington Post all reported that Claude was already embedded in Palantir's Maven Smart System. It had been used by U.S. Central Command for intelligence assessments, target identification, and battle scenario simulation during the opening strikes against Iran. The U.S. struck over 1,000 targets in the first 24 hours. Anthropic's technology was in the loop.
Let me say that again, because I don't think the contradiction has been stated plainly enough: the Pentagon was actively using Claude in a war while simultaneously trying to punish Anthropic for not removing the safety constraints on Claude.
Anthropic's model was already doing what the military needed. Within the limits Anthropic had set. The Pentagon didn't blacklist them because Claude wasn't useful. It blacklisted them because Anthropic insisted on deciding where the limits were.
That distinction matters a lot.
Why I Care About This Technically
The question at the center of this whole mess sounds simple: can you separate a model's capabilities from its safety constraints and still call it the same product?
Anthropic says no, and I think they're right. The safety layer isn't a config file you can swap out. It's baked into the training itself: RLHF (reinforcement learning from human feedback), Constitutional AI, the whole alignment pipeline. If you strip that out, you don't get "military-grade Claude." You get a less predictable system with degraded alignment properties. I've seen what happens when you push these models past their guardrails in my own projects, on much lower-stakes work than target identification. The failure modes are weird and inconsistent. Exactly what you don't want in a system helping plan airstrikes.
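To make that concrete, here's a toy sketch of the two architectures people conflate. It is not how Claude or any real deployment works; every name in it is invented for illustration. The point is the structural difference between a guardrail that exists as a removable layer and one that is optimized into the same weights that carry the capabilities.

```python
from dataclasses import dataclass

# Toy stand-ins. None of this resembles a real model; it only
# illustrates where the refusal behavior lives in each design.

DISALLOWED = {"strike-package"}  # placeholder for content a policy blocks

def toy_generate(prompt: str) -> str:
    """Pretend base model: just echoes the request."""
    return f"plan for {prompt}"

# Design A: safety as a removable layer. A post-hoc filter wraps the
# model behind a config flag. Flip the flag and the constraint is gone,
# while the model underneath stays exactly as predictable as before.
@dataclass
class FilteredModel:
    safety_filter_enabled: bool = True

    def generate(self, prompt: str) -> str:
        output = toy_generate(prompt)
        if self.safety_filter_enabled and prompt in DISALLOWED:
            return "[refused by filter]"
        return output

# Design B: safety baked into the artifact. The refusal policy lives in
# the same lookup that produces every other answer, the way RLHF folds
# refusals into the same parameters that carry capabilities. There is
# no flag: changing the policy means changing the artifact itself.
@dataclass
class AlignedModel:
    weights: dict  # refusals and capabilities entangled together

    def generate(self, prompt: str) -> str:
        return self.weights.get(prompt, "[behavior undefined off-distribution]")

if __name__ == "__main__":
    # Design A: one config edit removes the guardrail, capability intact.
    print(FilteredModel(safety_filter_enabled=False).generate("strike-package"))

    # Design B: the refusal IS part of the model. Making it comply would
    # mean rewriting `weights`, i.e. retraining, with side effects on
    # everything else the artifact has learned.
    aligned = AlignedModel(weights={"strike-package": "[refusal trained in]"})
    print(aligned.generate("strike-package"))
```

In design A, turning the filter off is one config edit, and the capability underneath is untouched. In design B there is no switch: changing the refusal policy means rewriting the weights, which is a retraining project, and you accept whatever that does to everything else the model learned. That's the weird-and-inconsistent-failure-modes problem in miniature.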
But I don't actually think this is a technical dispute. The Pentagon knows what Claude can and can't do — they were using it in Iran. This is about who gets to decide what a model does. That's a power question, not an engineering question.
What This Actually Sets Up
The Defense Production Act was designed to get factories to make tanks. It's been used for vaccines, semiconductors, energy infrastructure. It has never been used to force an AI company to remove safety restrictions from a foundation model. Lawfare published a legal analysis on February 25 arguing the DPA wasn't designed for this and probably wouldn't hold up in court. The government apparently reached the same conclusion; it went with the supply chain designation instead.
Which is arguably worse. The DPA at least has a formal process. The supply chain risk label is essentially a scarlet letter — it kills your federal business without requiring the government to prove anything in court. Mayer Brown, the law firm, published a detailed breakdown of what the designation means for government contractors. The short version: it's devastating.
Here's what concerns me beyond the Anthropic case specifically. The precedent is: if you build a sufficiently important technology and the government wants it configured differently, it will find a mechanism — executive authority, procurement bans, public labeling — to make your life miserable until you comply or a judge steps in.
And some people did step in. On March 9, hours after Anthropic filed its lawsuits, 37 employees of Google DeepMind and OpenAI, all signing as individuals, filed an amicus brief in Anthropic's support. Fortune reported that the group included Jeff Dean, Google's Chief Scientist. These aren't Anthropic employees. They're competitors' employees, saying publicly that this precedent is dangerous for everyone.
Two days before that, on March 7, something happened at OpenAI that I think deserves more attention than it got. Caitlin Kalinowski, who led OpenAI's hardware team, resigned. Her public statement: "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
She wasn't protesting Anthropic's treatment. She was protesting what her own company was willing to do.
I've been following tech for a long time. I can't remember the last time an executive quit one major AI lab in protest while employees of that lab and a rival filed an amicus brief backing a third lab against the U.S. government. This isn't normal industry dynamics. This is the people who build these systems saying, together, that something has gone wrong.
The Bigger Frame
On March 31, the same day OpenAI closed its $122 billion round, Iran's Islamic Revolutionary Guard Corps (IRGC) named 18 U.S. tech companies as "legitimate targets": Apple, Google, Microsoft, Nvidia, Meta, Tesla, and Palantir among them. The IRGC explicitly called out AI companies as "the main element in designing and tracking assassination targets." The pressure on every AI lab to pick a side, full government compliance or principled restraint, is going to keep increasing.
Germany, meanwhile, is moving in the opposite direction. The cabinet approved the KI-MIG draft, Germany's implementation of the EU AI Act, on February 11, with the Bundesnetzagentur as the central market surveillance authority. The Bundestag held its first reading on March 20 and expert hearings on March 23, and the Bundesrat gave its opinion on April 2. The framework includes explicit restrictions on biometric surveillance and military AI. Two regulatory philosophies, diverging in real time, over the same technology. One government is trying to force a company to remove safety guardrails. Another is writing laws that mandate them.
Why This Isn't Abstract For Me
I use Anthropic's tools to build software. I chose them partly — not entirely, but partly — because of the safety commitments. If those commitments get stripped out by executive pressure, or if the Ninth Circuit overturns Judge Lin's injunction, I'm not just losing a feature. I'm losing the reason I picked this vendor.
That's not a political statement. It's an engineering decision. And it's one that a lot of developers who depend on these tools are going to have to make soon, whether they want to or not.
Sources
- Anthropic — Statement on the comments from Secretary of War Pete Hegseth
- NPR — Hegseth threatens to blacklist Anthropic over 'woke AI' concerns
- Lawfare — What the Defense Production Act Can and Can't Do to Anthropic
- NPR — Pentagon labels AI company Anthropic a supply chain risk
- CNBC — Anthropic officially told by DOD that it's a supply chain risk even as Claude used in Iran
- Mayer Brown — Anthropic Supply Chain Risk Designation Takes Effect
- CNBC — Anthropic was the Pentagon's choice for AI. Now it's banned and experts are worried
- Bloomberg — Iran Strikes: Anthropic Claude AI Helped US Attack. But How Exactly?
- NBC News — U.S. military is using AI to help plan Iran air attacks
- Responsible Statecraft — US used 'Claude' to strike over 1000 targets in first 24 hours of war
- TechCrunch — OpenAI hardware exec Caitlin Kalinowski quits in response to Pentagon deal
- Fortune — Google and OpenAI employees back Anthropic in legal fight over military AI
- TechCrunch — OpenAI and Google employees rush to Anthropic's defense in DOD lawsuit
- NPR — Anthropic sues the Trump administration over 'supply chain risk' label
- CNBC — Anthropic wins preliminary injunction in Trump DOD fight
- NPR — Judge temporarily blocks Trump administration's Anthropic ban
- Axios — Trump administration appeals Anthropic ruling
- TIME — Iran Threatens to Target U.S. Tech Firms
- CNBC — Iran threatens Nvidia, Apple and other tech giants with attacks
- Deutscher Bundestag — Umsetzung von EU-Vorgaben im Bereich der Künstlichen Intelligenz
- Deutscher Bundestag — Zuspruch für Bundesnetzagentur als KI-Marktüberwachungsbehörde
- Crunchbase — Anthropic Raises $30B At $380B Valuation
I build software for a living and write about tech on the side — because someone has to say what everyone else is thinking.