Anthropic Said No. The Pentagon Hit Back.

By 오늘의 바이브

A $200 Million Contract Became a War Zone

[Image: Aerial view of the Pentagon building]

In July 2025, Anthropic signed a contract worth up to $200 million with the Pentagon. Claude became the first AI model integrated into the Department of Defense's classified networks. The contract included Anthropic's Acceptable Use Policy with two non-negotiable conditions: Claude would not be used in autonomous weapons systems, and it would not be used for mass surveillance of American citizens.

The Pentagon signed it. Then things changed.

Defense Secretary Pete Hegseth sent a formal demand to Anthropic CEO Dario Amodei: remove all usage restrictions and allow Claude to be used "for all lawful purposes." Autonomous weapons are lawful. Mass surveillance is lawful. So there should be no restrictions -- that was the argument.

Amodei refused. "We cannot in good conscience accede to their request."

That single sentence started the biggest confrontation in the history of the American AI industry.

A Label Reserved for Foreign Enemies

On February 27, 2026, President Trump directed all federal agencies to stop using Anthropic's technology. The same day, Hegseth designated Anthropic a supply chain risk under 10 U.S.C. Section 3252.

To understand the weight of this label, consider its history. The supply chain risk designation has been reserved for companies considered extensions of foreign adversaries -- Huawei, ZTE, companies with alleged ties to the Chinese military. No American company had ever received this designation. Anthropic was the first.

The effect was immediate. Every defense contractor and vendor working with the Pentagon had to certify they were not using Anthropic's models. According to CBS News, an internal Pentagon memo ordered military commanders to remove Anthropic AI technology from key systems. Government agencies had six months to completely phase out Anthropic products.

This was not a contract cancellation. It was a root-and-branch purge of everything Anthropic from the defense supply chain.

The Numbers the CFO Brought to Court

[Image: A wooden gavel resting on a dark surface]

The financial damage emerged in court filings from Anthropic CFO Krishna Rao. The numbers were staggering.

  • Estimated 2026 revenue loss: multiple billions
  • Direct DoD contract loss: over $150 million
  • Public sector annual recurring revenue at risk: over $500 million
  • Disrupted financial sector deals: roughly $180 million
  • Fintech customer contract cut: from $15 million to $5 million
  • Enterprise customers expressing alarm: over 100

Rao testified that if the government's actions stand, between 50% and 100% of revenue from defense contractors could evaporate. From December 2025 to January 2026, Anthropic's public sector annual recurring revenue had grown fourfold. A business projected to reach billions over the next five years was suddenly at risk of disappearing altogether.

Before the dispute, Anthropic's valuation stood at $380 billion, with an annualized revenue run rate of $14 billion. Rao warned that if the government's actions are allowed to stand, the damage would be "almost impossible to reverse."

Anthropic Sued Back

On March 9, 2026, Anthropic filed simultaneous lawsuits against the Trump administration in California federal court and the D.C. Circuit Court of Appeals. The 78-page complaint made four core arguments.

First, the supply chain risk designation violated the Administrative Procedure Act. The government skipped standard procedures -- no comment period, no stakeholder hearings.

Second, it violated the First Amendment. Anthropic argued the designation was retaliation for publicly expressing its position on AI safety. An acceptable use policy is a form of corporate speech, and punishing a company for that speech is unconstitutional.

Third, the Defense Secretary exceeded his statutory authority. Issuing an immediate exclusion without considering alternatives was arbitrary.

Fourth, the entire action was "unprecedented and unlawful," causing "irreparable harm" to the company.

Anthropic simultaneously filed for a Temporary Restraining Order and a Preliminary Injunction to preserve the status quo during litigation. The judge moved the hearing from April 3 to March 24 -- a clear signal the court recognized the urgency.

Silicon Valley Picked a Side

[Image: Military technology and AI intersection]

When Anthropic filed its lawsuit, something unexpected happened. The entire tech industry -- including competitors -- started lining up behind the company.

Microsoft moved first. It filed an amicus brief supporting the Temporary Restraining Order. The largest investor in OpenAI -- Anthropic's direct competitor -- was siding with Anthropic in court.

Then over 30 employees from Google and OpenAI filed a joint amicus brief. Google chief scientist Jeff Dean, 19 OpenAI researchers, and 10 Google DeepMind researchers signed it. Their argument: the Pentagon's blacklist threatens the entire American AI industry.

Tech industry groups representing hundreds of Pentagon-contracted companies filed their own amicus brief demanding a pause on the designation. Amazon and Google publicly stated their customers could continue using Anthropic's technology for non-government purposes. The message was clear: they had no plans to change their Anthropic contracts.

The most interesting signal came from Sam Altman. The OpenAI CEO said "OpenAI shares Anthropic's red lines" -- this coming right after OpenAI had signed its own Pentagon deal. The industry read it as a carefully hedged bet.

Retired Generals and Former Judges Joined the Fight

The legal support extended beyond Silicon Valley. Two groups were particularly notable.

Twenty-two retired generals filed an amicus brief. Their argument was not about tech ethics. It was operational: abruptly swapping tools puts troops at risk. Pentagon insiders who had been using Claude in the field said it was not something easily replaced.

Nearly 150 retired federal and state judges filed another amicus brief. They had been appointed by both Republican and Democratic presidents. Their concern was the legal abuse of the supply chain risk label -- a tool designed for foreign adversaries was being turned on a domestic company.

A group of Catholic ethicists also filed a brief raising moral concerns about AI systems making autonomous kill decisions.

The coalition was remarkable. Tech CEOs, competitor researchers, retired military officers, former judges, religious ethicists -- all backing the same company at the same time. That had never happened before.

The Voice on the Other Side

Not everyone rallied behind Anthropic. The loudest dissent came from Palmer Luckey, founder of defense-AI firm Anduril Industries and creator of Oculus VR.

In an interview with Axios, Luckey said the Pentagon "could have been more forceful" against Anthropic. He suggested the company could have reached a deal by "talking about it in exactly the right way" and "not rubbing the wrong people the wrong way." The implication: Anthropic should have taken OpenAI's approach and been more collaborative.

The Department of Justice filed a brief urging the court to reject Anthropic's First Amendment argument. The blacklist, it argued, was "lawful and reasonable," and courts should defer to the executive branch on national security assessments.

The DoD went further. According to TechCrunch, it declared Anthropic's red lines an "unacceptable risk to national security." It also raised a new argument -- that the proportion of foreign nationals in Anthropic's workforce itself posed a security risk.

The Irony of Safety as a Business Model

The Pentagon's attack has clearly hurt Anthropic. But it has also produced a paradox.

Axios reported that Anthropic is pulling in more revenue and attention than it did before Trump and the Pentagon went after the company. Major partners and enterprise customers said they are not changing their contracts. The blacklist has, in effect, strengthened Anthropic's brand.

The logic is straightforward. For enterprise customers, an AI company that refused the government's demand to remove safety guardrails is not a liability -- it is a trust signal. In healthcare, finance, and education, the message "this AI keeps its guardrails even under government pressure" is a powerful differentiator.

The question is whether this holds. Anthropic is a $380 billion company whose own CFO has admitted in court that public sector revenue could "shrink substantially or disappear altogether." The direct DoD contract loss of over $150 million is virtually certain. When the price of principle reaches billions, the principle gets tested.

March 24 Is Not a Verdict. It Is a Choice.

The hearing at a San Francisco federal court on March 24 is not just a legal proceeding. Formally, the judge will decide one thing: whether to pause the blacklist while the case is litigated.

But the real question is outside the courtroom. The precedent this case sets is clear: the government can demand AI companies remove safety guardrails, and if they refuse, they get sanctioned like a foreign adversary. If that framework stands, every AI company faces the same forced choice. Keep your principles and lose the business, or drop them and keep the contracts.

That is why Anthropic's competitors are backing it. Microsoft, Google, and OpenAI researchers did not file those briefs out of goodwill. They filed them because they know they could be next.

The outcome of this fight will not just determine one company's fate. It will determine whether AI companies have the right to tell the government "no." March 24 is not a verdict. It is a choice.

