First American Company Ever Labeled a Supply Chain Risk

On March 6, 2026, the Department of Defense sent Anthropic a letter. The message boiled down to one line: "You are designated a supply chain risk." This label has historically been reserved for foreign adversaries -- companies from China, Russia, and other hostile nations. No American company had ever received it. Until now.
The designation takes effect immediately. Every defense contractor and subcontractor doing business with the Pentagon must certify that they do not use Anthropic's Claude models in their work. Failure to certify means losing their defense contracts. Defense Secretary Pete Hegseth went further, declaring that any contractor or supplier doing business with the U.S. military must cease all commercial activity with Anthropic.
Context matters here. The supply chain risk designation is a last-resort measure for foreign entities that directly threaten U.S. national security. Huawei and ZTE got this label. Now a San Francisco-based AI startup has it too.
From Partnership to Blacklist: The $200 Million Story
Anthropic and the Pentagon were allies not long ago. In November 2024, Anthropic partnered with national security firm Palantir and AWS to deliver Claude to U.S. intelligence and defense agencies. Palantir became the first industry partner to bring Claude models into classified environments. Claude operated at the "Secret" cloud security level and was, by public accounts, the first large language model ever deployed inside classified systems.
In July 2025, the deal grew bigger. The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) awarded Anthropic a two-year prototype contract with a $200 million ceiling. Claude was integrated into mission workflows on classified networks through Palantir's platform, processing and analyzing vast amounts of complex data.
Piper Sandler analysts noted that Anthropic was "heavily embedded in the military and the intelligence community." Pulling out would cause "short-term disruptions to operations." According to Defense One, replacing Anthropic's AI tools would take the Pentagon months.
February 24: Hegseth's Ultimatum

The story starts in February. On February 24, 2026, Defense Secretary Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. The meeting room was stacked. Deputy Secretary Steve Feinberg, Under Secretary for Research and Engineering Emil Michael, Under Secretary for Acquisition and Sustainment Michael Duffey, chief spokesperson Sean Parnell, and general counsel Earl Matthews all attended. The lineup alone signaled how seriously the Pentagon was taking this.
Hegseth's demand was simple: let the DOD use Claude for all lawful purposes without restriction. Amodei's answer was equally clear. Two red lines were non-negotiable. One, Claude would not power fully autonomous weapons -- weapons where AI makes kill decisions without human oversight. Two, Claude would not be used for mass surveillance of American citizens.
Hegseth gave Amodei until 5:01 PM on Friday, February 27 to comply. The alternatives: cut ties and declare Anthropic a supply chain risk, or invoke the Defense Production Act (DPA) to compel Anthropic to hand over its model on the military's terms.
The DPA is a 1950 law signed by President Truman during the Korean War. It was originally written for steel mills and tank factories, and it gives the president authority to direct domestic industry in the service of national defense. The Biden administration had used the DPA's Title VII survey authority to require AI companies to report training activities. But Hegseth was threatening Title I -- the core compulsion power. Experts called this use of the DPA "without precedent": the law "has never been used to compel a company to produce a product that it's deemed unsafe, or to dictate its terms of service."
The Pentagon's Argument
The Pentagon's logic goes like this: a private company cannot dictate how the military uses technology it has legally procured. A DOD official stated that "the military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability."
There is a point to this. In defense contracting, vendors almost never attach use conditions to what they sell. Imagine Lockheed Martin selling F-35s with a clause that says "don't use this jet for certain types of missions." The military has operated for decades on the principle that once it buys a capability, it can deploy it for any lawful purpose.
But Anthropic's counterargument is not trivial either. AI is not a fighter jet. A jet has a human pilot in the cockpit. An autonomous weapon has AI making the kill decision. Anthropic's red line was that humans must retain final decision-making authority. That is one of the most fundamental principles in AI safety discourse. Anthropic never objected to Claude being used in defense work. The objection was specifically about two use cases: autonomous weapons and mass surveillance.
Pentagon officials dispute that this fight is about lethal weapons or surveillance at all. Their position is that private companies cannot dictate government technology usage, period.
Rejection and Chain Reaction
On February 27, Amodei rejected the ultimatum. In a public statement, he said the Pentagon's threats "do not change our position." That same day, President Trump directed all federal agencies to cease using Anthropic products within six months. Hegseth promptly designated Anthropic a supply chain risk.
The fallout was immediate. Defense contractors started dropping Claude. At one venture firm alone, ten portfolio companies that work with the DOD pulled Claude from their defense use cases. Major contractors like Lockheed Martin were expected to strip Anthropic technology from their supply chains.
The financial damage was severe. Anthropic's CFO said the actions could reduce the company's 2026 revenue by "multiple billions of dollars." Court filings cited "hundreds of millions of dollars" in contracts at immediate risk. The $200 million Pentagon contract was gone, and every downstream deal built on that contract was in jeopardy.
Meanwhile, competitors filled the vacuum. While Anthropic was being blacklisted, Elon Musk's xAI and OpenAI received clearance for classified system access.
Big Tech's Balancing Act

Within days of the designation, Microsoft, Amazon, and Google issued near-identical statements. The message: "Claude remains available to all customers except for defense work."
Amazon is Anthropic's largest financial backer, having invested $8 billion since 2023. An AWS spokesperson confirmed that customers "can continue to use Claude for all workloads not associated with the Department of War," while "supporting customers and partners as they transition to alternatives" for defense workloads.
Microsoft went further. On March 10, it filed an amicus brief with the court -- a legal filing submitted by non-parties who have relevant expertise or will be affected by the outcome. Microsoft urged a temporary restraining order to pause the Pentagon's supply chain risk designation. The reason was direct: Microsoft integrates Anthropic's products into technology it provides to the U.S. military. It is a direct casualty of the designation. Microsoft argued that a pause would allow Anthropic and the DOD to pursue "a negotiated resolution that will better serve all involved."
| Company | Relationship | Response |
|---|---|---|
| Amazon | $8 billion invested in Anthropic | Claude stays for non-defense, transitioning DOD work |
| Microsoft | Claude integrated into military tech | Filed amicus brief requesting temporary restraining order |
| Google | Claude available via API | Claude stays for non-defense customers |
The Lawsuit: "Unprecedented and Unlawful"
On March 9, Anthropic filed two federal lawsuits: one in the U.S. District Court for the Northern District of California, the other in the federal appeals court in Washington, D.C. The core allegation: Pentagon officials illegally retaliated against Anthropic for its position on AI safety.
The language in the filings is blunt. "These actions are unprecedented and unlawful." "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." "No federal statute authorizes the actions taken here." Trump officials are "seeking to destroy the economic value created by one of the world's fastest-growing private companies."
Anthropic's legal strategy frames this as a First Amendment case, not a contract dispute. The government punished a company economically for publicly expressing a viewpoint on AI safety. That, Anthropic argues, violates constitutionally protected free speech. The suit alleges the administration "retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance -- AI safety."
The differential treatment of competitors strengthens the argument. While Anthropic was blacklisted, xAI and OpenAI received classified system access. Same market. Same type of service. Opposite treatment.
From Partnership to Blacklist: The Timeline
| Date | Event |
|---|---|
| November 2024 | Anthropic-Palantir-AWS partnership; Claude deployed on classified networks |
| July 2025 | $200 million Pentagon contract awarded |
| February 24, 2026 | Hegseth-Amodei meeting at Pentagon; ultimatum issued |
| February 27, 2026 | Amodei rejects ultimatum; Trump orders agencies to stop using Claude |
| March 6, 2026 | DOD officially designates Anthropic a supply chain risk |
| March 9, 2026 | Anthropic files two federal lawsuits |
| March 10, 2026 | Microsoft files amicus brief requesting temporary restraining order |
Sixteen months from partnership to blacklist. That is how fast a government relationship can invert in the AI industry.
This Is the First AI Constitutional Battle
This fight is fundamentally about where the line sits between an AI company and the government. Who decides how AI gets used? The company that built it, or the government that bought it?
The Pentagon's position is clear. Private companies cannot hold veto power over technology needed for national security. When federal investment has poured into the AI industry at scale, a vendor saying "you can't use our technology this way" is not acceptable. The military cannot allow a vendor's terms of service to constrain operations in a warzone.
Anthropic's position is equally clear. If the company that built an AI model cannot set ethical limits on its use, the concept of AI safety is meaningless. If governments can demand unrestricted access to any model for any lawful purpose, no AI company can credibly claim to practice responsible development. And punishing a company with a designation reserved for foreign adversaries -- simply for refusing a demand -- is retaliation, plain and simple.
The Wall Street Journal reported that Claude was already being used operationally -- in raid planning for the arrest of Venezuelan leader Nicolas Maduro, and in intelligence assessments identifying targets in the U.S.-Iran conflict. Independent confirmation of these reports is unavailable, but if accurate, they show how deeply Claude had already penetrated military operations. Declaring a blacklist is easy. Actually ripping Claude out of classified networks takes months.
This dispute ultimately decides one question: who draws the line on "lawful use" in the age of AI? Whether that answer comes from a courtroom or a negotiating table remains unknown. But one thing is certain. If Anthropic wins, AI companies gain precedent to impose ethical conditions on government use. If Anthropic loses, "AI safety" becomes a marketing term. This is not one company's lawsuit. It is the first constitutional case of an era in which AI becomes a weapon.
Sources
- Defense tech companies are dropping Claude after Pentagon's Anthropic blacklist -- CNBC
- Anthropic sues Trump administration over Pentagon blacklist -- CNBC
- Anthropic sues the Trump administration over 'supply chain risk' label -- NPR
- Anthropic CEO Amodei says Pentagon's threats 'do not change our position' on AI -- CNBC
- Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards -- Axios
- Microsoft backs Anthropic in Pentagon blacklist battle -- CNBC
- Amazon says Anthropic's Claude still OK for AWS customers to use outside defense work -- CNBC
- It would take the Pentagon months to replace Anthropic's AI tools -- Defense One
- What the Defense Production Act Can and Can't Do to Anthropic -- Lawfare