
The Pentagon Started Treating Anthropic Like an Enemy State

By 오늘의 바이브

A Nation Threatening to Label Its $200M Contract Partner as "Enemy State"

Pentagon building viewed from above

On February 24, 2026, Defense Secretary Pete Hegseth sent Anthropic CEO Dario Amodei an ultimatum. Remove Claude's military use restrictions by 5:01 PM Friday. Refuse, and the Pentagon will invoke the Defense Production Act and designate Anthropic as a supply chain risk.

Supply chain risk designation is a classification applied to Russian and Chinese firms; U.S. defense contractors would be prohibited from using designated companies' products in military operations. The U.S. government is threatening to put an American AI company on the same shelf as Russia and China. It's an unprecedented situation: a $200 million defense contract partner being treated like an enemy state.


The Trigger: Claude Was Used in the Maduro Operation

Digital surveillance network visualized on a globe

In early February 2026, U.S. special forces conducted an operation to apprehend Venezuelan President Nicolas Maduro. Reports emerged that Anthropic's Claude AI was used in this operation. Claude, deployed on the Pentagon's classified network through a partnership with Palantir, had been put to use in an actual military operation.

The problem was the firefight that broke out during the operation: gunfire was exchanged during the arrest, resulting in casualties. Anthropic's usage policy prohibits using its AI for violence, weapons development, or surveillance. An Anthropic executive reportedly contacted Palantir to confirm "whether Claude was used in that operation."

This inquiry became the spark. A senior administration official told Axios: "A senior Anthropic executive inquired in a manner expressing displeasure about their software being used in the operation. The implication was questioning the use of their software because the operation involved a firefight and people were shot." Anthropic's spokesperson denied this entirely: "We cannot comment, classified or otherwise, on whether Claude was used in a specific operation." But the Pentagon's fuse was already lit. Within the administration, the perception hardened that Anthropic was a company objecting to its own nation's military operations.


What the Pentagon Wants: "Allow Everything Lawful"

The core of the conflict is simple. The Pentagon wants to use Anthropic's AI without restriction for "all lawful use." Anthropic says it cannot compromise on two red lines.

One is mass surveillance of U.S. citizens: AI analyzing millions of people's communications, locations, and behavioral data to extract patterns. The other is fully autonomous weapons: systems where AI selects targets and makes firing decisions without human intervention.

Amodei categorized these two uses as "illegitimate and prone to abuse." His logic: AI reliability hasn't reached the level needed to operate weapons autonomously, and laws governing mass surveillance don't exist yet. He's willing to support existing military uses like battlefield intelligence analysis, logistics optimization, and translation assistance. It's selective refusal, not total rejection.

Under Secretary of Defense for Research and Engineering Emil Michael takes a different stance. While acknowledging negotiations have "hit a snag," he maintains the Pentagon cannot accept a structure where AI companies set the terms of use. Wielding weapons is the government's authority, not an AI company's domain to judge. The Pentagon's logic: determining the scope of lawful military operations is up to Congress and the administration, not a Silicon Valley CEO.


Comparing Military Contracts of Four AI Companies

In July 2025, the Pentagon signed defense contracts worth up to $200 million each with four companies: Anthropic, OpenAI, Google, and xAI. But the four companies' responses since signing have diverged sharply.

| Company | Contract Size | "Lawful Use" Acceptance | Classified Network Access | Current Status |
| --- | --- | --- | --- | --- |
| Anthropic | Up to $200M | Refused (2 red lines) | First approved | Under ultimatum |
| xAI | Up to $200M | Full acceptance | Recently approved | Classified deployment underway |
| OpenAI | Up to $200M | Flexible in unclassified | Under negotiation | Conditional acceptance |
| Google | Up to $200M | Flexible in unclassified | Under negotiation | Conditional acceptance |

There's an irony here. Anthropic was the first of the four companies approved for classified network access, because the Pentagon rated its model as "the most advanced and safe." The most deeply trusted partner is now being pushed the hardest.

xAI accepted "all lawful use" without restriction. OpenAI and Google respond flexibly in unclassified environments while still negotiating classified access conditions. Only Anthropic is drawing clear red lines and holding firm.


Defense Production Act: A 75-Year-Old Weapon Created by the Cold War

U.S. Capitol building in Washington D.C.

Both cards Hegseth pulled out are heavy.

The first is invoking the Defense Production Act (DPA). Enacted shortly after the Korean War began in 1950, this law gives the president authority to control civilian company production and resources for national security. It was modeled on the War Powers Acts Roosevelt used during World War II. The Trump administration invoked this law during the COVID pandemic to force mask and ventilator production.

Applying the DPA to an AI company means the military could use Claude whether Anthropic wants it to or not; in Hegseth's phrasing, "if they want to or not." A law created during the Cold War to force tank and ammunition production is being used in 2026 to force the use of an AI model.

The second card is supply chain risk designation. This designation would prevent all U.S. defense contractors from using Anthropic products in military operations. It's a classification normally applied to companies like Russia's Kaspersky or China's Huawei. Putting this label on a U.S. AI startup would send a signal to the entire enterprise market.

The ripple effects are significant. Not just defense industry giants like Lockheed Martin, Raytheon, and Northrop Grumman, but major cloud companies like Amazon (AWS GovCloud), Microsoft (Azure Government), and Google (Google Public Sector) also hold defense contracts. If they can't use Anthropic products for military-related work, the impact extends to the civilian sector. Anthropic's entire B2B business could be shaken.


The Identity of the "Woke AI" Frame

Senior Trump administration officials have started calling Anthropic's safety policies "Woke AI." According to NPR reporting, Hegseth's side claims Anthropic's model has a liberal bias and excessive safety mechanisms. It's a frame about ideology, not safety.

This frame converts a technical debate into a political one. Anthropic's opposition to autonomous weapons is a technical judgment about AI reliability. Its opposition to mass surveillance is a concern about the absence of a legal foundation. But the moment the "Woke" label attaches, all of this technical and legal discussion gets reduced to left-right conflict.

White House AI chief David Sacks reportedly drafted an executive order demanding the removal of AI companies' own guardrails. Hegseth likewise demanded, in a January 2026 AI strategy document, that company-specific guardrails be eliminated within 180 days. This isn't just about Anthropic. It's part of a broader government push to dismantle AI safety standards themselves.

The problem is that this frame doesn't match reality. Anthropic didn't refuse the defense contract. On the contrary, it was the first of the four companies to get its model onto the classified network. It's not refusing military use itself, just drawing lines around two specific uses. Classifying this as "Woke" is a political strategy to simplify the debate, not an analysis that reflects reality.


Why Anthropic Won't Back Down

Circuit board symbolizing cyber security technology infrastructure

A source familiar with Anthropic's internal situation told Axios that "Anthropic has no plans to back down" from the Pentagon's demands. Amodei reaffirmed his red lines even in his meeting with Hegseth.

Several factors underpin this position.

First, Anthropic's identity. Anthropic was founded in 2021 as a split from OpenAI, and the founding motivation itself was a disagreement over AI safety. The Amodei siblings (Dario and Daniela) judged that OpenAI prioritized commercialization over safety and left to start their own company. The moment they allow autonomous weapons and mass surveillance, the company's reason for existence disappears. It's a matter of brand value and corporate philosophy.

Second, technical judgment. Amodei believes current AI reliability is insufficient to operate autonomous weapons. When errors occur in systems where AI identifies targets and makes firing decisions, the consequences are on a different dimension from code bugs: civilian casualties. This is a technical issue, not an ideological one.

Third, the legal void. The U.S. doesn't yet have federal law governing mass surveillance using AI. There's potential conflict with the Fourth Amendment (prohibition of unreasonable searches and seizures), but how it applies in the AI era hasn't been established through case law. From Anthropic's perspective, handing the government unlimited surveillance tools before laws are established is irresponsible.

Fourth, market judgment. Anthropic recently closed a $30 billion funding round at a $380 billion valuation, and annual revenue is around $10 billion. The $200 million defense contract is just 2% of total revenue. That's not an amount worth abandoning corporate philosophy for. Meanwhile, maintaining the "safe AI" brand yields much greater value in the enterprise and developer markets.

However, supply chain risk designation could deliver a much bigger blow than the $200 million contract. If companies holding defense contracts can't use Anthropic products, many enterprise customers relying on the Claude API could defect. That's not a $200 million risk but one in the tens of billions. Anthropic can hold firm partly because it may judge the probability of this threat actually being executed to be low.


The Dilemma for Other AI Companies

The Anthropic-Pentagon collision is throwing other AI companies into an uncomfortable question. Axios reported this dispute "has put other AI labs in a major dilemma."

xAI has already accepted in full. True to form for an Elon Musk company, it took the government's demands as-is. But OpenAI and Google hold more nuanced positions. They respond flexibly in unclassified environments while avoiding clear declarations on classified access and autonomous weapons.

If Anthropic is pushed out, pressure on the remaining three companies intensifies. If the Pentagon's logic of "Anthropic wouldn't, but you can" prevails, every attempt to maintain safety standards weakens. Conversely, if Anthropic holds, other companies gain justification to say "we have red lines too."

Google faces a particularly complex situation. It went through the 2018 Project Maven incident: after participating in a project providing AI for Pentagon drone video analysis, it faced a protest petition signed by 4,000 employees and the resignations of more than a dozen, then withdrew from the contract. Google subsequently announced the principle that it would not use AI in weapons. But it quietly modified this principle after 2025 and re-entered the defense market. If Anthropic is pushed out, Google's past promises could be dragged back into the spotlight.

OpenAI is in the same position. ChatGPT's usage policy once explicitly prohibited "weapons development," but the clause was softened in 2024 for defense contracts. Sam Altman justified the pivot by saying that using AI for national security is protecting democracy. If Anthropic's resistance collapses, OpenAI's past concessions get retroactively justified. Conversely, if Anthropic holds, those concessions become targets of criticism.

Ultimately, the real stakes of this dispute go beyond Anthropic. It's about where the military-ethics baseline for the entire AI industry gets set. If the company drawing the hardest line collapses, that line ceases to exist. In the AI safety camp, Anthropic is playing the role of last bastion.


Not an Enemy State, but a Mirror

The threat to designate Anthropic as a supply chain risk company is technically possible but logically contradictory. Anthropic is a U.S. company founded in the U.S. The Amodei siblings, U.S. citizens, operate it in the U.S. with U.S. investor money. Chris Liddell, Trump's former White House deputy chief of staff, sits on the board. If you classify this company in the same category as Russian or Chinese firms, the concept of "supply chain risk" itself becomes meaningless.

DPA invocation also lacks clear precedent. The law was designed to compel production of physical goods: masks, ventilators, ammunition. There's no case of using it to dictate the terms on which an AI model is used. The legal debate starts with whether the government forcibly changing a software company's terms of service falls within the DPA's scope at all. George Washington University Law School experts analyzed that this application would be an "unprecedented expansion." Legal challenges will likely follow, and litigation could continue for years.

What this situation shows is that ethical standards for AI militarization haven't yet been socially agreed upon. The government's view: anything lawful is fair game. The company's view: some things shouldn't be done even if lawful. The U.S. hasn't yet produced an answer for how to bridge that gap.

This scene of the Pentagon treating Anthropic like an enemy state is actually a question America is asking itself. Do the people who made AI have the right to draw lines on AI's use? Or is that line meaningless before national security?

What's interesting is that the Pentagon rated Anthropic's model as the best. That's why Anthropic was the first of the four AI companies approved for classified network access. The company most concerned with safety made the best model. Whether this correlation is coincidence or necessity is debatable, but there's no guarantee anywhere that dismantling safety standards produces better AI.

Once Friday's deadline passes, we'll have the answer to at least this round.


Sources: