The Person Building Robots Left Over a Weapons Deal

$200 million. That is the value of OpenAI's first classified deployment contract with the Pentagon. Before the ink dried, the person leading the company's robotics division packed up and left: Caitlin Kalinowski, the hardware veteran who designed MacBook Pros at Apple, built Quest VR headsets at Meta, and was shaping humanoid robots at OpenAI.
On March 7, 2026, she posted her resignation on X and LinkedIn simultaneously. The message was brief and deliberate. Someone who joined to build robots left when she saw where that technology was headed. She was the most senior internal departure over the Pentagon deal.
This was more than one resignation. It was the first visible crack in what happens when an AI company takes a military contract and the people building the technology have to decide whether to stay.
A Friday Night Signing
The story begins in the last week of February 2026. On February 23, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. The demand was straightforward: give the military unfettered access to Claude for "all lawful purposes." No restrictions on mass surveillance. No limits on autonomous weapons. Two days later, Hegseth gave Amodei an ultimatum -- comply by Friday or face consequences. The threat was either a "supply chain risk" designation or invoking the Defense Production Act to force compliance.
On February 26, Amodei refused. He told his team internally: "Do not change our position." He argued that accepting "any lawful use" language for mass surveillance and autonomous weapons was a line Anthropic would not cross. Hegseth called Anthropic "woke AI."
Then came February 27, a Friday. President Trump directed all federal agencies to stop using Anthropic's technology. The Department of Defense designated Anthropic a "supply-chain risk to national security" -- a label typically reserved for companies associated with foreign adversaries. Anthropic had previously held its own $200 million contract with the DoD, and Claude was the first major AI model deployed in the government's classified networks. That relationship ended overnight.
That same Friday, OpenAI signed a classified deployment contract with the Pentagon worth up to $200 million. It was OpenAI's first classified deal. The moment a competitor fell to government pressure, OpenAI stepped in to sign. Whether the timing was intentional or not, the message was clear: what Anthropic refused, OpenAI accepted.
What the Contract Says -- and Does Not Say
On March 1, OpenAI published a blog post titled "Our agreement with the Department of War," partially disclosing the contract terms. They presented three "red lines."
| Issue | OpenAI's Position |
|---|---|
| Mass surveillance of Americans | Prohibited |
| Autonomous weapons without human control | Prohibited where law/policy requires human oversight |
| High-stakes autonomous decisions without human approval | Prohibited |
Technical safeguards were also outlined: cloud-only deployment architecture (not edge devices), continued operation of OpenAI's safety stack, participation of cleared OpenAI employees in operations, and monitoring classifiers tracking model behavior.
It looked solid on the surface. Then the Electronic Frontier Foundation published an analysis titled "Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance."
The core critique: U.S. law already permits collection of location data, social media posts, and phone records. The phrase "all lawful purposes" does nothing to prevent layering AI on top of existing legal surveillance. Executive Order 12333 allows the NSA to collect Americans' data through overseas interception. The contract bans "independent control" but permits AI involvement in the kill chain -- targeting, tracking, and analysis. Terms like "human responsibility" and "proper oversight" lack clear definitions.
Most critically, the full contract text was never released. There is no way to verify any of these claims externally. And Pentagon policies can change at any time.
What Kalinowski Actually Said
Kalinowski's resignation statement appeared on X and LinkedIn simultaneously:
"I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together."

A follow-up post made one thing clear:
"To be clear, my issue is that the announcement was rushed without the guardrails defined. It's a governance concern first and foremost. These are too important for deals or announcements to be rushed."
Her background gives those words weight. Kalinowski holds a BS in Mechanical Engineering from Stanford. She spent roughly six years at Apple, playing a critical role in developing the MacBook Air and Mac Pro and serving on the original unibody MacBook Pro team. She holds multiple patents from her Apple tenure. She then spent approximately nine years at Meta (Facebook/Oculus) leading VR hardware -- Quest 2, Touch controllers, Rift, Go, and Rift S. Her final two and a half years at Meta were spent leading the Orion AR glasses project. She was named one of Business Insider's most powerful female engineers in 2018 and appeared on the Fast Company Queer 50 list in 2021 and 2022.
She joined OpenAI in November 2024 to lead robotics and consumer hardware. About 16 months later, she left on principle.
The Company Was Fuming Internally
Kalinowski's departure was the tip of the iceberg. CNN reported on March 4 that OpenAI employees were "fuming" about the Pentagon deal. One staffer told CNN that many colleagues "really respect" Anthropic's stance.
Research scientist Aidan McLaughlin publicly stated: "I personally don't think this deal was worth it." He simultaneously called the internal discussion "overwhelming" but said he felt "incredibly proud to work somewhere where people can speak their mind."
The backlash went beyond private conversations. More than 300 Google employees and over 60 OpenAI employees signed an open letter urging leadership to support Anthropic and refuse unilateral military AI use. A broader letter gathered signatures from nearly 900 employees at Google and OpenAI, demanding that leadership refuse government requests for AI-powered mass surveillance and autonomous lethal targeting.
Sam Altman acknowledged the problem. In an internal memo on March 2, he wrote:
"We shouldn't have rushed to get the agreement out on Friday."
On March 3, he publicly admitted the rollout looked "opportunistic and sloppy." He acknowledged that signing a contract the same hour Anthropic was branded a supply-chain risk did not "look great" -- especially since he had previously expressed public support for Anthropic's red lines on mass surveillance and autonomous weapons.
OpenAI's head of national security partnerships, Katrina Mulligan -- who previously led media response to the Snowden disclosures in 2013 during the Obama administration -- confirmed that "defense intelligence components are excluded from this contract." But she also said she would be open to future work with the NSA "if the right safeguards were in place."
Outside, People Were Deleting the App

Internal anger spilled into public action. The QuitGPT movement launched. Activists gathered outside OpenAI's Mission Bay headquarters in San Francisco. The movement claimed more than 1.5 million people took action -- canceling subscriptions, sharing boycott messages, and signing up through quitgpt.org.
The app metrics told the story in numbers:
| Metric | Change |
|---|---|
| ChatGPT app uninstalls | 295% increase day-over-day (vs. typical 9%) |
| 1-star reviews | 775% surge on Saturday, another 100% Sunday |
| 5-star reviews | 50% decline |
| Claude app downloads | 37-51% increase day-over-day |
Claude briefly became the No. 1 app in the U.S. App Store. The government punished the company that refused, and consumers punished the company that complied. A perfect inversion.
On March 2, OpenAI amended the contract under public pressure. They explicitly added that AI would "not be intentionally used for domestic surveillance of U.S. persons and nationals." The NSA and other intelligence agencies were explicitly excluded. OpenAI spokesperson Kayla Wood stated that the agreement "creates a workable path for responsible national security uses of AI while making clear our red lines." But critics asked: does an "amendment" to an already-signed classified contract carry binding force? Does "intentionally" create a loophole for unintentional use? Who audits what the AI actually does inside a classified network?
What Happens to the Robotics Team
Kalinowski's departure left a concrete gap. Over the past year, OpenAI built a robotics lab in San Francisco employing roughly 100 data collectors. The team was training a robotic arm to perform household chores, with the ultimate goal of building a humanoid robot. In December 2025, they told employees about plans for a second lab in Richmond, California.
Then the person leading all of it left. OpenAI confirmed Kalinowski's resignation and stated there are no plans to replace her in the role. The company announced the hiring of ex-Scale Labs CEO Benjamin Bolte, but eliminating the robotics lead position entirely is telling. Whether the robotics effort has shifted direction, a successor simply has not been found, or priorities have changed remains unclear.
| Aspect | Anthropic's Approach | OpenAI's Approach |
|---|---|---|
| Core language | Explicit contractual prohibitions | "All lawful purposes" |
| Surveillance | Banned entirely | Permitted under current law |
| Autonomous weapons | Banned (AI not reliable enough) | Permitted if law allows, "human responsibility" required |
| Philosophy | Imposes restrictions beyond legal requirements | Defers to existing law and policy |
The difference between the two companies reduces to one thing. Anthropic said "even if the law allows it, we prohibit it." OpenAI said "if the law allows it, so do we." Opposite answers to the same government demand.
Principles Are Thinner Than Contracts
OpenAI's shift on military use did not happen overnight. On January 10, 2024, the company quietly deleted language from its usage policy that expressly prohibited use for "weapons development" and "military and warfare." The new policy instead forbade using the technology to "harm yourself or others." The Intercept first reported the change. Two years later, that quiet policy edit became a $200 million military contract.
The trajectory shows a pattern:
- 2015: Founded as a nonprofit AI safety research lab. Mission: "AGI beneficial to all of humanity."
- 2019: Created a for-profit entity.
- January 2024: Deleted the military use prohibition.
- Late 2024: Began working with the DoD on cybersecurity tools.
- February 2026: Signed the military contract a competitor refused, on the same day.
What Kalinowski's resignation makes clear is this: when AI companies say "we have red lines," where those lines are drawn can shift with the contract amount. OpenAI's three red lines exist. But what happens in the wide gray zone beneath those lines lives behind a classified wall. Between unpublished contracts and policies that can change at any time, principles dry faster than the ink on the contract.
In 2018, collective action by thousands of Google employees forced the company to withdraw from Project Maven, a Pentagon program that used AI to analyze drone footage; Google subsequently published formal AI ethics principles. But 2026 is a different environment. The government now has a mechanism to designate refusing companies as supply-chain risks and eject them from the federal market. In 2018, Google could choose to give up the contract under employee and public pressure. In 2026, Anthropic held to its principles and was expelled from a multibillion-dollar government market overnight. The cost of refusal has fundamentally changed.
Kalinowski expressed "deep respect for Sam and the team" as she left. But respect was not enough to stay. When the distance between building robots and those robots running on military infrastructure narrowed to $200 million, she chose to walk out. When the next Kalinowski appears, will there be any option other than a resignation letter? Unless the structure changes, the same resignations will keep repeating.
Sources
- OpenAI hardware exec Caitlin Kalinowski quits over Pentagon deal - TechCrunch
- OpenAI robotics leader resigns - NPR
- OpenAI robotics leader resigns - Fortune
- Sam Altman admits deal "opportunistic and sloppy" - Fortune
- ChatGPT uninstalls surged 295% - TechCrunch
- EFF: Weasel Words - EFF
- OpenAI quietly deletes military ban - The Intercept
- Some OpenAI staff are fuming - CNN