
Anthropic CEO Dario Amodei publicly refused Pentagon demands this week to strip the company’s AI safeguards, declaring that threats from the Defense Department would not change his position.
At the center of the dispute is a $200 million contract between Anthropic and the Pentagon. The Pentagon wants Anthropic to lift its restrictions so the military can use Claude for “all lawful use” without limits. Anthropic, for its part, refuses to drop two safeguards: prohibitions on AI-controlled weapons and on mass domestic surveillance of American citizens.
Pentagon Issues a Friday Deadline
On February 24, Defense Secretary Pete Hegseth gave Amodei a hard deadline: relent by 5:01 p.m. on Friday, February 27, and allow unrestricted use of Anthropic’s AI models “for all legal purposes.” Anthropic stood firm.
In response, Amodei published a statement revealing the full scope of Pentagon pressure. The Defense Department threatened to: remove Anthropic from its systems entirely, designate the company a “supply chain risk,” and invoke the Defense Production Act to force safeguard removal.
Amodei called out the contradiction directly. “These latter two threats are inherently contradictory. One labels us a security risk, the other labels Claude as essential to national security,” he wrote.
Weapons and Surveillance: The Limits Anthropic Won’t Cross
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote in the statement. He described domestic mass surveillance and fully autonomous weapons as uses that are “simply outside the bounds of what today’s technology can safely and reliably do.”
Those uses “have never been included in our contracts with the Department of War, and we believe they should not be included now,” he added.
Pentagon officials pushed back hard. Under Secretary of Defense Emil Michael called Amodei “a liar” with a “God-complex,” arguing the Defense Department “doesn’t do mass surveillance as that is already illegal.” Defense Secretary Hegseth reposted both of Michael’s messages.
The Fallout and What Comes Next
On Friday, February 28, President Trump directed federal agencies to stop using Anthropic’s products and Hegseth officially designated the company a supply chain risk.
Meanwhile, OpenAI took a completely different path, striking a deal with the Pentagon that allows its AI systems to be used for “all lawful purposes.” In an internal memo, Amodei blasted the decision as “safety theater,” accusing OpenAI of caring more about placating employees than about preventing abuses.
Despite the standoff, talks continue. Anthropic is reportedly back in negotiations with Emil Michael over a deal that would define how the Pentagon can use its AI models. If finalized, the U.S. military could continue using Anthropic’s tools, and the company would likely avoid the supply chain risk designation.
Ultimately, Amodei says Anthropic’s strong preference remains to keep serving the Defense Department, but only with its two safeguards intact. Whether the Pentagon accepts those terms remains to be seen.