President Donald Trump and Defense Secretary Pete Hegseth tried to force a private AI company into submission, and newly revealed evidence suggests their pressure campaign may have backfired in a way they didn’t expect.
What was framed publicly as a hard break over national security concerns is now being challenged in court, where Anthropic has laid out a very different version of events — one that shows just how close the two sides were to reaching a deal before everything blew up.

In sworn declarations filed Friday in federal court in California, Anthropic pushed back on the Pentagon’s claim that the company poses an “unacceptable risk to national security,” arguing that the government’s case is built on technical misunderstandings and concerns that were never raised during months of negotiations.
The filings also include internal communications from those negotiations, including an email from a top Pentagon official to Anthropic CEO Dario Amodei indicating the two sides were “very close” to agreement on the very issues the Trump administration later used to justify blacklisting the company from doing business with the U.S. government.
The email, sent just one day after the Pentagon formally designated Anthropic a “supply chain risk,” raises fresh questions about whether the administration’s aggressive public stance was less about national security and more about forcing total compliance.
The dispute traces back to late February, when Trump and Hegseth escalated a behind-the-scenes contract negotiation into a full-blown public confrontation after Anthropic refused to drop two key restrictions on how its AI technology could be used: mass domestic surveillance and fully autonomous weapons.
Anthropic, the maker of the Claude AI system, agreed to loosen most of its guardrails. But on those two issues, the company held firm.
“We cannot in good conscience accede to their request,” Amodei said at the time, warning that some uses of AI could “undermine, rather than defend, democratic values.”
That refusal set off a chain reaction.
In a series of posts, Trump blasted Anthropic as a “RADICAL LEFT, WOKE COMPANY” and ordered federal agencies to cut ties with its technology. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump ranted.
Hegseth followed with an even more aggressive move, declaring the company a national security risk and warning contractors to sever business relationships — a sweeping directive that legal experts have since questioned.
Behind the scenes, however, the tone appeared very different.
In a sworn declaration, Anthropic’s head of policy said that during negotiations over a $200 million contract to integrate its product into classified defense systems, the Pentagon never raised several of the concerns it later cited in court filings — including claims that the company might interfere with military operations.
Those arguments, she said, appeared for the first time only after the dispute had gone public, leaving Anthropic with no opportunity to address them.
Even more striking, the same Pentagon official who publicly denied that negotiations were ongoing had privately acknowledged that the two sides were nearly aligned.
Anthropic argues that contradiction goes to the heart of its lawsuit, which accuses the Trump administration of using government power to retaliate against a company for refusing to comply with demands that went beyond the scope of its contract.
The clash highlights two competing visions for how artificial intelligence should be used in warfare.
Anthropic — founded by former OpenAI researchers — has positioned itself as one of the most safety-focused AI companies in the industry, even while working closely with the U.S. government. Its Claude system had already been deployed in classified environments and used to support national security operations, including intelligence analysis tied to ongoing conflicts.
Those conflicts reportedly include the current war in Iran. Even as the administration was moving to sever ties, Anthropic’s technology was reportedly being used to help analyze intelligence and support military planning, a contradiction that has intensified scrutiny of how the dispute unfolded.
Experts say the case could have far-reaching implications.
If the government is allowed to punish companies for refusing to adopt its preferred uses of technology, it could send a chilling message across the AI industry, particularly for firms weighing whether to partner with the federal government at all.
For now, the dispute is headed to federal court, where a judge will weigh whether the administration’s actions were justified or an overreach of executive power.
But the newly revealed emails have already complicated the narrative.
What Trump and Hegseth portrayed as a decisive break over an uncooperative company now looks more like a negotiation that was close to resolution until the pressure campaign escalated and turned a contract dispute into a public showdown.
And in the process, what was meant to force compliance may have instead strengthened Anthropic’s case that the administration went too far.
