Anthropic Officially Sues the Pentagon for Labeling the AI Company a ‘Supply Chain Risk’


Anthropic filed two lawsuits against the U.S. Department of Defense on Monday over its designation as a “supply chain risk to national security,” a label that would prohibit the AI company from obtaining U.S. government contracts and effectively blacklist it among defense contractors.
The Pentagon labeled the AI company a supply chain risk after it refused to agree to new terms that would allow the U.S. government to use its AI model Claude for mass domestic surveillance and the development of fully autonomous weapons.
“These actions are unprecedented and unlawful,” the Anthropic lawsuits read. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here.”
The lawsuits, which were filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit, according to the New York Times, list almost three dozen defendants, including entire government agencies that were using Claude as well as the heads of those agencies.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.”
Anthropic has previously explained that its restrictions on domestic surveillance and on the use of Claude for fully autonomous weapons stem largely from concerns about the model’s technical capabilities. The company made a similar argument in its lawsuits Monday, explaining that it has never tested Claude for these uses and that the guardrails are rooted in its understanding of the model’s risks and limitations, along with a belief in the U.S. Constitution.
“Anthropic has collaborated with the Department of War [sic] on modifications to its usage restrictions to facilitate the Department’s work with Claude, in recognition of the Department’s unique missions. But Anthropic has always maintained its commitment to those two specific restrictions, including in its work with the Department of War,” the company wrote in its lawsuits.
Anthropic’s lawsuits have been expected since President Donald Trump and Defense Secretary Pete Hegseth first threatened to either invoke the Defense Production Act to force Anthropic to do their bidding or label the company a supply chain risk, a designation never before applied to a U.S. company. Anthropic CEO Dario Amodei met with Hegseth on Feb. 24, but the Pentagon didn’t formally label Anthropic a supply chain risk until March 5. The lawsuits include screenshots of posts from Trump on Truth Social and Hegseth on X, as well as links to tweets from members of the cabinet, like Treasury Secretary Scott Bessent.
In its lawsuits, Anthropic argues that the Department of Defense has every right to guard against supply chain risks, but that it has a responsibility to do so in the least restrictive manner. Anthropic writes that DoD “had a straightforward and unrestrictive option that would have fully served that interest: terminate the contract and hire a different developer.” Instead, the Pentagon chose a punitive response designed to make the company toxic.
There are some thorny legal questions about what it means to be labeled a supply chain risk, including the question of whether it means that other private companies that do business with the federal government are prohibited from using Anthropic’s software in any capacity. Whatever the letter of the law, defense contractors like Lockheed Martin have been cutting ties with Anthropic anyway.
The Pentagon has drawn up new AI guidelines that would require companies to allow the military to engage in “any lawful use” of their models, according to the Financial Times. The definition of “lawful” is obviously very flexible for the Trump regime, given the fact that there’s effectively no one to hold them accountable when they break the law.
“The consequences of this case are enormous,” the lawsuits read. “The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance—AI safety and the limitations of its own AI models—in violation of the Constitution and laws of the United States.”
Several tech observers have argued that harming Anthropic harms U.S. competitiveness and gives China an edge in the race to build the most advanced AI systems, and Anthropic made a similar argument in its lawsuits on Monday.
“Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation,” the lawsuits read. “The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.”
The Pentagon didn’t immediately respond to a request for comment Monday morning. Gizmodo will update this article when we hear back.