Key Points
- On March 3, the Pentagon placed Anthropic on its blacklist after the company declined to remove safeguards that bar its AI from being used in autonomous weapon systems and domestic surveillance applications.
- In a March 17 court filing, the Trump administration argued the action was legal and suggested Anthropic’s First Amendment claims have limited merit.
- Federal officials characterized Anthropic as presenting an “unacceptable risk” to defense supply chains, with concerns about potential AI interference during critical operations.
- The AI company has launched two legal challenges — one in California federal court and another in a D.C. appellate court — contesting the blacklist designation.
- Microsoft, an Anthropic customer that also contracts with the military, submitted a supporting brief cautioning that the action threatens the wider AI industry.
A federal court battle has erupted between the United States government and Anthropic, the creator of the Claude AI platform, over a Pentagon action that threatens the company with substantial financial consequences.
The Trump administration vowed a legal fight to oust Anthropic from all US government agencies following a dispute over how the company’s AI technology would be used https://t.co/3mZ5mkDBOq
— Bloomberg (@business) March 18, 2026
Defense Secretary Pete Hegseth officially classified Anthropic as a national security supply chain threat on March 3. This designation followed unsuccessful negotiations lasting several months between Pentagon officials and the AI firm.
At the heart of the conflict lies Anthropic’s refusal to lift certain operational constraints on its artificial intelligence technology. The company would not permit its systems to be deployed in autonomous weaponry or domestic surveillance programs.
Pentagon officials deemed these limitations problematic. According to court documents, they contended that permitting Anthropic ongoing involvement with military infrastructure would create “unacceptable risk” within defense supply chains.
Government attorneys also raised concerns that Anthropic could “disable its technology or preemptively alter the behavior of its model” during active military operations if the company determined its ethical standards were being violated.
Administration Argues Actions Constitute Conduct, Not Expression
The Justice Department, representing the Trump administration’s position, contested Anthropic’s constitutional free speech arguments. Officials maintained the matter involved contractual obligations and national security considerations rather than protected expression.
The government’s submission characterized Anthropic’s unwillingness to remove safety limitations — described as “conduct, not protected speech” — as the catalyst for President Trump’s directive ordering all federal entities to terminate relationships with the company.
Anthropic initiated its primary legal action in California federal court on March 9. The company characterized the blacklist decision as “unprecedented and unlawful,” asserting violations of both constitutional free speech protections and due process guarantees.
A companion lawsuit, filed in a Washington, D.C. appeals court, contests an additional Pentagon classification under separate legislation — one that could expand the prohibition across the entire federal government.
Microsoft Files Brief Supporting Anthropic’s Position
Microsoft, which integrates Anthropic’s Claude platform while maintaining U.S. military contracts, submitted an amicus brief backing Anthropic’s legal challenge last week. The technology giant cautioned that the designation risks damaging the entire artificial intelligence industry.
“This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” Microsoft wrote.
Anthropic said it was reviewing the government’s latest court filing. Company representatives stated the litigation represents “a necessary step to protect our business, our customers, and our partners.”
Anthropic has also challenged assertions that its technology presents security hazards. Company officials argued that AI technology has not reached sufficient maturity for deployment in autonomous weapon systems and emphasized their principled opposition to domestic surveillance applications.
The White House did not respond to a request for comment.
Company leadership has cautioned that the designation could cost Anthropic billions of dollars over the course of 2026. Such blacklist classifications are generally reserved for entities from adversarial foreign nations, such as Chinese technology company Huawei.
