Key Takeaways
- San Francisco federal judge grants preliminary injunction halting the Pentagon’s ban on federal use of Anthropic’s Claude AI system
- Judge Rita Lin characterized the ban as “classic illegal First Amendment retaliation” against the company
- Conflict originated when Anthropic declined to permit Claude’s use in lethal autonomous weapons or mass surveillance operations
- Enterprise AI market data from 2025 showed Anthropic commanding 32% market share, surpassing OpenAI’s 25%
- Seven-day pause on injunction allows government opportunity to file appeal
In a significant legal victory for Anthropic, a federal judge has issued a temporary halt to the Trump administration’s prohibition on federal agencies using the company’s artificial intelligence technology—a ban the AI firm said would cost it billions of dollars in lost contracts and revenue.
BREAKING: Anthropic has been GRANTED a preliminary injunction re: Pentagon ‘supply chain risk’ designation by Judge Rita Lin in California but is allowing a stay for one week https://t.co/1xk41AB5zQ
— Hadas Gold (@Hadas_Gold) March 26, 2026
US District Court Judge Rita Lin, presiding in the Northern District of California, delivered the preliminary injunction ruling on Thursday. The order includes a seven-day suspension period, providing the federal government with an opportunity to file an appeal before the injunction takes effect.
At the heart of this legal battle is a July 2025 agreement between Anthropic and the Department of Defense. Under the original terms, Claude would have become the inaugural frontier AI system authorized for deployment on classified military networks.
The partnership unraveled in February 2026 when Pentagon officials sought to renegotiate the terms. Defense officials insisted on broader permissions, requesting that Anthropic permit military deployment of Claude “for all lawful purposes” without limitations or usage restrictions.
Anthropic stood firm in its refusal. Company leadership maintained that their AI technology must not be deployed for lethal autonomous weapons systems or mass domestic surveillance programs targeting American citizens.
President Trump issued an executive directive on February 27, mandating that all federal agencies immediately cease using any Anthropic products. In a Truth Social post, the President declared the company had committed a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
Following the presidential order, the Defense Department formally classified Anthropic as a national security supply chain threat. In response, Anthropic initiated legal proceedings in Washington, DC federal court on March 9, contending that Defense Secretary Pete Hegseth exceeded his statutory authority.
Court Scrutinizes Government’s Justification
The case proceeded to a 90-minute hearing in San Francisco on March 24, where Judge Lin pressed government attorneys on whether punitive action was being taken against Anthropic for its public criticism of Pentagon policies.
In her written opinion, Judge Lin determined that the prohibition appeared disconnected from genuine national security considerations. “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude,” the judge stated.
The ruling further characterized the government’s actions as “designed to punish Anthropic” and condemned them as “arbitrary, capricious, and an abuse of discretion.”
Throughout the proceedings, legal counsel for Anthropic emphasized that the Pentagon maintains independent authority to review any AI system prior to operational deployment. They also noted that Anthropic has no technical capability to remotely disable a model, alter its functionality, or monitor how the military uses it.
Arguments Presented by Each Party
Government attorneys contended that Anthropic undermined the trust necessary for defense contracting by attempting to impose policy constraints on Pentagon operations. Counsel argued that officials harbored concerns regarding potential “future sabotage” risks posed by the company.
Judge Lin dismissed this rationale as insufficient. Her ruling stated that the Justice Department lacked any “legitimate basis” for concluding that Anthropic’s ethical stance on usage restrictions could transform it into a security threat or saboteur.
Market analysis from Menlo Ventures indicated that Anthropic captured 32% of the enterprise AI sector by 2025, outpacing OpenAI’s 25% market share. A comprehensive government prohibition threatened to severely undermine that competitive position.
In a statement following the ruling, Anthropic said it was “grateful to the court for moving swiftly.” The company is also pursuing a separate legal challenge in a Washington, DC appellate court, focused on alleged violations of federal procurement law.
The legal matter is designated as Anthropic v. US Department of War, case number 26-cv-01996, filed in the US District Court for the Northern District of California.
