Mar 27

Judge Pauses Pentagon's Anthropic Ban

Originally reported by The Verge

A federal judge has ruled that "punishing Anthropic … is classic illegal First Amendment retaliation," marking a significant development in the AI company's dispute with the Pentagon.

After a weeks-long standoff, Anthropic reached a crucial legal milestone as a judge granted a preliminary injunction in its lawsuit. The injunction temporarily reverses the company's government blacklisting while the broader case proceeds.

In her order, set to take effect in seven days, Judge Rita F. Lin, a district judge in the Northern District of California, stated, "The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’" She further elaborated, "Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation."

A final verdict in the case is still anticipated to be weeks or months away.

Anthropic spokesperson Danielle Cohen issued a statement on Thursday, expressing, "We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

During the Tuesday hearing, Judge Lin acknowledged the core of the disagreement, stating, "I do think this case touches on an important debate." She outlined the two opposing views: "On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand the Department of War is saying that military commanders have to decide what is safe for its AI to do."

Judge Lin clarified her judicial role, asserting, "It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor." She emphasized that the central legal question was "whether the government violated the law when it went beyond that."

The controversy originated from a January 9 memo by Defense Secretary Pete Hegseth, which mandated the inclusion of "any lawful use" language in all AI services procurement contracts within 180 days, encompassing existing agreements with companies like Anthropic, OpenAI, xAI, and Google. Anthropic's subsequent weeks-long negotiations with the Pentagon stalled over two critical "red lines": the company's refusal to permit military use of its AI for domestic mass surveillance and lethal autonomous weapons (AI systems capable of killing targets without human decision-making). The events that followed included public social media disputes, a formal "supply chain risk" designation—a move with potentially severe business repercussions for Anthropic—competitors actively pursuing new deals, and the eventual lawsuit.

In its lawsuit, Anthropic contends that it was penalized for speech protected under the First Amendment and seeks to overturn the "supply chain risk" designation.

Notably, the designation of a U.S. company as a "supply chain risk" is exceedingly rare, typically reserved for non-U.S. entities with potential ties to foreign adversaries. Anthropic's classification ignited widespread concern and bipartisan controversy, raising fears that disagreement with a presidential administration could lead to disproportionate retaliation against businesses across any sector.

Anthropic's court filings indicate that the designation has significantly impacted its operations. The company reported "outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic," with "dozens of companies" seeking guidance on potential termination rights. Depending on the extent to which the government restricts its contractors from working with Anthropic, the company estimates potential revenue losses ranging from hundreds of millions to multiple billions of dollars.

During Tuesday’s hearing, both parties responded to Judge Lin’s pre-released questions, which delved into matters such as Defense Secretary Hegseth’s authority to issue certain directives and the rationale behind Anthropic’s "supply chain risk" designation. The judge also inquired about the conditions under which a government contractor might face termination for using Anthropic’s technology, posing a specific scenario: "if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?"

Judge Lin also appeared to admonish the Department of War regarding Hegseth’s X post, which, according to Anthropic's earlier court filings, caused widespread confusion by stating that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

"You’re standing here saying, ‘We said it but we didn’t really mean it,’" Judge Lin remarked during the hearing, pressing the Department of War on why Hegseth issued such a broad directive prohibiting contractors from working with Anthropic, rather than simply designating the company as a supply chain risk.

In a series of questions, Judge Lin asked if the Department of War intended to terminate contractors for their work with Anthropic if that work was separate from their engagements with the department. A representative for the Department of War responded, "That is my understanding."

Judge Lin further probed, "Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?" The Department of War representative confirmed, "For non-DoW work, that is my understanding." However, when the judge inquired whether a military contractor providing IT services to the Department of War, but not for national security systems, could face termination for using Anthropic, the representative did not provide a definitive answer.

During the hearing, Judge Lin referenced an amicus brief that used the term "attempted corporate murder." While she stated, "I don’t know if it’s ‘murder,’" she concluded, "but it looks like an attempt to cripple Anthropic."

An attorney for Anthropic asserted during the hearing that the company continues "to be irreparably injured by this directive," specifically referencing Hegseth’s nine-paragraph X post. Conversely, a recent court filing by the Department of War alleged that Anthropic could conceivably "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if it perceived the military was crossing its red lines, a theoretical scenario the Pentagon deemed an "unacceptable risk to national security." Judge Lin’s pre-released questions appeared to challenge this assertion, or at least to seek more information, asking, "What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?"

Editorial Staff, Editor