Anthropic said Thursday that it will challenge in court the Department of Defense's decision to designate it a supply chain risk, a move the AI company's CEO, Dario Amodei, called “legally unsound.”
The announcement came shortly after the Department formally imposed the supply chain risk designation, capping a weeks-long dispute over how much military oversight AI systems should be subject to. Such a designation can effectively bar a company from contracting with the Pentagon and its contractors. Amodei had drawn clear lines, saying Anthropic’s AI should not be used for mass surveillance of U.S. citizens or for fully autonomous weapons, while the Pentagon insisted it needed unrestricted access for “all lawful purposes.”
In his statement, Amodei said the designation would leave most of Anthropic’s customers unaffected.
He elaborated, stating, “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
Previewing Anthropic's likely legal arguments, Amodei contended that the Department's letter designating the company a supply chain risk is narrow in scope.
“It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain,” Amodei explained. He further added, “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Amodei also reiterated that Anthropic had held productive discussions with the Department in recent days. Those talks are widely believed to have been derailed by the leak of an internal memo he circulated to staff, in which he described competitor OpenAI’s engagements with the Department of Defense as “safety theater.”
OpenAI has since reached an agreement to work with the Defense Department in Anthropic’s place, a development that has reportedly stirred significant dissent among OpenAI’s own employees.
In his Thursday statement, Amodei apologized for the memo’s leak, saying Anthropic neither disseminated it intentionally nor directed anyone else to do so. “It is not in our interest to escalate the situation,” he said.
Amodei explained that the memo was drafted within “a few hours” of a rapid succession of announcements, including a presidential post on Truth Social indicating Anthropic’s removal from federal systems, followed by Defense Secretary Hegseth’s supply chain risk designation, and ultimately the Pentagon’s partnership announcement with OpenAI. He apologized for the memo’s tone, describing that day as “a difficult day for the company” and clarifying that the memo did not represent his “careful or considered views.” He added that, having been written six days prior, it now stands as an “out-of-date assessment.”
He concluded by emphasizing Anthropic’s paramount commitment to ensuring that American military personnel and national security professionals retain access to vital tools amidst ongoing significant combat operations. Amodei confirmed that Anthropic is presently contributing to certain U.S. operations in Iran and pledged that the company would persist in offering its models to the Defense Department at a “nominal cost” for “as long as necessary to make that transition.”
While Anthropic possesses the option to challenge this designation in federal court, likely in Washington, D.C., the underlying legal framework presents significant hurdles. This framework restricts the conventional avenues through which companies can dispute government procurement decisions and grants the Pentagon extensive latitude on issues pertaining to national security.
As Dean Ball, a former White House AI advisor during the Trump administration and a critic of Hegseth’s handling of Anthropic, put it: “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue. … There’s a very high bar that one needs to clear in order to do that. But it’s not impossible.”
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing.