AI developer Anthropic has initiated legal proceedings against the Department of Defense (DOD), fulfilling its pledge to contest the agency's recent classification of the company as a "supply chain risk."
The maker of the Claude AI models filed its complaint against the department on Monday, escalating a protracted dispute over how much access the military should have to Anthropic's advanced AI systems. Anthropic has drawn two firm lines: its technology may not be used for mass surveillance of American citizens, and its systems are not yet mature enough to power fully autonomous weapons that make targeting and firing decisions without human oversight.
In response, Defense Secretary Pete Hegseth has publicly asserted the Pentagon's prerogative to use AI systems for "any lawful purpose." The "supply chain risk" designation, typically reserved for foreign adversaries, carries significant consequences: it compels any entity or agency working with the Pentagon to attest that it does not use Anthropic's models in its operations.
Anthropic, in its complaint filed in San Francisco federal court, characterized the DOD's actions as both "unprecedented and unlawful." The company further argued, "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
This situation remains fluid, and further updates will be provided as they become available.