Microsoft has assured customers, including enterprises and startups that use Anthropic's Claude models through Microsoft products, that these AI capabilities will remain fully accessible. The company confirmed this to TechCrunch and other publications, addressing concerns about a potential service disruption.
This commitment positions Microsoft as the first major technology firm to guarantee the continued availability of Anthropic's models to its customers, even amidst an escalating dispute between the Trump Administration's Department of Defense (DoD) and Anthropic.
The DoD formally designated the American AI startup as a supply chain risk following Anthropic's refusal to grant unrestricted access to its technology. Anthropic cited safety concerns, stating its AI was not designed to support applications such as mass surveillance and fully autonomous weaponry.
Such a supply-chain risk designation is typically reserved for foreign adversaries. For Anthropic, this classification means the Pentagon is prohibited from using its products. Furthermore, any company or agency collaborating with the Pentagon must verify that they do not utilize Anthropic's models. Anthropic has publicly declared its intention to challenge this designation in court.
Microsoft's stance is significant because its extensive portfolio of products, from the Office suite to cloud services, is widely used across federal agencies, including the Department of Defense. A Microsoft spokesperson affirmed that the company would continue to integrate Anthropic's models into its own offerings and make them available to its global customer base.
“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry, and that we can continue to work with Anthropic on non-defense related projects,” a Microsoft spokesperson elaborated in an email, a comment initially reported by CNBC.
This assertion from Microsoft aligns closely with statements made by Anthropic CEO Dario Amodei, who has also pledged to contest the designation.
Amodei clarified, “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.” He further added, “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Meanwhile, Anthropic's refusal to accede to the department's demands has not slowed Claude's consumer adoption, which continues to grow.