Feb 24

Anthropic's Pentagon Talks: AI's Existential Future

Originally reported by The Verge

A high-stakes confrontation is unfolding between a leading artificial intelligence firm and the Department of Defense, a dispute involving far more than just a lucrative $200 million military contract.

The weekslong standoff between AI startup Anthropic and the Pentagon has played out publicly across social media, through stern public statements, and via direct quotes from anonymous Defense officials to the press. At the core of the $380 billion company's future are three pivotal words: “any lawful use.” This updated contractual language, reportedly already accepted by OpenAI and xAI, would grant the U.S. military unrestricted authority to deploy AI services for broad mass surveillance and the operation of lethal autonomous weapons—systems capable of tracking and eliminating targets without human involvement in decision-making.

Negotiations have intensified, with Pentagon CTO Emil Michael, a former senior executive at Uber, leading the government’s efforts by threatening to classify Anthropic as a “supply chain risk,” according to two sources familiar with the discussions. This designation is typically reserved for grave national security threats, such as foreign espionage or cyber warfare. Anthropic CEO Dario Amodei is reportedly scheduled to meet with Secretary Pete Hegseth at the Pentagon on Tuesday, a meeting an unnamed Defense official colorfully characterized as a “shit-or-get-off-the-pot meeting.”

The Pentagon’s decision to issue such a threat against an American company is unprecedented. Even more remarkably, the department chose to make this threat publicly known.

Ordinarily, for security reasons, the Pentagon refrains from publicly disclosing companies on these risk lists, let alone openly threatening them for differing views. Geoffrey Gertz, a senior fellow at the Center for a New American Security (CNAS), informed The Verge that existing federal regulations would have allowed the Pentagon to classify Anthropic as a risk without any public announcement or explanation. Gertz noted, “It’s the extra step of trying to specifically label them a national security risk, and keep other companies from doing business with Anthropic, that goes above and beyond here.”

The crux of the conflict centers on Anthropic’s steadfast enforcement of its “acceptable use policy.”

Should the classification become official, it would terminate Anthropic’s $200 million contract with the Pentagon. But the repercussions would be far more severe, creating a devastating ripple effect on Anthropic’s overall financial health. Major defense contractors and technology giants, including AWS, Palantir, and Anduril, rely on Anthropic’s Claude for their Pentagon-related work, primarily because it was the first AI model authorized to process classified information. In plain terms: if Anthropic is labeled a “supply chain risk,” any company currently holding a military contract, or aspiring to secure one, would be compelled to abandon Anthropic’s AI systems, widely regarded as among the industry’s best. (Significantly, the evening before Amodei’s scheduled meeting with Hegseth, the Pentagon confirmed it had inked a deal to integrate Grok, the controversial AI model from Elon Musk’s xAI, into classified systems. The Pentagon offered no immediate comment when asked about this timing.)

The implementation of such a ban could range from highly specific to extremely broad. Gertz suggested, “I suspect the more logical explanation would be the narrower definition, that Anthropic can’t be used as part of a specific statement of work for the Pentagon.” However, he added, “But based on some of the reporting and effort to make this seem like a punitive move against Anthropic, it’s worth thinking through both of those scenarios.”

Despite the Pentagon and its media allies launching a campaign to brand Anthropic as “woke,” they have yet to present any concrete accusations of security vulnerabilities or potential espionage. Instead, sources familiar with internal discussions indicate that the clash revolves entirely around Anthropic’s adherence to its “acceptable use policy.”

An anonymous source, citing the sensitivity of the negotiations, informed The Verge that Anthropic has clearly communicated its non-negotiable “red lines” to the government. These specifically include autonomous kinetic operations and mass domestic surveillance. Regarding surveillance, the source explained that “laws haven’t caught up to what AI can do,” potentially infringing on American civil liberties. For lethal autonomous weapons, the source stated that the technology “isn’t there yet for fully autonomous weapons with no humans in loop.”

Hamza Chaudhry, the AI and national security lead at the Future of Life Institute, a nonpartisan research group focused on AI governance, observed that Anthropic’s self-imposed limitations are consistent with existing, unrepealed government directives.

Chaudhry emphasized via text to The Verge, “DoD Directive 3000.09 requires that all autonomous weapon systems be designed so that commanders and operators be able to ‘exercise appropriate levels of human judgment over the use of force’ and the Political Declaration on Military Use of AI launched by the US Government and endorsed by 50 states enshrines this principle.” He continued, “And DoD Directive 5240.01, reinforced by provisions in the FY2017 NDAA and the Trump-era Responsible AI Implementation Pathway, prohibits intelligence components from collecting information on U.S. persons except under specific legal authorities such as FISA or Title 50.”

Chaudhry concluded, “Anthropic’s acceptable use policy reflects these same lines, and until the Pentagon formally renounces, clarifies or updates these policy positions, the big question is whether the company can be compelled out of a policy that the government itself has committed to in principle.”

Representing the Pentagon in these talks is Emil Michael, a Trump appointee who serves as the Undersecretary of Defense for Research and Engineering—a role often likened to the Pentagon’s chief technology officer. The first source described Michael, who cultivated an aggressive reputation as Uber’s chief business officer and once boasted about conducting opposition research on journalists, as a “tough negotiator.” (It should be noted that Michael was ousted from Uber in 2017 following a board investigation into the company’s culture of sexual harassment, which was triggered by his and several executives’ visit to a South Korean escort bar.)

“This is truly a matter of principle for Emil,” stated a second individual familiar with the situation, indicating Michael’s displeasure that a private entity sought to restrict the government’s utilization of its technology. It remains unclear whether the White House or David Sacks, the influential venture capitalist and AI and crypto czar, had pre-approved Michael’s assertive tactics.

Currently, Anthropic’s “acceptable use policy” is an integral component of the $200 million contract it secured with the Department of Defense last July. In its initial announcement, the company underscored “responsible AI” five times. Anthropic articulated its belief that “the most powerful technologies carry the greatest responsibility,” asserting that within government contexts, “where decisions affect millions and stakes couldn’t be higher,” such responsibility was “essential” to ensure “AI development strengthens democratic values globally by maintaining technological leadership to protect against authoritarian misuse.”

However, in January, Hegseth issued a memo declaring that the department would become “an ‘AI-first’ warfighting force across all components” and mandated that the “any lawful use” clause be integrated into all AI services procurement contracts within 180 days, including existing agreements.

Hegseth’s memo repeatedly stressed an uncompromising prioritization of speed, stating that the nation must “eliminate blockers to data sharing … [and] approach risk tradeoffs, ‘equities’, and other subjective questions as if we were at war.” He further indicated that AI agents would be integrated “from campaign planning to kill chain execution” and that the department would aim to turn “intel into weapons in hours” in their development and experimentation.

Hegseth consistently placed speed above safety and potential inaccuracies, writing, “We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.” He reiterated later in the memo that “responsible AI” would undergo significant changes within the department, both in combat scenarios and across military ranks. He explicitly stated that “Diversity, Equity, and Inclusion and social ideology have no place in the DoW,” adding that the department “must also utilize models free from usage policy constraints that may limit lawful military applications.” Echoing Trump’s anti-“woke AI” executive order, Hegseth announced that benchmarks for model objectivity would become a new primary criterion for AI service procurement.

OpenAI, xAI, and Google promptly renegotiated their own $200 million contracts with the Pentagon to conform to Hegseth’s directive. Yet, none of these companies’ models possess an Impact Level 6 security classification, meaning that ChatGPT, Grok, and Gemini could not immediately substitute Claude if Anthropic were blacklisted—a single-supplier vulnerability that could severely backfire on the Pentagon.

Chaudhry highlighted, “Claude is the only frontier AI model operating on fully classified Pentagon networks, deployed through Palantir’s AI Platform and Amazon’s Top Secret Cloud, meaning it sits at the center of workflows that most other models cannot yet access.” He added, “The designation would require every defense contractor seeking government work to certify they have removed all Anthropic technology from their systems.”

This unique position grants Anthropic significant leverage in its ongoing disputes with the Pentagon, which reportedly intensified after the company discovered its models were used in the capture of Venezuelan President Nicolás Maduro, a clear violation of their existing agreement.

Due to federal procurement regulations, Anthropic is technically prohibited from coordinating or forming alliances with the other AI labs being offered these new terms, even if they were amenable. But with the conflict unfolding publicly, tech workers, AI employees, and others in the industry have voiced frustration that other companies are not advocating for terms similar to Anthropic’s. Conversely, some observers believe it is only a matter of time before Anthropic yields to the pressure.

William Fitzgerald, a former Google employee who now leads the advocacy firm The Worker Agency, remarked, “It would be a really good time for [other labs] to be like, ‘Wait, what are you doing with our technology?’” He continued, “These AI labs people have so much power. They’re smaller teams, and they’re still kind of shaping who they’re going to be … I do think that they can justify their valuations without the military work. There’s other ways that you can run a business without killing people in your business model.”
