A critical juncture has arrived for Anthropic as it confronts a looming ultimatum from the Pentagon: grant the U.S. military unrestricted access to its artificial intelligence technology for purposes including mass surveillance and fully autonomous lethal weapons, or risk being designated a “supply chain risk,” potentially forfeiting contracts worth hundreds of billions of dollars. This escalating pressure has prompted tech professionals across the industry to scrutinize their own companies’ engagements with government and military entities, questioning the societal implications of the future they are helping to construct.
While the Department of Defense has spent weeks negotiating with Anthropic to remove its protective guardrails, which would allow the U.S. military to use Anthropic's AI to identify and engage targets without human oversight, reports indicate that OpenAI and xAI had already consented to similar terms. However, OpenAI is now reportedly striving to adopt the same ethical boundaries in its agreements as Anthropic. This unfolding situation has left employees at various defense-contracted companies feeling a profound sense of disillusionment. An Amazon Web Services employee articulated this sentiment to The Verge, stating, “When I joined the tech industry, I thought tech was about making people’s lives easier, but now it seems like it’s all about making it easier to surveil and deport and kill people.”
Conversations with The Verge revealed similar moral disquiet among current and former employees from major tech firms including OpenAI, xAI, Amazon, Microsoft, and Google, reflecting a shifting ethical landscape within their organizations. Organized worker groups, collectively representing 700,000 tech professionals at Amazon, Google, Microsoft, and other companies, have signed a letter urging their employers to reject the Pentagon’s demands. Yet many expressed skepticism that their companies, whether directly involved in this specific conflict or not, would challenge governmental directives or push back effectively.
“From their perspective, they’d love to keep making money and not have to talk about it,” commented a software engineer from Microsoft, encapsulating the perceived corporate priority.
To date, Anthropic has maintained its principled stance. Anthropic CEO Dario Amodei issued a statement on Thursday asserting that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” Nevertheless, Amodei clarified that he is not entirely opposed to lethal autonomous weapons in the future, provided the technology achieves sufficient reliability, which he deemed insufficient “today.” He even extended an offer to collaborate with the DoD on “R&D to improve the reliability of these systems, but they have not accepted this offer,” as detailed in his statement.
In recent years, a discernible trend has emerged where major tech companies have relaxed their internal regulations or revised their mission statements to capitalize on lucrative government and military contracts. In 2024, OpenAI notably removed a prohibition on “military and warfare” use cases from its terms of service, subsequently signing a deal with autonomous weapons manufacturer Anduril and securing its DoD contract. Just this week, Anthropic revised its long-standing responsible scaling policy, abandoning a prior safety pledge to ensure its competitive standing in the rapidly evolving AI sector. Similarly, tech giants such as Amazon, Google, and Microsoft have facilitated defense and intelligence agencies’ access to their AI products, with some even agreeing to collaborate with ICE despite widespread public and employee protests.
Historically, tech workers’ collective opposition to partnerships and deals perceived as detrimental to society has occasionally precipitated significant change. For instance, in 2018, thousands of Google employees successfully pressured the company to terminate its “Project Maven” collaboration with the Pentagon. Microsoft workers also presented leadership with an anti-ICE petition signed by approximately 500 employees, although Microsoft continues to work with the agency. In 2020, following the murder of George Floyd, tech companies made public declarations and financial commitments in support of the Black Lives Matter movement. However, recent months have revealed a starkly different reality within the industry: a pervasive culture of fear and silence, particularly amidst cooperation with the Trump administration and ICE, as tech workers recently conveyed to The Verge.
Companies appear to be emulating the trajectory of long-standing surveillance and military technology partnerships, which have grown increasingly hawkish. This includes Palantir, co-founded by Peter Thiel, whose CEO Alex Karp recently informed shareholders, “Palantir is here to disrupt and make the institutions we partner with the very best in the world, and, when it’s necessary, to scare enemies and on occasion kill them. And we hope you’re in favor of that.” In response to these developments, Protect Democracy, a non-profit organization, recently issued an open letter advocating for Congressional oversight of the Department of Defense’s demands for unrestricted AI use.
OpenAI, Google, Microsoft, xAI, and Amazon did not immediately respond to requests for comment regarding these matters.
A former xAI employee commented to The Verge that “Everyone is actually working on killer robots at this point,” expressing a belief that all companies will eventually follow the path of Palantir, Anduril, and xAI. This, he suggested, is driven by a governmental perception that non-compliance is “against the benefits of the country, in a sense.” He noted a “big push for working with the military, and the trend is it’s cool to do it… You’re a patriot if you do it.”
A Google employee characterized the situation as a “dominance display from Hegseth that is disgusting.” He further elaborated, “Over and over AI is presenting us with choices about who we want to be and what kind of society and future we want to have. And they’re coming at us fast and with, really, the least thoughtful and least principled leaders in power that we could imagine. I can only thank Anthropic for insisting on the decent path and using their leverage — that they are indispensable — to chart a course toward a humane world and a humane future.”
The AWS employee observed that “boundaries have definitely eroded in terms of the customers big tech is willing to court” and highlighted a “deliberate whitewashing of the implications of new lucrative deals.” She recounted a recent email from an AWS executive celebrating a more than $580 million contract with the U.S. Air Force, among other partnerships, as a testament to Amazon’s AI successes, without any acknowledgment of the broader scope or potential harms involved.
“If the government is hell-bent on pursuing technologies like this, they should have to build them themselves, and be answerable for those decisions,” she asserted.
This erosion of ethical boundaries appears to have extended to internal company culture, normalizing the idea of constant surveillance. The AWS employee noted that she and her colleagues are monitored on their AI usage for work, office attendance, and other metrics. “I can see myself and my coworkers getting more desensitized to surveillance on ourselves at work, and I’m worried that means we’re obeying, complying, and giving up too much in advance,” she expressed.
An OpenAI employee suggested that the general atmosphere within the AI industry over recent weeks “has reopened the door to more discussion… about the values and the future of the technology.” The employee cited the Pentagon-Anthropic situation, recent ICE-related headlines, and the rapid advancements in AI as key factors stimulating these internal dialogues.
However, the OpenAI employee also noted that individuals who are immigrants or in more vulnerable positions remain hesitant to voice their concerns.
According to the former xAI employee, Anthropic appears to be uniquely positioned to refuse the Pentagon’s demands and remain viable. Its strategic focus on enterprise customers rather than consumers may offer a more sustainable path, even without government contracts, giving it leverage. A software engineer at Microsoft, speaking generally about Anthropic’s stance, remarked, “I was surprised to see them stand on some form of principle. I don’t know how long it’ll last.”
“Will it last?” indeed seems to be the pervasive question. The Pentagon has reportedly already contacted two major defense contractors, Boeing and Lockheed Martin, requesting information about their reliance on Anthropic’s Claude, as it proceeds with potentially designating Anthropic a “supply chain risk”—a classification typically reserved for threats to national security and rarely, if ever, applied to a U.S. company. Reports also suggest the Pentagon may be contemplating invoking the Defense Production Act to compel Anthropic to comply with its request.
Should Anthropic ultimately yield, the Microsoft employee predicted that there is little likelihood of it or other companies reversing course on military AI and surveillance. “Once you’re in the door with the Department of Defense or whatever we’re calling it now… I think it’s probably hard for them to actually have the oversight they claim. It’s just going to be lucrative to basically give themselves permission to do the thing that makes the most money.”
In Microsoft’s specific case, he expressed low expectations for the company to adhere to ethical principles, citing its extensive collaboration with the Israeli Defense Forces, which has included mass surveillance of Palestinians and dissidents, despite employee protests. (Microsoft stated it terminated some aspects of this partnership last year.)
Another Microsoft employee told The Verge that while “Microsoft holds a Responsible AI ‘commitment,’… they are currently attempting to play both sides for the sake of profit rather than meaningfully committing to Responsible AI.”
However, an AI startup employee suggested this dynamic is not new. In her view, the boundaries concerning what types of applications companies are willing to power with their technology have often been “fuzzy, especially within AI.” She added, “A lot of it has been going on beneath the surface for as long as AI has been around.”
The AWS employee underscored the urgent need for “cross-tech solidarity and a coherent, worker-led vision for AI now more than ever.”
She further clarified Anthropic’s proposed safeguards: “The safeguards that Anthropic is trying to keep in place are no mass surveillance of Americans and no fully autonomous weapons, which just means that they want a human in the loop if the machine is going to kill somebody.” She concluded by questioning public sentiment: “Even if this technology were perfect — which it isn’t — I think most Americans don’t want machines that kill people without human oversight running around in an America that’s become an AI-powered mass surveillance state.”
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.