The laws governing surveillance and military AI do not say what OpenAI CEO Sam Altman suggests they do.
On a recent Friday evening, in the wake of a dispute between the Department of Defense (DoD) and AI firm Anthropic, OpenAI CEO Sam Altman declared that his company had successfully concluded negotiations for new terms with the Pentagon. This announcement followed the government's move to blacklist Anthropic for upholding two critical red lines concerning military AI usage: a strict prohibition on mass surveillance of Americans and a ban on lethal autonomous weapons or AI systems capable of making kill decisions without human oversight. Altman, however, suggested that OpenAI had secured a unique agreement that incorporated these very limitations.
Altman stated, "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." He further elaborated, "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," using the Trump Administration's preferred nomenclature for the Defense Department, the Department of War.
Immediately, Altman's assertion faced widespread skepticism across social media platforms and within the AI industry. Observers questioned why the Pentagon would suddenly accept stipulations it had previously and unequivocally rejected.
Sources who spoke with The Verge indicated that the Pentagon had, in fact, not altered its stance. Instead, OpenAI reportedly agreed to adhere to existing laws that have historically permitted forms of mass surveillance, while claiming those same laws would protect its stated red lines.
A source familiar with the Pentagon’s discussions with AI companies confirmed that OpenAI’s agreement is considerably less stringent than the one Anthropic advocated for. This divergence is largely attributed to the inclusion of three crucial words: "any lawful use." During negotiations, the source revealed, the Pentagon remained firm on its intent to collect and analyze bulk data on American citizens. A line-by-line examination of OpenAI’s terms, according to the source, indicates that any technically legal action by the U.S. military can be executed using OpenAI’s technology. Over recent decades, the definition of "technically legal" has been expanded to encompass extensive mass surveillance programs and other far-reaching operations.
Miles Brundage, formerly OpenAI’s head of policy research, commented on X that, "in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them."
In a statement to The Verge, OpenAI spokesperson Kate Waters countered these claims, asserting that the Pentagon had not requested mass surveillance capabilities and denying that the agreement permitted the violation of specific boundaries. Waters affirmed, "The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way."
AI systems possess the capacity to empower military and other governmental departments to conduct extensive surveillance operations with unprecedented granularity. AI excels at identifying patterns, and human behavior is inherently pattern-based. One can envision an AI system aggregating an individual's geolocation data, web browsing history, personal financial information, CCTV footage, voter registration records, and more – some publicly available, others acquired from data brokers. Anthropic's CEO, Dario Amodei, stated, "Using these systems for mass domestic surveillance is incompatible with democratic values." He added, "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale."
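To make Amodei's point concrete, here is a minimal, hypothetical sketch of that data-fusion pattern in Python: joining individually innocuous records from separate sources into a single profile per person. Every source name, field, and identifier below is invented for illustration; nothing here reflects any real dataset, vendor, or OpenAI system.

```python
# Hypothetical sketch of the data-fusion pattern: merging records from
# separate sources, keyed on a shared identifier, into one profile per
# person. All source names and fields are invented for illustration.
from collections import defaultdict

def build_profiles(sources: dict[str, list[dict]]) -> dict[str, dict]:
    """Merge per-person records from many sources into combined profiles."""
    profiles: dict[str, dict] = defaultdict(dict)
    for source_name, records in sources.items():
        for record in records:
            person_id = record["person_id"]  # e.g. a phone number or email
            for field, value in record.items():
                if field != "person_id":
                    # Namespace each field by source so nothing is overwritten.
                    profiles[person_id][f"{source_name}.{field}"] = value
    return dict(profiles)

# Two "individually innocuous" sources become one combined picture.
sources = {
    "location_broker": [{"person_id": "a", "last_seen": "downtown, 9:14pm"}],
    "voter_rolls": [{"person_id": "a", "party": "independent"}],
}
print(build_profiles(sources))
# {'a': {'location_broker.last_seen': 'downtown, 9:14pm',
#        'voter_rolls.party': 'independent'}}
```

The point of the sketch is scale: the same join that is trivial for one person runs unchanged over millions of records.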
While Anthropic pushed for a contract that explicitly outlawed such practices, OpenAI appears to rely heavily on existing legal limitations. Its agreement with the Pentagon reportedly stipulates that "for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose."
However, this reliance offers little reassurance. In the years following 9/11, U.S. intelligence agencies significantly expanded surveillance systems that they deemed to fall within the very legal boundaries OpenAI now cites. This included multiple mass domestic spying operations, alongside highly intrusive international ones. In 2013, former National Security Agency contractor Edward Snowden exposed the vast scope of some of these programs, such as the reported daily collection of Verizon customer telephone records and the bulk gathering of individual data from tech giants like Microsoft, Google, and Apple through a clandestine program named PRISM. Despite promises of reform from intelligence agencies and legislative efforts, few substantial limitations on these powers were ever implemented. Mike Masnick, founder of Techdirt, commented online that OpenAI's deal "absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons."
Dave Kasten of Palisade Research critiqued OpenAI’s agreement, writing, "The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities."
Waters reiterated that the Pentagon "has not asked us to support that type of collection or analysis, and our agreement does not permit it." She added, "Our agreement does not permit uses of our models for unconstrained monitoring of U.S. persons’ private information, and all intelligence activities must comply with existing US law. In practical terms, this means the system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way."
Anthropic’s Amodei has publicly expressed concern that current legal frameworks have not kept pace with AI's potential for large-scale surveillance. Altman, in his statement, emphasized that OpenAI’s contract "reflects [its red lines] in law and policy," implying adherence to existing laws and current Pentagon policies, the latter of which are subject to change. OpenAI attempts to address this in a Q&A, stating that the contract "explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement."
Sarah Shoker, a senior research scholar at the University of California, Berkeley, and former lead of OpenAI's geopolitics team, told The Verge that she observed "a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave." Shoker suggested that the ambiguous language makes it unclear what precisely is prohibited. "The use of the word 'unconstrained,' the use of the word 'generalized,' 'open-ended' manner — that's not a complete prohibition. That is language that's designed to allow optionality for the leadership … It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership's knowledge."
Based on the known details of OpenAI’s contract and the Pentagon’s current legal parameters, the military could legally leverage OpenAI’s technology to search foreign intelligence databases for information on Americans at scale. Furthermore, the Pentagon could acquire bulk location data from data brokers and utilize OpenAI’s technology to identify typical American movement patterns, or to rapidly and seamlessly construct profiles of numerous U.S. citizens using publicly available information, including surveillance footage, social media posts, online news, and voter registration records, potentially integrated with other previously purchased data.
OpenAI’s "red line" regarding lethal autonomous weapons appears similarly tenuous. Excerpts from the company’s contract with the Pentagon, released on Saturday, state that OpenAI’s technology "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control." This aligns with a 2023 Department of Defense directive. The agreement reportedly contains no additional contractually mandated prohibitions or restrictions, which ostensibly facilitated its approval by the Pentagon. Anthropic, conversely, sought an outright ban on unsupervised lethal autonomous weapons until the technology was deemed sufficiently mature.
The source indicated that most of OpenAI's agreement contained nothing novel, and nothing unfamiliar to other AI companies working on Pentagon contracts, whether from their own negotiation proposals or their existing practices.
After a Trump administration official confirmed that OpenAI’s agreement "flows from the touchstone of ‘all lawful use,’" Altman cited other provisions of the agreement to argue that OpenAI was upholding its red lines. He mentioned, for instance, that some OpenAI employees would obtain security clearances to monitor the systems, and that OpenAI would implement classifiers—small models designed to monitor and tag larger models, potentially preventing them from performing certain actions. In its blog post about the agreement, OpenAI stated that its deployment architecture "will enable us to independently verify that these red lines are not crossed, including running and updating classifiers."
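For readers unfamiliar with the term, here is a hedged, minimal sketch of the gating pattern a classifier enables: a small screening model labels each request before a larger model answers it. The labels, threshold, and keyword stand-in for a trained classifier are all assumptions for illustration, not OpenAI's actual design.

```python
# Hypothetical sketch of a classifier gating a larger model. The policy
# labels, threshold, and keyword heuristic are invented for illustration;
# a real deployment would call a trained classifier model instead.
BLOCKED_LABELS = {"bulk_domestic_surveillance", "autonomous_lethal_targeting"}

def classify(request: str) -> tuple[str, float]:
    """Stand-in for a small classifier model returning (label, confidence)."""
    if "every resident" in request.lower():
        return ("bulk_domestic_surveillance", 0.97)
    return ("benign", 0.99)

def large_model_complete(request: str) -> str:
    """Stand-in for the larger model being monitored."""
    return f"[model output for: {request}]"

def gated_completion(request: str, threshold: float = 0.9) -> str:
    """Refuse the request if the classifier flags it above the threshold."""
    label, confidence = classify(request)
    if label in BLOCKED_LABELS and confidence >= threshold:
        return f"Request refused: flagged as {label}"
    return large_model_complete(request)

print(gated_completion("Summarize posts from every resident of Ohio"))
# Request refused: flagged as bulk_domestic_surveillance
```

Even in this toy form, the gate inspects one request at a time, with no visibility into how the output is used afterward, which is exactly where the critique below begins.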
However, the source contended that this is not necessarily accurate. According to the source, AI companies already involved with the Pentagon employ these safeguards, and their effectiveness is limited. Classifiers, for example, would be unable to confirm whether a human reviewed an AI system’s decision to attack a target before a lethal strike, the source explained. Nor, the source added, could a classifier determine if a request to summarize an American's social media posts was an isolated query or part of a broader mass surveillance operation. Furthermore, if the government deems an action legal, OpenAI’s classifiers would not be permitted to prevent the technology from executing it, the source stated.
Altman asserted that OpenAI’s deal includes "human responsibility for the use of force, including for autonomous weapon systems." This differs from Anthropic’s demand for these systems not to be deployed "without proper [human] oversight." While the specific contractual definitions of these terms are not publicly available, "human responsibility" could imply accountability for system decisions after the fact, whereas Anthropic’s call for "oversight" would necessitate human involvement before and/or during an AI system’s decision-making process for lethal targets.
Similar to its stance on mass surveillance, OpenAI argues that technical safeguards would help maintain its red line against "killer robots." The company stated that it was "not providing the DoW with ‘guardrails off’ or non-safety trained models," and that its technology would be deployed solely in the cloud, rather than on edge devices—devices that process data locally, such as a military drone—where it noted "there could be a possibility of usage for autonomous lethal weapons."
However, the source indicated that deploying OpenAI's technology exclusively in the cloud holds little significance for either of OpenAI's stated limitations. Mass domestic surveillance, the source explained, involves such vast quantities of data that carrying it out without cloud infrastructure is virtually impossible. Moreover, even if the final kill decisions are executed on a local machine, the majority of the preceding decisions—the "autonomous kill chain"—involve running powerful algorithms initially in the cloud, according to the source. Therefore, even if OpenAI’s technology is not directly involved in pulling the trigger, it could very well be powering every step leading up to that point, without any guarantee of human oversight at the final stage.
Again, OpenAI’s agreement ultimately permits anything the U.S. government determines to be legal. Even its assurances of adhering only to current laws and policies, rather than modified or reissued ones, may not offer robust safeguards. Historically, agencies have reinterpreted existing laws in ways that effectively grant them new powers. For instance, the Trump administration asserted that laws like the International Emergency Economic Powers Act justified unprecedented presidential powers, such as imposing global tariffs. While these powers have sometimes been declared illegal, this typically occurs only after months of legal challenges, during which OpenAI would be compelled either to comply with administrative directives or make an independent legal judgment. Altman has publicly stated that, unlike Anthropic, OpenAI is "generally quite comfortable with the laws of the US."
Defense Secretary Pete Hegseth and President Trump, through a flurry of social media posts, emphatically declared that they would never permit a private tech company to dictate how the U.S. military utilized technology for warfare. Hegseth wrote, "The Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives," while Trump added, "America’s warfighters will never be held hostage by the ideological whims of Big Tech."
Even Jeremy Lewin, a former undersecretary in the Trump administration, noted that the Pentagon’s deal with OpenAI (and a separate agreement with xAI) represented a "compromise that Anthropic was offered, and rejected"—underscoring that the terms did not align with Anthropic’s own red lines. Lewin suggested that these deals included certain mutually agreed-upon safety mechanisms, plausibly referring to the technical safeguards Altman had mentioned.
In his Friday announcement, Altman stated that OpenAI had requested the Pentagon to "offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." This remark appeared to be a subtle jab at Anthropic, given that, according to Lewin, the OpenAI rival had already been offered similar terms and declined them.
Anthropic's refusal of that "compromise" has led to significant repercussions. On Friday, following the breakdown of negotiations between Anthropic and the Pentagon, the latter announced that Anthropic would be designated a "supply-chain risk," a classification typically reserved for foreign companies with cybersecurity concerns and almost never publicly applied to an American firm. Anthropic indicated its willingness to challenge this designation in court. Trump subsequently ordered federal agencies to cease using Anthropic's AI, and it was not immediately clear whether the Pentagon might also blacklist companies that use Claude for services unrelated to national security.
Tech workers across the industry have voiced support for Anthropic's decision to stand firm, questioning why their own companies were not aligning with Anthropic's red lines and presenting a united front. The company's stance has been widely praised online, and on Saturday, its app surpassed ChatGPT to become the most-downloaded application on Apple's App Store.