Mar 1

OpenAI Reveals New Details on Pentagon Pact

Originally reported by TechCrunch

OpenAI CEO Sam Altman has openly acknowledged that the company's recent agreement with the Department of Defense was "definitely rushed," conceding that "the optics don’t look good."

This admission follows the collapse of negotiations between rival AI firm Anthropic and the Pentagon on Friday. In the wake of this, President Donald Trump mandated that federal agencies cease using Anthropic's technology after a six-month transition period, with Secretary of Defense Pete Hegseth further labeling the AI company as a supply-chain risk.

Against this backdrop, OpenAI swiftly announced its own agreement to deploy its models within classified government environments. Given that Anthropic had previously drawn clear "red lines" against the use of its technology for fully autonomous weapons or mass domestic surveillance, and Altman had stated OpenAI shared those same principles, immediate questions arose: Was OpenAI genuinely committed to its stated safeguards? And what enabled it to secure a deal where Anthropic had failed?

As OpenAI executives took to social media to defend the new agreement, the company simultaneously published a blog post detailing its operational approach and safeguards.

The post specifically outlined three critical areas where OpenAI's models are prohibited from use: mass domestic surveillance, autonomous weapon systems, and "high-stakes automated decisions (e.g. systems such as ‘social credit’)."

OpenAI emphasized that, unlike other AI companies that have reportedly "reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments," its own agreement protects its red lines through "a more expansive, multi-layered approach."

The blog post further elaborated on these protections: "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law."

OpenAI also commented on Anthropic's situation, stating, "We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it."

However, shortly after the post's publication, Mike Masnick of Techdirt contended that the deal "absolutely does allow for domestic surveillance." His argument hinged on the agreement's stipulation that private data collection would comply with Executive Order 12333, among other laws. Masnick characterized this order as the mechanism by which "the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons."

In response, Katrina Mulligan, OpenAI’s head of national security partnerships, posted on LinkedIn, asserting that much of the debate surrounding the contract language mistakenly assumes that "the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War."

Mulligan clarified, "That’s not how any of this works," stressing that "Deployment architecture matters more than contract language [...] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware."

Altman himself engaged with questions regarding the deal on X, reiterating his admission that it had been rushed. This speed led to considerable backlash against OpenAI, so much so that Anthropic’s Claude briefly surpassed OpenAI’s ChatGPT in Apple’s App Store on Saturday. This prompted the crucial question: why proceed?

Altman explained his rationale: "We really wanted to de-escalate things, and we thought the deal on offer was good. If we are right and this does lead to a de-escalation between the [Department of War] and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as […] rushed and uncareful."

Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
