Caitlin Kalinowski, a prominent hardware executive, announced today that she is resigning as head of OpenAI’s robotics team. The decision comes in direct response to the company’s recently finalized, and highly contentious, agreement with the Department of Defense.
In a public statement shared on social media, Kalinowski expressed the difficulty of her choice, noting, "This wasn’t an easy call." While acknowledging AI's vital contribution to national security, she drew clear boundaries, stating, "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
Kalinowski, who previously led the team responsible for developing augmented reality glasses at Meta, joined OpenAI in November 2024. In her announcement, she underscored that her decision was "about principle, not people," emphasizing her "deep respect" for CEO Sam Altman and the entire OpenAI team.
A subsequent post on X further clarified Kalinowski’s position: "To be clear, my issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed."
An OpenAI spokesperson confirmed Kalinowski’s departure when contacted by TechCrunch.
In response, OpenAI released a statement asserting, "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons." The company further acknowledged, "We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world."
OpenAI’s agreement with the Pentagon was unveiled just over a week ago, following the collapse of discussions between the Pentagon and fellow AI company Anthropic. Anthropic had sought to negotiate robust safeguards to prevent its technology from being used in mass domestic surveillance or fully autonomous weapons. Subsequently, the Pentagon designated Anthropic a supply-chain risk. Anthropic has declared its intent to challenge this designation in court, while Microsoft, Google, and Amazon have affirmed their commitment to continue offering Anthropic’s Claude to non-defense customers.
In the wake of Anthropic’s falling-out with the Pentagon, OpenAI swiftly announced its own agreement, which permits its technology to be used in classified environments. As executives explained the deal on social media, the company characterized its approach as "a more expansive, multi-layered approach" that incorporates not only contractual language but also technical safeguards to uphold red lines similar to those Anthropic had sought.
Nonetheless, the controversy appears to have negatively impacted OpenAI’s standing among some consumers. Reports indicate a 295% surge in ChatGPT uninstalls, while Anthropic’s Claude has climbed to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT held the number one and number two positions, respectively, among free apps in the U.S. App Store.