Elon Musk's legal challenge aimed at dismantling OpenAI hinges on whether its for-profit arm supports or undermines the lab's founding mission: ensuring that artificial general intelligence benefits humanity.
In a federal court hearing in Oakland on Thursday, testimony from a former employee and board member suggested that the company's aggressive push to commercialize AI products compromised its dedication to AI safety.
Rosie Campbell, who joined OpenAI's AGI readiness team in 2021, departed in 2024 following the disbandment of her team. Around the same time, another safety-focused group, the Superalignment team, was also dissolved.
She testified that upon her arrival, "it was very research-focused and common for people to talk about AGI and safety issues." However, she observed, "Over time it became more like a product-focused organization."
During cross-examination, Campbell acknowledged the probable necessity of substantial funding for the lab's pursuit of AGI. Nevertheless, she maintained that developing a super-intelligent computer model without robust safety protocols would deviate from the mission of the organization she initially joined.
Campbell highlighted an incident in which Microsoft deployed a version of OpenAI's GPT-4 model in India via its Bing search engine before the model had been evaluated by the company's Deployment Safety Board (DSB). While the model itself posed no significant risk, she emphasized the need "to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably."
OpenAI's attorneys also elicited Campbell's "speculative opinion" that OpenAI's safety methodology surpasses that of xAI, the AI company founded by Musk and acquired by SpaceX earlier this year.
OpenAI publicly releases evaluations of its models and shares a safety framework, though the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, appointed as its head of Preparedness in February from Anthropic, was a hire that CEO Sam Altman stated would allow him to "sleep better tonight."
The GPT-4 deployment in India, among other concerns, was a critical factor in the decision by OpenAI's non-profit board to briefly oust CEO Sam Altman in 2023. That move followed complaints from employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, about Altman's conflict-averse management style. Tasha McCauley, a board member at the time, testified that Altman's lack of transparency prevented the board's unusual structure from functioning effectively.
McCauley further detailed a widely reported pattern of Altman misleading the board. Notably, Altman reportedly misrepresented McCauley's intent to remove Helen Toner, another board member who had published a white paper implicitly criticizing OpenAI's safety policy. Additionally, Altman failed to inform the board about the public launch of ChatGPT, leading to concerns among members regarding his insufficient disclosure of potential conflicts of interest.
McCauley informed the court, "We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us. Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way."
However, the decision to remove Altman coincided with a tender offer to the company's employees. McCauley explained that as OpenAI's staff rallied behind Altman and Microsoft intervened to restore the previous leadership, the board ultimately reversed its decision, with the dissenting members subsequently stepping down.
This apparent inability of the non-profit board to exert influence over the for-profit entity directly supports Musk's claim that OpenAI's transformation from a research organization into one of the world's largest private companies violated the implicit agreement among its founders.
David Schizer, a former Dean of Columbia Law School serving as an expert witness for Musk's team, echoed McCauley's concerns.
Schizer stated, "OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits. Part of that is taking safety rules seriously: if something needs to be subject to safety review, it needs to happen. What matters is the process issue."
Given the deep integration of AI within for-profit companies, this issue extends far beyond a single laboratory. McCauley contended that the internal governance failures at OpenAI should prompt stronger government regulation of advanced AI, asserting that "[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal."
