Former employees indicate that a recent restructuring at xAI was precipitated by underlying tensions concerning safety protocols and a pervasive feeling of being “stuck in the catch-up phase.”
The past few days have seen notable churn at xAI, marked by a series of high-profile departures among its co-founders and staff. On Tuesday, co-founder Yuhuai (Tony) Wu announced his exit, saying it was “time for [his] next chapter.” On Wednesday, co-founder Jimmy Ba followed suit with a similar post, declaring it was “time to recalibrate [his] gradient on the big picture.” These resignations mean that xAI now retains only half of its original 12 co-founders. Numerous other employees have also taken to X (formerly Twitter) to announce their departures, with some revealing plans to launch their own AI companies.
Elon Musk’s AI startup, through various mergers and acquisitions, now operates under the same corporate umbrella as his space exploration company, SpaceX, and his social media platform, X. Following the SpaceX merger announcement last week, rumors have intensified regarding a reported $1.25 trillion valuation and the integrated company’s future ambitions. Musk has publicly stated these plans include “space-based AI” data centers and the creation of “the most ambitious, vertically-integrated innovation engine on (and off) Earth.” During an internal xAI meeting on Tuesday, Musk reportedly discussed proposals for establishing an AI satellite factory and even a city on the Moon.
While post-merger periods often present a natural point for organizational changes—and Musk has acknowledged that some departures were part of a reorganization that “unfortunately required parting ways with some people”—there are also strong indications that employees are dissatisfied with the strategic direction Musk is implementing.
An anonymous former employee, who left xAI earlier this year and spoke with The Verge, cited widespread disillusionment within the company regarding xAI’s focus on “NSFW Grok creations” and a perceived disregard for safety. This source also expressed a feeling that the company was perpetually “stuck in the catch-up phase,” failing to introduce anything genuinely novel or distinct from its competitors. The individual elaborated, “Although we were iterating really fast, we were never able to get to a point like, ‘Oh, we’ve made a step function change over what OpenAI or Anthropic or other companies had released.’”
The SpaceX merger reportedly involved issuing $250 billion in new shares to xAI shareholders, potentially giving equity-holding employees greater financial flexibility to pursue their own ventures. Reflecting this trend, former employee Vahid Kazemi posted on X, stating, “all AI labs are building the exact same thing, and it’s boring. I think there’s room for more creativity. So, I’m starting something new.” Another former staffer confirmed their departure with the aim to “build something new, focused on accelerating science.”
In a similar vein, another former employee announced the launch of Nuraline, an AI infrastructure company, co-founded with other ex-xAI colleagues. This individual wrote, “During my time at xAI, I got to see a clear path towards hill climbing any problem that can be defined in a measurable way. At the same time, I’ve seen how raw intelligence can get lobotomized by the finest human errors … Learning shouldn’t stop at the model weights, but continue to improve every part of an AI system.”
“Safety is a dead org at xAI.”
Musk released a recording of xAI’s 45-minute internal all-hands meeting where he detailed the company's new organizational structure. He outlined four primary divisions for xAI: Grok Main and Voice (dedicated to the core Grok AI model), Coding, Imagine (focusing on image and video AI), and Macrohard, which Musk described as “intended to do full digital emulation of entire companies.”
The former employee who departed earlier this year suggested that Grok’s pivot toward NSFW content was partly due to the dissolution of the safety team, which left minimal safety review for the models beyond basic filters for egregious content such as CSAM. “Safety is a dead org at xAI,” the source asserted. Notably, the restructured organizational chart that Musk shared on X makes no mention of a dedicated safety team. The source also said that during their tenure, leadership frequently disagreed on which product features to prioritize, with internal conflicts occasionally stalling progress. Many shipping decisions, they added, were made through an all-company group chat on X that included Musk.
A second anonymous source, who left xAI before the recent restructuring, echoed the sentiment that Musk’s company was primarily engaged in a “catch-up” strategy. This source articulated, “Trying to do what OpenAI was doing a year ago is not how you beat OpenAI. Everything is a catch-up. There’s almost zero risky bet. If something hasn’t been done before we’re not going to do it.”
This source further highlighted xAI’s insufficient focus on safety, an issue that The Washington Post also brought to light in its reporting earlier this month.
“There is zero safety whatsoever in the company — not in the image [model], not in the chatbot,” the second source emphatically stated. They added a critical observation: “He [Musk] actively is trying to make the model more unhinged because safety means censorship, in a sense, to him.”
Furthermore, the second source revealed that xAI engineers were accustomed to immediately “push[ing] to prod[uction],” and for an extended period, human review was entirely absent from this process.
This source concluded with a stark assessment of the internal culture: “You survive by shutting up and doing what Elon wants.”
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.