Feb 11

OpenAI Axes Mission Alignment Team Focused on AI Safety

Originally reported by TechCrunch

OpenAI has dissolved an internal team explicitly tasked with ensuring its artificial intelligence systems are "safe, trustworthy, and consistently aligned with human values." Concurrently, the former leader of this team has transitioned into a new executive position within the company, designated as "chief futurist."

Confirming reports initially surfaced by Platformer, OpenAI informed TechCrunch that all members of the disbanded team have been reassigned to other roles within the organization.

The unit in question, which reportedly formed in September 2024, served as the startup's dedicated internal research group focusing on AI alignment — a critical and expansive area of research across the industry, aimed at ensuring that AI development proceeds in harmony with human interests and societal well-being.

An excerpt from OpenAI’s Alignment Research blog underscores this mission, stating, “We want these systems to consistently follow human intent in complex, real-world scenarios and adversarial conditions, avoid catastrophic behavior, and remain controllable, auditable, and aligned with human values.”

Further illustrating its purpose, an OpenAI job posting for the Alignment team described its work as centered on AI research dedicated to “developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including those that are adversarial or high-stakes.”

In a blog post published on Wednesday, Josh Achiam, who previously headed OpenAI’s Alignment team, elaborated on his new responsibilities as the company’s Chief Futurist. Achiam articulated his objective: “My goal is to support OpenAI’s mission — to ensure that artificial general intelligence benefits all of humanity — by studying how the world will change in response to AI, AGI, and beyond.”

Achiam also noted that his new role would involve collaboration with Jason Pruet, a physicist from OpenAI’s technical staff.

An OpenAI spokesperson confirmed that the remaining six or seven individuals from the Alignment team have been redistributed to various departments across the company. While the exact new assignments were not specified, the spokesperson indicated that these team members would continue to engage in similar lines of work. It remains unconfirmed whether Achiam will form a new team as part of his "futurist" role.

The spokesperson characterized the team's disbandment as a typical outcome of routine reorganizations common within a rapidly evolving company.

This is not the first instance of OpenAI restructuring its safety-focused initiatives; a "superalignment team," established in 2023 to investigate long-term existential threats posed by AI, was also disbanded in 2024.

Achiam’s personal website continues to list him as head of Mission Alignment at OpenAI, reflecting his stated interest in ensuring that the “long-term future of humanity is good.” His LinkedIn profile indicates he had held the position of Head of Mission Alignment since September 2024.

Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
