The Meta Oversight Board is urging the company to strengthen its AI content labeling efforts, particularly through wider adoption of the C2PA standard.
According to the Oversight Board, the semi-independent body that advises Meta on its content moderation policies, the company’s current methods for identifying deepfakes are “not robust or comprehensive enough.” As a result, the Board says, Meta cannot effectively manage the rapid spread of misinformation during armed conflicts such as the Iran war. The Board is therefore calling for an overhaul of how Meta detects, surfaces, and labels AI-generated content across its platforms: Facebook, Instagram, and Threads.
The call to action follows an investigation into a fabricated AI video, shared on Meta’s platforms last year, that falsely depicted damage to buildings in Israel. The Board says its recommendations are especially critical now, given this week’s “massive military escalations” across the Middle East. In its announcement, the Board stressed that access to accurate, reliable information is vital to public safety, particularly amid the heightened risk of AI tools being exploited to spread misinformation.
“The Board’s findings highlight that Meta’s current system to properly label AI content is overly dependent on self-disclosure of AI usage and escalated review and does not meet the realities of today’s online environment,” the Meta Oversight Board stated. The Board further noted, “The case also highlights the challenges with cross-platform proliferation of such content, with the content appearing to have originated on TikTok before appearing on Facebook, Instagram, and X.”
The Board issued several key recommendations. It wants Meta to refine its misinformation rules to specifically address deceptive deepfakes and to establish a new, dedicated community standard for AI-generated content. It is also pressing Meta to build more advanced AI detection tools, be more transparent about penalties for AI policy violations, and scale its AI content labeling efforts. The last point means applying “High-Risk AI” labels to synthetic images and videos more consistently and expanding adoption of C2PA (Content Credentials) to make information about AI-generated content “clearly visible and accessible to users.”
The Board also raised concerns about reports that Meta is “inconsistently implementing” the C2PA standard, even for content generated by its own AI tools, with only “a portion” of Meta AI outputs reportedly labeled correctly. Meta is not legally bound to adopt the recommendations, but they echo concerns Instagram head Adam Mosseri voiced last year about the need to better identify authentic photographs and videos across Meta’s platforms.
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.