Feb 23

Big Tech's AI Slop: Fighting It While Feeding It

Originally reported by The Verge

It is inherently more challenging to rectify a problem while simultaneously contributing to its creation.

As 2025 drew to a close, Adam Mosseri, head of Instagram, expressed profound concerns about artificial intelligence. Mosseri lamented that “Authenticity is becoming infinitely reproducible,” adding, “Everything that made creators matter — the ability to be real, to connect, to have a voice that couldn’t be faked — is now accessible to anyone with the right tools.” Despite this, Mosseri asserted that people still desired “content that feels real.” His proposed remedy involved implementing a system to label authentic media, suggesting that “Camera manufacturers will cryptographically sign images at capture, creating a chain of custody.” This, he believed, would establish a reliable method for distinguishing non-AI content.

Ironically, Mosseri's envisioned solution already exists: it is known as C2PA. The unfortunate reality is that Instagram is already employing this standard, yet it appears to be largely ineffective in practice. Instead, C2PA is increasingly perceived as a proxy for genuine action, particularly as Instagram aggressively pursues the development of its own generative AI tools.

AI's growing proficiency in simulating reality poses a significant threat to the cultural landscape and business models that many social media platforms have cultivated around content creators. AI can effortlessly replicate popular dance trends and photo shoots, generate non-existent artists and influencers, and generally mimic the homogeneous content that already saturates social media. Creators are attempting to counteract this by adopting raw and imperfect aesthetics, but AI is rapidly becoming adept at replicating these as well. More alarmingly, AI can be leveraged to rapidly disseminate misinformation regarding critical events, such as the ICE protests in Minnesota or the tragic killings of Renee Nicole Good and Alex Pretti.

Over the past few years, numerous prominent tech companies have ostensibly addressed this issue by adopting Content Credentials, or C2PA. C2PA, an acronym for the Coalition for Content Provenance and Authenticity, is a provenance-based standard established in 2021 by Adobe, Intel, Microsoft, ARM, Truepic, and the BBC. As Mosseri indicated, C2PA tackles deepfakes not by explicitly labeling fabricated material, but by authenticating media that has not been generated by AI. This is achieved by embedding invisible metadata into images, videos, and audio at the point of creation or editing, enabling verification of the content's origin, its creation process, and whether AI was involved. Meta joined the C2PA Steering Committee in September 2024, endorsing the standard and emphasizing that the capacity to comprehend digital content is “critical to maintaining the health of the digital ecosystem.”
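The chain-of-custody idea behind C2PA can be illustrated with a toy sketch. Important caveats: real C2PA manifests are JUMBF structures signed with X.509 certificate chains, whereas the stand-in below uses Python's stdlib `hmac` as a symmetric placeholder for those asymmetric signatures, and every name in it (`sign_at_capture`, `verify`, `CAMERA_KEY`) is invented for illustration rather than drawn from any C2PA SDK.

```python
import hashlib
import hmac
import json

# Toy stand-in for a camera's signing key. Real C2PA uses X.509
# certificate chains and asymmetric signatures, not a shared secret.
CAMERA_KEY = b"demo-camera-secret"

def sign_at_capture(pixels: bytes, device: str) -> dict:
    """Bind a provenance manifest to the image bytes at capture time."""
    manifest = {
        "device": device,
        "digest": hashlib.sha256(pixels).hexdigest(),  # hash of the content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CAMERA_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(pixels: bytes, manifest: dict) -> bool:
    """Check the signature AND that the pixels still match the signed digest."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["digest"] == hashlib.sha256(pixels).hexdigest())

photo = b"\x89fake-image-bytes"
m = sign_at_capture(photo, "ExampleCam X1")
print(verify(photo, m))              # True: untouched capture
print(verify(photo + b"edit", m))    # False: pixels no longer match the digest
```

The second check fails because the signed digest no longer matches the pixels; that tamper-evidence, not secrecy, is what provenance signing buys.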

While C2PA boasts support from industry giants like Microsoft, Meta, Google, OpenAI, TikTok, and Qualcomm, it represents merely one approach to differentiating authentic content from fabricated content. Despite its potential, the system's current implementation does little to shield users from the proliferation of "AI slop" or deceptive deepfakes. Even as more synthetic content carries C2PA information, ordinary users are largely expected to hunt for that data themselves across vast amounts of online media, often without knowing C2PA exists at all. This suggests that AI providers may be using C2PA to deflect responsibility while simultaneously advancing their own generative AI capabilities.

Companies have invested heavily in C2PA and other provenance-based solutions, such as Google’s SynthID watermarking system. While inference-based solutions also exist, which scan for subtle indicators of synthetic generation (like Reality Defender, a C2PA initiative member), these can only estimate the likelihood of AI use. Provenance-based solutions, however, face significant drawbacks. A primary challenge is the requirement for universal adoption across every stage of media creation and hosting, an arguably unattainable goal. For example, C2PA has seen only gradual adoption by camera manufacturers such as Canon, Nikon, Sony, FujiFilm, and Leica, with support being slow and primarily limited to new camera models.

Nathan Kellum-Pathe, a spokesperson for Leica Camera USA, informed The Verge that “Older cameras that do not support C2PA will continue to produce important and valid photographs.” He added, “For these images, trust will still rely on context, reputation, and editorial responsibility.”

Furthermore, provenance metadata is so susceptible to manipulation that OpenAI, a steering member of C2PA, itself acknowledges it can “easily be removed either accidentally or intentionally.” Platforms like LinkedIn and TikTok continue to struggle with reliably tagging content that should carry C2PA metadata. YouTube employs C2PA, Google’s SynthID, and other systems for proactive AI labeling, but these labels are often inconsistent and challenging to locate. The very definition of a "photo" has become ambiguous, making the clear distinction between real and fake content exceedingly difficult. Meta experienced this firsthand when it mistakenly applied “Made by AI” labels to genuine photographs on Instagram, provoking considerable backlash from photographers.
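That fragility follows from where the provenance data lives: it travels alongside the content, so any pipeline that copies only the pixels silently discards it, and the stripped file becomes indistinguishable from the vast majority of media that never carried a manifest. A hypothetical sketch (the dict "container" and both function names are invented for illustration, not taken from any real C2PA tooling):

```python
def strip_metadata(asset: dict) -> dict:
    """Mimic what many re-encoders and upload pipelines do:
    keep the pixel data, drop everything riding alongside it."""
    return {"pixels": asset["pixels"]}

def provenance_status(asset: dict) -> str:
    # Absence of a manifest is NOT evidence the content is synthetic;
    # it is simply the unverifiable default most online media falls into.
    if "c2pa_manifest" not in asset:
        return "no provenance data"
    return "manifest present (verify signature next)"

original = {"pixels": b"...", "c2pa_manifest": {"device": "ExampleCam X1"}}
reposted = strip_metadata(original)
print(provenance_status(original))  # manifest present (verify signature next)
print(provenance_status(reposted))  # no provenance data
```

The key asymmetry: a valid manifest proves something, but a missing one proves nothing, which is why stripping the metadata defeats the scheme so cheaply.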

Meta subsequently rebranded these labels as “AI info” and made them significantly less prominent. On the Instagram app, this label is typically found in minuscule text beneath an account name for AI-generated or manipulated content, though it can be intermittently supplanted by song titles or other post details. Even if spotted, users must navigate to the three-dot menu on images and videos to access the "AI info" label. These AI labels may also be entirely absent from Instagram’s desktop website, even for posts that display the “AI Info” label on mobile apps. In the absence of labels or visual C2PA indicators, users are expected to scrutinize suspicious content using a Chrome browser extension or by manually uploading it to one of the official C2PA checker websites.

The capabilities of C2PA as an AI labeling solution have been extensively critiqued. While the standard’s adoption is gradually expanding, and a system that functions intermittently is preferable to none, it was never conceived to offer a universal solution for deepfake detection or AI-generated content. Andy Parsons, senior director of Content Authenticity at Adobe, acknowledged that while AI undeniably presents harmful challenges, it is inaccurate to assume C2PA resolves all of them.

“This is not a silver bullet,” Parsons stated to The Verge. “It does solve a whole class of problems.”

The conspicuous absence of X from C2PA further illustrates why the standard cannot comprehensively address current issues of AI and authenticity. Despite Twitter being a founding member of C2PA, it withdrew from the initiative after Elon Musk's acquisition and rebranding to X. Parsons confirmed that X is not presently involved with C2PA, expressing that they would “embrace X participating actively.” X represents a vast online space where news circulates rapidly, and many brands and prominent figures leverage the platform for announcements. However, given the ongoing controversies surrounding Grok’s generation of violent and sexualized material, and Musk’s own dissemination of misleading deepfakes, X appears to have little interest in protecting its 270 million daily users from AI fakery or misinformation. This means a substantial number of individuals rely on X as a primary news source, often disseminating that information to other platforms, despite having minimal assurance of its authenticity.

Ben Colman, CEO of Reality Defender, also points out that if C2PA alone were a viable solution, "AI slop" and deepfakes would not be spreading unlabeled. He argues that an exclusive reliance on labeling or watermarking solutions erroneously assumes malicious AI content is produced using only a limited set of tools. Colman told The Verge, “Which is the absolute wrong assumption, mind you, but that’s what we’ve got powering moderation for the world’s biggest social platforms at the moment.”

Even a highly effective labeling system might not fully resolve the problem. A recent study indicated that transparency warnings appear insufficient to prevent harm from AI-generated deepfakes, noting “little empirical evidence to support the effectiveness of AI transparency.”

Nevertheless, this has not deterred the continuous repetition of a familiar message: that standards like C2PA represent an important, ongoing step in developing authenticity and deepfake detection systems. Parsons acknowledged understanding the “potential frustration that there could be more and faster” progress, and affirmed that the capability to observe C2PA evidence across online platforms “is coming,” albeit “more slowly than any of us would like.”

One might reasonably expect that if AI providers such as Meta and Google were genuinely committed to safeguarding people from deception and misinformation, these companies would cease developing tools that significantly exacerbate these problems until a viable solution is found. Mosseri’s concerns about preserving reality appear hollow when Meta actively promotes an Instagram alternative composed entirely of "AI slop." Similarly, OpenAI launched a TikTok clone featuring AI-generated videos that infringed copyright laws and imitated real individuals without consent. YouTube has vocally committed to combating the rising levels of "slop content" on its platform, while simultaneously encouraging creators to utilize Google’s AI models for video production.

The AI providers guiding C2PA appear to be attempting to have it both ways, seemingly sidestepping their responsibility to control their misinformation-generating machines while those very machines are generating substantial profits.

OpenAI derives the majority of its revenue from charging ChatGPT and Sora users subscriptions to access higher image and video generation limits. The pervasiveness of "AI slop" on YouTube was such that it accounted for 10 percent of the platform’s fastest-growing channels in July 2024, despite the introduction of policies aimed at curbing "inauthentic content." Meta is reportedly planning to place some AI capabilities behind premium subscriptions for Instagram, Facebook, and WhatsApp, while CEO Mark Zuckerberg champions AI as the inevitable future of social media.

Colman asserted that “Platforms have wholeheartedly embraced deepfakes and AI slop, so-called ‘preventative measures’ be damned, because like other inflammatory or harmful content that exists to enrage, spark controversy, and thus spark engagement, it’s yet another kind of content to keep users on the platform longer and push more ads.”

Occasionally, such content is less harmful than it is bizarre and irritating, exemplified by the "shrimp Jesus"-style images that have gone viral on Facebook. Generative AI tools also drastically lower the traditional skill and time barriers for visual content creation, leading to an overwhelming deluge that competes with traditional media for our attention and necessitates increased effort to filter through it all.


Efforts to verify the authenticity of online content appear largely futile. While there is continuous progress and expansion, C2PA is essentially an honor system that was unlikely to ever fully succeed as a deepfake solution. Some platforms are now exploring systems that analyze creators themselves, rather than solely the content they post. Mosseri indicates that Instagram will need to shift its focus “to who says something, instead of what is being said.”

YouTube adopted this approach to moderate videos following the killings of Alex Pretti and Renee Nicole Good. Google spokesperson Boot Bullwinkle informed The Verge that most of the footage from these incidents was uploaded “with public interest value and will remain on the platform,” and that users are directed toward official news sources in searches and on the YouTube homepage during significant events.

“As events are unfolding, it can take time to produce high-quality videos, so we provide short previews of text-based news articles in search results on YouTube, along with a reminder that breaking and developing news can rapidly change,” Bullwinkle stated. This contrasts sharply with YouTube’s parent company, Google, which is actively replacing news headlines with often inaccurate and low-quality AI summaries.

Indeed, any measure that effectively prevents synthetic materials from being mistaken for human-made content inherently conflicts with the business interests of every company investing heavily in AI, particularly if it casts the technology in a negative light. The extent of responsibility that can genuinely be assumed in the face of such a profound conflict of interest remains questionable.

Regardless, Mosseri seems to believe that AI has already triumphed in the battle for reality, akin to a soft launch of the "dead internet theory." He advised that Instagram creators will need to be “real, transparent, and consistent” to distinguish themselves in a “world of infinite abundance and infinite doubt.” If navigating the deluge of AI fakery were that straightforward, existing solutions like community notes and "I am not a robot" verification would have resolved it long ago.

#AI #News #Tech
Editorial Staff, Editor

