Digital platforms have until February 20th to begin labeling all content generated or manipulated by artificial intelligence.
The existing methods for identifying and labeling online deepfakes are poised for a significant real-world test. India recently announced new mandates requiring social media platforms to accelerate the removal of illegal AI-generated materials and ensure all synthetic content is clearly marked. While tech companies have long expressed an intent to self-regulate in this area, they now have mere days before these legal obligations take effect on February 20th.
With a billion internet users, many of whom are young, India represents one of the most vital growth markets for social platforms globally. Consequently, any regulatory obligations introduced there could profoundly influence deepfake moderation efforts worldwide, either by propelling detection technology to truly effective levels or by compelling tech companies to acknowledge the necessity for entirely new solutions.
Under India's amended Information Technology Rules, digital platforms must implement "reasonable and appropriate technical measures" to prevent users from creating or sharing illegal synthetically generated audio and visual content, commonly known as deepfakes. Any generative AI content that is not blocked must be embedded with "permanent metadata or other appropriate technical provenance mechanisms." Social media platforms specifically must require users to disclose AI-generated or AI-edited material, deploy tools to verify those disclosures, and label AI content prominently enough that it is immediately identifiable as synthetic, for example through spoken disclosures in AI-generated audio.
Achieving this, however, presents a considerable challenge, given the underdeveloped state of current AI detection and labeling systems. C2PA, also known as Content Credentials, is among the most mature systems available: it attaches detailed metadata to images, videos, and audio at the point of creation or editing, invisibly documenting how the content was produced or altered.
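To make this concrete: in JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 marker segments. The sketch below, which is an illustration rather than a conformant C2PA validator (real verification must parse the JUMBF structure and check cryptographic signatures), simply scans a JPEG's marker segments for APP11 payloads:

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list:
    """Return the payloads of all APP11 (0xFFEB) segments in a JPEG.

    C2PA stores its provenance manifest in JUMBF boxes carried in
    APP11 segments, so finding none is a quick hint (not proof) that
    no Content Credentials are attached.
    """
    segments = []
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # entropy-coded data reached; stop the naive scan
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        if marker == 0xEB:  # APP11: where C2PA JUMBF data lives
            segments.append(jpeg_bytes[i + 4 : i + 2 + length])
        i += 2 + length
    return segments
```

A file with no APP11 segments may still be AI-generated, of course; the absence of credentials proves nothing, which is exactly the gap the regulations run into.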
Despite its promise, C2PA's effectiveness is currently limited. Major tech companies like Meta, Google, and Microsoft already utilize C2PA, yet its implementation has not been entirely successful. Platforms such as Facebook, Instagram, YouTube, and LinkedIn do add labels to content flagged by the C2PA system, but these labels are often inconspicuous, allowing some synthetic content that should be marked to evade detection. Furthermore, social media platforms cannot label content that lacks initial provenance metadata, including materials from open-source AI models or "nudify apps" that do not adhere to the voluntary C2PA standard.
India's immense social media landscape underscores the urgency of these regulations. According to DataReportal research cited by Reuters, India is home to roughly 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users, and it is estimated to be X's third-largest market.
Interoperability remains a significant hurdle for C2PA, and while India's new rules might encourage broader adoption, C2PA metadata is far from permanent: it can be stripped, sometimes unintentionally, when platforms re-encode files during upload. The new regulations explicitly prohibit platforms from allowing metadata or labels to be modified, hidden, or removed, and they leave little time to build compliant systems. Social media platforms, particularly those like X that currently lack any AI labeling system, now have just nine days to implement one.
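To see how easily embedded provenance disappears, consider a toy model of an upload pipeline that copies a JPEG while discarding all APPn metadata segments. This is a deliberate simplification (real pipelines decode and re-compress the pixels, losing the metadata as a side effect), but the outcome is the same: the image survives, the provenance does not.

```python
import struct

def drop_metadata_segments(jpeg_bytes: bytes) -> bytes:
    """Copy a JPEG while discarding every APPn segment (0xFFE0-0xFFEF).

    A crude stand-in for a platform's re-encoding pipeline: any
    provenance riding in APP11 is lost even though the image data
    itself is preserved.
    """
    out = bytearray(jpeg_bytes[:2])  # keep the SOI marker
    i = 2
    while i + 2 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF or jpeg_bytes[i + 1] == 0xD9:
            out += jpeg_bytes[i:]  # entropy data or EOI: copy verbatim
            break
        marker = jpeg_bytes[i + 1]
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        segment = jpeg_bytes[i : i + 2 + length]
        if not (0xE0 <= marker <= 0xEF):  # keep everything but APPn
            out += segment
        i += 2 + length
    return bytes(out)
```

This fragility is why the standard's backers pair embedded metadata with other mechanisms, such as invisible watermarks and server-side fingerprint lookups, to recover provenance after a strip.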
Requests for comment from Meta, Google, X, and Adobe, a key proponent of the C2PA standard, went unanswered.
Adding to the pressure, India has mandated that social media companies remove unlawful material within three hours of it being discovered or reported, a drastic reduction from the previous 36-hour deadline. This expedited timeline also applies to deepfakes and other forms of harmful AI content.
The Internet Freedom Foundation (IFF) has voiced concerns that these imposed changes risk transforming platforms into "rapid fire censors." In a statement, the IFF warned, "These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal."
The inclusion of a clause specifying that provenance mechanisms should be implemented "to the extent technically feasible" suggests that Indian officials are likely aware of the current limitations in AI detection and labeling technology. Organizations supporting C2PA have long asserted that the system will prove effective with widespread adoption; these new mandates present a critical opportunity for them to validate that claim.
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.