Feb 10

India to Social Media: Faster Deepfake Takedowns Now


Originally reported by TechCrunch

India has mandated that social media platforms significantly enhance their oversight of deepfakes and other AI-generated impersonations, while drastically shortening the timeframe for complying with content takedown directives. This pivotal decision is poised to redefine content moderation practices for global technology firms operating in one of the world's largest and fastest-growing internet markets.

These new regulations, officially released on Tuesday as amendments to India’s 2021 IT Rules, establish a formal regulatory framework for deepfakes. They stipulate the mandatory labeling and traceability of synthetic audio and visual content, while also imposing much tighter compliance deadlines on platforms. This includes a stringent three-hour window for official takedown orders and an even shorter two-hour period for certain urgent user complaints.

India's immense significance as a digital market amplifies the potential global repercussions of these new rules. Boasting over a billion internet users and a predominantly youthful demographic, the South Asian nation represents a crucial market for major platforms such as Meta and YouTube. Consequently, the compliance measures adopted in India are highly likely to influence product development and content moderation strategies worldwide.

Under the revised rules, social media platforms enabling users to upload or share audio-visual content are now required to demand disclosures regarding whether material is synthetically generated. Furthermore, they must deploy sophisticated tools to verify these claims and ensure that deepfakes are unambiguously labeled and embedded with traceable provenance data.

The rules explicitly prohibit certain categories of synthetic content, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious criminal activities. Non-compliance, particularly in instances flagged by governmental authorities or users, could expose companies to heightened legal liability by jeopardizing their crucial safe-harbor protections under Indian law.

To fulfill these obligations, the regulations place a strong emphasis on the deployment of automated systems. Platforms are expected to implement advanced technical tools to verify user disclosures, accurately identify and label deepfakes, and proactively prevent the creation or sharing of prohibited synthetic content.

Rohit Kumar, founding partner at The Quantum Hub, a New Delhi-based policy consulting firm, observed, "The amended IT Rules represent a more calibrated approach to regulating AI-generated deepfakes." He further noted, "The significantly compressed grievance timelines — such as the two- to three-hour takedown windows — will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections."

Aprajita Rana, a partner at AZB & Partners, a prominent Indian corporate law firm, highlighted that the new rules now specifically target AI-generated audio-visual content, rather than all online information, and include exemptions for routine, cosmetic, or efficiency-related applications of AI. However, she cautioned that the mandate for intermediaries to remove content within three hours of becoming aware of it constitutes a departure from established free-speech principles.

"The law, however, continues to require intermediaries to remove content upon being aware or receiving actual knowledge, that too within three hours," Rana stated. She added that the labeling requirements would be broadly applied across various formats to effectively curb the dissemination of child sexual abuse material and deceptive content.

The Internet Freedom Foundation, a digital advocacy group based in New Delhi, expressed concerns that these rules risk accelerating censorship due to the drastically compressed takedown timelines. They argue that this leaves minimal scope for human review and will likely push platforms towards automated over-removal. In a statement posted on X, the group also raised objections regarding the expansion of prohibited content categories and provisions allowing platforms to disclose user identities to private complainants without judicial oversight.

"These impossibly short timelines eliminate any meaningful human review," the group asserted, issuing a warning that such changes could erode free-speech protections and due process.

Two industry sources informed TechCrunch that the amendments emerged from a limited consultation process, with only a narrow selection of suggestions ultimately incorporated into the final rules. While the Indian government appeared to adopt proposals to narrow the scope of covered information — specifically focusing on AI-generated audio-visual content rather than all online material — other key recommendations were not implemented. The sources indicated that the significant differences between the draft and final rules warranted an additional round of consultation to provide companies with clearer guidance on compliance expectations.

Governmental takedown powers have historically been a point of contention in India. Social media platforms and civil society organizations have frequently criticized the broad scope and lack of transparency surrounding content removal orders. Even Elon Musk’s X previously challenged New Delhi in court over directives to block or remove posts, contending that these actions constituted overreach and lacked adequate safeguards.

Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment on the new regulations.

These latest changes come just months after the Indian government reduced the number of officials authorized to issue internet content removal orders, an adjustment made in direct response to a legal challenge by X over the scope and transparency of takedown powers.

The amended rules are scheduled to take effect on February 20, providing platforms with a very short period to adapt their compliance systems. This rollout strategically coincides with India’s hosting of the AI Impact Summit in New Delhi, running from February 16 to 20, an event anticipated to attract senior global technology executives and policymakers to the country.

Editorial Staff, Editor