India has ordered social media platforms to step up policing of deepfakes and other AI-generated content, while shortening the time they have to comply with takedown orders. It’s a move that could change how global tech companies moderate content in one of the largest and fastest-growing markets for internet services.
The changes, published (PDF) on Tuesday as amendments to India’s 2021 IT Rules, bring deepfakes under a formal regulatory framework that mandates the labeling and traceability of synthetic audio and visual content, while also cutting compliance timelines for platforms, including a three-hour deadline for official takedown orders and a two-hour window for some urgent user complaints.
India’s importance as a digital market magnifies the impact of the new rules. With more than a billion internet users and a largely young population, the South Asian country is a critical market for platforms such as Meta and YouTube, making it likely that the compliance measures adopted by India will influence global product and moderation practices.
Under the amended rules, social media platforms that allow users to upload or share audio-visual content must require users to disclose whether the material is synthetically produced, deploy tools to verify those disclosures, and ensure that deepfakes are clearly labeled and embedded with traceable metadata.
Certain categories of synthetic content – including deceptive imitations, non-consensual intimate images, and material linked to serious crimes – are prohibited by the rules. Non-compliance, especially in cases flagged by authorities or users, can expose companies to greater legal liability by undermining their safe harbor protections under Indian law.
The rules lean heavily on automated systems for compliance. Platforms are expected to deploy technical measures to verify user disclosures, identify and flag deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.
“The revised IT Rules mark a more calibrated approach to regulating AI-generated content,” said Rohit Kumar, co-founder of New Delhi-based policy consulting firm The Quantum Hub. “Significantly compressed complaint timelines — such as two- to three-hour takedown windows — materially increase compliance burdens and deserve close scrutiny, especially given that noncompliance is linked to the loss of safe harbor protections.”
Aprajita Rana, a partner at AZB & Partners, a leading Indian corporate law firm, said the rules now focus on AI-generated audio-visual content rather than all online information, while carving out exceptions for routine, cosmetic, or efficiency-related uses of AI. However, she warned that the requirement for intermediaries to remove content within three hours of gaining knowledge of it departs from established free speech principles.
“The law, however, continues to require intermediaries to remove content once they know of it or receive actual knowledge, and within three hours at that,” said Rana, adding that the labeling requirements apply across formats to prevent the spread of child sexual abuse material and fraudulent content.
New Delhi-based digital advocacy group Internet Freedom Foundation says the rules risk accelerating censorship by severely compressing takedown timelines, leaving little scope for human review and pushing platforms toward excessive automated takedowns. In a statement posted on X, the group also raised concerns about the expansion of prohibited content categories and provisions that allow platforms to disclose users’ identities to private complainants without judicial oversight.
“These impossibly short timelines eliminate any meaningful human review,” the group said, warning that the changes could undermine free speech and due process protections.
Two industry sources told TechCrunch that the changes followed a limited consultation process, with only a small set of proposals reflected in the final rules. While the Indian government appears to have taken on board recommendations to narrow the scope of information covered — focusing on AI-generated audio-visual content rather than all online material — other recommendations were not adopted. Given the scale of the changes between the draft and final rules, another round of consultation is needed to give companies clearer guidance on compliance expectations, the sources said.
The government’s removal powers have long been a point of contention in India. Social media platforms and civil-society groups have criticized the breadth and opacity of content removal orders, and even Elon Musk’s X has challenged New Delhi in court over directives to block or remove posts, arguing that they are excessive and lack adequate safeguards.
Meta, Google, Snap, X, and India’s IT ministry did not respond to requests for comment.
The latest changes come just months after the Indian government, in October 2025, reduced the number of officials allowed to order the removal of content from the internet in response to X’s legal challenge over the scope and transparency of takedown powers.
The revised rules will go into effect on February 20, giving platforms some time to adjust their compliance systems. The rollout coincides with India’s hosting of the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and lawmakers.