India’s IT Rules Updated To Regulate AI-Generated Content With Three-Hour Takedowns And Mandatory Labels
India has amended its IT rules to regulate AI-generated and synthetic content, mandating clear labelling and swift takedowns to curb deepfakes and misinformation.
The Government of India has formally notified a significant amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at regulating AI-generated content, including deepfakes and other synthetic media, across the country’s vast online ecosystem. The update, published on February 10 and scheduled to come into force on February 20, 2026, marks a pivotal shift in India’s approach to digital governance by explicitly bringing AI-generated and manipulated content under the ambit of legally enforceable norms.
At its core, the amendment introduces a clear definition of what the government terms “synthetically generated information”: audio, visual, or audio-visual material that has been created, generated, modified, or altered using a computer resource in a way that makes it appear authentic or real. The definition is broad enough to cover deepfake videos, AI-enhanced images, and text or audio generated by AI models, provided the output is designed to seem plausible or indistinguishable from genuine content. Routine digital editing, such as colour correction, accessibility improvements, or legitimate academic or creative modifications, is excluded from this category so long as it does not mislead or create false records.
Mandatory Labelling And Traceability Requirements
One of the amendment’s cornerstone provisions is the mandatory labelling of AI-generated content. Platforms that enable the creation, hosting, or dissemination of synthetic content must ensure that such material is prominently marked so that users can immediately identify it as AI-generated. The labelling must be clear and unambiguous, and where technically feasible, platforms must embed permanent metadata or unique identifiers that trace the content back to the system or tool that produced it. The rules also bar intermediaries from removing or suppressing these labels once applied, in order to preserve transparency.
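The notification does not prescribe a particular technical standard for these identifiers. As a minimal sketch of what embedding a machine-readable label might look like in practice, the Python snippet below writes provenance fields into a PNG image’s text chunks using the Pillow library; the field names (“SyntheticContent”, “GeneratorId”) are hypothetical illustrations, not terms drawn from the rules.

```python
# Minimal sketch: marking a PNG as synthetically generated via metadata.
# Field names are hypothetical; the rules do not mandate this format.

from PIL import Image
from PIL.PngInagePlugin import PngInfo  # noqa: intentional? no - see below

from PIL.PngImagePlugin import PngInfo


def label_synthetic_png(src_path: str, dst_path: str, generator_id: str) -> None:
    """Re-save a PNG with metadata identifying it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")   # machine-readable flag
    meta.add_text("GeneratorId", generator_id)  # traces content to the tool
    img.save(dst_path, pnginfo=meta)


label_synthetic_png("output.png", "output_labelled.png", "example-model-v1")
```

Plain text chunks like these are easily stripped when an image is re-encoded, so the “permanent” traceability the rules envisage would more plausibly rely on signed provenance manifests such as C2PA Content Credentials than on simple metadata fields.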
To enforce this, platforms are required to ask users to declare whether the content they are uploading is AI-generated. Intermediaries are expected to deploy reasonable and proportionate technical measures, including automated systems, to verify the accuracy of these declarations. A platform that fails to take these steps and ends up hosting unlabelled or deceptive synthetic content may be deemed to have failed its due diligence obligations, which can carry serious legal consequences, including the loss of safe-harbour protection discussed below.
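The rules do not specify how declarations and automated checks should interact. One plausible triage flow, sketched below under assumed names (the detector score and the 0.8 threshold are placeholders, not anything the rules define), would cross-check the user’s declaration against an automated detector before publication.

```python
# Illustrative sketch: reconciling a user's declaration with an
# automated synthetic-content detector. All names and thresholds
# are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool
    detector_score: float  # 0.0 (likely authentic) .. 1.0 (likely synthetic)


def triage(upload: Upload, threshold: float = 0.8) -> str:
    if upload.user_declared_synthetic:
        return "publish_with_label"   # mark prominently as AI-generated
    if upload.detector_score >= threshold:
        return "hold_for_review"      # declaration may be inaccurate
    return "publish"


print(triage(Upload("c1", user_declared_synthetic=False, detector_score=0.93)))
# -> hold_for_review
```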
Faster Takedown Timelines And Platform Accountability
Under the revised framework, the timelines for removing unlawful content have been significantly compressed in response to concerns about the speed at which online misinformation and deepfakes can spread. Previously, intermediaries had up to 36 hours from the time of a lawful order to disable or remove content. The new rules reduce this to just three hours after receipt of a court order or a direction from a competent authority. User grievance redressal timelines have also been tightened: initial acknowledgements are now expected within seven days instead of fifteen, and some urgent takedown actions are required within two hours.
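Operationally, these windows translate into simple deadline arithmetic for a compliance team. The sketch below computes due times from the figures described above; the category names are illustrative labels, not terminology from the rules.

```python
# Sketch: computing compliance deadlines from the timelines described
# in the text (3 hours for ordered takedowns, 2 hours for urgent
# actions, 7 days for grievance acknowledgement). Category names are
# illustrative, not drawn from the rule text.

from datetime import datetime, timedelta, timezone

DEADLINES = {
    "ordered_takedown": timedelta(hours=3),
    "urgent_takedown": timedelta(hours=2),
    "grievance_acknowledgement": timedelta(days=7),
}


def deadline_for(category: str, received_at: datetime) -> datetime:
    """Return the latest permissible completion time for an obligation."""
    return received_at + DEADLINES[category]


received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(deadline_for("ordered_takedown", received))
# -> 2026-02-20 12:00:00+00:00
```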
These obligations apply not only to international social media platforms with a presence in India, such as Meta’s Facebook and Instagram, Google’s YouTube, and X (formerly Twitter), but also to any digital intermediary that enables public sharing of content. The requirement to remove certain categories of AI-generated material within strict timelines is aimed at curbing the rapid spread of harmful deepfakes and misleading manipulated media in real time.
Focus On Harmful And Illegal AI Content
The rules explicitly target categories of synthetic media that pose a heightened risk of harm or unlawful behaviour. Content involving child sexual abuse material, non-consensual intimate imagery, false documents, impersonation, or misleading depictions of events or people is prohibited and must be proactively prevented or removed. Platforms are also required to inform users at least once every three months about their responsibilities, including warnings that failure to comply may result in account suspension, termination, or legal action.
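The quarterly notification duty is, in scheduling terms, a recurring three-month deadline. A minimal sketch of that calendar arithmetic, using only the standard library, is shown below; the function names and the notification content are assumptions for illustration.

```python
# Sketch: computing when the next quarterly user notification falls
# due ("at least once every three months"). Names are illustrative.

import calendar
from datetime import date


def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping to the month's end."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


def next_notification_due(last_sent: date) -> date:
    return add_months(last_sent, 3)


print(next_notification_due(date(2026, 2, 20)))  # -> 2026-05-20
```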
To reinforce accountability, the amendment clarifies that intermediaries will not lose legal protections under Section 79 of the Information Technology Act, 2000 for removing or restricting access to synthetic content, including via automated tools, provided they observe the due diligence obligations stipulated in the revised rules. This addresses earlier industry concerns about whether proactive takedowns could undermine a platform’s safe-harbour protections.
Balancing Safety, Innovation, And Digital Freedom
The notification reflects the government’s intent to strike a balance between encouraging innovation and safeguarding users against misuse of generative AI technologies. As AI tools become more accessible, the potential for harmful applications, such as misleading political deepfakes, fraudulent impersonations, or synthetic media that could fuel social discord, has grown. By integrating AI content into the existing IT Rules framework, India joins a growing list of jurisdictions seeking to govern synthetic media without stifling legitimate technological advancement.
However, how these rules will be implemented in practice, especially by smaller platforms and start-ups, remains a subject of discussion. The tightened timelines and mandatory labelling could pose compliance challenges, and debates over how to balance enforcement with freedom of expression are likely to continue as the rules take effect.