The Ministry of Electronics and Information Technology (MeitY) has introduced a detailed framework for identifying, classifying, and regulating deepfakes and other forms of artificially generated content. This has been achieved through a new set of proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
This initiative marks a major shift in the treatment of digital content produced or altered by artificial intelligence, as it is the first time synthetically generated information has been given a formal legal definition under India’s IT laws.
The draft proposal comes amid a rapid rise in deepfake content on social media, fuelling concerns about online harassment, identity theft, and misinformation. With national elections approaching and the use of AI tools expanding, the government aims to establish clear standards for how platforms should identify, label, and respond to synthetic content.
Mandatory labelling for AI-generated content
One of the most significant reforms proposed is mandatory labelling. Under the new Rule 3(1), every piece of AI-generated content must include a permanent, non-removable, and clearly visible label. Platforms offering AI tools for content creation or modification will be required to ensure compliance.
For visual content, the label must cover at least 10% of the screen and remain permanently visible. For audio content, an audible label must play for at least the first 10% of the clip. This means AI-generated content can no longer circulate without clear identification, and platforms will be prohibited from removing or altering these labels.
Operational mechanism for compliance
The draft also outlines a detailed operational mechanism. Any platform or application that enables users to create, modify, or share artificially generated content must ensure that such material is properly labelled or carries embedded information identifying it as synthetic.
Major platforms such as YouTube, Instagram, X, and Meta will be required to ask users to confirm whether their uploads were created using artificial intelligence before posting. They must also use automated tools to detect synthetic content and prominently label it once confirmed. A platform will be deemed to have breached its due diligence obligations if it knowingly allows unlabelled deepfake content to be published.
Obligations for significant social media intermediaries
Under the IT Rules, platforms classified as Significant Social Media Intermediaries (SSMIs), defined as those with more than five million registered users in India, must ensure users disclose whether their uploads are AI-generated. These intermediaries will also need to use suitable technical tools to verify such claims. If content is confirmed to be synthetically created, it must be clearly labelled or carry a visible warning.
The proposed amendments also clarify that platforms will not be held liable under Section 79 of the IT Act if they remove AI-generated content in response to user complaints. This provision protects intermediaries when they act against artificially generated misinformation.
Balancing regulation and privacy
To prevent overreach, MeitY has specified that these obligations will apply only to publicly posted or published content on social media platforms, not to private or unpublished material.
Furthermore, the updated regulations state that false information, defamatory content, or fraudulent impersonations created using AI will be treated in the same way as equivalent real-world offences, as synthetically generated data is now explicitly covered under the IT Rules’ definition of “information”.
Greater accountability in content takedown procedures
Separately, the Ministry has amended the Information Technology Rules, 2021 to enhance accountability in the process of ordering online content removals. From 15 November, only senior government officials, such as those holding the position of Joint Secretary or above, or, in the case of law enforcement, officers of the rank of Deputy Inspector General (DIG) or higher, will be authorised to issue takedown notices under Section 79(3)(b) of the IT Act.
Speaking to the media, Ashwini Vaishnaw, Minister for Electronics and Information Technology, said the draft seeks to hold social media companies accountable while helping users distinguish between genuine and manipulated content. “The use of well-known people’s faces to create deepfakes is an increasing threat. The measures we have introduced will ensure that users can determine whether content is real or fake. Mandatory labelling and information will make this distinction clear,” Vaishnaw explained.
MeitY has released the draft for public consultation and invited comments from individuals, industry representatives, and stakeholders until 6 November.