On Wednesday, October 22, 2025, the Ministry of Electronics and Information Technology proposed new rules to label, trace, and regulate deepfake and AI-generated content under India’s IT Rules, a move experts say aims to ensure authenticity in digital content.

The proposed rules would require social media platforms to have users declare whether they are posting AI-generated deepfake content that could harm individuals, organizations, or the government.
Amendments to IT Rules 2021 add accountability for State and Central governments
Pavan Duggal, a Supreme Court advocate specializing in cyberlaw and AI law, described the proposed changes as a “historic leap” in digital legislation, addressing the growing threat of deepfakes and synthetic content. He highlighted that the amendments, for the first time, formally define “synthetically generated information” as computer-altered content pretending to be genuine.
“In today’s AI-driven environment, where fabricated data can closely mimic real information, this clarity is transformative,” Duggal said. “The amendments recognize the need to regulate synthetic content, which, if misused, can undermine trust, spread misinformation, and compromise digital integrity.”
Mandatory labelling of AI-generated content
The rules would also impose strict due diligence on intermediaries. Platforms enabling synthetic content must clearly label such material by embedding permanent metadata or identifiers. Duggal noted that at least 10 percent of the visual or audio interface must carry this labelling to ensure public awareness.
Mahesh Makhija, Partner and Technology Consulting Leader at EY India, added that labelling AI-generated content with non-removable identifiers helps users distinguish between real and synthetic content and provides a foundation for responsible AI adoption. He suggested that the next step should be creating clear implementation standards and collaborative frameworks between government and industry to make the rules practical and scalable.
Duggal emphasized that enforcement will require technical sophistication, cross-platform standards, and international alignment. He also highlighted a new liability framework: intermediaries that knowingly allow or ignore unmarked synthetic content could lose their Section 79 safe harbour protections.
Under the proposed rules, social media platforms would face three main obligations: users must disclose the synthetic origins of content, platforms must use technical tools to verify these disclosures, and platforms must apply prominent synthetic-content labels whenever such origins are confirmed.
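To make the labelling obligations concrete, the sketch below is a back-of-envelope illustration, not anything prescribed by the draft rules: the record format, function names, and banner geometry are all assumptions. It shows one way a platform might attach a permanent identifier to a declared synthetic file and check that an on-screen label meets the reported 10 percent coverage threshold.

```python
import hashlib


def make_synthetic_label(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record for AI-generated content.

    The SHA-256 digest ties the label to the exact bytes, so the record
    survives as a permanent identifier even if the file is renamed.
    """
    return {
        "synthetic": True,
        "generator": generator,  # e.g. the declared tool or model name
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def label_coverage(frame_w: int, frame_h: int,
                   label_w: int, label_h: int) -> float:
    """Fraction of the visual frame occupied by the on-screen label."""
    return (label_w * label_h) / (frame_w * frame_h)


# A full-width banner 108 px tall on a 1920x1080 frame covers
# exactly 10 percent of the visual area.
record = make_synthetic_label(b"<video bytes>", "example-model")
meets_threshold = label_coverage(1920, 1080, 1920, 108) >= 0.10
```

A real deployment would more likely rely on standardised provenance metadata (such as C2PA-style content credentials) than an ad hoc record like this, but the arithmetic of the coverage requirement is the same.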
