Centre’s proposal for new IT rules is a clear step toward ensuring authenticity in digital content: Experts


The changes, crucially, place robust due diligence obligations on intermediaries, experts said. File

| Photo Credit: Getty Images

The Ministry of Information Technology’s proposal on Wednesday (October 22, 2025) for new rules to label, trace and bring deepfake and AI-generated content under the IT Rules marks a clear step toward ensuring authenticity in digital content, cybersecurity and technology experts say.

The ministry’s proposed rules would mandate social media companies to require their users to declare whether they are uploading any artificial intelligence (AI)-created deepfake content that could cause harm to individuals, organisations or governments.

Pavan Duggal, a Supreme Court advocate specialising in cyberlaw, cybercrime law, cybersecurity law and AI law, told The Hindu: “The proposed amendments to India’s Information Technology Rules, 2021 mark a historic leap in our digital lawmaking, finally confronting the complex threat of deepfakes and synthetic content.’’

He said that, for the first time, draft amendments to Indian cyber law recognised and clearly defined “synthetically generated information” as computer-altered content masquerading as genuine, a much-needed shift aligning the law with digital realities.

“In our AI-driven age, where fabrications are nearly indistinguishable from real data, this definitional certainty is transformative. The amendments grasp the urgent need to police synthetic content, which—when misused—can destabilise trust, spread disinformation, and corrode digital integrity,’’ Mr. Duggal said.

According to him, these changes, crucially, place robust due diligence obligations on intermediaries. Platforms facilitating synthetic content creation must ensure every such piece is unmistakably labeled, embedding permanent metadata or identifiers. “This labeling—covering at least ten percent of the visual or audio interface—is no mere formality; it’s a resolute step to ensure public awareness,’’ Mr. Duggal argued.

Mahesh Makhija, Partner and Technology Consulting Leader, EY India said, “Labelling AI generated material and embedding non-removable identifiers will help users distinguish real content from synthetic.’’

He further said these measures would serve as the foundation for responsible AI adoption, giving businesses the confidence to innovate and scale AI responsibly.

“The next step should be to establish clear implementation standards and collaborative frameworks between government and industry, to ensure the rules are practical, scalable, and supportive of India’s AI leadership ambitions,’’ Mr. Makhija added.

Echoing similar sentiments, Mr. Duggal said implementation would, however, demand technical sophistication, cross-platform standards, strong enforcement, and international alignment.

Mr. Duggal added that a new liability framework would further sharpen the regime. “If an intermediary knowingly permits or ignores unmarked synthetic content, it is deemed to have failed in due diligence—risking the vital Section 79 safe harbour immunity.’’

Under the proposed regime, social media platforms would be bound to a triad of obligations: users must disclose synthetic origins; platforms must deploy technical tools to verify such disclosures; and transparent, prominent synthetic labelling is mandatory whenever synthetic origins are confirmed.
