In the age of deepfakes, India finally calls a fake a fake

By: KHUSHBU JAIN
Last Updated: February 15, 2026 01:27:31 IST

Once, saying “I saw the video” ended every debate. In 2026, it starts one. A clip can now be the quickest route to a lie, built from your display photo, your voice and a few vacation shots, stitched into a new “you” that looks and sounds authentic but is entirely false.

A FAKE YOU, LIVING YOUR LIFE

The most unsettling harm is also the hardest to see on paper: a fake version of you starts acting in the world, through the very social and messaging platforms you and your family use every day. Imagine a late-night call to your parents. It is your voice, your intonation, your pet names. You say you have met with an accident, that you need money immediately, that they must not call anyone else. The number is unfamiliar, but the panic is real. Or picture your finance team receiving a video message in the office WhatsApp group from what appears to be the CFO, calmly instructing an urgent transfer to a “new” account. The face looks right, the background is the usual office, the tone is familiar. By the time anyone asks questions, the money is gone.

FROM “CONTENT” TO “SYNTHETICALLY GENERATED INFORMATION”

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2026, notified on 10 February, introduce an unglamorous but important term into our statute books: “synthetically generated information”. It covers audio, visual or audio-visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a way that makes it appear real, authentic or true and capable of being perceived as indistinguishable from a real person or real-world event. Routine edits, such as correcting colour, removing background noise or reformatting a PDF, are expressly carved out. The target is deception, not creativity.

The rules do not list such scenarios, but they directly address the underlying conduct. Intermediaries offering tools that can create synthetic media are now obliged to use reasonable and appropriate technical measures, including automated systems, to prevent users from generating or sharing synthetics that misrepresent a person’s identity, voice, conduct, action or statement in a deceptive way, or that falsely depict a real-world event. If it looks like you, sounds like you and is designed to mislead, it is now firmly in the crosshairs of existing criminal law, with platforms expected to help stop it rather than look away.

THE UGLIEST DEEPFAKES OF ALL

Some harms leave no room for argument. The rules single out synthetically generated content that contains child sexual abuse material, non-consensual intimate imagery or is obscene, paedophilic or invasive of bodily privacy. This is the legal system catching up with one of the most vicious abuses of AI: taking someone’s ordinary photograph and turning it into a weapon that can be circulated endlessly through social feeds and private chats.

The technology to paste faces onto explicit images is already cheap and widespread; teenagers and women are often the targets. A victim may be told, “It’s not really you”, but colleagues, neighbours and relatives who watch a clip dropped into a group chat or trending on a local channel may not be so discerning. The social punishment is meted out as if the image were real.

FAKE PAPERS, REAL DANGER

AI is also very good at faking paper and process. The rules explicitly mention synthetics that result in the creation or alteration of fake documents or false electronic records. Think of AI-generated Aadhaar or PAN cards used to open accounts, synthetic degree certificates attached to job applications, or fabricated court orders and “notices” doing the rounds on messaging apps to intimidate or confuse; not to forget their use in so-called “digital arrests” staged over video calls. The rules also cover synthetic media relating to the preparation or procurement of explosives, arms or ammunition. In other words, the law is alive to the fact that AI-generated manuals, videos or “how-to” guides are not just harmless information. In the wrong hands, they are operational instructions ready to be shared and reshared at the click of a button.

THE TARGET IS DECEPTION, NOT CREATIVITY

One of the most striking features of the amendment is its insistence on labelling and provenance. If synthetic content does not fall into the outright illegal categories, it can still be published, but only with a visible or audible label that clearly identifies it as synthetically generated. Platforms must also, to the extent technically feasible, embed permanent metadata or other technical provenance markers, including a unique identifier that points to the computer resource used to create or alter the content. They are expressly forbidden from allowing these labels or markers to be removed.
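What might such a marker look like in practice? The sketch below is purely illustrative, not the format the rules prescribe: it writes a “synthetic” flag, a generator identifier and a unique ID into a PNG image’s metadata text chunks using Python’s Pillow library. The field names here are hypothetical assumptions for the example; real deployments are converging on cryptographic provenance standards such as C2PA.

```python
# Illustrative sketch only; field names ("synthetic", "generator_id",
# "marker_id") are hypothetical, not a format prescribed by the rules.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def embed_provenance(src_path: str, dst_path: str, generator: str) -> str:
    """Write a synthetic-content label and a unique identifier
    into a PNG's metadata text chunks."""
    marker_id = str(uuid.uuid4())  # unique identifier for this output
    meta = PngInfo()
    meta.add_text("synthetic", "true")        # declares the content synthetic
    meta.add_text("generator_id", generator)  # points to the tool that made it
    meta.add_text("marker_id", marker_id)
    Image.open(src_path).save(dst_path, pnginfo=meta)
    return marker_id


def read_provenance(path: str) -> dict:
    """Read the text chunks back; under the rules, platforms must not
    allow such labels or markers to be removed."""
    return dict(Image.open(path).text)
```

The obvious weakness of this toy approach, that plain metadata can be stripped in seconds, is precisely why the rules speak of permanent markers and forbid their removal, and why industry standards aim to bind provenance information to the content cryptographically.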

FASTER CLOCKS, HEAVIER SHOULDERS

The most consequential change is in the timelines. The generic 36-hour window for complying with removal directions has been cut to 3 hours in the opening clause. Certain grievance timelines have been halved. Some categories of user complaints now require action in as little as 2 hours. These numbers reflect an uncomfortable truth: in the attention economy, especially on social networks and private messaging apps, a fake can do most of its damage in the first few hours. A doctored clip that trends all morning does not become harmless because it is finally taken down in the evening.

Taken together with the new due-diligence obligations, the effect is to shift responsibility onto those best placed to act quickly — the platforms themselves. And for significant social-media platforms, there is an added step: they must ask users up front whether content is synthetic, use reasonable tools to test that declaration and label confirmed synthetic content before it goes live. If they knowingly allow violations or fail to act, they are deemed to have failed in their duty of care.

THE NECESSITY OF LAW; WHETHER IT IS ENOUGH, ONLY THE FUTURE WILL TELL

It is tempting to see every new AI rule as either a magic shield or a censor’s sword. These amendments are neither. They do not ban AI, turn every filter into a crime, or guarantee you will never be targeted by a deepfake. What they offer is narrower and more honest: they finally call synthetically generated content what it is, anchor it in existing criminal law, and tell platforms that prevention, labelling and speed are now legal duties, not feel-good CSR. They give victims a clearer route to demand takedown and enforcement in an ecosystem where a single fake can reach millions before breakfast. The rest is up to us. No statute can stop people from believing what they want to believe, and no label or watermark can replace the habit of asking “Could this be false?” before forwarding that jaw-dropping clip.

Khushbu Jain is a practising advocate in the Supreme Court of India and founding partner of Ark Legal, specialising in privacy law and data protection.
