The instinct to “let AI breathe” is not only understandable, it is, in many ways, admirable. For a country determined not to miss the next technological wave, India’s hesitation to rush into heavy, top-down AI legislation reflects a genuine commitment to entrepreneurship, experimentation and global competitiveness. There is a legitimate fear that if we move too fast with rigid rules, we might unintentionally push our most ambitious founders and researchers to more permissive jurisdictions. That caution deserves respect, not dismissal.
Yet precisely because AI holds so much promise for India, it also deserves a thoughtful layer of basic safeguards. The more accurate question is not whether India should regulate AI, but what kind of regulation it should adopt and when. A blanket “no regulation” position sounds attractive only if we ignore the basic fact that AI is not emerging in a legal vacuum. The moment AI systems ingest personal data, automate decision-making in finance or health, or generate synthetic media at scale, they intersect with existing obligations under data protection, consumer protection, intermediary liability, election and sectoral regulations. Saying “no AI regulation” does not repeal those frameworks; it only guarantees that they will be applied in an ad hoc, reactive and often incoherent way.
Moreover, a pure laissez-faire position underestimates the speed and irreversibility of certain harms. Deepfakes that distort elections, synthetic child sexual abuse material, automated discrimination in credit or insurance and opaque models used in policing or welfare delivery are not abstract hypotheticals. They are harms that, once normalised and scaled, become legally and technically difficult to roll back. Waiting for a crisis before acting may feel innovation-friendly, but it is, in effect, a policy of outsourcing our risk decisions to the most aggressive market actors.
The real opportunity for India lies in a middle path: light but firm guardrails at the design stage, rather than heavy-handed licensing or blanket bans. This is not about standing up a monolithic “AI law” that attempts to govern everything from chip design to poetry bots. Instead, it is about a thin layer of ex ante obligations that attach to how systems are built and deployed, calibrated to the level of risk and context.
One way to think about this is in terms of “design hygiene” rather than “control”: if we do not build certain safeguards into the architecture of AI systems now, they will be almost impossible to reconstruct later.
The first such safeguard is a clear duty of provenance and watermarking for realistic synthetic media. Any AI-generated or AI-manipulated image, audio or video that could plausibly be mistaken for reality ought to carry both an on-screen disclosure for humans and a robust, machine-detectable watermark or provenance tag embedded in the file. Once a flood of unmarked content is in circulation, tracing origin and authenticity after the fact becomes technically and evidentially near-impossible. Importantly, India has already taken a step in this direction: the recent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, explicitly bring “synthetically generated information” within their scope and mandate prominent labelling and embedded identifiers for AI-generated content on intermediary platforms. Building on this, the next logical step is to move from platform-level labelling to model-level responsibility: the law should categorically require that no AI system be permitted to generate realistic audio, video or images for public dissemination unless it embeds such watermarking and provenance at the point of creation itself. In effect, no synthetic media without a birthmark.
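To make the idea of a “birthmark” concrete, the sketch below (in Python, using the Pillow imaging library) shows one highly simplified way a generation pipeline could attach a machine-readable provenance tag to an image at the point of creation. It is an illustration only: the tag names are hypothetical, and plain metadata of this kind can be stripped, which is why real provenance standards such as C2PA pair signed manifests with robust, in-pixel watermarks that survive re-encoding.

```python
import json
from datetime import datetime, timezone

from PIL import Image, PngImagePlugin  # pip install Pillow


def save_with_provenance(img: Image.Image, path: str, model_id: str) -> None:
    """Attach a machine-readable provenance tag when the image is first written.

    Illustrative only: the field names are hypothetical, and PNG text chunks can
    be stripped; real schemes add signed manifests and robust watermarks.
    """
    tag = {
        "generator": model_id,                                  # which model produced the content
        "synthetic": True,                                      # explicit "AI-generated" flag
        "created_at": datetime.now(timezone.utc).isoformat(),   # when it was created
    }
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_provenance", json.dumps(tag))
    img.save(path, pnginfo=meta)


# Example: tag an image produced by a (hypothetical) generation step.
generated = Image.new("RGB", (512, 512))  # stand-in for model output
save_with_provenance(generated, "output.png", model_id="example-model-v1")
```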
A second core safeguard is a default obligation of logging and traceability for AI systems used in high-impact domains such as credit and insurance, employment screening, welfare targeting, healthcare triage, law enforcement and election integrity. These systems must, by design, record key inputs, outputs and model versions in a way that allows regulators, courts and affected individuals to reconstruct how a consequential decision was made and, where necessary, to challenge it. If India allows opaque, unlogged models to become the infrastructure of decision-making, no later law will be able to retroactively recover the missing audit trail.
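As a rough illustration of what “logging by design” could mean in practice, the sketch below records, for each consequential decision, the model version, a fingerprint of the inputs, the output and a timestamp in an append-only log. The file format and field names are hypothetical; the point is simply that the audit trail must be written at decision time, because it cannot be reconstructed afterwards.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("decision_log.jsonl")  # hypothetical append-only audit log


def log_decision(model_version: str, inputs: dict, output: str) -> None:
    """Append one decision record so it can later be reconstructed and challenged."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                       # exact model that decided
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                                        # fingerprint of the inputs
        "output": output,                                     # the consequential decision
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: a (hypothetical) credit-screening decision logged at the moment it is made.
log_decision("credit-model-2.3", {"applicant_id": "A-102", "score": 0.41}, "refer_to_human")
```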
Third, high-risk deployments should be required to include meaningful human oversight as a design feature, not an afterthought. That means clearly specifying where and how humans can override, pause or review an AI system’s outputs, and ensuring interfaces and documentation that make such oversight real rather than purely formal. Once fully automated end-to-end pipelines are entrenched in critical functions, inserting genuine human control later will be institutionally and technically far more difficult.

Finally, providers of high-risk systems and large general-purpose models should be under a legal duty to conduct and document risk and impact assessments across the lifecycle: before deployment, on significant model updates and when serious incidents are reported. If we scale models today without this discipline, we will lack the basic maps needed tomorrow to understand where harms arose and how to correct course, making any attempt at retrospective governance largely performative.
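To take the oversight point alone, a minimal sketch of “oversight by design” might look like the following: outputs from a high-risk system take effect automatically only below a risk threshold, while anything above it is routed to a human reviewer who can approve, modify or reject it. The threshold, queue and names here are illustrative assumptions, not a prescribed mechanism.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    subject_id: str
    recommendation: str
    risk_score: float  # higher means more consequential or less certain


@dataclass
class ReviewQueue:
    pending: List[Decision] = field(default_factory=list)

    def escalate(self, decision: Decision) -> None:
        # Park the decision for a human reviewer instead of acting on it automatically.
        self.pending.append(decision)


REVIEW_THRESHOLD = 0.7  # hypothetical policy choice, set by the deploying institution


def apply_or_escalate(decision: Decision, queue: ReviewQueue) -> str:
    """Only low-risk outputs take effect automatically; the rest wait for a human."""
    if decision.risk_score >= REVIEW_THRESHOLD:
        queue.escalate(decision)
        return "held_for_human_review"
    return "applied_automatically"


queue = ReviewQueue()
print(apply_or_escalate(Decision("A-102", "deny_benefit", 0.85), queue))  # held_for_human_review
```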
Framed this way, basic design mandates are not about distrusting innovators, but about backing them with a stable, predictable environment. Global partners and customers increasingly look for jurisdictions where AI is both cutting-edge and dependable; if India can show that its models are not only powerful but also transparent, safe and accountable where it matters, that will strengthen our position as a preferred destination for AI development and deployment.

Starting from a place of trust in innovation and layering in a few carefully chosen guardrails is, therefore, not a contradiction. It is a way of aligning India’s growth story with long-term public confidence. The aim is not to slow AI down, but to ensure that as it accelerates, it runs on a track that is secure, fair and worthy of the scale of our ambitions. Whether we choose a light-touch, design-led pathway or a more traditional regulatory route, one principle must be non-negotiable: certain checks and regulations, with strict and credible implementation, are essential if India’s AI moment is to be both transformative and trusted.
-
Khushbu Jain is a practicing advocate in the Supreme Court of India and founding partner of Ark Legal, specialising in privacy law and data protection.