Mumbai: India's digital public infrastructure story rightly commands celebration. We built rails for identity, payments, and service delivery at population scale, altering the nation's trajectory. Yet as we consider the next leap, AI in governance, we must acknowledge that this evolution requires an equally deliberate public foundation, with one non-negotiable upgrade: it must be designed for citizens who do not conform to the "ideal user" template.
This is the essential argument for an "Inclusivity Stack": a collection of common standards, foundational components, datasets, audit methodologies, and procurement frameworks that makes inclusive, assistive-first digital services the default across government. Startups can build on this foundation, and departments can deploy AI without inadvertently excluding the very citizens whom welfare and governance systems exist to serve.
This is neither charity nor a specialized feature request. It is a question of digital dignity: the right of a person with disability to access public technology must be regarded as foundational to citizenship in a digital state.
India does not need to invent the moral vocabulary for inclusion; it already permeates our legal frameworks and policy intentions. The Rights of Persons with Disabilities (RPwD) Act, 2016 is anchored in principles such as dignity, non-discrimination, and accessibility, and it positions "communication" and "universal design" within a framework that clearly encompasses modern ICT and assistive technologies. In other words, accessibility is not an optional appendage to governance; it is fundamental to how rights materialize in practice.
Yet day-to-day digital service design frequently proceeds as though the "normal" citizen possesses stable connectivity, perfect vision and motor control, high literacy, unlimited time, and the capacity to navigate linear, form-heavy workflows at machine speed. The cost of this assumption is not abstract: it shows up as abandoned applications, repeated office visits, dependence on intermediaries, and a quiet but persistent exclusion that remains invisible if all success metrics are averaged across the population.
AI can exacerbate this chasm, because automation inherently penalizes edge cases, or it can dramatically narrow it by becoming the most powerful assistive layer the state has ever deployed. The difference will be determined less by model size than by governance choices: standards, audits, procurement frameworks, and public infrastructure.
A substantial portion of exclusion originates from basic engineering defaults that are mistakenly treated as neutral. Consider rate limits and timeouts that assume continuous attention, captcha flows that presuppose visual pattern recognition, or fraud detection systems that interpret "atypical" interaction patterns as inherently suspicious. For many assistive technology users (those employing screen readers, switch access, dictation tools, magnification interfaces, or alternate input devices), interaction unfolds more slowly, sometimes non-linearly, and frequently involves repeated corrections. A system engineered for speed and frictionless automation can unintentionally become a system designed for failure.
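To make the failure mode concrete, here is a minimal sketch in Python of how a naive risk heuristic can penalize assistive interaction timing, and how widening the "normal" band for declared assistive input modes changes the outcome. Every threshold, field name, and mode label below is an illustrative assumption, not a description of any deployed system.

```python
# Hypothetical sketch of the failure mode described above: a naive risk
# score that reads slow, retry-heavy interaction as suspicious, next to a
# variant that widens the "normal" band for declared assistive input modes.
# All thresholds, field names, and mode labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Session:
    seconds_per_field: float  # average time spent on each form field
    retries: int              # fields corrected or re-entered
    input_mode: str           # "keyboard", "dictation", "switch", ...

def naive_risk_score(s: Session) -> float:
    """Flags anything slower or less linear than the 'typical' user."""
    score = 0.0
    if s.seconds_per_field > 30:  # screen-reader users routinely exceed this
        score += 0.5
    if s.retries > 3:             # dictation users correct frequently
        score += 0.5
    return score

def assistive_aware_score(s: Session) -> float:
    """Same signals, but assistive input modes get wider 'normal' bands."""
    assistive = {"dictation", "switch", "screen_reader", "magnification"}
    slow_limit = 120 if s.input_mode in assistive else 30
    retry_limit = 15 if s.input_mode in assistive else 3
    score = 0.0
    if s.seconds_per_field > slow_limit:
        score += 0.5
    if s.retries > retry_limit:
        score += 0.5
    return score

session = Session(seconds_per_field=75, retries=8, input_mode="dictation")
print(naive_risk_score(session))       # 1.0: blocked as "suspicious"
print(assistive_aware_score(session))  # 0.0: legitimate assistive use
```

The point of the sketch is the design choice, not the numbers: the same behavioural signal reads as fraud under one configuration and as legitimate assistive use under the other.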
This is why inclusive AI governance cannot be reduced to "make the interface accessible." Accessibility is necessary but not sufficient. The real problem is workflow fit. Public services are not consumer apps; they are rights-bearing pathways. If a workflow assumes one path, one pace, one mode of input, and one type of cognitive load, then any deviation becomes "user error." In inclusive governance, deviations are not errors; they are legitimate expressions of human diversity.
India already maintains government website guidelines that explicitly align with global accessibility expectations, including WCAG 2.1 Level AA within GIGW 3.0. That baseline should function as the starting line for AI-era service design, not the finish line.
An Inclusivity Stack should be conceived the way India conceived its earlier digital rails: as public-interest infrastructure that makes desirable behaviours easy and adverse outcomes hard. It would standardize accessible components and interaction patterns across government departments, so that foundational elements are not reinvented in every project and vendors cannot keep selling compliance solutions that do not scale.
At its core, this stack should incorporate three distinct layers. First, an experience layer: certified, reusable UI and voice components that function harmoniously with assistive technologies while supporting non-linear workflows. Second, a governance layer: inclusion audits embedded within AI impact assessments, featuring disparity metrics that cannot be disregarded. Third, a model-and-data layer: shared datasets, evaluation frameworks, and fine-tuned public models constructed from consented, privacy-preserving interaction data originating from disabled and neurodivergent user groups.
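To make the governance layer's "disparity metrics that cannot be disregarded" tangible, here is a sketch, assuming hypothetical session records and an assumed audit threshold, of the kind of completion-rate gap an inclusion audit could compute and refuse to wave through:

```python
# Illustrative sketch of a "disparity metric" for the governance layer:
# the gap in workflow completion rates between assistive-technology users
# and everyone else. Record fields and the audit threshold are assumptions.
def completion_rate(sessions: list[dict]) -> float:
    if not sessions:
        return 0.0
    return sum(1 for s in sessions if s["completed"]) / len(sessions)

def disparity_gap(sessions: list[dict]) -> float:
    at_users = [s for s in sessions if s["assistive_tech"]]
    others = [s for s in sessions if not s["assistive_tech"]]
    return completion_rate(others) - completion_rate(at_users)

# Toy data: assistive-technology users complete far less often.
sessions = [
    {"assistive_tech": False, "completed": True},
    {"assistive_tech": False, "completed": True},
    {"assistive_tech": False, "completed": True},
    {"assistive_tech": True,  "completed": True},
    {"assistive_tech": True,  "completed": False},
    {"assistive_tech": True,  "completed": False},
]
MAX_GAP = 0.05  # assumed policy threshold an audit could enforce
gap = disparity_gap(sessions)
print(f"disparity gap: {gap:.0%}")  # ~67% here, so sign-off must be blocked
if gap > MAX_GAP:
    raise SystemExit("inclusion audit failed: disparity gap exceeds threshold")
```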
This is where policy imagination must get specific. "Voice" should not mean merely information being read aloud. Voice-to-action should mean that the system can complete an end-to-end workflow through voice: discover available services, explain requirements, capture intent, fill forms, request confirmations, manage consent, and produce outputs, without forcing citizens back into a screen-centric labyrinth. And when a workflow fails, the failure must be analysed through an assistive-technology lens: what element malfunctioned, for which assistive pathway, at which step, and why?
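One hedged sketch of what that failure analysis could look like in practice: a structured event, with assumed field names and values, that records every workflow failure against the assistive pathway in use, so that the four questions above become queryable rather than anecdotal.

```python
# A minimal sketch of the failure record described above: every workflow
# failure is logged against the assistive pathway in use. Field names and
# values are assumed for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class AssistiveFailureEvent:
    service: str             # which government workflow
    step: str                # the step at which the workflow broke
    element: str             # the precise UI or voice element that failed
    assistive_pathway: str   # screen reader, switch access, dictation, ...
    reason: str              # machine-readable failure cause

event = AssistiveFailureEvent(
    service="scholarship_application",
    step="upload_income_certificate",
    element="file_picker_dialog",
    assistive_pathway="screen_reader",
    reason="dialog_not_announced",  # focus never reached the dialog
)
print(json.dumps(asdict(event), indent=2))
```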
If there is one lever policymakers and bureaucrats can pull to rapidly reshape market dynamics, it is procurement. The government is among the largest purchasers of digital systems, and procurement norms inevitably become industry standards.
India maintains a relevant accessibility standard within IS 17802, structured as Part 1 (requirements) and Part 2 (conformance determination) for ICT products and services. If procurement mandates conformance to such standards—coupled with genuine testing rather than mere paperwork—vendors will prioritize inclusion from the outset instead of retrofitting under pressure.
Procurement must also address the emerging AI-specific challenge: vendor lock-in through proprietary models, exclusive evaluation methods, and closed telemetry. If public funds are financing capabilities that the government depends upon, then the state must negotiate ownership or robust usage rights over model weights and learnings where appropriate, alongside reproducible training pipelines and documentation that allows switching vendors without losing capability. This is not anti-market; it is the AI-era extension of what effective public procurement has always required: continuity of public service, not continuity of a particular vendor.
Equally important, procurement checklists must explicitly incorporate assistive workflow allowlists. Disabled users may depend on patterns, such as automation tools, alternate inputs, and repeated retries, that appear anomalous to risk engines trained on "typical" behaviour. If inclusion is a state objective, then assistive usage patterns must receive legal and administrative protection rather than being automatically treated as suspicious.
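In code, such an allowlist could be as simple as a registry of protected pattern labels that the risk engine must consult before it blocks anyone; the pattern names, threshold, and decision flow below are illustrative assumptions, not a reference design.

```python
# Hedged sketch of an assistive workflow allowlist: a registry of protected
# interaction-pattern labels that the risk engine must subtract before it
# decides to block. Pattern names and thresholds are illustrative.
PROTECTED_PATTERNS = {
    "automation_tool",  # form-filling helpers for motor impairments
    "alternate_input",  # switch devices, eye tracking, sip-and-puff
    "repeated_retry",   # frequent corrections from dictation software
}

def risk_decision(anomaly_flags: set[str], base_score: float) -> str:
    """Act only on anomalies that are NOT protected assistive patterns."""
    unprotected = anomaly_flags - PROTECTED_PATTERNS
    if not unprotected:
        return "allow"   # every anomaly is a protected assistive pattern
    if base_score >= 0.5:
        return "review"  # human review instead of silent auto-denial
    return "allow"

print(risk_decision({"repeated_retry", "alternate_input"}, 0.6))  # allow
print(risk_decision({"geo_mismatch", "repeated_retry"}, 0.6))     # review
```

Note the design choice in the sketch: even unexplained anomalies route to human review rather than silent auto-denial, which is the administrative protection the checklist should demand.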
Statistical exclusion in AI systems arises when training data under-represents disabled users: models optimize for majority patterns and fail silently at the margins, turning critical access points into moral and administrative failures. An Inclusivity Stack must therefore include government-backed datasets focused on disability and intersectional exclusion, collected responsibly through high-trust architectures with purpose limitation, accessible consent flows, privacy-preserving methods, and independent oversight. Such data enables transformative public models: open, safety-reviewed, and fine-tuned for assistive use cases such as speech-to-intent conversion, text simplification, form assistance, multilingual workflows, and accessible summarization, treated as public goods with clear evaluation protocols and continuous improvement cycles.
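As a minimal sketch of what "purpose limitation" could mean operationally, assuming a hypothetical consent registry and invented purpose labels: dataset access returns only the records whose contributors consented to the requesting purpose.

```python
# A minimal sketch of purpose limitation for consented datasets, assuming a
# hypothetical consent registry and purpose labels: access returns only the
# records whose contributors consented to the requesting purpose.
CONSENT_REGISTRY = {
    # participant_id -> purposes that participant consented to
    "p-001": {"speech_to_intent", "text_simplification"},
    "p-002": {"form_assistance"},
}

def fetch_records(dataset: dict[str, dict], purpose: str) -> list[dict]:
    """Return only records usable for the stated purpose."""
    allowed = {
        pid for pid, purposes in CONSENT_REGISTRY.items()
        if purpose in purposes
    }
    return [record for pid, record in dataset.items() if pid in allowed]

dataset = {
    "p-001": {"audio_ref": "utt_0341", "transcript": "open ration card form"},
    "p-002": {"audio_ref": "utt_0512", "transcript": "next field"},
}
# A team training a speech-to-intent model receives only p-001's record:
print(fetch_records(dataset, "speech_to_intent"))
```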
India positions itself as a significant convenor in the global "AI for good" conversation; the India AI Impact Summit 2026 frames "People" as one of three guiding sutras, emphasizing dignity and inclusivity while bringing diverse stakeholders together. This moment demands movement beyond rhetoric toward measurable governance commitments across ministries. An Inclusivity Stack fits that vision precisely because it builds institutional capability: standards, procurement frameworks, datasets, model governance, and accountability mechanisms. It is larger than any single application or audit. AI will reach every government touchpoint; if inclusion is absent from the foundational rails, exclusion scales automatically, and if it is embedded, dignity scales with it, turning principle into lived reality for every citizen.
*Brijesh Singh is a senior IPS officer and an author (@brijeshbsingh on X). His latest book on ancient India, “The Cloud Chariot” (Penguin) is out on stands. Views are personal.