Envisioning responsible AI for India

The path to AI regulation may seem daunting, but nations have already proven capable of enacting impactful digital policies.

We are back with the second article in our series on AI bias and its impact on society. If you missed the first article, you can find it in The Sunday Guardian (pg 7, May 19) or read its web version. In this second part, we explore strategies to tackle this challenge.

Given the far-reaching consequences of AI development, governments around the world have started taking action. In October 2023, the Biden-Harris administration in the US issued an executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.” More recently, the European Union announced that its AI Act will come into force next month. Compared to the US, Europe has opted for stricter regulation, with fines for violating the AI Act ranging from 7.5 to 35 million euros (Rs 68 to 315 crore)!

In India, policy-making seems to be playing catch-up. On March 1, the Ministry of Electronics and IT (MeitY) issued an advisory asking corporations to obtain permission before launching AI products. A few days later, MeitY issued a clarification that start-ups are exempt from this advisory, which applies only to “large” platforms. It also added that the government aims to launch a draft AI policy in July. Although the legal standing of the current advisory is unclear to the tech community, there is hope that the July policy will not be withdrawn due to poor formulation, as happened in the past with the encryption policy.

 

THE WAY FORWARD

Foundational models are shaped by the attributes of the entities building them: model design, the data used for training, the bias-mitigation techniques applied, and the people involved in the process (see the previous article for details). The AI policy should therefore seek accountability from these entities in ensuring unbiased outputs. To set the stage for discussion and debate, we urge the readers of this article, and the policymakers responsible for drafting the AI policy, to think along the following lines while formulating a policy for our diverse nation:

  1. Bias Definition and Categorization: Identify and categorize key domains where these models can have biases that broadly impact society. For example, misinterpretations of culturally specific terms or expressions, and disparities in representing gender, caste, or regional diversity.
  2. Bias Mitigation Policies: To address the potential for bias in AI outputs, effective and enforceable bias-mitigation policies should include:
    1. Enforceable guidelines to ensure the removal of unnecessary bias from data and algorithms. This may involve data-cleaning techniques, fairness metrics, and human oversight during model development and training.
    2. Transparency for creative/non-factual outputs: when an AI’s outputs are intended to be creative or generative rather than grounded in factual data, users should be explicitly informed of this distinction, ensuring such outputs are not presented as factual representations.
  3. Collaborative Framework: We need to take a broader approach to policy development by engaging scientists, technologists, thought leaders, legal experts, corporations, and society at large. This engagement should be driven by an empowered technical committee that translates the ideas of “responsible AI” into detailed, implementable, and enforceable mechanisms for Indian society. This is crucial because business imperatives often hold sway in for-profit corporations, making it unwise to rely solely on their internal policies for ensuring “Responsible AI.”
  4. Fine-tuning Driven by Human Testing: Reinforcement Learning from Human Feedback (RLHF) is a method to improve the accuracy of Foundational models based on feedback provided by humans. For models to be used in India, RLHF or similar fine-tuning should happen in India and should be done by a diverse group of Indian testers, reflecting a fair cross-section of the Indian social fabric. The AI policy must have enforceable guidelines to ensure this.
  5. Preserving Indian User Data Integrity: Strict laws need to be established to prevent cross-border transfers of data generated by Indians interacting with AI models. This data could be crucial in revealing communities’ thought processes, which foreign powers could leverage against the nation.
  6. Embrace India’s Cultural Richness and Diversity: The AI policy must facilitate and enforce efforts to ensure that this technology reaches the masses by making it accessible across the broad and diverse linguistic base of India. It is also important to ensure that the model is trained on India’s vast and diverse knowledge base, much of which is non-digital.
  7. Effective Punishments for Non-Compliance: The policy should impose strong but fair punishments for non-compliance, deterring even large corporations without stifling competition or new market entrants.
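For readers curious what the “fairness metrics” mentioned under Bias Mitigation Policies look like in practice, here is a minimal sketch in Python. It computes a demographic parity gap, one common fairness metric; the group names and outcome data are entirely made up for illustration.

```python
# Minimal sketch of one fairness metric: the demographic parity gap.
# The groups and decisions below are toy data, not real measurements.

def positive_rate(outcomes):
    """Fraction of model decisions that were favourable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favourable-outcome rates between groups.

    A gap near 0 suggests parity; a large gap flags potential bias
    worth human review.
    """
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Toy example: model approvals (1) / rejections (0) for two
# hypothetical groups of users.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

An enforceable guideline might require such metrics to be reported, with audits triggered whenever the gap exceeds an agreed threshold.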
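To make the RLHF idea under Fine-tuning Driven by Human Testing concrete, here is a heavily simplified sketch of its core preference-learning step: a reward model is nudged so that responses preferred by human testers score higher than rejected ones. The feature vectors and preferences are synthetic; real RLHF operates on large neural networks, not two weights.

```python
import math

# Toy sketch of the preference-learning step at the heart of RLHF:
# nudge a reward model so that the response human testers preferred
# scores higher than the one they rejected (Bradley-Terry style loss).

def score(weights, features):
    """Reward model: a simple weighted sum of response features."""
    return sum(w * f for w, f in zip(weights, features))

def rlhf_preference_step(weights, preferred, rejected, lr=0.1):
    """One gradient step pushing score(preferred) above score(rejected)."""
    margin = score(weights, preferred) - score(weights, rejected)
    # Gradient of the loss -log(sigmoid(margin)) with respect to margin
    grad = -1.0 / (1.0 + math.exp(margin))
    return [w - lr * grad * (p - r)
            for w, p, r in zip(weights, preferred, rejected)]

# Two hypothetical response feature vectors; testers preferred the first.
weights = [0.0, 0.0]
preferred, rejected = [1.0, 0.2], [0.1, 0.9]
for _ in range(100):
    weights = rlhf_preference_step(weights, preferred, rejected)

assert score(weights, preferred) > score(weights, rejected)
```

The point of the policy recommendation is that whoever supplies the “preferred/rejected” judgments shapes the model, which is why a diverse panel of Indian testers matters.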

Some might argue that creating “Indic” Foundational models is the better way forward, rather than a complex web of regulations. While creating “Indic” alternatives is crucial, it is not a foolproof solution. Just as vehicle drivers take personal safety measures but still need traffic regulations for overall safety, AI too requires robust regulation. Moreover, building a Foundational model of our own will be specific to today’s technology, whereas policy will be more robust to technological advancements.

The path to AI regulation may seem daunting, but nations have already proven capable of enacting impactful digital policies. For instance, the EU’s GDPR, and now its AI Act, set global standards for data protection. Closer to home, the RBI mandates local data storage for card-payment networks in India, showcasing India’s commitment to safeguarding digital sovereignty. By leveraging its vast pool of expertise in technology and governance, coupled with a collective effort involving policymakers, industry leaders, academics, and civil society, India can craft a robust AI policy that fosters innovation while safeguarding societal and national interests.

 

Akshay Jajoo graduated from IIT Guwahati with a B.Tech in Computer Science (Gold Medallist) and holds a Ph.D. in Computer Science from Purdue. Ksheera Sagar received his Integrated M.Sc. in Mathematics & Computing from IIT Kharagpur and a Ph.D. in Statistics from Purdue. Both currently work as researchers in leading industry labs. The views in this article are the authors’ personal opinions. The authors also thank their friends Anil, Avatans, Apoorva, Nimayi and Prateek for their feedback.

 
