The ethical dilemmas of Artificial Intelligence

Ethical questions in AI research and development present unique challenges: they ask us to consider whether, when, and how machines should make decisions about human lives, and whose values should guide those decisions.

Beijing: With so many businesses and governments having entered the AI race, there is a risk that many have lost sight of some important things that should continue to matter, such as the rule of law, good governance, and ethics. In the AI arena, the stakes are extremely high, and the field is quickly becoming a free-for-all, from data acquisition to the theft of corporate and state secrets. The "rules of the road" are being determined along the way, since the legal regime governing who can do what to whom, and how, is either wholly inadequate or does not exist. As in the cyber world, the law is well behind the curve.
Ethical questions abound with AI systems, beginning with how machines recognize and process values and ethical paradigms. AI is certainly not unique among emerging technologies in creating ethical quandaries, but AI research and development present a distinct challenge in that they ask us to consider whether, when, and how machines should make decisions about human lives, and whose values should guide those decisions.
In a world filled with unintended consequences, will our collectively shared values fall by the wayside in the race for AI supremacy? Will the notion of human accountability eventually disappear in an AI-dominated world? Could the commercial AI landscape evolve into a winner-takes-all arena in which only one firm or machine is left standing? Will we lose our ability to distinguish between a victory and a victory worth having, in business as well as on the battlefield? Some military strategists already envisage future AI-laden battlefields as sites of "casualty-free" warfare, since machines will do the killing and bear the risk.
While AI remains in an embryonic state, now is the perfect time to establish rules, norms, and standards by which it is created, deployed, and utilized, and to ensure that it enhances globally shared values and elevates the human condition in the process. While there will probably never be a single set of universal principles governing AI, in trying to understand how to shape the ethics of a machine, we are forced to think more about our own values, and about what is really important.
New forms of threat are emerging as AI becomes more widely utilized, so it is important that we regain agency over it. In the United States, for example, the technology giants of Silicon Valley have pledged to work together to ensure that any AI tools they develop are safe. Similar discussions about the limits of ethical AI research are occurring elsewhere in the world, though they are more opaque, and some governments seem entirely unconcerned with ethical considerations.
Many of the leading AI researchers in the West are signatories to a 2015 open letter calling for a ban on offensive autonomous weapons. Just as Microsoft in 2017 proposed a Digital Geneva Convention to govern how governments use cyber capabilities against the private sector, an international protocol should be created to govern not only how governments deploy AI against one another, but also how they deploy it against the private sector, and how private actors deploy it against each other.
Attempting to govern AI will not be an easy process, for there are overlapping frames of reference. New norms are emerging, but it will take a long time to work through the questions being raised. Many are straightforward issues about technology, but many others concern what kind of societies we want to live in and what values we wish to adopt in the future. If AI forces us to look at ourselves in the mirror and tackle such questions with vigour, transparency, and honesty, then its rise will be doing us a great favour in the long term. History suggests, however, that the things that should really matter will either get lost in translation or be left by the side of the road in the process.
We may see a profound shift in agency away from man and toward machine, with decision-making increasingly delegated to machines. If so, our ability to implement and enforce the rule of law could prove to be the last guarantor of human dignity and values in an AI-dominated world. Given the great difficulty with which we continue to grapple with such fundamental issues as equality and gender bias, what should sit at the top of the AI "values" pyramid? How can we even know what human-compatible AI is, or will become?
In 2017, the Asilomar AI Principles were created as a framework for how AI may be used ethically and beneficially. Thousands of AI researchers (and others) have signed on to these principles. Some professionals in the field worry that future regulations could prove unhelpful or misguided, or could even stifle innovation and cede competitive advantage to individuals and organizations in countries where the Principles are not adopted. Others see them as a definitive step in the right direction. There is, naturally, disagreement among AI researchers about just what the risks of AI are, when those risks could arise, and whether AI could ultimately pose an existential risk to humankind.
Few researchers would suggest that AI poses no risk. The number of AI researchers who signed the Principles, as well as the open letters on developing beneficial AI and opposing lethal autonomous weapons, shows a strong consensus among researchers more generally that much more needs to be done to understand and address the known and potential risks of AI. The right policy and governance solutions could help align AI development with these principles and encourage interdisciplinary dialogue about how that may be achieved.
What may be inevitable is that AI will fall into the same abyss as the cyber arena, with nefarious actors hijacking the domain and negatively impacting its evolution. Serious ethical questions have already been raised about AI, and they are only likely to multiply with time. It is up to governments and the global business and academic communities to maintain momentum and to propose solutions for how the ethical dilemmas raised by AI will be addressed in the decades to come.
Daniel Wagner is senior investment officer for guarantees and syndications at the Asian Infrastructure Investment Bank in Beijing. He is co-author of “AI Supremacy: Winning in the Era of Machine Learning”.
