Taming the Hydra: Regulatory solutions that could actually curb online falsehoods

President Emmanuel Macron of France, in a 2018 address to the United States Congress, warned that the scourge of fake news undermines democracy, true choices and rational decisions. Earlier this year, in a virtual discussion organized by the Atlantic Council, he expressed shock that the networks which had “sometimes helped” Donald Trump reach the masses so efficiently backflipped by suspending him “the very second, when they were sure it was over”.
Macron is not alone in voicing pushback against social media giants. German Chancellor Angela Merkel and European Commission President Ursula von der Leyen have echoed similar sentiments. Political leaders across the world are flexing their muscles to rein in the power of “private players” acting as judge, jury and executioner on matters pertaining to speech, misinformation, antitrust and data.
It’s the wild west in the tech world, and the cowboys don’t need no sheriff!

NEW SHERIFFS IN TOWN
However, the Covid-19 pandemic may have accelerated regulation. The pandemic has brought with it an infodemic of misinformation, and research suggests that people are more vulnerable to falsehoods during periods of stress and unpredictability. In response, governments across many nations have introduced a raft of legislative, administrative and judicial measures aimed at targeting online falsehoods. Some recent examples include:
France passed two anti-fake news laws in 2018, in the wake of alleged Russian meddling in its presidential election. Germany’s NetzDG law, passed in 2017, has been labelled the most ambitious effort by a Western democracy to check online content; its implementation, however, has run into trouble because too much content ends up being blocked.
Europe-wide, the Digital Services Act incorporates proposals such as external audits of how firms intend to stop the spread of misinformation and spruced-up national regulators to police potentially bad behaviour. Multi-million-dollar fines pegged to annual revenue are also a feature.
Russian laws on fake news oblige platforms to post corrections and remove content deemed false. Singapore has a similar provision in its legislation targeting online falsehoods.
Taiwan’s laws are more specific, targeting the spread of disinformation based on determinations of interference by foreign powers. Implementation, however, runs into the problem that attribution to foreign actors is often not possible.
Technology companies, on the other hand, are caught in the platform-versus-publisher debate and claim limited ability to deal with user-generated content, especially live or instantaneous content. Their challenge becomes even steeper with well-funded, synchronised, pre-planned misinformation campaigns run by a host of anonymous players. For every fake account and bot they shut down, a dozen more appear in its place.
Twitter, Google, YouTube, Facebook and Apple have all professed their helplessness, often in formal Congressional hearings, while at the same time insisting that they be allowed to self-regulate.
So, here’s the million-dollar question: if self-regulation is not working, and government regulation isn’t the answer, then what is?

THE HYDRA EMERGES
Greek mythology speaks of the Hydra, a serpent-like monster with many heads; cut one off and two more would grow back in its place. The Hydra finally met its match in Hercules, who, working in tandem with Iolaus, cut off one head after another, seared each stump so that no new heads could grow, and buried the final, immortal head under a rock.
Big Tech wants to self-regulate: partly by metaphorically cutting off Hydra heads as they emerge, and partly through education and advocacy campaigns aimed at users. Non-coercive measures were at one point favoured by Western governments over top-down regulation. However, in the light of serious charges like election interference and cyber-attacks, many are rethinking this approach, believing that tech companies cannot be left to regulate themselves. The efficacy of blocking measures in curbing deliberate online falsehoods is increasingly the subject of academic study. Research shows that deliberate online falsehoods are better addressed by making vast amounts of accurate information available than by controlling information or blocking communication altogether. Temper this with freedom-of-expression (FOE) activism, and forced regulatory measures could actually become counterproductive.
Regulators are not taking any chances, though. Many are meting out economic and technical sanctions, from shutting entities out of a jurisdiction and throttling access speeds to offending platforms, to imposing fines, in order to coerce compliance with rulings and directives.
In exercising sovereign power, regulators are increasingly being drawn into adversarial relationships with dominant technology players, relationships which often stem from reactive rather than proactive thinking in the corridors of power. Thinking and moving proactively requires vigilance in updating policy and law. Regulators may also need to “move fast and break things”, especially where the regulatory framework governing the internet dates from the 1990s. This ought to be balanced with strategic use of their toolkit and a multi-pronged approach of coercive and non-coercive measures.

TAME THE HYDRA
Very few sovereigns have considered institutional and proactive solutions: for example, setting up an official fact-checking agency with constitutional authority, an expert-led regulatory oversight board, public hearings on matters of public interest, consultative mechanisms that enable Big Tech to table their perspective, or public-private partnership (PPP) models to collaboratively prevent foreign interference in domestic issues and elections.
In short, curbing fake news and falsehoods is a complex and evolving challenge. Framing it as a “Sovereign vs. Platform” issue belittles it. The reality is that if anyone can do anything at scale, across borders, it is the internet platforms. The UN-sponsored Joint Declaration on Freedom of Expression recognizes that regulating fake news ought to be a multi-stakeholder effort led by the sovereign and involving key stakeholders such as technology companies, journalists, media outlets and a host of non-social-media technology intermediaries.
The challenge lies in finding common ground among diverse stakeholders who mostly appear to be at cross purposes. Outcomes-based legislation and enforcement, grounded in a clear understanding of the problem and possible solutions, can provide the framework for multi-stakeholder cooperation to curb the scourge of online falsehoods.
Taming the Hydra is partly about strategy, but mostly about finding your inner strength and your allies. In the end, though, every battle comes down to combat tactics.

Anuraag Saxena is based in Singapore and is a board advisor and public affairs expert.
Ankur Gupta is a member of the Internet Society Singapore Chapter and teaches media and technology law.
