Even as social media has taken center stage in the lives of millions of people across the world, including in India, and has become a powerful tool to spread messages and influence viewpoints, it carries some thorny baggage: it is also a medium for hate speech and disinformation.

This problem is further compounded when social media platforms are inconsistent in their approach to tackling such behavior on their platforms, or, on the flip side, when they take down content that doesn’t seem obviously problematic without explaining why.

Amid this ambiguity, the Indian government is using a combination of existing and new rules to clamp down on social media. Ostensibly, it is doing this partly to curb the menace of hate speech and disinformation on these platforms, citing their lack of responsiveness in dealing with such issues.

But, in the process, the government also accumulates vast powers to dictate to these platforms what their users may or may not say. Ultimately, it stands to control online narratives and influence conversations in the public domain, especially since several hundred million Indians actively use multiple social media platforms.

A Pattern of Favouritism

One of the more glaring examples of inconsistent content policing by the platforms was revealed last August. The Wall Street Journal reported “a pattern of favoritism” by Facebook in India towards Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP). In one instance, Facebook didn’t take down anti-Muslim posts by BJP MLA T. Raja Singh, despite their being repeatedly flagged by the company’s employees tasked with policing content. Time further reported that some Indian activists were so fed up with Facebook India’s policy team that they bypassed it altogether, flagging hate speech directly to the company’s headquarters in Menlo Park, California.

Opaque takedowns are rampant too. In November 2019, citing an email from Twitter to Indian lawyer Sanjay Hegde, The Print reported that Twitter said it would permanently delete Hegde’s account for retweeting a 2017 tweet, while the original tweeter, activist Kavita Krishnan, faced no action. This came a week after Twitter had temporarily suspended Hegde’s account because his profile’s cover image, of August Landmesser refusing to perform the Nazi salute, allegedly violated its media policy. The episode prompted several Twitter users to temporarily migrate to Mastodon, free and open-source software that lets users create social networking services and offers a microblogging feature similar to Twitter.

Hegde’s account was first suspended for its cover photo of “German national, August Landmesser, refusing to enact the Nazi salute before Hitler.” | Source: The Print.

More recently, a tweet by journalist Salil Tripathi—on a poem he wrote about his mother and the 2002 communal riots in Gujarat—was blocked and his account suspended without him being told why. Such instances create user distrust in social media platforms and fuel personal biases, be they for or against the platform in question.

In the midst of this, however, there’s another layer of complexity—the government, too, doesn’t trust social media platforms and on multiple occasions has used the country’s antiquated technology laws to stifle dissent while selectively ‘regulating’ social media. 

On the Orders of the Government of India

Last month, Twitter suspended hundreds of accounts tweeting about the ongoing farmers’ protests against three agricultural laws passed in September 2020. After a massive public outcry, the microblogging platform restored some of the accounts and revealed in a blog post that it had taken them down on the orders of the Indian government.

On this occasion, and on many others, the government has leaned on section 69A of the Information Technology Act, 2000 (IT Act), which allows it to issue directives to block public access to any information in the interest of the sovereignty and integrity of India, among other reasons.

For the government to invoke section 69A, however, a government-appointed committee has to approve its use. While “it’s good to have a committee, as it means there’s a defined process for when or how content should be taken down,” the problem is that, by legal design, it is made up only of members of the executive, says Udbhav Tiwari, public policy adviser at Mozilla in India. “With no judicial or independent representative on this committee (…) you only have to convince other government representatives [to allow section 69A’s use]. There’s no judicial standard, no legal analysis, no civil society oversight,” he adds.

In a similar vein, the law requires the proceedings to be confidential, which means a platform cannot tell users why their content is being taken down. “That’s quite worrying,” says Tiwari, adding that in the interests of transparency, the bar for such confidentiality provisions should be much higher. “It should be the exception and not the rule.”

Now, the government has gone a step further to extend its control over the Internet. In February, it notified new rules for intermediary liability: the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Experts say these are problematic on multiple fronts for social media regulation.

Stretching the Ambit of Their Reach

One big area of concern is that the new Rules stretch the ambit of their reach to include digital news media and streaming services. This is problematic because the Rules are framed under the IT Act, which “does not actually regulate these entities [streaming services and digital media] in any way,” says Torsha Sarkar, a lawyer and policy officer at the Centre for Internet & Society, Bengaluru. Moreover, unlike other sections of the new Rules, these provisions were absent from the 2018 draft rules—which form the basis of the current Rules—and so were never discussed in the public domain. This denied users a chance to vet and debate them.

“To pass rules [which didn’t have the parliamentary proceedings to back them] and create new entities for regulation violates existing constitutional jurisprudence on legislation,” Sarkar tells The Bastion in an email. That is, if the IT Act has no framework for regulating streaming platforms or digital news (which it doesn’t), then any rules made under the Act cannot encroach onto those areas either.

The matter is already in court, with a handful of media portals filing petitions questioning the Rules’ application to news media platforms. LiveLaw News Media Pvt. Ltd., which publishes the legal news website LiveLaw; the Foundation for Independent Journalism, the trust that owns the online news portal The Wire; and The News Minute founder and Editor-in-Chief Dhanya Rajendran have all filed petitions challenging the Rules in various High Courts.

There are other concerns too. 

For instance, the Rules say social media platforms should take down non-consensual sexually explicit content, and morphed or impersonated content, within 24 hours of being notified of it by users. While the harm being addressed is legitimate, the requirement goes against the Supreme Court of India’s landmark 2015 Shreya Singhal judgement, which clearly stated that companies can be expected to remove content only when ordered to do so by a court or a government agency, says Mozilla’s Tiwari.

Plus, 24 hours is “not a lot of time at all,” adds Tiwari. “It doesn’t account for weekends [when companies will be hard-pressed to respond quickly], clarificatory information that the platforms may need, or the fact that those requests may be invalid.”

Apart from giving social media companies such legal grounds to actively censor content, the Rules also fail to take into account that these same companies already use filters to scan for and remove such content, experts say.

Ultimately, though, there is the concern that content filters are not very accurate, as they cannot gauge context, sarcasm, or even the use of different languages, says Shashank Mohan, Project Manager at the Centre for Communication Governance at the National Law University Delhi. “There’s a large space of content that’s in the grey space between legal and illegal,” which could also get targeted by algorithms or user complaints under these new Rules, he says.

There’s precedent for this concern. In 2018, the government blocked The Dowry Calculator, a satirical website where users could fill in information on the social background, education status, and personal traits of a groom to arrive at a ‘fictional dowry amount’ he could demand. The founder was never told why the site—a parody of the traits that Indian families tend to prize when seeking prospective partners for their sons—was blocked. The matter ended up in court and remains unresolved; the site is still blocked to this day. Algorithms are as imperfect as their creators, and such examples bode ill for the accuracy of the content filters expected to police ‘harmful content’ under the Rules on social media platforms too.

The End of Privacy?

The Rules also raise multiple privacy concerns. For instance, they require messaging apps to identify the originator of a message—and, if the originator is outside India, the first originator of the message within India. To do this, social media intermediaries would have to break their end-to-end encryption (even though the government says it is seeking only the identity of the sender, not the content of the messages themselves), start storing additional sensitive information, or both, which potentially “compromises the privacy rights of millions of Indians at one go,” says Mohan.

Along these lines, the Rules also recommend that social media platforms give users the option to verify their identities, a process that would likely involve users handing over phone numbers and/or government-issued photo IDs. In a blog post, Mozilla’s Tiwari calls the move “dangerous for the privacy and anonymity of internet users,” one that also makes them vulnerable to profiling and targeting. “There is no evidence to prove that this measure will help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet [sic], such as whistle blowing and protection from stalkers,” he adds.

So, is there anything useful in the Rules when it comes to regulating the harms of social media? 

There is indeed an attempt to introduce more transparency and accountability into the way particularly influential social media intermediaries interact with their users, says CIS’s Sarkar. This includes requiring them to provide more information about how they handle user complaints, publish periodic compliance reports, and give notice and a chance to appeal to users whose content is removed.

But will any of these measures go any distance in reducing hate speech and disinformation—some of the stated motivations behind the Rules? “It’s difficult to tell at this juncture, since much of the content takedown procedures are newly added,” says Sarkar, even as some of the existing provisions remain a cause for concern.


Featured image courtesy of Ravi Sharma on Unsplash.
