This is the second instalment of a two-part series on how the Information Technology Rules, 2021 impact Non-Profit Organisations. Click here to read part one.

We live in a world where the possibilities rendered by deepfake technology can be potent and harmful. With people glued to their phones thanks to the pandemic, the human costs of consuming viral fake news have increased tremendously. Technology exponentially increases the reach and speed of transmission of this information—while remaining agnostic to its integrity.

Ostensibly to regulate this phenomenon, the Ministry of Electronics and Information Technology (MEITY) and the Ministry of Information and Broadcasting (MIB) released the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules). The Rules—as noted in the first part of this series—regulate publishers of all kinds of digital media, as well as social media producing news and journalistic content, which includes even non-profit social impact entities publishing content online.

While the necessity for the regulation of social media is justified, the manner in which the Rules set out to do so is rather discomforting—to say the least. We fear that these guidelines will lead to a kind of capitalist plutocracy directly in conflict with the constitutional values of free speech—which may also impact the free flow of marginalised voices that non-profit organisations strive to promote online. Let us show you how.

Acknowledging That Free Speech Can Hurt: The Constitutional Design

While teaching a lesson on free speech, the teacher Samuel Paty displayed caricatures of the Prophet Mohammed in a history class in France. Not much later, he was gruesomely beheaded by Abdullakh Anzarov, an 18-year-old Muslim youth linked to fundamentalist Islamic groups. French politicians and the public rallied in protest against the beheading, in support of the country’s uncompromising attitude towards unrestricted free speech.

On the other hand, in India, banning controversial material—such as Salman Rushdie’s books or M.F. Husain’s paintings—to prevent the hurting of religious sentiments (while restricting free speech) is commonplace. In the same spirit, shows like Leila, A Suitable Boy, Paatal Lok, and Tandav have recently been the subject of public complaints for hurting religious sentiments and stereotyping identities. These complaints are typically made before the local police and are tried as criminal offences.

That this can happen in India—unlike in some of its democratic counterparts—is intended by legal design. The fundamental right to freedom of expression under Article 19 of the Constitution does not entail an absolute right to express. It allows “reasonable restrictions” on the right to free speech to protect the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality or in relation to contempt of court, defamation or incitement to an offence.

In the context of the Rules, these restrictions are being applied to social media. Built on the ideas of unfettered free speech, social media has now easily lent itself as a platform for organized vandalism, social protests, and even the circulation of rape threats. The victims of Internet violence often cannot obtain effective redressal from the online intermediaries—such as Facebook, LinkedIn, or Twitter—as ‘action’ is limited to reporting the offender and blocking their account. Once again, for many, going to the police is the legally pragmatic way of resolving online abuse and threats.

This changes with the new Rules: the intermediaries are now held more accountable for the activities taking place on their platforms. The appointment of an identified grievance redressal officer will indeed provide a definitive and welcome pathway for the redressal of a user’s bona fide grievance.

But beyond this, we see little merit in the proposed “self-regulatory system” for the Publishers.

The Executive Holds the Gavel

The Rules implement a three-tier regulatory mechanism for grievance redressal regarding online content. The first level is a “self-regulating mechanism” to be established by the intermediary. At the second level is a “self-regulating body” formed by Publishers and headed by a retired High Court or Supreme Court judge, or an independent eminent person from the relevant field. Above this is an “Oversight Mechanism”—an Inter-Departmental Committee constituted by the MIB, consisting inter alia of representatives from various ministries.

What’s important here: the Rules do not provide for appealing the Committee’s decision.

If an individual has concerns regarding pornographic content published on a website, the person can approach the platform’s self-regulating mechanism. At first glance, it might appear that the publisher is the primary decision-maker on the content it carries. But on closer inspection, the power of the executive—that is, the Inter-Departmental Committee—reigns supreme. This is because the third level of regulation has appellate jurisdiction not only over decisions at levels one and two, but also over matters directly referred to it by the MIB.

Simply put: at its own discretion, the MIB can pick content that it deems offensive and refer it to the Committee for ‘action’. Action could include a directive to take down the content, modify it, issue an apology, reclassify it, or add a disclaimer to it.

Considering that the Committee largely consists of representatives from the various ministries, it is difficult to ignore that the executive has usurped control over digital media in one sweeping stroke. It would be impossible to keep political motivations out of complaints to the Committee. This could affect the online media of civil society organisations and non-profits, whose work often covers the underbelly of government narratives—as discussed below, their content may be arbitrarily and ambiguously deemed worthy of being struck down by the State.

Moreover, since the Rules do not provide for an appellate authority—like the Supreme Court, for example—to challenge the decisions of the Committee, the MIB holds the gavel for the final decision on circulating the content. This carries other far-reaching practical implications.

Political Control over Digital Media

The government will have the power to censor content on online platforms. Many small organisations cannot afford to divert resources towards defending and explaining grievances before the Committee time and again. Instead, to avoid appearances before the Committee, they are likely to resort to self-censorship and align their content with the interests of the government.

This breeds an unhealthy political dynamic operating at the mercy of the ruling party. Digital content will come to bear a plutocratic voice—this is anathema to the very intent of the regulation. Diversity in media narratives is likely to be compromised, simultaneously affecting the reach and impact of non-profits working to address less savoury social issues.

Instead, the Rules would have been better placed proposing a judicial or quasi-judicial body to address grievances raised over content. It is not for a bureaucratic body to interpret, for example, whether a piece of content is likely to disturb public order—especially given that this is a legal question.

Challenging the Legal Validity of the Rules

It would seem that the Central Government travelled beyond its mandate under the Information Technology Act, 2000 (the Act) by extending the application of the Rules to publishers of news and current affairs. The Central Government derives its authority to make rules from Sections 87(2)(z) and (zg) of the Act. These provisions empower the Central Government to make rules for intermediaries. The Act neither regulates publishers of news and current affairs nor classifies them as intermediaries. Therefore, publishers of news and current affairs cannot be a subject of control under the Rules—and the same goes for non-profits too.

More importantly, it is apparent that such executive control on content will violate the right to freedom of speech and expression under Article 19(1)(a) of the Constitution. On these same charges, the Rules have been challenged by LiveLaw before the Kerala High Court and by the Foundation for Independent Journalism before the Delhi High Court. It remains to be seen if they can pass the test of constitutionality—whether in the case of publishers, or in the case of end-to-end encryption.

Tracing the Originator: On Paper and In Practice

Checking fake news and content spread by anonymous users, and pre-empting dangerous disruptions that threaten law and order, is an important undertaking. To address such incidents, the Rules prescribe that a significant social media intermediary (described in part one) shall trace the first originator of information, if asked to do so through a judicial order from a competent court or by the competent authority under the Act. This certainly looks good on paper, but is enforcement possible?

Following international standards of data protection—such as the General Data Protection Regulation (GDPR)—social media intermediaries have adopted end-to-end encryption, which makes it difficult, if not impossible, to track the first originator. For instance, in a matter before the Madras High Court and the Supreme Court, Facebook had expressed its inability to trace the originator of a post after multiple reposts. Increasingly popular messaging platforms such as Signal do not collect data that can identify any person.


While the Rules compel intermediaries to trace the originator, which may require breaking end-to-end encryption, they will also have to ensure that the right to privacy of individuals is not compromised in the process. How the government plans to work with relevant stakeholders to make this balanced tracing possible remains to be seen.

What Can We Expect in the Future? 

It is a matter of comfort that, in the post-GDPR era, several countries, including China, Singapore, and Australia, as well as Global South countries such as Brazil and India, are looking to align their data protection frameworks with the GDPR to protect an individual’s right to data privacy.

However, the flow of data is controlled by a few tech giants, such that the international climate is gravitating towards a paradox—the regulator is becoming the regulated. That is, Big Tech is being gradually equipped with the resources to determine how its own operations should be regulated. With the Rules in India, the stage is now set for similar corporate-political alignment, which is detrimental to the health of a well-functioning social sector that holds power structures accountable. 

This will strengthen capitalist narratives that align with the State’s version of events—while muzzling the stories that must be told fearlessly and without censorship. 

The Supreme Court criticized the Rules as being ‘toothless’ since they don’t provide for (criminal) action to be initiated against the OTT platforms. In response, the government submitted that the Rules encourage a free-for-all media and internal self-regulation, and that it would come up with a better version shortly.

If a toothless version of the Rules is this stifling, one shudders to imagine how the ‘improved’ version would lash out. After all, Article 19 is hinged on reasonableness, not restraint. It is now in the hands of the High Courts to determine whether the Rules violate the fundamental right to freedom of expression.


Featured photograph courtesy of Markus Winkler on Unsplash.

Nivedita is a lawyer and company secretary (LL.B & ACS). She is the Founder of Pacta, a social and impact sector exclusive law firm based in Bengaluru. Pacta advises billion-dollar philanthropies, family foundations, NGOs, CSR entities, public trusts, start-ups, social incubators/accelerators, schools, and universities. Nivedita is also a performing classical dancer and is passionate about exploring the intersection of the performing arts and the law.
Geetanjali is a lawyer with a specialization in the field of Law and Development. She is currently a legal associate at Pacta, Bengaluru. Her areas of interest include technology regulations, privacy and data protection laws, and climate change.
