According to UNICEF, one in three Internet users is a child. However, the Internet can be an open minefield for children. As an increasing number of children access the Internet, they disclose copious amounts of their personal data in the process, which severely endangers their privacy and security.
Personal data means data that directly or indirectly identifies a person, such as information on any characteristic, trait, attribute, or any combination of such features. Children disclose their names, dates of birth, photographs and videos, addresses, and even their live geographical location while accessing social media platforms. If such data falls into the wrong hands, it can expose a child to a myriad of risks, ranging from behavioural profiling, exposure to inappropriate content, financial fraud, and cyberbullying to even child pornography or trafficking.
For example, in the United States, Google's YouTube was recently fined USD 170 million for allegedly profiling children based on their browsing activity in order to show them targeted advertisements. The video-sharing application TikTok, hugely popular among teenagers across the globe, has also been sued by the former Children's Commissioner for England over the allegedly illegal harvesting of the personal data of millions of children in Europe.
These platforms came under the scanner in the United States and the United Kingdom largely due to the existence of proper mechanisms in these countries that protect children’s personal data. However, in India, the absence of a comprehensive data protection law makes it difficult to hold social media companies accountable. Such platforms scarcely come under scrutiny in India over their data collection policies, particularly regarding children.
Clearly, there is a pressing need for special measures to protect children's personal data online. This need has been highlighted several times by legislators as well as courts, including the Apex Court of India. However, India's legal approach to protecting children's data online depends on ambiguous, if not arbitrary, definitions of the "child" in need of protection. This could have ramifications for children's privacy and security, as well as for progressive data legislation.
How Do The Courts Recognise Children’s Data Privacy?
The Supreme Court, while declaring the Right to Privacy as a fundamental right in 2017’s landmark Puttaswamy ruling, noted that “children around the world create perpetual digital footprints on social network websites on a 24/7 basis as they learn their ‘ABCs’: Apple, Bluetooth, and Chat followed by Download, E-Mail, Facebook, Google, Hotmail, and Instagram. They should not be subjected to the consequences of their childish mistakes and naivety, their entire life. Privacy of children will require special protection not just in the context of the virtual world, but also the real world.” The Puttaswamy judgment paved the way for formulating a comprehensive data protection legislation for India. Accordingly, a Committee headed by B.N. Srikrishna (Srikrishna Committee) was tasked with analysing and making specific suggestions on principles underlying a data protection bill.
Sonia Livingstone: 53% of UK 3-4 year olds go online, with YouTube their favourite app. 23% of 8-11 year olds have a social media profile. Society cannot continue to be reactive, discovering too late that services for "everyone" are used by children. #SOCGDW
— OCR Sociology (@OCR_Sociology) December 6, 2017
In November 2017, the Committee released a White Paper on the proposed Data Protection Framework for India (White Paper) to solicit public comments on what shape the data protection law in India must take. The White Paper discussed the issue of children’s privacy in detail and noted that children using the Internet represent a vulnerable group, and hence require heightened levels of protection with respect to their personal information. Based on public comments received on the White Paper, the Committee released a Report (Srikrishna Committee Report) and a Draft Personal Data Protection Bill 2018 in July 2018. The Srikrishna Committee Report reiterated that safeguarding the best interests of the child should be the guiding principle for India’s law on protecting data of children.
In 2019, an updated draft, titled the Personal Data Protection Bill 2019 (Bill), was introduced in Parliament, based on the recommendations of the Srikrishna Committee Report. The Bill, which is currently being deliberated before a Joint Parliamentary Committee (JPC), lays out additional measures for the protection of children's personal data and sensitive personal data (which includes financial data, health data, biometric data, and sexual orientation, among other things). After a slew of delays, the JPC's report on the Bill is expected to be released in the current monsoon session of Parliament. The clauses laid out in the Bill will determine how children's data is protected in India in the future.
How Young Do You Have To Be, To Be Considered A Child On The Internet?
A “child” is defined under the Bill as “a person who has not completed eighteen years of age”. The Bill thus intends to make Internet access by anybody below 18 years subject to strict parental consent and age-gating mechanisms. Valid consent under the Bill means consent that is free, informed, specific, clear, and capable of being withdrawn. The Bill does not consider persons below 18 capable of giving such valid consent, which means their parents will have to consent on their behalf when children sign up for social media websites.
However, while the principles underlying the protection of children's data in the Bill are welcome, the Bill's notion of the child is drawn from archaic legislation and considers neither the privacy and interests of the "children" using the Internet themselves nor the reality of the current online world. This is also evident from the fact that the age of consent under the Bill is significantly higher than in other jurisdictions.
At present, social media websites circumvent such age restrictions by placing the accountability on the user, without obtaining any parental consent. Companies such as Facebook, Instagram, Twitter, and Snapchat disclaim in their respective Terms of Service that one has to be 13 years or older (or the minimum legal age in the relevant country/jurisdiction) to access their platforms. The cut-off age of 13 years flows from the legal system of the United States, and more specifically from its Children's Online Privacy Protection Act, 1998, which fixes the age of consent at 13 based on the assessment that children below 13 cannot understand a website's request for information and its implications for privacy. The United Kingdom and Canada also fix the age of consent at 13 years. In the European Union, the General Data Protection Regulation (GDPR) sets the default age below which a person is treated as a "child" at 16 years, and member states may lower it to 13 years.
This comment from @facebook is important as Indian draft data protection bill keeps age of consent at 18 vs 13 in USA. Bill wants parent permission for sharing children data. This threatens wide user base of teenagers in India – a largely young country. #privacy #dataprotection
— Megha Mandavia (@MeghaMandaviaET) December 18, 2018
So, in the Indian context, social media websites are presently being accessed by "children" within the meaning of the Bill, in spite of the platforms' own age restrictions. These websites, being data fiduciaries under the Bill (entities that, alone or in conjunction with others, determine the purpose and means of processing personal data), would have to operationalize a mechanism for age verification, and for obtaining parental consent for their 'minor' audience, if the cut-off age of 18 is retained in the Bill that finally passes.
However, it remains to be seen how social media websites will verify the ages of their users, given that the current process of simply providing a date of birth can be easily circumvented by entering an earlier date. The Bill is silent on what constitutes an appropriate age verification mechanism, which is expected to be specified by subsequent regulations.
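To illustrate why self-declared dates of birth are such a weak gate, here is a minimal sketch (hypothetical; the Bill prescribes no such mechanism) of the kind of check platforms typically run at sign-up:

```python
from datetime import date
from typing import Optional

def age_on(dob: date, today: date) -> int:
    """Completed years of age as of `today`, from a declared date of birth."""
    years = today.year - dob.year
    # Subtract one year if this year's birthday has not yet occurred.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def passes_age_gate(dob: date, cutoff: int = 18,
                    today: Optional[date] = None) -> bool:
    """Naive age gate: trusts whatever date of birth the user declares."""
    today = today or date.today()
    return age_on(dob, today) >= cutoff
```

Because the declared date of birth is never verified against anything, a child can pass this gate simply by typing an earlier year, which is exactly the loophole the Bill leaves to subsequent regulations to close.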
The White Paper, for example, suggested developing an aptitude test as an age verification mechanism, to assess the capacity of the child to understand the consequences of giving consent. Another mechanism that has caught interest is requiring users to link identification documents to their social media accounts: petitions have been filed across Courts praying for directions to link social media accounts with Aadhaar cards. However, Courts, including the Supreme Court, have declined to pass such directions, stating that such linkages would put the personal data of account holders at stake. Recently, the Government also disclosed that it has no plans yet to require social media account holders to submit Aadhaar cards. This comes as a temporary sigh of relief amid rising concerns of a growing surveillance State.
All these discussions centre on what the State or parents should do to protect children, leading to an extremely paternalistic approach that neglects the voices and interests of children themselves and takes away their agency. An over-reliance on parental consent for all persons below 18 years also incentivizes children to lie about their age, which may cause more harm than good.
But, more importantly, the blanket age of consent imposed under the Bill treats a 16-year-old teenager and a 7-year-old child in the same manner. This ignores the fact that the Internet hosts a large audience of “children”, which is only growing by the minute. Even among children, different age groups access and consume digital services and content differently, given their varying levels of understanding, cognitive development, and preferences—which means they require different kinds of protection.
How Can ‘Children’ Online Be Treated Differently, Yet Protected Equally?
As opposed to uniformly imposing parental consent requirements on anyone under the age of 18, a variable system of consent based on the age and the type of online service being accessed, would be more feasible and effective. Notably, the White Paper had also suggested a variable age limit instead of a blanket age of consent at 18 years. However, it is important to note that the Srikrishna Committee Report released subsequently, dropped this suggestion, instead stating that the age of consent to access the Internet needs to be commensurate with the age to contract.
Nevertheless, a cue can be taken from the recently notified Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which, notwithstanding their other, more controversial provisions, have introduced a graded rating system for viewing audio-visual digital content on the basis of the age of the viewer. These categories follow the familiar ratings: “U”, “U/A 7+”, “U/A 13+”, “U/A 16+” and “A”. By way of example, minors above a certain age (say, 16 years) could be allowed to access online services without parental consent, while profiling and targeted advertisements remain banned for all children below 18 years.
Additional security measures can also be put in place for different age groups, such as making minors below 16 years non-searchable on social media websites and having content filters for different age groups of children, such as 7+, 13+, and 16+. (For example, information about birth control can be made accessible to a 16-year-old but should not be accessible to a 7-year-old.) Such a variable system of consent balances the right of children to access the Internet while also protecting them from harm.
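The variable regime sketched above is, in engineering terms, just a policy table keyed on age. The tiers and rules below are illustrative assumptions drawn from the article's own examples (consent-free access above 16, no profiling below 18, non-searchability below 16), not provisions of the Bill or the 2021 Rules:

```python
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    needs_parental_consent: bool
    profiling_and_ads_allowed: bool
    searchable_profile: bool
    max_content_rating: str  # highest permitted tier, using the 2021 Rules' labels

def policy_for(age: int) -> AccessPolicy:
    """Map a user's age to a graded access policy (illustrative tiers only)."""
    if age >= 18:
        return AccessPolicy(False, True, True, "A")
    if age >= 16:
        # Old enough to consent alone, but still shielded from profiling.
        return AccessPolicy(False, False, True, "U/A 16+")
    if age >= 13:
        return AccessPolicy(True, False, False, "U/A 13+")
    if age >= 7:
        return AccessPolicy(True, False, False, "U/A 7+")
    return AccessPolicy(True, False, False, "U")
```

The design point is that a single cut-off collapses all of these rows into one, whereas a table like this lets each protection (consent, profiling, searchability, content rating) switch off at the age where it stops serving the child's interest.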
Additionally, under the Bill, certain data fiduciaries that operate commercial websites or online services directed at children, or that process large volumes of children's data, would be classified as "Guardian Data Fiduciaries". Guardian Data Fiduciaries would be barred from profiling, tracking, or behaviourally monitoring children, or from directing targeted advertisements at them. Given their large adolescent audience between the ages of 13 and 18, it is likely that gaming and ed-tech applications, as well as social media websites such as Facebook, Instagram, and YouTube, would be classified as Guardian Data Fiduciaries.
Social media websites would need to completely re-evaluate their age verification and consent mechanisms, as well as their methods of data analytics and advertising for minor users, to stay compliant with the Bill. Once it is passed, any processing of children's data in violation of the Bill would invite stringent penalties, as high as INR 15 crore or 4% of the worldwide annual turnover of the Data Fiduciary/Guardian Data Fiduciary, whichever is higher.
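For a sense of scale, the ceiling is the higher of the two figures, so the flat cap binds only for smaller businesses. A quick illustrative calculation (the turnover figures below are hypothetical):

```python
def max_penalty_inr(worldwide_turnover_inr: float) -> float:
    """Penalty ceiling under the Bill: INR 15 crore or 4% of total worldwide
    annual turnover, whichever is higher (1 crore = 10 million)."""
    FLAT_CAP_INR = 15 * 10_000_000  # INR 15 crore
    return max(FLAT_CAP_INR, 0.04 * worldwide_turnover_inr)

# A firm with INR 1,000 crore turnover: 4% (INR 40 crore) exceeds the flat cap.
# A firm with INR 100 crore turnover: 4% is only INR 4 crore, so the
# INR 15 crore flat cap applies instead.
```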
It remains to be seen if the JPC will, in its long-awaited report on the Bill, propose lowering the age of consent from 18 years, in consonance with international practice. Irrespective of the age of consent, the law must maintain the precarious balance between being in sync with the realities of the online world and making the Internet a safer space for children and their personal data.