Safer Internet – By the People, Of the People, For the People

Dr Partha Das Chowdhury – Core Researcher at REPHRAIN

Digital participation is critical to our shared access to many essential services, such as education and healthcare. This participation is managed and realised, in part, through the Internet, which is increasingly recognised as a public utility, much like transport or energy infrastructure. While the safety and standards of roads, for example, have long been established as matters for public discourse, the debate around online safety is only now beginning to gain similar traction.

One example of this growing concern is the establishment and global observance of Safer Internet Day. This blog summarises and reflects on the efforts made to date by system designers, policy makers, local authorities, schools, and parents, drawing on our own expertise and experience to distil useful insights for the future.

The moral obligation of protection mechanisms – Protection mechanisms are often designed without consulting the communities and groups who will use them about their needs and abilities. This inadvertent exclusion disproportionately affects those who fall outside the designer’s conception of the user’s body and mind, typically those with less privilege. Consequently, protection mechanisms fail to fulfil their moral obligations.

In 2019, when deciding whether a person had the capacity to decide on internet and social media use, a judge in the United Kingdom observed[1]: “I do not envisage that the precise details or mechanisms of the privacy settings need to be understood, but P should be capable of understanding that they exist and be able to decide (with support) whether to apply them.” This observation is instructive: an evaluation of an individual’s mental and physical abilities is critical to understanding whether that individual (here, P) would be able to use the privacy mechanisms provided.

Involving a diverse range of users with differing abilities and experience in the design stage of such a mechanism would better safeguard the needs of all individuals. In the case above, the individual was neither physically nor mentally capable of using their social media privacy settings correctly, meaning they did not have equal access to safe and private browsing; this becomes a matter of injustice. As a general insight, the focus during design should be on developing technologies that all users can use in ways they are able to manage and have reason to value.

A Liability Regime That Works – TikTok, the video-focused social networking service, is especially popular among younger users, including school children. Famed for dance routines and life hacks, the app has also been used to spread harmful content, for example TikTok challenges such as ‘slap a teacher’[2]. While schools and local authorities can respond to such incidents, they are not in a position to take the content down; only the platform can remove inappropriate content. This is part of the larger issue of regulation and of the roles and responsibilities platforms and governments are expected to assume in protecting citizens online. Possible pathways to solutions can be found in other areas: in automobile manufacture, sustained lobbying led the USA to establish the National Highway Traffic Safety Administration (NHTSA) and Europe to adopt the Product Liability Directive[3].

While both serve as examples of successful regulatory effort, the important distinction when developing regulation for the Internet is its impact on civil liberties (a tension at the heart of REPHRAIN). We can draw on debates that took place centuries ago in the domain of jurisprudence, as articulated by Nobel laureate economist Amartya Sen[4].

The prevailing theory of jurisprudence in the West goes back to Hobbes. It was developed by Rousseau and later by Immanuel Kant, and is sometimes called the contractarian model, after Rousseau’s idea of the social contract. The opponents of Hobbes never really got their act together; among them we find the likes of Wollstonecraft and Bentham.

There might not be a perfectly just and fair way of resolving these debates, yet the way forward is a realisation-focused comparative approach achieved through democratic principles. This means understanding how actual societies and groups emerge, instead of adopting canonical positions on either absolute freedom of speech or its complete absence. An integral component of resolving these debates would be learning from plural formulations of aggregate social choices in different contexts. This synergises with the formulations of inclusivity specified in the section above on the moral obligations of protection mechanisms.

Systematic reporting and escalation mechanisms – Studies conducted with citizens (including information technology professionals) report a widespread presence of fear and paranoia[5]. Fear can impede our ability to make rational decisions. In 2018, police helicopters were scrambled after reports of a child being kidnapped in broad daylight. After seven hours of police investigation and blanket media coverage, the ‘kidnapper’ contacted the authorities to explain that he was no kidnapper at all: the child was his daughter[6].

This is an example of how the rapid spread of hysteria can result in the inappropriate use of resources. If we expect ordinary citizens to play an active role in protecting themselves, we risk unrealistic assessments of threats and consequent inappropriate responses[7]. This is especially true in online spaces, where the criminal is a faceless entity who may well be skilled at presenting themselves as legitimate. We cannot reasonably expect citizens to know the modus operandi of criminals, or how apps harvest data; protection mechanisms must, during the design phase, apply a reasoned scrutiny of users’ skills and adjust their expectations of user-based assessments accordingly. It is worth noting that platforms do have mechanisms in place to assess user reports of abuse and/or threats, as sketched below.
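To make this concrete, the following Python sketch shows one way a reporting mechanism could avoid over-burdening the citizen: the user supplies only a free-text report of what worried them, while classification and severity assessment stay with the platform. All names, fields, and routing rules here are illustrative assumptions, not a description of any real platform’s system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserReport:
    """What a citizen can reasonably provide: what they saw and why it worried them."""
    reporter_id: str
    content_id: str
    note: str = ""  # free text; no threat taxonomy is demanded of the user
    received: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def triage(report: UserReport, reporter_is_minor: bool) -> str:
    """Route the report: the platform, not the citizen, assesses severity."""
    if reporter_is_minor:
        return "priority-review"  # e.g. expedited human review
    return "standard-review"

report = UserReport("u123", "post456", note="a stranger keeps messaging me")
print(triage(report, reporter_is_minor=True))  # -> priority-review
```

The design choice worth noting is that the escalation logic lives entirely on the platform side, consistent with the argument above that user-based threat assessments should carry limited weight.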

Co-design Systems with Citizens – Systems have traditionally been built on tendentious assumptions, driven by providers’ views of what is right for their users. This entails making flawed assumptions about users: their personal, social, and political realities. Systems built without considering actual users and groups fail to meet legitimate user expectations and needs. The moral argument for understanding the personal realities of users, in terms of their age, ability, and education, should be extended to include their social and political realities. System designers should engage in a reasoned scrutiny of the winners and losers any system will create; this is important from the perspective of dignity and justice. This is contra Ferdinand[8] and will require a paradigm shift in the way regulators and system designers treat their prospective users, making a conscious effort to explain the possible uses and limitations of their technology. Such a process would leave designers with a more capable user base, and give that user base greater reason to value the technology. There is precedent for such a relationship: citizens by and large cooperated with their governments during the pandemic when restrictions on their movements were backed by reason.

The Human Element – Although we recommend tempering our expectations of users in an online environment, we encourage users to develop their relationship with the online world with suitable safeguards. One such safeguard, shown to be effective in routine activity theory in criminology, is the presence of a capable guardian[9]. This guardian could perform simple functions such as flagging unsolicited messages sent to children or advising parents that their child’s internet activity has dramatically increased, alongside even simpler measures such as a default lens cover for every device with a camera; a sketch of one such guardian function follows below. While the events these measures flag are not necessarily precursors to harmful activity, they contribute to an environment of alertness among citizens, which can be developed further through safety clinics, security games, and awareness drives (such as this one!). We can reduce crime against ourselves by improving this aspect of our online selves.
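As an illustration, the Python sketch below shows how a guardian function might flag a sudden rise in a child’s daily online activity against the child’s own recent baseline. The function names, the one-week history window, and the two-standard-deviation threshold are assumptions chosen purely for illustration; any real deployment would need careful calibration, transparency, and consent.

```python
from statistics import mean, stdev

def activity_spike(daily_minutes: list[float], today: float,
                   threshold_sd: float = 2.0) -> bool:
    """Flag today's usage if it exceeds the child's own recent baseline
    by more than `threshold_sd` standard deviations."""
    if len(daily_minutes) < 7:  # wait for a week of history before alerting
        return False
    baseline, spread = mean(daily_minutes), stdev(daily_minutes)
    return today > baseline + threshold_sd * max(spread, 1.0)

def flag_unsolicited(sender: str, known_contacts: set[str]) -> bool:
    """Flag messages from senders the child has never interacted with."""
    return sender not in known_contacts

# Example: roughly 90 minutes a day all week, then 400 minutes today -> flagged.
history = [90, 85, 100, 95, 110, 88, 92]
if activity_spike(history, today=400):
    print("Guardian alert: unusual increase in online activity")
```

Comparing against the child’s own baseline, rather than a fixed universal limit, reflects the point made throughout this blog: protections should fit the individual rather than an imagined average user.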

Conclusion – “The best is the enemy of the good.”[10] This adage applies to the design and construction of security systems as much as to any other technology: we build these systems to protect humans, and their failures have human consequences. It is therefore critical that system designers, policy makers, and other relevant stakeholders understand the human element in the use of their systems. This understanding must encompass a diverse range of users and their needs in order to develop an inclusive system that protects all. In security system design, pursuing the perfect at the expense of the workable is pernicious.

The landscape of online harms is complex, and it is unrealistic to impose a regime of blame and coercion, or to expect training alone to equip users to negotiate this landscape safely, especially those who were not considered during the design phase of the systems they are using. Citizens lock their doors to protect themselves from thieves, a simple and achievable measure, but we do not expect them to install anti-aircraft batteries on their rooftops: that relative complexity is both inappropriate and beyond their ken. We have password managers to help citizens generate and remember passwords, and end-to-end encryption (E2EE) to help them encrypt their messages; we should likewise expect stakeholders in critical roles to play a more active part in protecting citizens online.

[1] https://www.bailii.org/ew/cases/EWCOP/2019/3.html

[2] https://www.bbc.co.uk/news/uk-wales-59219230

[3] https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A31985L0374

[4] https://www.opendemocracy.net/en/amartya-sen-and-idea-of-justice/

[5] https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01298/full

[6] https://letgrow.org/kidnapping-false-alarm-news/

[7] https://www.schneier.com/essays/archives/2010/05/worst-case_thinking.html

[8] Fiat justitia, et pereat mundus: “let justice be done though the world be destroyed” was the motto of Ferdinand I, Holy Roman Emperor, 1558-1564. This phrase was also used by Kant in his “Perpetual Peace” (1795) to emphasise the counter-utilitarian aspect of his moral philosophy.

[9] https://arxiv.org/pdf/1910.06380.pdf

[10] https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good