Defining and measuring “harm”
A key challenge for the mitigation of online harms is enabling regulators, practitioners and researchers to reach a joint understanding of the nature of a given harm, and of how that harm might then be measured. These two steps – definition and quantification – underpin many of the policy aims shared by industry and Government, and are the building blocks for later attempts to map causal paths between online harms and mitigation actions. Further, the Online Harms White Paper, and the response to consultation, stress the importance of transparency and of a proportionate regulatory response. This – alongside the proposed duty of care for content hosting services – will require detailed work to build a shared understanding of harms, and to establish how those harms and their mitigations can be reported and audited in a transparent way. While some measures are relatively well understood (e.g. takedowns of fake accounts, terrorist or extremist material, etc.), we know too little about how these actions affect harm to judge their success or otherwise, and thus whether they are proportionate. For instance, terms such as ‘misinformation’, ‘disinformation’, ‘fake news’ and ‘problematic content’ are often used interchangeably, and the potential harm of such content is currently difficult to measure – as is the impact of any mitigation (e.g. removing content or fact checking).
In this project we seek to theorize (and problematize) the notion of online harm, using a range of creative elicitation techniques and structured dimensionality reduction methods to define and delimit ‘harm’ in an online setting. The work begins with a systematic, multi-disciplinary review of online ‘harms’, together with stakeholder workshops to elicit conceptions of harm from regulators, law enforcement, industry and end users. Following this, a taxonomy of harms – alongside potential exemplar definitions and metrics – will be developed.
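The intended output – a taxonomy pairing each harm with a working definition and candidate metrics – could be represented as a simple shared data structure. The sketch below is purely illustrative: every category name, definition and metric is a hypothetical placeholder standing in for what the review and workshops would actually produce, not a project result.

```python
from dataclasses import dataclass, field

# Illustrative only: categories, definitions and metrics below are
# hypothetical placeholders, not outputs of the project.

@dataclass
class HarmCategory:
    name: str                         # e.g. "disinformation"
    definition: str                   # working definition agreed with stakeholders
    exemplar_metrics: list = field(default_factory=list)  # candidate measures

taxonomy = [
    HarmCategory(
        name="disinformation",
        definition="False content created or spread with intent to deceive",
        exemplar_metrics=["reach before takedown", "re-share rate after fact check"],
    ),
    HarmCategory(
        name="misinformation",
        definition="False content shared without deceptive intent",
        exemplar_metrics=["prevalence per 10,000 posts"],
    ),
]

# A shared taxonomy lets different platforms report and audit mitigations
# against the same named metrics.
metric_index = {m: c.name for c in taxonomy for m in c.exemplar_metrics}
```

The point of such a structure is auditability: if regulators and platforms agree the metric names and definitions up front, transparency reports from different services become comparable.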