MITIGATE

Understanding and Auditing the Impact of Mitigation Strategies on Online Harms

Laura Smith, Adam Joinson, Othman Esoul, Alicia Cork – University of Bath

Several strategies are routinely used by social media companies, governments, and online communities to mitigate online harms. These include setting community standards, removing content that violates those standards (takedowns), blocking user accounts, deploying counter-narratives, and moderation by members of the community ("bystander interventions", including the reporting of harmful behaviour or content). These strategies have a specific aim: to reduce the harm that may come to individual users and communities from social media use. However, despite platforms' efforts to engage with researchers to improve social behaviour on their services, there has been little or no research that has worked with clearly defined harms and mitigating behaviours and then quantified the effectiveness of interventions. To set standards and employ effective mitigation strategies, stakeholders need:

  • the ability to define and quantify harm;
  • an understanding of the causal pathways between content/behaviours and harm;
  • mitigation strategies designed in line with those understandings; and
  • data on the efficacy of those strategies.

This test case will study, using the Bath Nexus cohort and data available from our bespoke platforms, the extent to which interventions to tackle online harm are effective, and ways to encourage community reporting of potentially harmful content and interactions.