Proactive Monitoring of Online Harms at Scale
Current research on online harms (abuse, hate speech, dis/misinformation, cyberbullying, revenge pornography, etc.) has focused on examining single platforms (e.g., Twitter or Facebook) and/or on isolating known sources of abuse (e.g., fake news websites, platforms with a high degree of toxicity), with limited effectiveness.
One often under-explored aspect is how certain web communities act as amplifiers of online harms – in particular, communities like 4chan, Reddit, and Manosphere-related forums. These are considered “fringe,” but fringe does not mean unimportant. In fact, assuming that their actions are confined to their own platforms is naïve and unhelpful, as prior work has repeatedly shown this is not the case.
In this project, we will study fringe communities to increase our understanding of the role they play in the online harms ecosystem with respect to the actions they “cause” on other, bigger platforms. In particular, we aim to:
- learn how to proactively detect when coordinated actions perpetrated by users on those platforms are about to “spill over” onto mainstream platforms and social media;
- do so without infringing on users’ privacy, i.e., by setting up a monitoring infrastructure that does not require mass surveillance or problematic data retention.
For the former, we plan to build on our data science experience to model online harms at scale; for the latter, we will rely on differential privacy and secure multiparty computation.
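To illustrate the privacy-preserving direction, the sketch below shows a differentially private count of flagged posts using the Laplace mechanism: a platform could report how much coordinated content it observes without retaining or revealing which individual users contributed. This is a minimal illustration, not the project's actual design; the function name and parameters are hypothetical.

```python
import math
import random

def dp_count(flags, epsilon=1.0):
    """Return a differentially private count of flagged posts.

    A count query has sensitivity 1 (adding or removing one user's
    flag changes the count by at most 1), so adding Laplace noise
    with scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for f in flags if f)
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise via inverse-transform sampling
    u = random.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

Smaller values of `epsilon` give stronger privacy but noisier counts; aggregating noisy counts across many monitoring windows is where the scale of the data helps recover useful signal.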
As part of the project, we will develop tools that let social networks and users flag content that is part of coordinated online harm campaigns, all while protecting users’ privacy.