Ethical, political and epistemic implications of machine learning (mis)information classification: insights from an interdisciplinary collaboration


Andrés Domínguez Hernández, Richard Owen, Dan Saattrup Nielsen and Ryan McConville

Abstract

Machine learning (ML) classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation. In building these models, data scientists need to make assumptions about the legitimacy and authoritativeness of the sources of ‘truth’ employed for model training and testing. This has political, ethical and epistemic implications which are rarely addressed in technical papers. Despite (and due to) their reported high performance, ML-driven moderation systems have the potential to shape public debate and create downstream negative impacts. This article presents findings from a responsible innovation (RI)-inflected collaboration between science and technology studies scholars and data scientists. Following an interactive co-ethnographic process, we identify a series of algorithmic contingencies: key moments during ML model development that could lead to different future outcomes, uncertainties and harmful effects. We conclude by offering recommendations on how to address the potential failures of ML tools for combating online misinformation.