Feature-Based Explanations Don't Help People Detect Misclassifications of Online Toxicity

  • Samuel Carton University of Michigan
  • Qiaozhu Mei University of Michigan
  • Paul Resnick University of Michigan

Abstract

We present an experimental assessment of the impact of feature attribution-style explanations on human performance in predicting the consensus toxicity of social media posts with advice from an unreliable machine learning model. By doing so we add to a small but growing body of literature inspecting the utility of interpretable machine learning in terms of human outcomes. We also evaluate interpretable machine learning for the first time in the important domain of online toxicity, where fully-automated methods have faced criticism as being inadequate as a measure of toxic behavior.

We find that, contrary to expectations, explanations have no significant impact on accuracy or agreement with model predictions, though they do somewhat change the distribution of subject errors while reducing the cognitive burden of the task for subjects. Our results contribute to the recognition of an intriguing expectation gap in the field of interpretable machine learning between the general excitement the field has engendered and the ambiguous results of recent experimental work, including this study.

Published
2020-05-26
How to Cite
Carton, S., Mei, Q., & Resnick, P. (2020). Feature-Based Explanations Don’t Help People Detect Misclassifications of Online Toxicity. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 95-106. Retrieved from https://www.aaai.org/ojs/index.php/ICWSM/article/view/7282