Empirical Analysis of Multi-Task Learning for Reducing Identity Bias in Toxic Comment Detection

  • Ameya Vaidya, Bridgewater-Raritan Regional High School
  • Feng Mai, Stevens Institute of Technology
  • Yue Ning, Stevens Institute of Technology

Abstract

With the recent rise of toxicity in online conversations on social media platforms, using modern machine learning algorithms for toxic comment detection has become a central focus of many online applications. Researchers and companies have developed a variety of models to identify toxicity in online conversations, reviews, or comments, with mixed success. However, many existing approaches have learned to incorrectly associate non-toxic comments containing certain trigger words (e.g., gay, lesbian, black, muslim) with toxicity. In this paper, we evaluate several state-of-the-art models with the specific focus of reducing model bias toward these commonly attacked identity groups. We propose a multi-task learning model with an attention layer that jointly learns to predict the toxicity of a comment as well as the identities present in the comment, in order to reduce this bias. We then compare our model to an array of shallow and deep learning models using metrics designed specifically to test for unintended model bias within these identity groups.
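The abstract describes the approach only at a high level. As a rough, hypothetical sketch of a multi-task architecture of this kind (not the authors' implementation), the PyTorch code below pairs a shared BiLSTM encoder and an attention pooling layer with two output heads, one for toxicity and one for multi-label identity prediction, trained with a weighted joint loss. All hyperparameters (vocabulary size, layer widths, number of identity labels, and the loss weight lam) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskToxicityModel(nn.Module):
    """Illustrative sketch: shared BiLSTM encoder with additive attention
    pooling, plus a toxicity head and a multi-label identity head."""

    def __init__(self, vocab_size=50000, embed_dim=128, hidden_dim=128,
                 num_identities=9):  # num_identities is an assumed count
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Additive attention scores over encoder states.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.toxicity_head = nn.Linear(2 * hidden_dim, 1)
        self.identity_head = nn.Linear(2 * hidden_dim, num_identities)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embedding(token_ids))  # (B, T, 2H)
        weights = torch.softmax(self.attn(states), dim=1)    # (B, T, 1)
        context = (weights * states).sum(dim=1)              # (B, 2H)
        return self.toxicity_head(context), self.identity_head(context)

def joint_loss(tox_logits, id_logits, tox_labels, id_labels, lam=0.5):
    """Joint objective: toxicity BCE plus identity BCE, weighted by lam
    (the weighting scheme here is an assumption, not the paper's)."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return (bce(tox_logits.squeeze(-1), tox_labels)
            + lam * bce(id_logits, id_labels))

# Example forward pass: a batch of 4 comments, 20 tokens each.
model = MultiTaskToxicityModel()
tokens = torch.randint(1, 50000, (4, 20))
tox_logits, id_logits = model(tokens)
```

The intended effect of the auxiliary identity head in such a setup is to force the shared encoder to separate "an identity term is mentioned" from "the comment is toxic", rather than letting the toxicity head absorb identity terms as toxicity signals.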

Published
2020-05-26
How to Cite
Vaidya, A., Mai, F., & Ning, Y. (2020). Empirical Analysis of Multi-Task Learning for Reducing Identity Bias in Toxic Comment Detection. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 683-693. Retrieved from https://www.aaai.org/ojs/index.php/ICWSM/article/view/7334