Bias Misperceived: The Role of Partisanship and Misinformation in YouTube Comment Moderation

Authors

  • Shan Jiang Northeastern University
  • Ronald E. Robertson Northeastern University
  • Christo Wilson Northeastern University

DOI:

https://doi.org/10.1609/icwsm.v13i01.3229

Abstract

Social media platforms have been the subject of controversy and scrutiny due to the spread of hateful content. To address this problem, the platforms implement content moderation using a mix of human and algorithmic processes. However, content moderation itself has led to further accusations of political bias against the platforms. In this study, we investigate how channel partisanship and video misinformation affect the likelihood of comment moderation on YouTube. Using a dataset of 84,068 comments on 258 videos, we find that although comments on right-leaning videos are more heavily moderated from a correlational perspective, there is no evidence to support claims of political bias once a causal model controls for common confounders (e.g., hate speech). Additionally, we find that comments are more likely to be moderated if the video channel is ideologically extreme, if the video content is false, and if the comments were posted after a fact-check.
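The confounding pattern the abstract describes can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual model, features, or data: the variable names (moderated, right_leaning, hate_speech) and coefficients below are invented. The toy setup makes hate speech both raise moderation odds and correlate with right-leaning channels, so a naive logistic regression attributes an effect to partisanship that disappears once the confounder is included.

    # Hypothetical sketch of confounded moderation estimates.
    # All variables and effect sizes are illustrative, not from the paper.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000

    # Toy world: hate speech is more common on right-leaning channels,
    # and moderation responds to hate speech, not to partisanship.
    right_leaning = rng.binomial(1, 0.5, n)
    hate_speech = rng.binomial(1, 0.1 + 0.2 * right_leaning)
    logit_p = -2.0 + 2.5 * hate_speech  # no true partisan effect
    moderated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    df = pd.DataFrame({"moderated": moderated,
                       "right_leaning": right_leaning,
                       "hate_speech": hate_speech})

    # Naive (correlational) model: partisanship appears predictive...
    naive = smf.logit("moderated ~ right_leaning", data=df).fit(disp=0)
    # ...but its coefficient shrinks toward zero once the confounder
    # is controlled for.
    adjusted = smf.logit("moderated ~ right_leaning + hate_speech",
                         data=df).fit(disp=0)

    print(naive.params)
    print(adjusted.params)

Running the sketch, the right_leaning coefficient is substantial in the naive model and near zero in the adjusted one, mirroring (in spirit only) the correlational-versus-causal contrast the abstract reports.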

Published

2019-07-06

How to Cite

Jiang, S., Robertson, R. E., & Wilson, C. (2019). Bias Misperceived: The Role of Partisanship and Misinformation in YouTube Comment Moderation. Proceedings of the International AAAI Conference on Web and Social Media, 13(01), 278-289. https://doi.org/10.1609/icwsm.v13i01.3229