Understanding the Impact of Text Highlighting in Crowdsourcing Tasks

Authors

  • Jorge Ramírez, University of Trento
  • Marcos Baez, University of Trento
  • Fabio Casati, University of Trento
  • Boualem Benatallah, University of New South Wales

DOI

https://doi.org/10.1609/hcomp.v7i1.5268

Abstract

Text classification is one of the most common goals of machine learning (ML) projects, and also one of the most frequent human intelligence tasks in crowdsourcing platforms. ML has mixed success in such tasks depending on the nature of the problem, whereas crowd-based classification has proven to be surprisingly effective but can be expensive. Recently, hybrid text classification algorithms, combining human computation and machine learning, have been proposed to improve accuracy and reduce costs. One way to do so is to have ML highlight or emphasize portions of the text that it believes to be more relevant to the decision. Humans can then rely only on this highlighted text, or read the entire text if the highlighted information is insufficient. In this paper, we investigate whether and under what conditions highlighting selected parts of the text can (or cannot) improve classification cost and/or accuracy, and more generally how it affects the process and outcome of the human intelligence tasks. We study this through a series of crowdsourcing experiments run over different datasets and with task designs imposing different cognitive demands. Our findings suggest that highlighting is effective in reducing classification effort but does not improve accuracy; in fact, low-quality highlighting can decrease it.
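The abstract does not prescribe a particular highlighting model; the experiments study highlights of varying quality rather than one specific system. As a rough, hypothetical sketch of the idea, the Python snippet below derives word-level highlights from the weights of a simple bag-of-words logistic regression. The model choice, training data, and the highlight helper are all illustrative assumptions, not the authors' method.

    # A minimal, hypothetical sketch: emphasize the words whose learned
    # weights most support the "relevant" class of a bag-of-words
    # logistic regression. Model and data are illustrative assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy training data (1 = relevant to the classification decision).
    texts = [
        "patients reported severe side effects after treatment",
        "the study measured treatment outcomes in adults",
        "the weather was sunny during the conference",
        "registration for the workshop opens in may",
    ]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
    weights = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))

    def highlight(text, top_k=3):
        """Wrap the top_k words most indicative of class 1 in <mark> tags."""
        tokens = text.split()
        ranked = sorted(tokens, key=lambda t: weights.get(t.lower(), 0.0),
                        reverse=True)
        chosen = set(ranked[:top_k])
        return " ".join(f"<mark>{t}</mark>" if t in chosen else t
                        for t in tokens)

    # Workers would see the emphasized words first, reading the full
    # text only when the highlights are insufficient.
    print(highlight("severe side effects were reported after treatment"))

In a real deployment, a stronger relevance model could replace the linear weights; the abstract's finding suggests the quality of whatever model produces the highlights matters, since low-quality highlights can hurt accuracy.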


Published

2019-10-28

How to Cite

Ramírez, J., Baez, M., Casati, F., & Benatallah, B. (2019). Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 144-152. https://doi.org/10.1609/hcomp.v7i1.5268