AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

To Re(label), or Not To Re(label)
Christopher H. Lin, Mausam, Daniel S. Weld

Last modified: 2014-09-05


One of the most popular uses of crowdsourcing is to provide training data for supervised machine learning algorithms. Since human annotators often make errors, requesters commonly ask multiple workers to label each example. But is this strategy always the most cost-effective use of crowdsourced workers? We argue "no": classifiers can often achieve higher accuracy when trained with noisy "unilabeled" data. In some cases, however, relabeling is extremely important. We discuss three factors that may make relabeling an effective strategy: classifier expressiveness, worker accuracy, and budget.
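The core tradeoff can be illustrated with a back-of-the-envelope calculation (the setup below is our own illustration, not taken from the paper): with independent workers of accuracy p, a 3-vote majority produces a label that is correct with probability p³ + 3p²(1−p), but costs three times as much per example, so a fixed budget buys only a third as many training examples. A minimal sketch:

```python
def majority3_accuracy(p):
    """Probability that a majority vote of 3 i.i.d. workers,
    each correct with probability p, yields the correct label."""
    return p**3 + 3 * p**2 * (1 - p)

def compare(budget, p):
    """Under a fixed labeling budget, compare the two strategies:
    unilabel -> budget examples at accuracy p;
    relabel  -> budget // 3 examples at majority-vote accuracy."""
    uni = (budget, p)
    re_ = (budget // 3, majority3_accuracy(p))
    return uni, re_

# For low-accuracy workers, majority voting barely helps, so
# unilabeling (more data) tends to win; for accurate workers,
# relabeling sharply reduces noise.
print(compare(900, 0.55))  # modest gain: 0.55 -> ~0.575 per label
print(compare(900, 0.90))  # large gain:  0.90 -> 0.972 per label
```

Which side of the tradeoff wins also depends on the other two factors the paper names: an expressive classifier can absorb more label noise (favoring unilabeling), while a larger budget eventually makes extra clean labels worth more than extra noisy ones.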


Keywords: relabeling; crowdsourcing; machine learning
