AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

Optimal Worker Quality and Answer Estimates in Crowd-Powered Filtering and Rating
Akash Das Sarma, Aditya Parameswaran, Jennifer Widom


Abstract


We consider the problem of optimally filtering (or rating) a set of items based on predicates (or scoring functions) requiring human evaluation. Filtering and rating are ubiquitous problems across crowdsourcing applications. We consider the setting where we are given a set of items and a set of worker responses for each item: yes/no in the case of filtering and an integer value in the case of rating. We assume that each item has a true inherent value that is unknown, and that workers draw their responses from a common, but hidden, error distribution. Our goal is to simultaneously assign a ground truth to the item set and estimate the worker error distribution. Previous work in this area has focused on heuristics such as Expectation Maximization (EM), which guarantee only a locally optimal solution; in contrast, we develop a general framework that finds a maximum likelihood solution. Our approach extends to a number of variations on the filtering and rating problems.
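To make the setting concrete, the following is a minimal Python sketch of the filtering variant only; it is not the framework from the paper. It assumes a single shared worker error distribution parameterized by a false-positive rate fp and a false-negative rate fn (both assumed below 0.5 so labels remain identifiable), and brute-forces a maximum-likelihood estimate over a discretized grid of error rates, choosing each item's label to maximize the joint likelihood. The function names (log_likelihood_item, brute_force_mle) and the vote counts are illustrative assumptions, not from the paper.

```python
# Sketch of the filtering setting: items have a hidden true label in {0, 1},
# workers share a common error distribution (false-positive rate fp,
# false-negative rate fn), and we jointly pick labels and error rates to
# maximize the likelihood of the observed yes/no responses by brute force.
import itertools
import math


def log_likelihood_item(yes, no, true_label, fp, fn):
    """Log-probability (up to a constant binomial coefficient) of observing
    `yes` yes-votes and `no` no-votes for an item with the given true label."""
    if true_label == 1:
        p_yes = 1.0 - fn   # a correct "yes" on an item that satisfies the predicate
    else:
        p_yes = fp         # an erroneous "yes" on an item that does not
    p_yes = min(max(p_yes, 1e-9), 1.0 - 1e-9)  # avoid log(0)
    return yes * math.log(p_yes) + no * math.log(1.0 - p_yes)


def brute_force_mle(votes):
    """votes: list of (yes_count, no_count) per item.
    Returns (fp, fn, labels) maximizing the total log-likelihood over a
    coarse grid of error rates in [0, 0.5)."""
    rates = [i * 0.05 for i in range(10)]  # 0.00, 0.05, ..., 0.45
    best = (-math.inf, None, None, None)
    for fp, fn in itertools.product(rates, rates):
        labels, total = [], 0.0
        for yes, no in votes:
            ll0 = log_likelihood_item(yes, no, 0, fp, fn)
            ll1 = log_likelihood_item(yes, no, 1, fp, fn)
            # For fixed error rates, each item's best label is independent.
            if ll1 >= ll0:
                labels.append(1)
                total += ll1
            else:
                labels.append(0)
                total += ll0
        if total > best[0]:
            best = (total, fp, fn, labels)
    return best[1], best[2], best[3]


if __name__ == "__main__":
    # Three items, each judged by 5 workers: (yes_votes, no_votes).
    votes = [(4, 1), (1, 4), (3, 2)]
    fp, fn, labels = brute_force_mle(votes)
    print("estimated fp/fn:", fp, fn, "labels:", labels)
```

The grid search above is exponential-free only because, once the error rates are fixed, each item's label can be chosen independently; EM-style approaches instead alternate between estimating labels and error rates and can stop at a local optimum.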

Keywords


crowdsourcing, crowd algorithms, filtering, rating, maximum likelihood

