AAAI Publications, Third AAAI Conference on Human Computation and Crowdsourcing

Predicting Quality of Crowdsourced Image Segmentations from Crowd Behavior
Mehrnoosh Sameki, Danna Gurari, Margrit Betke

Last modified: 2015-09-23


Quality control (QC) is an integral part of many crowdsourcing systems. However, popular QC methods, such as aggregating multiple annotations, filtering workers, or verifying the quality of crowd work, introduce additional costs and delays. We propose a complementary paradigm to these QC methods based on predicting the quality of submitted crowd work. In particular, we propose to predict the quality of a given crowd drawing directly from a crowd worker’s drawing time, number of user clicks, and average time per user click. We focus on the task of drawing the boundary of a single object in an image. To train and test our prediction models, we collected a total of 2,025 crowd-drawn segmentations for 405 familiar everyday images and unfamiliar biomedical images from 90 unique crowd workers. We first evaluated five prediction models learned using different combinations of the three worker behavior cues for all images. Experiments revealed that average time per user click was the most effective cue for predicting segmentation quality. We next inspected the predictive power of models learned using crowd annotations collected for familiar and unfamiliar data independently. Prediction models were significantly more effective for estimating the segmentation quality from crowd worker behavior for familiar image content than for unfamiliar image content.
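The abstract does not specify the form of the prediction models, so the following is only an illustrative sketch: a least-squares fit of segmentation quality on the three behavior cues it names (drawing time, number of clicks, and time per click). All data, feature weights, and function names here are made up for illustration.

```python
# Illustrative sketch only: the paper does not specify its prediction model.
# We fit ordinary least squares on the three behavior cues from the abstract,
# using synthetic (made-up) training data.

def features(drawing_time, num_clicks):
    # The three worker-behavior cues; time per click is derived.
    return [drawing_time, num_clicks, drawing_time / num_clicks]

def fit_quality_model(observations, qualities):
    # Ordinary least squares with an intercept, via the normal equations.
    X = [[1.0] + features(t, c) for t, c in observations]
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * q for r, q in zip(X, qualities)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict_quality(weights, drawing_time, num_clicks):
    x = [1.0] + features(drawing_time, num_clicks)
    return sum(wi * xi for wi, xi in zip(weights, x))

# Hypothetical training data: (drawing time in s, click count) -> quality in [0, 1].
train = [((30, 10), 0.48), ((60, 40), 0.21), ((120, 30), 0.57),
         ((45, 15), 0.47), ((90, 20), 0.64), ((150, 50), 0.40)]
weights = fit_quality_model([obs for obs, _ in train], [q for _, q in train])
print(round(predict_quality(weights, 100, 25), 3))  # quality estimate for a new worker
```

In a real pipeline, the quality labels for training would come from comparing crowd segmentations against expert ground truth; the fitted model could then flag likely low-quality submissions without any extra annotations.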


Keywords: Crowdsourcing; Computer Vision; Image Segmentation
