AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

Cost-Effective HITs for Relative Similarity Comparisons
Michael J. Wilber, Iljung S. Kwak, Serge J. Belongie

Last modified: 2014-09-05

Abstract


Similarity comparisons of the form "Is object a more similar to b than to c?" form a useful foundation in several computer vision and machine learning applications. Unfortunately, an embedding of n points is only uniquely specified by on the order of n³ triplets, making collecting every triplet an expensive task. Noticing this difficulty, other researchers have investigated more intelligent triplet sampling techniques, but they do not study their effectiveness or their potential drawbacks. Although it is important to reduce the number of collected triplets to generate a good embedding, it is also important to understand how best to display a triplet collection task to the user in order to respect the worker's human constraints. In this work, we explore an alternative method for collecting triplets and analyze its financial cost, collection speed, and worker happiness as a function of the final embedding quality. We propose best practices for creating cost-effective human intelligence tasks (HITs) for collecting triplets. We show that rather than changing the sampling algorithm, simple changes to the crowdsourcing UI can drastically decrease the cost of collecting similarity comparisons. Finally, we provide a food similarity dataset as well as the labels collected from crowd workers.
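The cubic growth the abstract refers to can be counted directly: each question fixes one ordered triplet (a; b, c), so there are n(n−1)(n−2)/2 distinct comparisons, already 485,100 at n = 100. The following is a minimal sketch of how such answers pin down an embedding, assuming a simple hinge loss on squared distances minimized by gradient descent; the variable names, learning rate, and margin are illustrative choices, not the authors' method.

```python
import numpy as np

# Hypothetical sketch: recover a 1-D embedding from exhaustively
# collected triplet answers via a hinge loss on squared distances.
rng = np.random.default_rng(0)
n, dim = 10, 1
truth = rng.normal(size=(n, dim))   # "perceptual" positions, unknown in practice

# Simulate a crowd answer for every triplet: keep (a, b, c)
# iff a is closer to b than to c under the ground truth.
triplets = [(a, b, c)
            for a in range(n) for b in range(n) for c in range(n)
            if len({a, b, c}) == 3
            and np.linalg.norm(truth[a] - truth[b]) < np.linalg.norm(truth[a] - truth[c])]

X = rng.normal(size=(n, dim))       # embedding to learn, random init
X0 = X.copy()
lr, margin = 1.0, 0.1
for _ in range(200):
    grad = np.zeros_like(X)
    for a, b, c in triplets:
        d_ab = np.sum((X[a] - X[b]) ** 2)
        d_ac = np.sum((X[a] - X[c]) ** 2)
        if d_ab + margin > d_ac:    # triplet violated (or inside the margin)
            grad[a] += 2 * (X[a] - X[b]) - 2 * (X[a] - X[c])
            grad[b] += -2 * (X[a] - X[b])
            grad[c] += 2 * (X[a] - X[c])
    X -= (lr / len(triplets)) * grad  # averaged gradient step

satisfied = np.mean([np.sum((X[a] - X[b]) ** 2) < np.sum((X[a] - X[c]) ** 2)
                     for a, b, c in triplets])
```

Even this toy setting needs 360 answers for 10 items, which is why the paper focuses on collecting each answer as cheaply as possible rather than on the embedding algorithm itself.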

Keywords


Relative comparisons; Embedding; Ordinal embedding; Perceptual similarity; Triplet embedding; User interface

References


  • Ferecatu, M., and Geman, D. 2009. A statistical framework for image category search from a mental picture. IEEE TPAMI.
  • Frome, A.; Singer, Y.; Sha, F.; and Malik, J. 2007. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In IEEE ICCV.
  • Gomes, R.; Welinder, P.; Krause, A.; and Perona, P. 2011. Crowdclustering. In NIPS.
  • Heikinheimo, H., and Ukkonen, A. 2013. The crowd-median algorithm. In First AAAI Conference on Human Computation and Crowdsourcing.
  • Jamieson, K., and Nowak, R. 2011. Low-dimensional embedding using adaptively selected ordinal data. In Allerton Conference on Communication, Control, and Computing.
  • Kendall, M., and Gibbons, J. D. 1990. Rank Correlation Methods. Charles Griffin Book Series. Oxford University Press, 5th edition.
  • Kittur, A.; Nickerson, J. V.; Bernstein, M.; Gerber, E.; Shaw, A.; Zimmerman, J.; Lease, M.; and Horton, J. 2013. The future of crowd work. In Conference on Computer Supported Cooperative Work, 1301–1318. ACM.
  • Kleindessner, M., and von Luxburg, U. 2014. Uniqueness of ordinal embedding. JMLR.
  • McFee, B. 2012. More like this: machine learning approaches to music similarity. Ph.D. Dissertation, University of California, San Diego.
  • Rao, V. R., and Katz, R. 1971. Alternative multidimensional scaling methods for large stimulus sets. Journal of Marketing Research 8(4):488–494.
  • Tamuz, O.; Liu, C.; Belongie, S.; Shamir, O.; and Kalai, A. T. 2011. Adaptively learning the crowd kernel. In ICML.
  • van der Maaten, L., and Weinberger, K. 2012. Stochastic triplet embedding. In IEEE MLSP.
  • Wah, C.; Van Horn, G.; Branson, S.; Maji, S.; Perona, P.; and Belongie, S. 2014. Similarity comparisons for interactive fine-grained categorization. In IEEE CVPR.
  • Wolfe, J. M. 1994. Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review 1.
  • Yi, J.; Jin, R.; Jain, S.; and Jain, A. 2013. Inferring users' preferences from crowdsourced pairwise comparisons: A matrix completion approach. In HCOMP.
