AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
Peter Organisciak, Jaime Teevan, Susan Dumais, Robert C. Miller, Adam Tauman Kalai

Last modified: 2014-09-05


Personalization is a way for computers to support people’s diverse interests and needs by providing content tailored to the individual. While strides have been made in algorithmic approaches to personalization, most require access to a significant amount of data. However, even when data is limited, online crowds can be used to infer an individual’s personal preferences. Aided by the diversity of tastes among online crowds and their ability to understand others, we show that crowdsourcing is an effective on-demand tool for personalization. Unlike typical crowdsourcing approaches that seek a ground truth, we present and evaluate two crowdsourcing approaches designed to capture personal preferences. The first, taste-matching, identifies workers with similar taste to the requester and uses their taste to infer the requester’s taste. The second, taste-grokking, asks workers to explicitly predict the requester’s taste based on training examples. These techniques are evaluated on two subjective tasks: personalized image recommendation and tailored textual summaries. Taste-matching and taste-grokking both show improvement over the use of generic workers, and each has different benefits and drawbacks depending on the complexity of the task and the variability of the taste space.


crowdsourcing; personalization


