AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

Saving Money While Polling with InterPoll Using Power Analysis
Benjamin Livshits, Todd Mytkowicz

Last modified: 2014-09-05


Crowd-sourcing is increasingly being used to provide responses to polls and surveys on a large scale. Companies such as SurveyMonkey are attempting to make crowd-sourced surveys commonplace by making it easy to pose survey questions through an easy-to-use UI and to retrieve results with relatively low latency, thanks to dedicated crowds at their disposal. In this paper we argue that the ease with which polls can be created conceals an inherent difficulty: the survey maker does not know how many workers to hire for their survey. Asking too few may lead to sample sizes that "do not look impressive enough"; asking too many clearly involves spending extra money, which can quickly become costly. Existing crowd-sourcing platforms do not provide help with this, nor, one can argue, do they have any incentive to do so. We present a systematic approach to determining how many samples (i.e., workers) are required to achieve a given level of statistical significance by showing how to automatically perform power analysis on questions of interest. Using a range of queries, we demonstrate that power analysis can save significant amounts of money and time by concluding that, frequently, only a handful of results is required to arrive at a decision. We have implemented our approach within InterPoll, a programmable, developer-driven polling system that uses a generic crowd (Mechanical Turk) as a back-end. Power analysis is performed automatically, given both the structure of the query and the data being polled from the crowd. In all of our studies we were able to obtain statistically significant answers for under $30, with most costing less than $10. Our approach saves both time and money for the survey maker.
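The core idea behind sizing a poll with power analysis can be sketched with the standard two-proportion sample-size formula. This is a generic illustration under textbook assumptions (two-sided z-test, normal approximation), not InterPoll's actual implementation; the function name and defaults are our own:

```python
import math
from statistics import NormalDist

def required_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size needed to detect the difference
    between two population proportions p1 and p2 with a two-sided z-test
    at significance level alpha and the given statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# A small difference (50% vs. 55%) needs many workers; a large one far fewer.
print(required_sample_size(0.5, 0.55))  # → 1562
print(required_sample_size(0.5, 0.9))   # → 17
```

The contrast between the two calls illustrates the paper's point: when the observed effect is large, a handful of responses already yields a statistically significant answer, so hiring hundreds of workers would be wasted money.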


Keywords: do it yourself polling; human and computer work; survey making


