AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

Output Agreement Mechanisms and Common Knowledge
Bo Waggoner, Yiling Chen

Last modified: 2014-09-05

Abstract


The recent advent of human computation -- employing non-experts to solve problems -- has inspired theoretical work in mechanism design for eliciting information when responses cannot be verified. We study a popular practical method, output agreement, from a theoretical perspective. In output agreement, two agents are given the same inputs and asked to produce some output; each is scored based on how closely the two responses agree. Although simple, output agreement raises new conceptual questions. Foremost is the fundamental role of common knowledge: we show that, rather than eliciting truthful reports, output agreement mechanisms elicit common knowledge from participants. We further show that common knowledge is essentially the best that can be hoped for in any mechanism without verification unless the information structure is restricted. Establishing this requires generalizing truthfulness to cover responding to a query rather than simply reporting a private signal, together with a notion of common-knowledge equilibria. A final issue raised by output agreement is that of focal equilibria and players' computation of equilibria. We show that, for eliciting the mean of a random variable, a natural player inference process converges to the common-knowledge equilibrium; but this convergence may not occur for other types of queries.
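
To make the agreement incentive and the convergence claim concrete, here is a minimal Python sketch (ours, not the authors'): a toy Gaussian common-prior model in which two agents receive noisy signals of an unknown quantity and are paid by a quadratic-loss agreement rule. All parameter values, and the specific inference process (iterated best responses), are illustrative assumptions; the sketch shows private signals being discarded as reports converge to the common-knowledge expectation, in the spirit of the abstract's claim for mean queries.

# All parameters below are illustrative assumptions, not values from the paper.
PRIOR_MEAN = 10.0   # common prior: theta ~ N(PRIOR_MEAN, PRIOR_VAR)
PRIOR_VAR = 4.0
NOISE_VAR = 1.0     # each agent privately observes s_i = theta + N(0, NOISE_VAR)

# Weight on the private signal in the posterior mean:
#   E[theta | s_i] = w * s_i + (1 - w) * PRIOR_MEAN
w = PRIOR_VAR / (PRIOR_VAR + NOISE_VAR)

def agreement_payoff(r1, r2):
    """Quadratic-loss output agreement: closer reports earn more."""
    return -(r1 - r2) ** 2

# Truthfully reporting posterior means is penalized whenever the two
# private signals differ, since the reports then disagree:
s1, s2 = 11.0, 9.0  # example signal realizations (assumed)
r1 = w * s1 + (1 - w) * PRIOR_MEAN
r2 = w * s2 + (1 - w) * PRIOR_MEAN
print("payoff under truthful reports:", agreement_payoff(r1, r2))

# Under quadratic loss, each agent's best response is to report its
# expectation of the other's report. If agent j plays r_j = a*s_j + b,
# then E[s_j | s_i] = E[theta | s_i], so agent i's best response is
#   a * (w*s_i + (1-w)*PRIOR_MEAN) + b,
# i.e. the strategy coefficients update as
#   (a, b) -> (a*w, a*(1-w)*PRIOR_MEAN + b).
a, b = w, (1 - w) * PRIOR_MEAN  # round 0: the truthful posterior mean
for k in range(25):
    print(f"round {k:2d}: report = {a:.4f} * s_i + {b:.4f}")
    a, b = a * w, a * (1 - w) * PRIOR_MEAN + b

# The signal weight a shrinks geometrically toward 0 and b tends to
# PRIOR_MEAN: iterated best responses discard the private signals and
# converge to the common-knowledge expectation, where both agents
# report PRIOR_MEAN and receive the maximal payoff of 0.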

Keywords


mechanism design; game theory; output agreement


