AAAI Publications, First AAAI Conference on Human Computation and Crowdsourcing

Two Methods for Measuring Question Difficulty and Discrimination in Incomplete Crowdsourced Data
Sarah K. K. Luger, Jeff Bowles

Last modified: 2013-11-03

Abstract


Assistance in creating high-quality exams would be welcomed by educators who do not have direct access to the proprietary data and methods used by educational testing companies. The current approach to measuring question difficulty relies on models of how well high-performing pupils will do and contrasts their performance with that of their lower-performing peers. Inverting this process, so that educators can test their questions before students answer them, will speed up question development and improve question utility. We cover two methods for automatically judging the difficulty and discriminating power of multiple-choice questions (MCQs), and how best to build complete exams from good questions.
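The contrast the abstract describes between high- and low-performing pupils is the basis of classical item analysis. As a hedged illustration (a standard classical-test-theory sketch, not necessarily the paper's exact method, and with `item_stats`, `responses`, and `group_frac` being hypothetical names), difficulty can be taken as the fraction of correct answers and discrimination as the gap in that fraction between the top and bottom scoring groups:

```python
# Classical item analysis sketch (illustrative; not the paper's exact method).
# Difficulty p = fraction of all students answering the item correctly.
# Discrimination D = p(top group) - p(bottom group), contrasting high scorers
# with their lower-performing peers; the customary group size is the top and
# bottom 27% of students ranked by total score.

def item_stats(responses, item, group_frac=0.27):
    """responses: list of dicts mapping item name -> 1 (correct) or 0.
    Returns (difficulty, discrimination) for the given item."""
    totals = [sum(r.values()) for r in responses]
    # Rank students from highest to lowest total score.
    ranked = [r for _, r in sorted(zip(totals, responses),
                                   key=lambda t: t[0], reverse=True)]
    k = max(1, int(len(ranked) * group_frac))  # size of each extreme group
    p = sum(r[item] for r in responses) / len(responses)
    p_top = sum(r[item] for r in ranked[:k]) / k
    p_bottom = sum(r[item] for r in ranked[-k:]) / k
    return p, p_top - p_bottom
```

An item answered correctly mostly by the top group yields a high discrimination value, while one answered equally often by both groups discriminates poorly regardless of its difficulty.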
