AAAI Publications, Third AAAI Conference on Human Computation and Crowdsourcing

The Effect of Text Length in Crowdsourced Multiple Choice Questions
Sarah K. K. Luger

Last modified: 2016-03-28

Abstract


Automated systems that aid in the development of Multiple Choice Questions (MCQs) are valuable both to educators, who spend large amounts of time creating novel questions, and to students, who spend a great deal of effort practicing for and taking tests. The current approach to measuring question difficulty in MCQs relies on models of how high-performing pupils will answer and contrasts their performance with that of lower-performing peers. MCQs can be difficult in many ways. This paper looks specifically at the effect on question difficulty of the number of words in the question stem and in the answer options. The work is based on the hypothesis that questions are more difficult when the stem and the answer options are semantically far apart; this hypothesis can be normalized, in part, by analyzing the lengths of the texts being compared. The MCQs used in the experiments were voluntarily authored by university students in biology courses. Future work includes additional experiments utilizing other aspects of this extensive crowdsourced data set.
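The length measurements discussed above can be sketched in a few lines. This is an illustrative example only, assuming simple whitespace tokenization; the MCQ example and the function name are hypothetical and do not come from the paper.

```python
# Hypothetical sketch of the length features described in the abstract:
# word counts for an MCQ's stem and its answer options.

def length_features(stem, options):
    """Return (stem length, mean option length) in words."""
    stem_len = len(stem.split())
    option_lens = [len(opt.split()) for opt in options]
    mean_opt_len = sum(option_lens) / len(option_lens)
    return stem_len, mean_opt_len

# Illustrative MCQ (not drawn from the paper's data set).
mcq = {
    "stem": "Which organelle is the primary site of ATP synthesis?",
    "options": ["Mitochondrion", "Ribosome", "Golgi apparatus", "Nucleus"],
}
print(length_features(mcq["stem"], mcq["options"]))  # → (9, 1.25)
```

Features like these could then be compared against observed question difficulty, e.g. the gap between high- and low-performing pupils.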
