Get IT Scored Using AutoSAS — An Automated System for Scoring Short Answers

  • Yaman Kumar Adobe Inc.
  • Swati Aggarwal Netaji Subhas Institute of Technology
  • Debanjan Mahata Bloomberg, Inc.
  • Rajiv Ratn Shah Indian Institute of Technology Delhi
  • Ponnurangam Kumaraguru Indian Institute of Technology Delhi
  • Roger Zimmermann National University of Singapore

Abstract

In the era of MOOCs, online exams are taken by millions of candidates, and scoring their short answers is an integral part of assessment. Evaluating these responses with human graders becomes intractable at this scale, so a generic automated system capable of grading them should be designed and deployed. In this paper, we present a fast, scalable, and accurate approach to automated Short Answer Scoring (SAS). We propose and explain the design and development of a system for SAS, namely AutoSAS. Given a question along with its graded samples, AutoSAS can learn to grade that prompt successfully. This paper further lays down the features, such as lexical diversity, Word2Vec embeddings, and prompt and content overlap, that play a pivotal role in building our proposed model. We also present a methodology for indicating the factors responsible for an answer's score. The trained model is evaluated on an extensively used public dataset, the Automated Student Assessment Prize Short Answer Scoring (ASAP-SAS) dataset. AutoSAS shows state-of-the-art performance, improving results by over 8% on some of the question prompts as measured by Quadratic Weighted Kappa (QWK), a performance comparable to that of human graders.
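The abstract reports results in Quadratic Weighted Kappa (QWK), the standard agreement metric for the ASAP-SAS competition. As a reference for readers, here is a minimal, self-contained sketch of how QWK is typically computed from two raters' integer scores; the function name and interface are illustrative, not taken from the paper's code.

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Compute Quadratic Weighted Kappa between two lists of integer scores.

    QWK = 1 - sum(w_ij * O_ij) / sum(w_ij * E_ij), where O is the observed
    confusion matrix, E the expected matrix under rater independence, and
    w_ij = (i - j)^2 / (n - 1)^2 penalizes larger score disagreements more.
    """
    n = max_rating - min_rating + 1

    # Observed agreement matrix: counts of (score_a, score_b) pairs.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1

    # Marginal histograms for the expected matrix.
    hist_a = [0] * n
    hist_b = [0] * n
    for a in rater_a:
        hist_a[a - min_rating] += 1
    for b in rater_b:
        hist_b[b - min_rating] += 1

    num_items = len(rater_a)
    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2
            expected = hist_a[i] * hist_b[j] / num_items
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator
```

Perfect agreement yields a QWK of 1.0, chance-level agreement yields 0, and the quadratic weights mean that being off by two score points costs four times as much as being off by one.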

Published
2019-07-17
Section
EAAI Symposium: Full Papers