Get IT Scored Using AutoSAS — An Automated System for Scoring Short Answers

Authors

  • Yaman Kumar, Adobe Inc.
  • Swati Aggarwal, Netaji Subhas Institute of Technology
  • Debanjan Mahata, Bloomberg, Inc.
  • Rajiv Ratn Shah, Indian Institute of Technology Delhi
  • Ponnurangam Kumaraguru, Indian Institute of Technology Delhi
  • Roger Zimmermann, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v33i01.33019662

Abstract

In the era of MOOCs, online exams are taken by millions of candidates, and scoring short answers is an integral part of these exams. Evaluating so many responses with human graders alone is intractable, so a generic automated system capable of grading these responses needs to be designed and deployed. In this paper, we present a fast, scalable, and accurate approach to automated Short Answer Scoring (SAS). We propose and explain the design and development of a system for SAS, namely AutoSAS. Given a question along with its graded sample answers, AutoSAS can learn to grade that prompt successfully. This paper further lays down the features, such as lexical diversity, Word2Vec embeddings, and prompt and content overlap, that play a pivotal role in building our proposed model. We also present a methodology for indicating the factors responsible for an answer's score. The trained model is evaluated on an extensively used public dataset, namely Automated Student Assessment Prize Short Answer Scoring (ASAP-SAS). AutoSAS shows state-of-the-art performance, improving results by over 8% on some question prompts as measured by Quadratic Weighted Kappa (QWK), and performs comparably to human graders.
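To make the feature and evaluation terminology concrete, the following is a minimal sketch, not the authors' implementation: it illustrates two simple hand-crafted features of the kind the abstract mentions (lexical diversity as a type-token ratio and prompt overlap as Jaccard overlap), and computes Quadratic Weighted Kappa with scikit-learn's cohen_kappa_score. The example answers and scores are hypothetical.

```python
# A minimal sketch, not the AutoSAS implementation. All example data is hypothetical.
from sklearn.metrics import cohen_kappa_score


def lexical_diversity(answer: str) -> float:
    """Type-token ratio: fraction of distinct words in the answer."""
    tokens = answer.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0


def prompt_overlap(answer: str, prompt: str) -> float:
    """Jaccard overlap between answer and prompt vocabularies."""
    a, p = set(answer.lower().split()), set(prompt.lower().split())
    return len(a & p) / len(a | p) if (a | p) else 0.0


# Hypothetical human-assigned and model-predicted grades for one question prompt.
human_scores = [0, 1, 2, 3, 2, 1, 0, 3]
model_scores = [0, 1, 2, 2, 2, 1, 1, 3]

# Quadratic Weighted Kappa (QWK), the agreement metric used to evaluate AutoSAS on ASAP-SAS.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```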

Published

2019-07-17

How to Cite

Kumar, Y., Aggarwal, S., Mahata, D., Shah, R. R., Kumaraguru, P., & Zimmermann, R. (2019). Get IT Scored Using AutoSAS — An Automated System for Scoring Short Answers. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9662-9669. https://doi.org/10.1609/aaai.v33i01.33019662

Issue

Vol. 33 No. 01 (2019)

Section

EAAI Symposium: Full Papers