Do Skills Combine Additively to Predict Task Difficulty in Eighth-Grade Mathematics?

Elizabeth Ayers, Brian Junker

During the 2004-2005 school year, over 900 eighth-grade students used an online intelligent tutoring system, the Assistment System of Heffernan et al. (2001), to prepare for the mathematics portion of the Massachusetts Comprehensive Assessment System (MCAS) end-of-year exam. A transfer model, identifying the skills on which each tutoring task and exam problem depends, was developed to help align tutoring tasks with exam problems. We use a Bayesian form of item response theory (IRT) modeling to model the difficulty of tutoring tasks and exam items additively in terms of these component skills: the more skills a task or item requires, the more difficult it should be. Our goal is to directly examine the alignment between tutoring tasks and assessment items and to use the transfer model to build more efficient functions for predicting end-of-year exam performance from student activity with the online tutor. However, our analysis shows that the additive skills model (the Linear Logistic Test Model, LLTM) does not adequately account for task-to-task or item-to-item variation in difficulty.
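The additive skills assumption described above can be sketched in a few lines. In the LLTM, an item's difficulty is the sum of the difficulty contributions of the skills it requires (as recorded in the transfer model's skill-by-item matrix), and a correct response follows a logistic model in the student's ability minus that composite difficulty. The skill weights and ability value below are hypothetical, for illustration only:

```python
import math

def lltm_prob(theta, q_row, betas):
    """Probability that a student with ability `theta` answers an item
    correctly under the LLTM. `q_row[k]` is 1 if the item requires skill k,
    and `betas[k]` is that skill's difficulty contribution; item difficulty
    is the sum over required skills."""
    difficulty = sum(q * b for q, b in zip(q_row, betas))
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical example: three skills, an item requiring the first and third.
betas = [0.5, 1.0, 0.3]   # assumed per-skill difficulty contributions
q_row = [1, 0, 1]         # item requires skills 1 and 3 only
p = lltm_prob(theta=1.0, q_row=q_row, betas=betas)
```

Under this model, two items tapping the same skills must have identical difficulty; the paper's finding is that observed task-to-task variation violates exactly this constraint.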

Subjects: 1.3 Computer-Aided Education; 9.3 Mathematical Foundations

Submitted: May 17, 2006
