John Stamper, Tiffany Barnes
In building intelligent tutoring systems, it is critical to be able to understand and diagnose student responses in interactive problem solving. However, building this understanding into the tutor is a time-intensive process usually conducted by subject experts. Much of this time is spent building production rules that model all the ways a student might solve a problem. In our prior work, we have proposed a novel application of Markov decision processes (MDPs) to automatically generate hints for an intelligent tutor that learns. We demonstrate the feasibility of this approach by extracting MDPs from four semesters of student solutions in a logic proof tutor, and by estimating the probability that hints can be generated for students. Our results indicate that the extracted MDPs will be able to provide over 80% of students with hints while they work problems.
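The approach described above can be illustrated with a minimal sketch: build an MDP-style transition structure from observed student solution paths, score states by value iteration, and suggest the highest-value observed next step as a hint. All names, reward values, and the deterministic max-over-successors simplification are assumptions for illustration, not the authors' exact formulation.

```python
from collections import defaultdict

def build_transitions(solutions):
    """Count observed transitions between problem states across
    student solution paths (each path is a list of hashable states
    ending in a goal state)."""
    transitions = defaultdict(lambda: defaultdict(int))
    for path in solutions:
        for s, s_next in zip(path, path[1:]):
            transitions[s][s_next] += 1
    return transitions

def value_iterate(transitions, goal, goal_reward=100.0,
                  step_cost=-1.0, gamma=0.9, iters=100):
    """Assign each state a value reflecting its closeness to the goal.
    Reward and discount values here are illustrative assumptions."""
    values = defaultdict(float)
    values[goal] = goal_reward
    for _ in range(iters):
        for s, succ in transitions.items():
            if s == goal:
                continue
            # Simplification: back up the best observed next state.
            values[s] = step_cost + gamma * max(values[n] for n in succ)
    return values

def hint_for(state, transitions, values):
    """Suggest the successor on the highest-value observed path,
    if the student's current state appears in the extracted MDP."""
    if state not in transitions:
        return None  # no hint available for an unseen state
    return max(transitions[state], key=lambda n: values[n])
```

A hint is only available when the student's current state was seen in prior semesters' data, which is why the abstract reports the fraction of students (over 80%) for whom hints could be provided.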
Subjects: 1.3 Computer-Aided Education; 12.1 Reinforcement Learning
Submitted: Apr 8, 2008