Brent H. Daniel, William H. Bares, Charles B. Callaway, James C. Lester
Intelligent multimedia systems hold great promise for knowledge-based learning environments. Because of recent advances in our understanding of how to dynamically generate multimodal explanations and the rapid growth in the performance of 3D graphics technologies, it is becoming feasible to create multimodal explanation generators that operate in real time. Perhaps most compelling about these developments is the prospect of enabling generators to create explanations that are customized to the ongoing "dialogue" in which they occur. To address these issues, we have developed a student-sensitive multimodal explanation generation framework that exploits a discourse history to automatically create explanations whose content, cinematography, and accompanying natural language utterances are customized to the dialogue context. By these means, it creates integrative explanations that actively promote knowledge integration. This framework has been implemented in CINESPEAK, a student-sensitive multimodal explanation generator.