Brent H. Daniel, North Carolina State University; William H. Bares, University of Southwestern Louisiana; Charles B. Callaway and James C. Lester, North Carolina State University
Intelligent multimedia systems hold great promise for knowledge-based learning environments. Because of recent advances in our understanding of how to dynamically generate multimodal explanations and the rapid growth in the performance of 3D graphics technologies, it is becoming feasible to create multimodal explanation generators that operate in real time. Perhaps most compelling about these developments is the prospect of enabling generators to create explanations that are customized to the ongoing "dialogue" in which they occur. To address these issues, we have developed a student-sensitive multimodal explanation generation framework that exploits a discourse history to automatically create explanations whose content, cinematography, and accompanying natural language utterances are customized to the dialogue context. In this way, such generators create integrative explanations that actively promote knowledge integration. This framework has been implemented in CineSpeak, a student-sensitive multimodal explanation generator, and incorporated into a testbed learning environment for the domain of botanical anatomy and physiology.
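The core idea of exploiting a discourse history can be sketched as follows. This is a minimal illustrative sketch, not CineSpeak's actual implementation: the class names, the toy botanical content, and the strategy of replacing already-covered material with a backward reference are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseHistory:
    """Records which topics have already been explained in the dialogue."""
    explained: list = field(default_factory=list)

    def has_covered(self, topic: str) -> bool:
        return topic in self.explained

    def record(self, topic: str) -> None:
        self.explained.append(topic)

def generate_explanation(topic: str, content: dict,
                         history: DiscourseHistory) -> str:
    """Produce an explanation customized to the dialogue context:
    subtopics already covered are referred back to (promoting knowledge
    integration) rather than re-explained from scratch."""
    parts = []
    for subtopic, text in content[topic].items():
        if history.has_covered(subtopic):
            # Integrative reference back to earlier discourse
            parts.append(f"As you saw earlier with the {subtopic}, {text}")
        else:
            parts.append(f"{subtopic}: {text}")
            history.record(subtopic)
    history.record(topic)
    return " ".join(parts)

# Hypothetical botanical content, for illustration only
content = {
    "photosynthesis": {
        "chloroplast": "light energy is converted to sugar here.",
    },
    "leaf anatomy": {
        "chloroplast": "these organelles fill the mesophyll cells.",
    },
}

history = DiscourseHistory()
first = generate_explanation("photosynthesis", content, history)
second = generate_explanation("leaf anatomy", content, history)
```

On the first request the chloroplast is explained directly; on the second, the generator detects it in the discourse history and frames the new material as a reference back to the earlier explanation. CineSpeak applies the same principle not only to content selection but also to camera planning and the natural language realization of each utterance.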