Eye Gaze for Attention Prediction in Multimodal Human-Machine Conversation

Zahar Prasov, Joyce Y. Chai, Hogyeong Jeong

In a conversational system, determining a user's focus of attention is crucial to the success of the system. Motivated by previous psycholinguistic findings, we are currently examining how eye gaze contributes to automated identification of user attention during conversation. As part of this effort, we investigate the role of various features extracted from eye gaze and the visual interface in this task. More precisely, we conduct a data-driven evaluation of these features and propose a novel evaluation metric for performing such an investigation. The empirical results indicate that gaze fixation intensity plays an integral role in attention prediction. Fixations to objects are fairly evenly distributed between the start of a reference and 1500 milliseconds prior. When combined with certain visual features (e.g., the amount of visual occlusion of an object), fixation intensity becomes even more reliable for predicting user attention. This paper describes this empirical investigation of features and discusses the further implications of eye-gaze-based attention prediction for language understanding in multimodal conversational interfaces.

Subjects: 13. Natural Language Processing

Submitted: Jan 26, 2007

Copyright © AAAI. All rights reserved.