C. R. Voss, J. Gurney and J. Walrath
With the availability of large TIVA (text, image, video, audio) corpora in digital form has come the call for "effective" computational methods to access the content of these corpora. Our research has focused specifically on interactive access to one class of corpora, the large terrain and object data sets that make up a virtual reality (VR) world. We are currently experimenting with a new interactive computational method to assist a person in exploring the content of our VR system: by tracking the user's eye gaze for points of fixation on the VR screen while they talk about what they see and what they want to see, we can analyze how people integrate visual searching and verbal searching during real-time exploration of images and changing scenes. In the first phase of this research we ask whether a VR user's gaze points can disambiguate the referents in the natural language (NL) speech they produce and thereby increase the accuracy of their access to the VR world objects and data.
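The core idea of the first phase — using a gaze fixation point to pick out which on-screen object an ambiguous NL phrase refers to — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data model, function names, and the nearest-fixation heuristic are all assumptions introduced here.

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical data model: each VR world object has an NL category
# label and a projected position on the VR screen.
@dataclass
class WorldObject:
    label: str   # spoken category, e.g. "building"
    x: float     # screen coordinates of the object's projection
    y: float


def resolve_referent(phrase_label, fixation, objects):
    """Among objects matching the spoken category, return the one
    whose screen position is nearest the gaze fixation point.
    Returns None if no object matches the category."""
    candidates = [o for o in objects if o.label == phrase_label]
    if not candidates:
        return None
    fx, fy = fixation
    return min(candidates, key=lambda o: hypot(o.x - fx, o.y - fy))


scene = [
    WorldObject("building", 120, 80),
    WorldObject("building", 430, 310),
    WorldObject("road", 250, 200),
]

# "Zoom in on that building" is ambiguous between two buildings;
# a fixation near (410, 300) selects the second one.
target = resolve_referent("building", (410, 300), scene)
```

Here the gaze point does the disambiguation work that the speech alone cannot: without it, "that building" matches two candidates equally well.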