A Robust Spoken Language Architecture to Control a 2D Game

Andrea Corradini, Thomas Hanneforth, Adrian Bak

Speech has been increasingly used as an input and/or output modality in commercial systems and academic prototypes. Its capability to complement and enhance applications makes it possible to consider spoken language as a new means to support human interaction with computers in interactive domains, computer games being one of them. This paper presents an architecture that we developed to play a 2D graphical version of a board game using spoken natural language. In our system, the player's speech is syntactically parsed with a robust weighted finite-state transducer algorithm that creates a simplified representation of the input sentence. In a pipelined process, a semantic parser is then responsible for splitting the input representation generated during syntactic analysis into a series of data structures, each representing the semantics of the underlying text chunks within the original sentence. These data structures are then passed on to the game logic module, which generates game commands, resolves possible ambiguities or incompleteness within the data structures, and checks preconditions related to the validity of the commands. If the preconditions are met, the instructions are carried out and the game state is updated as a direct effect of the commands issued. In a series of preliminary tests to assess the robustness of our approach, we obtained a correct classification of input sentences for up to 95.8% of the instructions issued by the user. Interaction with our system occurs in real-time.
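The pipelined process described above can be illustrated with a minimal sketch. The structures and names below (`Frame`, `interpret`, `GameState.execute`) are hypothetical illustrations of the general pattern, not the paper's actual implementation: a toy semantic parser fills slots of a frame from a simplified token sequence, and a game-logic step rejects incomplete frames, checks preconditions, and only then updates the game state.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical semantic frame: one slot per meaningful chunk of the utterance.
@dataclass
class Frame:
    action: Optional[str] = None
    piece: Optional[str] = None
    target: Optional[str] = None

def interpret(tokens):
    """Toy stand-in for the semantic parser: maps a simplified token
    sequence (as a syntactic front end might emit) onto a Frame."""
    frame = Frame()
    it = iter(tokens)
    for tok in it:
        if tok in ("move", "take"):
            frame.action = tok
        elif tok in ("pawn", "rook", "knight"):
            frame.piece = tok
        elif tok == "to":
            frame.target = next(it, None)  # the token after "to" is the target
    return frame

class GameState:
    """Toy game-logic module: resolves frames, checks preconditions,
    and applies valid commands to the board."""
    def __init__(self, pieces):
        self.pieces = dict(pieces)  # board position -> piece name

    def execute(self, frame):
        # Reject incomplete or ambiguous frames.
        if frame.action != "move" or frame.piece is None or frame.target is None:
            return False
        # Precondition checks: the piece must exist, the target must be free.
        source = next((pos for pos, p in self.pieces.items()
                       if p == frame.piece), None)
        if source is None or frame.target in self.pieces:
            return False
        # Preconditions met: carry out the command and update the state.
        del self.pieces[source]
        self.pieces[frame.target] = frame.piece
        return True

state = GameState({"a1": "rook", "b1": "pawn"})
ok = state.execute(interpret("move the pawn to b2".split()))
```

Separating interpretation from execution in this way is what lets the game logic reject invalid or incomplete commands without the parser needing any knowledge of the current board.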

Subjects: 6. Computer-Human Interaction; 13. Natural Language Processing

Submitted: Feb 2, 2007
