Paul Schermerhorn, James Kramer, Christopher Middendorff, Matthias Scheutz
Autonomous human-like robots that interact with people in natural language in real time pose many design challenges, from the functional organization of the robotic architecture, to the computational infrastructure (possibly employing middleware for distributed computing), to the hardware operating many specialized devices for sensory and effector processing in addition to embedded controllers and standard computational boards. The task is to achieve a functional integration of very diverse modules that operate at different temporal scales, using different representations, on parallel hardware, in a reliable and fault-tolerant manner that allows for natural, believable human-robot interaction (HRI). Reliable, natural interaction with humans imposes several challenging requirements, two of which are (R1) appropriate interaction capabilities, including natural language capacity (speech recognition and production), dialog structure (knowledge about dialogs, teleological discourse, etc.), affect recognition and expression (in both speech and facial expressions), and mechanisms for non-verbal communication (gestures, head movements, gaze, etc.); and (R2) mechanisms for ensuring robust interactions, including recovery from various communication failures (acoustic, syntactic, and semantic misunderstandings, dialog failures, etc.) as well as from software and hardware failures (component crashes, internal timing problems, faulty hardware, etc.).
We are developing DIARC, a distributed integrated affect, reflection, cognition architecture for robots that interact naturally with humans. DIARC is a complete architecture that can be employed for HRI experiments without modification: robot behaviors are expressed via scripts that contain general knowledge about conversations and action sequences. DIARC provides several features critical for the study of natural human interaction that are not easily found in other robotic systems. Some of these features are described below and will be featured in the 2006 AAAI Robot Competition and Exhibition. Specifically, the robot will participate in the following categories of the Human-Robot Interaction competition: emotion recognition and appropriate emotion expression, natural language understanding and action execution, perceptual learning, and the integration challenge.
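To make the script-based behavior specification concrete, the following is a minimal sketch of how conversational and action knowledge might be encoded as a script and dispatched to action handlers. The abstract does not show DIARC's actual script format, so all names and structures here (the `steps` dictionary layout, the `say`/`listen` actions, `run_script`) are illustrative assumptions, not the real DIARC interface.

```python
# Hypothetical interaction script: a named sequence of steps, each naming an
# action and its arguments. This is an assumed structure, not DIARC's format.
GREETING_SCRIPT = {
    "name": "greet-visitor",
    "steps": [
        {"action": "say", "args": ["Hello, how can I help you?"]},
        {"action": "listen", "expect": ["request", "greeting"]},
        {"action": "say", "args": ["Nice to meet you."]},
    ],
}

def run_script(script, handlers):
    """Execute each step by dispatching to its handler.

    A missing handler raises immediately, modeling a simple failure-detection
    hook of the kind requirement (R2) calls for.
    """
    log = []
    for step in script["steps"]:
        handler = handlers.get(step["action"])
        if handler is None:
            raise ValueError(f"no handler for action {step['action']!r}")
        log.append(handler(step))
    return log

if __name__ == "__main__":
    # Stub handlers standing in for speech production and recognition.
    handlers = {
        "say": lambda s: f"SAY: {s['args'][0]}",
        "listen": lambda s: f"LISTEN for {s['expect']}",
    }
    for line in run_script(GREETING_SCRIPT, handlers):
        print(line)
```

Separating the script (declarative knowledge about the interaction) from the handlers (the components that realize each action) is what lets the same architecture run different HRI experiments without modification, as the paragraph above describes.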
Subjects: 17. Robotics; 6. Computer-Human Interaction