Planning for Human-Robot Interaction Using Time-State Aggregated POMDPs

Frank Broz, Illah Nourbakhsh, Reid Simmons

In order to interact successfully in social situations, a robot must be able to observe others' actions and base its own behavior on its beliefs about their intentions. Many interactions take place in dynamic environments, and the outcomes of people's or the robot's actions may be time-dependent. In this paper, such interactions are modeled as a POMDP with a time index as part of the state, resulting in a fully Markov model with a potentially very large state space. The complexity of finding even an approximate solution often limits the practical applicability of POMDPs to large problems. This difficulty is addressed by developing an algorithm for aggregating states in POMDPs with a time-indexed state space. States that represent the same physical configuration of the environment at different times are selected for combination using reward-based metrics, preserving the structure of the original model while producing a smaller model that is faster to solve. We demonstrate that solving the aggregated model yields a policy whose performance is comparable to that of the policy from the original model. The example domains are a simulated elevator-riding task and a simulated driving task based on data collected from human drivers.
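The abstract describes the aggregation idea only at a high level. As a rough, hypothetical sketch of that idea (not the authors' actual algorithm), the Python fragment below groups time-indexed states (config, t) by their physical configuration and greedily merges consecutive time slices whose immediate rewards agree within a threshold. All names here (aggregate_time_indexed_states, reward, epsilon) are illustrative assumptions, and the reward dictionary is assumed to be complete over all state-action pairs.

    from collections import defaultdict

    def aggregate_time_indexed_states(states, reward, epsilon=1e-3):
        """Greedily merge time-indexed POMDP states (config, t) whose
        immediate rewards differ by at most epsilon for every action.

        states  : iterable of (config, t) tuples
        reward  : dict mapping ((config, t), action) -> float
        epsilon : reward-difference threshold for merging
        """
        # Recover the action set from the reward table's keys.
        actions = sorted({a for (_, a) in reward})

        # Group states by physical configuration; only states that share
        # a configuration (differing only in time index) are candidates
        # for aggregation, which preserves the model's physical structure.
        by_config = defaultdict(list)
        for s in states:
            config, _t = s
            by_config[config].append(s)

        clusters = []
        for group in by_config.values():
            group.sort(key=lambda s: s[1])  # order by time index
            current = [group[0]]
            for s in group[1:]:
                rep = current[0]
                # Merge consecutive time slices whose reward profiles
                # match the cluster representative within epsilon.
                if all(abs(reward[(s, a)] - reward[(rep, a)]) <= epsilon
                       for a in actions):
                    current.append(s)
                else:
                    clusters.append(current)
                    current = [s]
            clusters.append(current)
        return clusters

Merging only along the time axis, as above, keeps each aggregate tied to a single physical configuration, which is one plausible way to realize the paper's stated goal of shrinking the model while preserving its structure.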

Subjects: 1.11 Planning; 3.6 Temporal Reasoning

Submitted: Apr 15, 2008
