AAAI Publications, 2013 AAAI Spring Symposium Series

Integrating Visual Learning and Hierarchical Planning for Autonomy in Human-Robot Collaboration
Mohan Sridharan

Last modified: 2013-03-15


Mobile robots deployed in real-world domains frequently find it difficult to process all sensor inputs or to operate without human input and domain knowledge. At the same time, complex domains make it difficult to provide robots with all relevant domain knowledge in advance, and humans are unlikely to have the time and expertise to provide elaborate and accurate feedback. This paper presents an integrated framework that creates novel opportunities for addressing the learning, adaptation, and collaboration challenges associated with human-robot collaboration. The framework consists of hierarchical planning, bootstrap learning, and online reinforcement learning algorithms that inform and guide each other. As a result, robots are able to make the best use of sensor inputs, soliciting high-level feedback from non-expert humans when such feedback is necessary and available. All algorithms are evaluated in simulation and on wheeled robots in dynamic indoor domains.
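As an illustrative sketch (not taken from the paper), the idea of acting autonomously by default and soliciting high-level human feedback only when it is both necessary and available might look as follows; the confidence threshold, the value estimates, and the `ask_human` callback are all hypothetical names introduced here for illustration:

```python
CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff below which feedback is solicited

def choose_action(q_values, confidence, human_available, ask_human):
    """Pick the greedy action from current value estimates, but fall back
    on high-level human feedback when the robot's confidence is low and
    a (possibly non-expert) human is available.

    q_values: dict mapping action name -> estimated value
    confidence: robot's confidence in its own estimates, in [0, 1]
    human_available: whether a human can currently be queried
    ask_human: callback returning a suggested action name (or None)
    """
    if confidence < CONFIDENCE_THRESHOLD and human_available:
        suggestion = ask_human()
        if suggestion in q_values:
            return suggestion
    # Otherwise act autonomously on the current estimates.
    return max(q_values, key=q_values.get)
```

In this sketch the human is consulted only at the level of action choices, mirroring the paper's emphasis on high-level feedback rather than detailed supervision.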


Keywords: Hierarchical planning; Bootstrap learning; Reinforcement learning
