AAAI Publications, Twenty-Fourth AAAI Conference on Artificial Intelligence

Biped Walk Learning Through Playback and Corrective Demonstration
Cetin Mericli, Manuela Veloso

Last modified: 2010-07-05

Abstract


Developing a robust, flexible, closed-loop walking algorithm for a humanoid robot is a challenging task due to the complex dynamics of the general biped walk. Common analytical approaches to biped walking use simplified models of the physical reality, and are only partially successful: the mismatch between model and reality leads to failures of the robot's walk in the form of unavoidable falls. Instead of further refining the analytical models, in this work we investigate the use of human corrective demonstration, since a human can visually detect when the robot is about to fall. We contribute a two-phase biped walk learning approach, which we evaluate on the Aldebaran NAO humanoid robot. In the first phase, the robot walks using an analytical simplified walk algorithm, treated as a black box, and we identify and save one walk cycle as a sequence of joint motion commands. We then show how the robot can repeatedly and successfully play back the recorded cycle, even though the playback is open-loop. In the second phase, we create a closed-loop walk by modifying the recorded walk cycle in response to sensory data. While the robot walks autonomously, the algorithm learns joint movement corrections to the open-loop walk from the corrective feedback provided by a human and from the sensory data. In our experimental results, the learned closed-loop walking policy outperforms both a hand-tuned closed-loop policy and the open-loop playback walk in terms of the distance traveled by the robot without falling.
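The two phases described above can be sketched in miniature. The snippet below is a hypothetical single-joint illustration, not the authors' implementation: `record_walk_cycle` stands in for one cycle captured from the black-box walk, `CorrectionModel` is an assumed nearest-neighbor store of (sensor reading, correction) pairs gathered during human demonstration, and `playback_step` replays the recorded command plus the learned sensor-indexed correction.

```python
import math

def record_walk_cycle(num_frames=50):
    """Phase 1 stand-in: one recorded cycle of joint commands (radians).

    A sinusoid is used here purely as a placeholder for the joint
    trajectory captured from the black-box analytical walk.
    """
    return [0.3 * math.sin(2 * math.pi * t / num_frames)
            for t in range(num_frames)]

class CorrectionModel:
    """Hypothetical correction model: maps a sensor reading to a joint
    correction via the nearest demonstrated sensor reading."""

    def __init__(self):
        self.samples = []  # list of (sensor_value, correction) pairs

    def add_demonstration(self, sensor_value, correction):
        """Store one human corrective demonstration."""
        self.samples.append((sensor_value, correction))

    def correction_for(self, sensor_value):
        """Return the correction whose demonstrated sensor reading is
        closest to the current one; zero if nothing was demonstrated."""
        if not self.samples:
            return 0.0
        nearest = min(self.samples, key=lambda s: abs(s[0] - sensor_value))
        return nearest[1]

def playback_step(recorded_cycle, frame, sensor_value, model):
    """Phase 2: open-loop playback command plus the learned,
    sensor-indexed correction, closing the loop."""
    base = recorded_cycle[frame % len(recorded_cycle)]
    return base + model.correction_for(sensor_value)
```

With an empty model the walk reduces to pure open-loop playback; each added demonstration locally reshapes the cycle wherever similar sensor readings recur, which is the essence of learning from corrective demonstration.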

Full Text: PDF