Expectation-Based Vision for Self-Localization on a Legged Robot

Daniel Stronger, Peter Stone

This paper presents and empirically compares solutions to the problem of vision and self-localization on a legged robot. Specifically, given a series of visual images produced by a camera on board the robot, how can the robot effectively use those images to determine its location over time? Legged robots, while generally more robust than wheeled robots at traversing varied terrain~\cite{Wettergreen96}, pose an additional challenge for vision: the jagged motion caused by walking leads to unusually abrupt movement in the camera image. This paper considers two main approaches to this vision and localization problem, which we refer to as the object detection approach and the expectation-based approach. In both cases, we assume that the robot has complete, a priori knowledge of the three-dimensional layout of its environment. These two approaches are described in the following section. They are implemented and compared on a popular legged robotic platform, the Sony Aibo ERS-7. This paper's contributions are an exposition of two competing approaches to vision and localization on a legged robot and an empirical comparison of the two methods.
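A common way to fuse a stream of visual observations into a pose estimate over time, in settings like the one described above, is Monte Carlo (particle filter) localization. The sketch below is purely illustrative and is not the authors' implementation: the function name, the single-landmark range observation model, and all noise parameters are our own assumptions.

```python
import math
import random

def monte_carlo_step(particles, motion, observed_range, landmark, noise_std=0.5):
    """One predict-update-resample cycle of a particle filter.

    particles: list of (x, y) pose hypotheses
    motion: commanded (dx, dy) displacement since the last step
    observed_range: measured distance to a known landmark
    landmark: (x, y) position of that landmark in the known map
    """
    # Predict: move each particle by the commanded motion plus odometry noise.
    moved = [(x + motion[0] + random.gauss(0, 0.1),
              y + motion[1] + random.gauss(0, 0.1)) for x, y in particles]

    # Update: weight each particle by how well the range it *expects* to
    # observe matches the range actually observed (Gaussian sensor model).
    weights = []
    for x, y in moved:
        expected = math.hypot(landmark[0] - x, landmark[1] - y)
        err = expected - observed_range
        weights.append(math.exp(-err * err / (2 * noise_std ** 2)))
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(particles))
```

With a single range-only landmark the particles concentrate on a ring around it; in practice multiple landmarks (or full expected-image matching, as in the expectation-based approach) are needed to pin down a unique pose.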

Subjects: 19.1 Perception; 17. Robotics
