AAAI Publications, Sixteenth AAAI/SIGART Doctoral Consortium

Learning Sensor, Space and Object Geometry
Jeremy Stober

Abstract


Robots with many sensors generate large volumes of high-dimensional perceptual data, and making sense of this data and extracting useful knowledge from it is difficult. The problem is especially acute for robots that lack prior models and must interpret a stream of uninterpreted data. One critical step in linking raw, uninterpreted perceptual data to cognition is dimensionality reduction. Current methods for reducing the dimensionality of data do not meet the demands of a robot situated in the world, and methods that use only perceptual data do not take full advantage of the interactive experience of an embodied robot agent. This work proposes a new scalable, incremental, and active approach to dimensionality reduction suitable for extracting geometric knowledge from uninterpreted sensors and effectors. The proposed method uses distinctive state abstractions to organize early sensorimotor experience and sensorimotor embedding to incrementally learn accurate geometric representations from that experience. The approach is applied to the problem of learning the geometry of sensors, space, and objects, and the resulting representations are evaluated using techniques from statistical shape analysis.
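
As a concrete, simplified illustration of the kind of pipeline described above, the sketch below embeds a set of uninterpreted sensors into a low-dimensional layout using only the pairwise dissimilarities of their raw readings, then compares the recovered layout to a known ground-truth geometry with a Procrustes fit, a standard tool from statistical shape analysis. The simulated data, the use of metric MDS for the embedding step, and the specific library calls (NumPy, scikit-learn, SciPy) are assumptions made for this sketch; they are not the dissertation's actual algorithms.

import numpy as np
from scipy.spatial import procrustes
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Ground-truth sensor positions on a 4x4 grid (hidden from the learner).
true_xy = np.array([[i, j] for i in range(4) for j in range(4)], dtype=float)

# Simulated uninterpreted sensor stream: each reading is a function of the
# distance between the sensor and a moving latent stimulus, so nearby
# sensors produce correlated time series.
T = 2000
stimulus = 2.0 * rng.normal(size=(T, 2))
readings = np.exp(-np.linalg.norm(true_xy[None, :, :] - stimulus[:, None, :], axis=2))

# Pairwise dissimilarity between sensors, computed purely from their signals.
diss = np.linalg.norm(readings[:, :, None] - readings[:, None, :], axis=0)

# Embed the dissimilarities into 2-D (metric MDS, used here as a stand-in
# for the dimensionality-reduction step).
layout = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(diss)

# Procrustes analysis factors out translation, scale, and rotation before
# comparing shapes; a small disparity means the sensor geometry was recovered.
_, _, disparity = procrustes(true_xy, layout)
print(f"Procrustes disparity between true and learned layouts: {disparity:.4f}")

In the setting described above, the dissimilarities would presumably be derived from the robot's own sensorimotor experience rather than from a simulated stimulus field, but the evaluation idea is the same: align the learned layout to the true geometry and measure the residual shape difference.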
