AAAI Publications, Workshops at the Twenty-Seventh AAAI Conference on Artificial Intelligence

Learning Behavior Hierarchies via High-Dimensional Sensor Projection
Simon D. Levy, Suraj Bajracharya, Ross W. Gayler

Last modified: 2013-06-29

Abstract


We propose a knowledge-representation architecture that allows a robot to learn arbitrarily complex, hierarchical/symbolic relationships between sensors and actuators. These relationships are encoded in high-dimensional, low-precision vectors that are very robust to noise. Low-dimensional (single-bit) sensor values are projected onto the high-dimensional representation space using low-precision random weights, and the appropriate actions are then computed using elementwise vector multiplication in this space. The high-dimensional action representations are then projected back down to low-dimensional actuator signals via a simple vector operation such as the dot product. As a proof of concept for our architecture, we use it to implement a behavior-based controller for a simulated robot with three sensors (a touch sensor and left/right light sensors) and two actuators (wheels). We conclude by discussing the prospects for deriving such representations automatically.
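The following is a minimal sketch of the projection-and-binding scheme described in the abstract, not the authors' implementation: the dimensionality (10,000), the bipolar (+1/-1) vector format, the single hand-wired rule, and all names are illustrative assumptions. It shows sensor bits being projected into a high-dimensional space via random vectors, a sensor-to-actuator relationship encoded with elementwise multiplication, and the actuator signal recovered with a normalized dot product.

    # Sketch of projecting sensor bits into a high-dimensional space and
    # reading out actuator signals. Dimensionality, vector format, and the
    # single hard-coded rule are assumptions for illustration only.
    import numpy as np

    D = 10_000                      # size of the high-dimensional space (assumed)
    rng = np.random.default_rng(0)

    def random_hv():
        """Random high-dimensional, low-precision (bipolar) vector."""
        return rng.choice([-1, 1], size=D)

    # One random hypervector per sensor and per actuator.
    sensor_hvs   = {"touch": random_hv(),
                    "light_left": random_hv(),
                    "light_right": random_hv()}
    actuator_hvs = {"wheel_left": random_hv(),
                    "wheel_right": random_hv()}

    def project_sensors(readings):
        """Project single-bit sensor values into the high-dimensional space
        by superposing the hypervectors of the active sensors."""
        s = np.zeros(D)
        for name, bit in readings.items():
            if bit:
                s += sensor_hvs[name]
        return np.sign(s) if s.any() else s

    # A hypothetical rule binding a sensor pattern to an actuator response,
    # formed with elementwise multiplication.
    rule = sensor_hvs["light_left"] * actuator_hvs["wheel_right"]

    def read_actuator(state, actuator):
        """Project the high-dimensional action back down to a scalar actuator
        signal via a normalized dot product."""
        action = state * rule       # unbind: elementwise multiplication is its own inverse
        return float(action @ actuator_hvs[actuator]) / D

    state = project_sensors({"touch": 0, "light_left": 1, "light_right": 0})
    print(read_actuator(state, "wheel_right"))   # near 1: drive the right wheel
    print(read_actuator(state, "wheel_left"))    # near 0: just noise

Because the bipolar vectors square to the all-ones vector under elementwise multiplication, multiplying the state by the rule recovers the bound actuator vector plus noise, which the dot product then separates from the other actuator.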
