AAAI Publications, Twenty-Fourth AAAI Conference on Artificial Intelligence

Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents
Samuel Wintermute

Last modified: 2010-07-05


In this paper, we consider the problem of reinforcement learning in spatial tasks. These tasks have many states that can be aggregated to improve learning efficiency. In an agent, this aggregation can take the form of selecting appropriate perceptual processes to arrive at a qualitative abstraction of the underlying continuous state. However, for arbitrary problems, an agent is unlikely to have the perceptual processes necessary to discriminate all relevant states in terms of such an abstraction.

To help compensate for this, reinforcement learning can be integrated with an imagery system, where simple models of physical processes are applied within a low-level perceptual representation to predict the state resulting from an action. Rather than abstracting the current state, abstraction can be applied to the predicted next state. Formally, it is shown that this integration broadens the class of perceptual abstraction methods that can be used while preserving the underlying problem. Empirically, it is shown that this approach can be used in complex domains, and can be beneficial even when formal requirements are not met.
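To make the idea concrete, here is a minimal sketch (not the paper's implementation; the environment, action set, and bucketing function are all illustrative assumptions): a one-dimensional continuous task where, instead of abstracting the current state, each candidate action's *imagined* next state is abstracted, and values are learned over those abstract predicted states.

```python
import math
import random

# Hypothetical sketch: learn values over abstractions of *imagined*
# next states, rather than over an abstraction of the current state.

GOAL = 5.0
ACTIONS = {"right": 0.5, "left": -0.5}  # continuous position deltas

def abstract(x):
    # Qualitative perceptual abstraction: bucket the continuous position.
    return math.floor(x)

def imagine(x, a):
    # Simple forward model ("imagery"): predict the next continuous state.
    return x + ACTIONS[a]

V = {}  # value of an abstracted imagined next state

def greedy_action(x):
    # Pick the action whose imagined, then abstracted, outcome looks best.
    return max(ACTIONS, key=lambda a: V.get(abstract(imagine(x, a)), 0.0))

def episode(eps=0.1, alpha=0.5, gamma=0.9):
    x = 0.0
    for _ in range(100):
        a = (random.choice(list(ACTIONS)) if random.random() < eps
             else greedy_action(x))
        nx = imagine(x, a)          # true dynamics match the model here
        done = nx >= GOAL
        r = 1.0 if done else 0.0
        s_ab = abstract(imagine(x, a))   # abstraction of the *predicted* state
        best_next = 0.0 if done else max(
            V.get(abstract(imagine(nx, b)), 0.0) for b in ACTIONS)
        v = V.get(s_ab, 0.0)
        V[s_ab] = v + alpha * (r + gamma * best_next - v)  # TD update
        x = nx
        if done:
            break

random.seed(0)
for _ in range(200):
    episode()

def greedy_steps():
    # Roll out the learned greedy policy; return steps to reach the goal.
    x, steps = 0.0, 0
    while x < GOAL and steps < 50:
        x = imagine(x, greedy_action(x))
        steps += 1
    return steps if x >= GOAL else None
```

Note the design point this isolates: the abstraction function never has to discriminate states by how actions will play out, because the forward model supplies the predicted outcome first and the abstraction is applied afterward.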


Keywords: reinforcement learning; imagery; abstraction; cognitive architecture; Soar; state-action aggregation; state aggregation; spatial reasoning
