Learning Visual Routines with Reinforcement Learning

Andrew McCallum

Reinforcement learning is an ideal framework for learning visual routines, since the routines are made up of sequences of actions. However, such algorithms must be able to handle the hidden state (perceptual aliasing) that results from a visual routine's purposefully narrowed attention. The U-Tree algorithm successfully learns visual routines for a complex driving task in which the agent makes eye movements and executes deictic actions in order to weave in and out of traffic on a four-lane highway. The task involves hidden state, time pressure, stochasticity, a large world state space, and a large perceptual state space. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees, the Parti-game algorithm, and Variable Resolution Dynamic Programming. U-Tree is a direct descendant of Utile Suffix Memory, which used short-term memory, but not selective perception. Unlike Whitehead's Lion algorithm, the algorithm handles noise, handles large state spaces, and uses short-term memory to uncover hidden state.
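The core idea behind a tree-structured representation like U-Tree's can be illustrated with a toy sketch: leaves of the tree aggregate the agent's experience, and a leaf is split on a perceptual feature (of the current observation or of one further back in short-term memory) when doing so separates situations with different utility, thereby uncovering hidden state. This is a hypothetical simplification, not McCallum's implementation: the split criterion here is a fixed utility-gap threshold rather than the statistical utility test used in the actual algorithm, and the names (`Leaf`, `Split`, `maybe_split`) are illustrative.

```python
# Toy sketch of a U-Tree-style discrimination tree (illustrative only,
# not McCallum's implementation). Histories are lists of observation
# tuples; rewards stand in for utility estimates.
from dataclasses import dataclass, field

@dataclass
class Leaf:
    instances: list = field(default_factory=list)  # (history, reward) pairs

@dataclass
class Split:
    feature: int          # index into an observation tuple
    history_offset: int   # 0 = current observation, 1 = one step back, ...
    children: dict = field(default_factory=dict)   # feature value -> subtree

def classify(node, history):
    """Walk from the root to the leaf matching this observation history."""
    while isinstance(node, Split):
        obs = history[-(node.history_offset + 1)]
        node = node.children[obs[node.feature]]
    return node

def maybe_split(leaf, feature, offset, min_gap=0.5):
    """Split a leaf on (feature, offset) if the resulting groups' mean
    rewards differ by more than min_gap; otherwise keep the leaf intact."""
    groups = {}
    for history, reward in leaf.instances:
        key = history[-(offset + 1)][feature]
        groups.setdefault(key, []).append((history, reward))
    if len(groups) < 2:
        return leaf
    means = [sum(r for _, r in g) / len(g) for g in groups.values()]
    if max(means) - min(means) < min_gap:
        return leaf  # the distinction is not utile; keep states merged
    return Split(feature, offset,
                 {v: Leaf(instances=g) for v, g in groups.items()})

# Two histories that look identical now but differed one step earlier
# (perceptual aliasing); splitting on the past observation separates them.
leaf = Leaf(instances=[([(0,), (1,)], 1.0),
                       ([(1,), (1,)], 0.0)])
tree = maybe_split(leaf, feature=0, offset=1)
```

Splitting on a feature of a *past* observation is what lets the tree double as short-term memory: two presently identical observations end up in different leaves because their recent histories differ.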


Copyright © AAAI. All rights reserved.