Applying Online Search Techniques to Continuous-State Reinforcement Learning

Scott Davies, Andrew Y. Ng, Andrew Moore

In this paper, we describe methods for efficiently computing better solutions to control problems in continuous state spaces. We provide algorithms that exploit online search to boost the power of very approximate value functions discovered by traditional reinforcement learning techniques. We examine local searches, where the agent performs a finite-depth lookahead search, and global searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of the local methods lies in taking a value function, which gives a rough solution to the hard problem of finding good trajectories from every single state, and combining that with online search, which then gives an accurate solution to the easier problem of finding a good trajectory specifically from the current state. The key to the success of the global methods lies in using aggressive state-space search techniques such as uniform-cost search and A*, tamed into a tractable form by exploiting neighborhood relations and trajectory constraints that arise from continuous-space dynamic control.
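As a rough illustration of the local method described above, the following minimal Python sketch (not taken from the paper) performs a depth-limited lookahead that scores leaf states with an approximate value function and returns the best first action. The simulator `step`, reward function `reward`, value function `V`, discrete action set `actions`, and discount `gamma` are all assumed, hypothetical names.

```python
def lookahead_action(state, actions, step, reward, V, depth=3, gamma=0.99):
    """Depth-limited lookahead: evaluate each candidate action by simulating
    `depth` steps forward with the model `step`, scoring horizon states with
    the approximate value function V, and returning the best first action."""

    def q(s, d):
        # At the search horizon, fall back on the learned (approximate) value function.
        if d == 0:
            return V(s)
        # Otherwise recurse one more step, maximizing over the discrete action set.
        return max(reward(s, a) + gamma * q(step(s, a), d - 1) for a in actions)

    # Choose the action whose simulated lookahead value is highest from the current state.
    return max(actions, key=lambda a: reward(state, a) + gamma * q(step(state, a), depth - 1))
```

In this sketch the value function only needs to be roughly correct at the search horizon, which is the intuition behind combining a coarse learned value function with accurate online search from the current state.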

