AAAI Publications, Sixth European Conference on Planning

Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control
Daniel S. Bernstein, Shlomo Zilberstein

Last modified: 2014-05-21


Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to exploit this particular type of decomposability. To test the algorithm, we use a decision-making problem faced by autonomous planetary rovers: a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm outperforms Q-learning in the early stages of learning but, unlike Q-learning, converges to a suboptimal policy. This suggests that the hierarchical algorithm may be advantageous when training time is limited.
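The hierarchical algorithm itself is not specified in this abstract, but the flat Q-learning baseline it is compared against uses the standard tabular update. The sketch below shows one such update step; all names are illustrative and not taken from the paper:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

    Q is a dict mapping (state, action) pairs to value estimates;
    unseen pairs default to 0.0.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

In a weakly-coupled MDP, a hierarchical variant would restrict each subprocess's learner to its own state region and treat the bottleneck states as local terminal states, which speeds early learning at the cost of possible suboptimality, as the abstract's results indicate.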


Markov decision process, planning, Mars rover, reinforcement learning
