AAAI Publications, Twenty-Fourth International Conference on Automated Planning and Scheduling

On MABs and Separation of Concerns in Monte-Carlo Planning for MDPs
Zohar Feldman, Carmel Domshlak

Last modified: 2014-05-10


Linking online planning for MDPs with their special case of stochastic multi-armed bandit problems, we analyze three state-of-the-art Monte-Carlo tree search algorithms: UCT, BRUE, and MaxUCT. Using the outcome, we (i) introduce two new MCTS algorithms, MaxBRUE, which combines uniform sampling with Bellman backups, and MpaUCT, which combines UCB1 with a novel backup procedure, (ii) analyze them formally and empirically, and (iii) show how MCTS algorithms can be further stratified by an exploration control mechanism that improves their empirical performance without harming the formal guarantees.


Markov Decision Process, Multi-Armed Bandit, Online Planning, Simple Regret, Monte-Carlo Tree Search
