*Eric A. Hansen*

Bounded policy iteration is an approach to solving infinite-horizon POMDPs that represents policies as stochastic finite-state controllers and iteratively improves a controller by adjusting the parameters of each node using linear programming. In the original algorithm, the size of the linear programs, and thus the complexity of policy improvement, depends on the number of parameters of each node, which grows with the size of the controller. But in practice, the number of a node's parameters that have non-zero values is often very small, and it does not grow with the size of the controller. To exploit this, we develop a version of bounded policy iteration that manipulates a sparse representation of a stochastic finite-state controller. It improves a policy in the same way, and by the same amount, as the original algorithm, but with much better scalability.
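To illustrate the sparsity the abstract refers to, here is a minimal sketch (not code from the paper) of how a controller node might be stored so that only non-zero parameters are kept. The class name `SparseNode` and the tiger-domain-style action and observation labels are hypothetical; a stochastic finite-state controller node is assumed to carry an action distribution and, per action-observation pair, a distribution over successor nodes.

```python
from dataclasses import dataclass, field

@dataclass
class SparseNode:
    """One node of a stochastic finite-state controller, stored sparsely.

    Only non-zero parameters appear in the dictionaries; entries absent
    from a dict are implicitly zero.
    """
    # action -> probability of selecting that action in this node
    action_dist: dict = field(default_factory=dict)
    # (action, observation) -> {successor node id: probability}
    successor_dist: dict = field(default_factory=dict)

    def num_nonzero_params(self):
        # The count of stored parameters -- not the total controller
        # size -- is what a sparse policy-improvement step would scale with.
        return len(self.action_dist) + sum(
            len(d) for d in self.successor_dist.values()
        )

# A node that deterministically takes one action and branches on two
# observations stores only four parameters, no matter how many nodes
# the full controller contains.
node = SparseNode(
    action_dist={"listen": 1.0},
    successor_dist={
        ("listen", "hear-left"): {0: 1.0},
        ("listen", "hear-right"): {1: 0.5, 2: 0.5},
    },
)
print(node.num_nonzero_params())  # prints 4
```

A dense representation would instead store |A| action parameters and |A| x |O| x |N| successor parameters per node, so its per-node cost grows linearly with the number of controller nodes |N|, which is the scaling problem the sparse version avoids.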

*Subjects:* 1.11 Planning; 15.5 Decision Theory

*Submitted:* May 5, 2008
