AAAI Publications, Twenty-Eighth AAAI Conference on Artificial Intelligence

Probabilistic Planning with Reduced Models
Luis Enrique Pineda

Last modified: 2014-06-21


Markov decision processes (MDPs) offer a rich model that has been extensively used by the AI community for planning and learning under uncertainty. However, solving MDPs is often intractable, which has led to the development of many approximate algorithms. In my dissertation work I introduce a new paradigm for handling this complexity by defining a family of reduced MDP models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for, and the maximum number of occurrences of the remaining exceptional outcomes that are planned for in advance. Reduced models can be solved much faster using heuristic search algorithms, benefiting from the dramatic reduction in the number of reachable states. This framework places recent work on MDP determinization in a broader context and lays the foundation for efficient and systematic exploration of the space of MDP model reductions. Progress to date includes a formal definition of this family of MDP reductions, a continual planning paradigm to handle the case when the number of exceptions reaches the maximum allowed, a simple greedy approach to generating good reductions for a given planning domain, and a compilation scheme that generates MDP reductions from a PPDDL description of a planning problem.
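The two-parameter reduction can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's actual formulation: `reduce_transition` and its augmented-state convention are hypothetical names. It keeps the `k` most likely outcomes of an action as primary, allows exceptional outcomes only while an exception counter `j` stays below the bound `l`, and renormalizes over the primary outcomes once the exception budget is exhausted (with `k = 1, l = 0` this degenerates to a single-outcome determinization).

```python
def reduce_transition(outcomes, k, l, j):
    """Sketch of a (k, l)-reduced MDP transition for one action.

    outcomes: list of (next_state, probability) pairs.
    k: max number of primary outcomes fully accounted for.
    l: max number of exceptional outcomes planned for in advance.
    j: exceptions that have already occurred (part of the augmented state).

    Returns a list of (next_state, probability, new_j) triples.
    """
    # Treat the k most likely outcomes as primary, the rest as exceptional.
    ranked = sorted(outcomes, key=lambda o: o[1], reverse=True)
    primary, exceptional = ranked[:k], ranked[k:]

    if j < l:
        # Exception budget remains: keep all outcomes, but taking an
        # exceptional outcome increments the counter in the new state.
        return ([(s, p, j) for s, p in primary] +
                [(s, p, j + 1) for s, p in exceptional])

    # Budget exhausted: drop exceptional outcomes and renormalize
    # the remaining probability mass over the primary outcomes.
    mass = sum(p for _, p in primary)
    return [(s, p / mass, j) for s, p in primary]
```

For example, with `k = 1` and `l = 0`, an action with outcomes `[("ok", 0.8), ("slip", 0.2)]` reduces to the single outcome `("ok", 1.0)`, while `l = 1` keeps the slip outcome but charges it against the exception counter.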


Keywords: Markov Decision Processes; Determinization; Continual Planning; Planning Under Uncertainty
