Craig Boutilier, University of Toronto
Models for sequential decision making under uncertainty (e.g., Markov decision processes, or MDPs) have been studied in operations research for decades. The recent incorporation of ideas from many areas of AI (including planning, probabilistic modeling, machine learning, and knowledge representation) has made these models much more widely applicable. I briefly survey recent advances within AI in the use of fully- and partially-observable MDPs as a modeling tool, and the development of computationally manageable solution methods. I will place special emphasis on factored problem representations, such as Bayesian networks, and on algorithms that exploit the structure inherent in these representations.
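As a concrete point of reference for the solution methods mentioned above, the following is a minimal sketch of value iteration on a tiny, invented two-state, two-action MDP (the transition probabilities and rewards here are illustrative assumptions, not from the talk):

```python
import numpy as np

# Hypothetical MDP for illustration.
# P[a, s, s'] = probability of moving to s' when taking action a in state s.
# R[s, a]     = immediate reward for taking action a in state s.
P = np.array([
    [[0.9, 0.1],    # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],    # action 1
     [0.0, 1.0]],
])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s')
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
```

This flat (tabular) formulation enumerates every state explicitly; the factored representations emphasized in the talk avoid exactly this enumeration by encoding transitions and rewards compactly, e.g., with Bayesian networks.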