*R. Nair, M. Tambe, M. Yokoo, D. Pynadath, and S. Marsella*

The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision process (DEC-POMDP). Significant algorithms have been developed for single-agent POMDPs; however, with a few exceptions, effective algorithms for deriving policies for decentralized POMDPs have not been developed. As a first step, we present new algorithms for solving decentralized POMDPs. In particular, we describe an exhaustive search algorithm for a globally optimal solution and analyze the complexity of this algorithm, which we find to be doubly exponential in the number of agents and time, highlighting the importance of more feasible approximations. We define a class of algorithms which we refer to as Joint Equilibrium-based Search for Policies (JESP) and describe an exhaustive algorithm and a dynamic programming algorithm for JESP. Finally, we empirically compare the exhaustive JESP algorithm with the globally optimal exhaustive algorithm.
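The contrast the abstract draws between globally optimal exhaustive search and equilibrium-based search can be illustrated with a deliberately simplified sketch. The code below is not the paper's DEC-POMDP formulation (where policies are trees over observation histories); it collapses each agent's policy space to a few indexed choices and a hypothetical joint reward table, purely to show the JESP idea of alternating best-response updates until no single agent can improve, versus searching the full joint policy space.

```python
from itertools import product

# Hypothetical toy reward table: two agents, three candidate policies each.
# Values are chosen so that JESP's local optimum can differ from the
# global optimum, illustrating why JESP finds an equilibrium, not
# necessarily the best joint policy.
REWARD = {
    (0, 0): 5, (0, 1): 0, (0, 2): 1,
    (1, 0): 0, (1, 1): 4, (1, 2): 0,
    (2, 0): 1, (2, 1): 0, (2, 2): 8,
}
POLICIES = [0, 1, 2]  # per-agent policy indices


def exhaustive_global():
    """Search the entire joint policy space (exponential in #agents)."""
    return max(product(POLICIES, POLICIES), key=lambda jp: REWARD[jp])


def jesp(start=(1, 1)):
    """JESP-style search: repeatedly fix one agent's policy and
    exhaustively optimize the other's, until neither agent can improve
    -- a locally optimal (equilibrium) joint policy."""
    joint = list(start)
    improved = True
    while improved:
        improved = False
        for i in range(2):  # best-response update for each agent in turn
            best = max(
                POLICIES,
                key=lambda p: REWARD[tuple(joint[:i] + [p] + joint[i + 1:])],
            )
            if best != joint[i]:
                joint[i] = best
                improved = True
    return tuple(joint)
```

Starting from the joint policy `(1, 1)`, neither agent can improve unilaterally, so JESP terminates there with reward 4 even though the global optimum `(2, 2)` yields 8; starting from `(0, 2)`, the best-response updates do reach the global optimum. This dependence on the starting point is exactly the trade-off the abstract describes: JESP trades global optimality for tractability.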

This page is copyrighted by AAAI. All rights reserved.