AAAI Publications, Twenty-Fourth International Conference on Automated Planning and Scheduling

Decentralized Multi-Robot Cooperation with Auctioned POMDPs
Jesus Capitan, Matthijs Spaan, Luis Merino, Anibal Ollero

Last modified: 2014-05-11


Planning under uncertainty faces a scalability problem when
considering multi-robot teams, as the information space scales
exponentially with the number of robots. To address this issue,
this paper proposes to decentralize multi-robot Partially Observable
Markov Decision Processes (POMDPs) while maintaining cooperation
between robots by using POMDP policy auctions. Auctions provide a
flexible way of coordinating individual policies modeled by POMDPs
and have low communication requirements. Moreover, the communication models assumed in the multi-agent POMDP literature differ severely from real inter-robot communication. We address this issue by exploiting a decentralized data fusion method to maintain a joint belief state among the robots efficiently.
The paper presents results in two different applications: environmental monitoring
with Unmanned Aerial Vehicles (UAVs); and cooperative tracking, in which
several robots have to jointly track a moving target of interest.
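The auction-based coordination described in the abstract can be illustrated as a single-round, highest-bid assignment, where each robot bids the expected value of executing a candidate POMDP policy from its own belief. This is a minimal sketch, not the paper's algorithm; the function and robot names are hypothetical:

```python
def auction_policy(task, robots):
    """Assign `task` to the robot whose bid (expected policy value) is highest.

    `robots` maps a robot name to a bid function that returns the robot's
    locally estimated expected value of executing the policy for `task`.
    """
    bids = {name: bid_fn(task) for name, bid_fn in robots.items()}
    winner = max(bids, key=bids.get)
    return winner, bids


# Example: three robots bid illustrative (made-up) expected values
# for a target-tracking task.
robots = {
    "uav_1": lambda task: 0.8,  # e.g., closest to the target
    "uav_2": lambda task: 0.3,
    "ugv_1": lambda task: 0.5,
}
winner, bids = auction_policy("track_target", robots)
```

In a decentralized setting, each robot would compute its bid from its own belief state and only the bids are exchanged, which keeps the communication requirements low.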


Multi-robot Cooperation; Decentralized Systems; Planning under Uncertainty
