Nancy Fulda, Dan Ventura
We present a conceptual framework for creating Q-learning-based algorithms that converge to optimal equilibria in cooperative multiagent settings. The framework comprises a set of conditions sufficient to guarantee optimal system performance. We demonstrate its efficacy by using it to analyze several well-known multiagent learning algorithms, and we conclude by employing it as a design tool to construct a simple, novel multiagent learning algorithm.
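The paper's framework and conditions are not reproduced in this listing, but the standard tabular Q-learning update that such algorithms build on can be sketched as follows. This is an illustrative single-agent example on a toy environment; the chain task, learning rate `alpha`, and discount `gamma` are assumptions for the sketch, not details from the paper.

```python
# Illustrative tabular Q-learning sketch (single-agent); the paper's actual
# multiagent framework is not reproduced here.
from collections import defaultdict

def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Standard update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Toy deterministic chain: states 0, 1, 2; the single action "right" moves
# one step toward the terminal goal state 2, which yields reward 1.
actions = ["right"]
Q = defaultdict(float)
for _ in range(200):  # episodes
    s = 0
    while s != 2:
        s_next = s + 1
        r = 1.0 if s_next == 2 else 0.0
        q_learning_step(Q, s, "right", r, s_next, actions)
        s = s_next
```

Under these settings the estimates converge to the optimal values for this chain: Q(1, right) approaches 1.0 and Q(0, right) approaches gamma * 1.0 = 0.9. In the cooperative multiagent setting the paper addresses, the key difficulty is that each agent's max operator must be coordinated with the other agents' policies so the joint policy reaches an optimal equilibrium.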
Subjects: 7.1 Multi-Agent Systems; 12.1 Reinforcement Learning
Submitted: Oct 10, 2006