Learning to Model Opponent Learning (Student Abstract)

Authors

  • Ian Davies, University College London
  • Zheng Tian, University College London
  • Jun Wang, University College London

DOI:

https://doi.org/10.1609/aaai.v34i10.7157

Abstract

Multi-Agent Reinforcement Learning (MARL) considers settings in which a set of coexisting agents interact with one another and their environment. The adaptation and learning of other agents induce non-stationarity in the environment dynamics. This poses a great challenge for value function-based algorithms, whose convergence usually relies on the assumption of a stationary environment. Policy search algorithms also struggle in multi-agent settings, as the partial observability that arises when opponents' actions are unknown introduces high variance into policy training. Modelling an agent's opponent(s) is often pursued as a means of resolving the issues arising from the coexistence of learning opponents. An opponent model provides an agent with some ability to reason about other agents to aid its own decision making. Most prior works learn an opponent model by assuming the opponent employs a stationary policy or switches between a set of stationary policies. Such an approach can reduce the variance of training signals for policy search algorithms. However, in the multi-agent setting, agents have an incentive to continually adapt and learn, which makes assumptions of opponent stationarity unrealistic. In this work, we develop a novel approach to modelling an opponent's learning dynamics, which we term Learning to Model Opponent Learning (LeMOL). We show our structured opponent model is more accurate and stable than naive behaviour cloning baselines. We further show that opponent modelling can improve the performance of algorithmic agents in multi-agent settings.
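As a rough illustration of the distinction drawn in the abstract, the sketch below contrasts a naive behaviour-cloning opponent model, which implicitly assumes a stationary opponent policy, with a recurrent model conditioned on the interaction history, which can track an opponent whose policy shifts as it learns. The class names, PyTorch framing, and architecture are illustrative assumptions only and do not reproduce the LeMOL model from the paper.

# Hypothetical sketch (not the LeMOL architecture): a stationary opponent
# model versus one that conditions on the history of play.
import torch
import torch.nn as nn


class BehaviourCloningOpponentModel(nn.Module):
    """Predicts the opponent's next action from the current observation only,
    implicitly assuming the opponent's policy is fixed."""

    def __init__(self, obs_dim: int, n_opp_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_opp_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim) -> logits over the opponent's actions
        return self.net(obs)


class RecurrentOpponentModel(nn.Module):
    """Predicts the opponent's next action from the history of observations
    and past opponent actions, so its predictions can drift with an opponent
    that is itself learning (a non-stationary policy)."""

    def __init__(self, obs_dim: int, n_opp_actions: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + n_opp_actions, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_opp_actions)

    def forward(self, obs_hist: torch.Tensor, opp_act_hist: torch.Tensor) -> torch.Tensor:
        # obs_hist: (batch, T, obs_dim); opp_act_hist: (batch, T, n_opp_actions), one-hot
        x = torch.cat([obs_hist, opp_act_hist], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)  # per-step logits over the opponent's next action

In either case the model would typically be trained with a cross-entropy loss against the opponent's observed actions; the difference is that the recurrent variant can update its predictions as evidence of the opponent's learning accumulates.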

Published

2020-04-03

How to Cite

Davies, I., Tian, Z., & Wang, J. (2020). Learning to Model Opponent Learning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13771-13772. https://doi.org/10.1609/aaai.v34i10.7157

Section

Student Abstract Track