Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications

Authors

  • Daniel S. Brown, University of Texas at Austin
  • Scott Niekum, University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v33i01.33017749

Abstract

Inverse reinforcement learning (IRL) infers a reward function from demonstrations, allowing for policy improvement and generalization. However, despite much recent interest in IRL, little work has been done to understand the minimum set of demonstrations needed to teach a specific sequential decision-making task. We formalize the problem of finding maximally informative demonstrations for IRL as a machine teaching problem, where the goal is to find the minimum number of demonstrations needed to specify the reward equivalence class of the demonstrator. We extend previous work on algorithmic teaching for sequential decision-making tasks by showing a reduction to the set cover problem, which enables an efficient approximation algorithm for determining the set of maximally informative demonstrations. We apply our proposed machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL, and developing a novel IRL algorithm that can learn more efficiently from informative demonstrations than a standard IRL approach.
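The set-cover reduction admits the standard greedy approximation. The sketch below is illustrative rather than the authors' implementation: it assumes each candidate demonstration can be summarized by the set of constraints (halfspaces on the reward weights) it induces, and demonstrations are selected greedily until the constraint set defining the demonstrator's reward equivalence class is covered. All names here (greedy_demo_selection, demo_constraints, and the toy constraints c1-c4) are hypothetical.

    from typing import Dict, FrozenSet, Hashable, List, Set

    def greedy_demo_selection(
        universe: Set[Hashable],
        demo_constraints: Dict[str, FrozenSet[Hashable]],
    ) -> List[str]:
        # Greedy set-cover approximation: repeatedly pick the demonstration
        # that covers the most not-yet-covered constraints, until the full
        # constraint set (the reward equivalence class) is specified.
        uncovered = set(universe)
        selected: List[str] = []
        while uncovered:
            best = max(demo_constraints,
                       key=lambda d: len(demo_constraints[d] & uncovered))
            gain = demo_constraints[best] & uncovered
            if not gain:
                raise ValueError("No demonstration covers the remaining constraints.")
            selected.append(best)
            uncovered -= gain
        return selected

    # Toy example: four constraints define the equivalence class,
    # and three candidate demonstrations each cover a subset of them.
    demos = {
        "demo_A": frozenset({"c1", "c2"}),
        "demo_B": frozenset({"c2", "c3", "c4"}),
        "demo_C": frozenset({"c4"}),
    }
    print(greedy_demo_selection({"c1", "c2", "c3", "c4"}, demos))
    # -> ['demo_B', 'demo_A']

The greedy rule attains the classic (1 + ln n) approximation ratio for set cover, which is essentially the best achievable in polynomial time, so the selected demonstration set is provably close to the minimum.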

Published

2019-07-17

How to Cite

Brown, D. S., & Niekum, S. (2019). Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7749-7758. https://doi.org/10.1609/aaai.v33i01.33017749

Section

AAAI Technical Track: Reasoning under Uncertainty