Task Transfer by Preference-Based Cost Learning

Authors

  • Mingxuan Jing Tsinghua University
  • Xiaojian Ma Tsinghua University
  • Wenbing Huang Tencent AI Lab
  • Fuchun Sun Tsinghua University
  • Huaping Liu Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v33i01.33012471

Abstract

The goal of task transfer in reinforcement learning is to migrate an agent's action policy from a source task to a target task. Given their success in robotic action planning, current methods mostly rely on two requirements: exactly-relevant expert demonstrations or an explicitly-coded cost function for the target task, both of which, however, are inconvenient to obtain in practice. In this paper, we relax these two strong conditions by developing a novel task transfer framework in which expert preference serves as guidance. In particular, we alternate between the following two steps: first, experts apply pre-defined preference rules to select expert demonstrations relevant to the target task; second, based on the selection result, we learn the target cost function and trajectory distribution simultaneously via enhanced Adversarial MaxEnt IRL, and generate more trajectories from the learned target distribution for the next round of preference selection. A theoretical analysis of the distribution learning and the convergence of the proposed algorithm is provided. Extensive simulations on several benchmarks further verify the effectiveness of the proposed method.
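The alternating procedure described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration rather than the paper's implementation: the hidden vector `w_true` stands in for the expert's preference rule, trajectories are reduced to fixed feature vectors, and the softmax resampling step is a crude stand-in for sampling from the learned MaxEnt trajectory distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Hidden "true" cost direction standing in for the expert's preference rule
# (hypothetical; in the paper the preference comes from human experts).
w_true = np.array([1.0, -0.5, 0.3, 0.0])

def preference_select(trajs):
    """Step 1: the expert keeps the better half of the candidate
    trajectories under the (hidden) preference rule."""
    scores = trajs @ w_true
    return trajs[scores <= np.median(scores)]

def sample_trajs(w, n=256):
    """Draw candidate trajectory features, reweighted toward low learned
    cost -- a crude proxy for sampling from the learned MaxEnt
    distribution p(tau) ~ exp(-c_w(tau))."""
    cand = rng.normal(size=(4 * n, dim))
    logits = -(cand @ w)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    idx = rng.choice(len(cand), size=n, p=p)
    return cand[idx]

# Alternate preference selection and a MaxEnt-IRL-style cost update:
# the gradient lowers the cost of preferred trajectories relative to
# the current sample distribution.
w = np.zeros(dim)
for _ in range(50):
    sampled = sample_trajs(w)          # step 2b: generate trajectories
    preferred = preference_select(sampled)  # step 1: expert preference
    w += 0.2 * (sampled.mean(axis=0) - preferred.mean(axis=0))  # step 2a

print(np.round(w, 2))  # learned cost weights align with the hidden rule
```

Under this toy setup the learned cost direction `w` drifts toward `w_true`, which is the intended behavior of the alternation: preference selection supplies the "demonstrations" that the IRL step then explains.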

Published

2019-07-17

How to Cite

Jing, M., Ma, X., Huang, W., Sun, F., & Liu, H. (2019). Task Transfer by Preference-Based Cost Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2471-2478. https://doi.org/10.1609/aaai.v33i01.33012471

Section

AAAI Technical Track: Human-AI Collaboration