Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning

Authors

  • Yuexin Wu Carnegie Mellon University
  • Xiujun Li Microsoft Research
  • Jingjing Liu Microsoft
  • Jianfeng Gao Microsoft Research
  • Yiming Yang Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v33i01.33017289

Abstract

Training task-completion dialogue agents with reinforcement learning usually requires a large number of real user experiences. The Dyna-Q algorithm extends Q-learning by integrating a world model, and thus can effectively boost training efficiency using simulated experiences generated by the world model. The effectiveness of Dyna-Q, however, depends on the quality of the world model, or, implicitly, on the pre-specified ratio of real vs. simulated experiences used for Q-learning. To address this, we extend the recently proposed Deep Dyna-Q (DDQ) framework by integrating a switcher that automatically determines whether to use a real or simulated experience for Q-learning. Furthermore, we explore the use of active learning to improve sample efficiency, by encouraging the world model to generate simulated experiences in the state-action space that the agent has not (fully) explored. Our results show that by combining the switcher and active learning, the new framework, named Switch-based Active Deep Dyna-Q (Switch-DDQ), leads to significant improvements over DDQ and Q-learning baselines in both simulation and human evaluations.
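To illustrate the idea described in the abstract, the sketch below shows a minimal, hypothetical switch-based planning step: a switcher tracks an estimate of the world model's quality and routes Q-learning updates to real or simulated experiences accordingly, with simulation biased toward under-explored state-action regions. All class names, interfaces, and the switching heuristic here are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical sketch of switch-based planning (not the paper's code):
# the switcher prefers simulated experiences only while the world model
# appears accurate enough on recently observed real outcomes.

class Switcher:
    """Tracks world-model prediction error and decides the experience source."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.errors = []

    def record_error(self, error):
        self.errors.append(error)

    def use_real(self):
        if not self.errors:
            return True                      # no evidence yet: trust real data
        avg_error = sum(self.errors) / len(self.errors)
        return avg_error > self.threshold    # poor world model -> use real data

def training_step(switcher, collect_real, simulate_underexplored, q_update):
    """One planning step: pick the experience source, then update the agent."""
    if switcher.use_real():
        experience, model_error = collect_real()
        switcher.record_error(model_error)   # real data also grades the world model
    else:
        # Active learning: bias simulation toward under-explored state-actions.
        experience = simulate_underexplored()
    q_update(experience)

# Toy usage with stand-in callables.
switcher = Switcher()
training_step(
    switcher,
    collect_real=lambda: ({"s": 0, "a": 1, "r": 0.0, "s_next": 1}, random.random()),
    simulate_underexplored=lambda: {"s": 0, "a": 2, "r": 0.0, "s_next": 2},
    q_update=lambda exp: None,
)
```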

Published

2019-07-17

How to Cite

Wu, Y., Li, X., Liu, J., Gao, J., & Yang, Y. (2019). Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7289-7296. https://doi.org/10.1609/aaai.v33i01.33017289

Section

AAAI Technical Track: Natural Language Processing