SDRL: Interpretable and Data-Efficient Deep Reinforcement Learning Leveraging Symbolic Planning

Authors

  • Daoming Lyu, Auburn University
  • Fangkai Yang, Maana Inc.
  • Bo Liu, Auburn University
  • Steven Gustafson, Maana Inc.

DOI:

https://doi.org/10.1609/aaai.v33i01.33012970

Abstract

Deep reinforcement learning (DRL) has achieved great success by learning directly from high-dimensional sensory inputs, yet is notorious for its lack of interpretability. Interpretability of subtasks is critical in hierarchical decision-making, as it increases the transparency of black-box-style DRL approaches and helps RL practitioners better understand the high-level behavior of the system. In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning. Task-level interpretability is enabled by relating symbolic actions to options. This framework features a planner-controller-meta-controller architecture, in which the three components take charge of subtask scheduling, data-driven subtask learning, and subtask evaluation, respectively. The three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks, bringing together the long-term planning capability of symbolic knowledge and end-to-end reinforcement learning directly from high-dimensional sensory input. Experimental results validate the interpretability of the subtasks, along with improved data efficiency compared with state-of-the-art approaches.
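To make the described interaction concrete, below is a minimal conceptual sketch of the planner-controller-meta-controller loop as the abstract characterizes it. All names here (Planner, Controller, MetaController, their methods, and the driver function) are hypothetical illustrations of the architecture, not the authors' actual implementation or API.

```python
# Conceptual sketch of the SDRL loop from the abstract. Every interface
# below is a hypothetical placeholder, not the paper's implementation.

class Planner:
    """Schedules subtasks: produces a symbolic plan (sequence of symbolic actions)."""
    def plan(self, symbolic_state, subtask_values):
        # A symbolic planner would search for a high-level action sequence,
        # guided by the subtask values learned so far.
        raise NotImplementedError  # placeholder in this sketch

class Controller:
    """Learns one option (subtask policy) per symbolic action via DRL."""
    def __init__(self):
        self.options = {}  # symbolic action -> learned policy (e.g., a DQN)
    def execute(self, symbolic_action, env):
        # Roll out and improve the option for this subtask directly from
        # high-dimensional sensory input; return the resulting trajectory.
        raise NotImplementedError  # placeholder in this sketch

class MetaController:
    """Evaluates subtasks and feeds their learned values back to the planner."""
    def __init__(self):
        self.subtask_values = {}
    def evaluate(self, symbolic_action, trajectory):
        # Update the value of this subtask from the trajectory's outcome,
        # so the planner can prefer plans whose subtasks perform well.
        raise NotImplementedError  # placeholder in this sketch

def sdrl_loop(env, planner, controller, meta, episodes):
    """Cross-fertilization loop: plan, learn each subtask, evaluate, replan."""
    for _ in range(episodes):
        symbolic_state = env.reset_symbolic()  # abstract symbolic view of the state
        plan = planner.plan(symbolic_state, meta.subtask_values)
        for symbolic_action in plan:           # each symbolic action maps to an option
            trajectory = controller.execute(symbolic_action, env)
            meta.evaluate(symbolic_action, trajectory)
```

Under this reading, the loop converges when the planner's symbolic plan stops changing and the controller's options for its subtasks have been learned, matching the abstract's claim of convergence to an optimal symbolic plan with learned subtasks.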

Published

2019-07-17

How to Cite

Lyu, D., Yang, F., Liu, B., & Gustafson, S. (2019). SDRL: Interpretable and Data-Efficient Deep Reinforcement Learning Leveraging Symbolic Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2970-2977. https://doi.org/10.1609/aaai.v33i01.33012970

Section

AAAI Technical Track: Knowledge Representation and Reasoning