Deep Reinforcement Learning via Past-Success Directed Exploration

Authors

  • Xiaoming Liu, Army Engineering University
  • Zhixiong Xu, Army Engineering University
  • Lei Cao, Army Engineering University
  • Xiliang Chen, Army Engineering University
  • Kai Kang, Army Engineering University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019979

Abstract

The balance between exploration and exploitation has always been a core challenge in reinforcement learning. This paper proposes the "past-success exploration strategy combined with Softmax action selection" (PSE-Softmax), an adaptive control method that exploits the characteristics of the agent's online learning process to adjust exploration parameters dynamically. The proposed strategy is tested on OpenAI Gym with discrete and continuous control tasks, and the experimental results show that the PSE-Softmax strategy delivers better performance than deep reinforcement learning algorithms with basic exploration strategies.
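The abstract describes Softmax (Boltzmann) action selection with exploration parameters adapted from the agent's online learning progress. A minimal sketch of that idea follows; the temperature-adaptation rule (`adapt_temperature`) and the success-rate signal are illustrative assumptions, since the paper's exact update rule is not given in the abstract.

```python
import numpy as np

def softmax_policy(q_values, temperature):
    """Boltzmann (Softmax) action selection: actions with higher Q-values
    are sampled more often; the temperature controls randomness."""
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()              # subtract max for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    action = np.random.choice(len(probs), p=probs)
    return action, probs

def adapt_temperature(success_rate, t_max=1.0, t_min=0.05):
    """Hypothetical past-success adaptation: as the fraction of recent
    successful episodes rises, lower the temperature so the policy shifts
    from exploration toward exploitation. (Assumed rule, not the paper's.)"""
    return t_max - (t_max - t_min) * float(success_rate)
```

With a high temperature the action distribution is nearly uniform (exploration); as recorded successes accumulate, the temperature shrinks and the policy concentrates on the greedy action (exploitation).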

Published

2019-07-17

How to Cite

Liu, X., Xu, Z., Cao, L., Chen, X., & Kang, K. (2019). Deep Reinforcement Learning via Past-Success Directed Exploration. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9979-9980. https://doi.org/10.1609/aaai.v33i01.33019979

Section

Student Abstract Track