Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System

  • Younkook Kang, Seoul National University
  • Sungwon Lyu, Seoul National University
  • Jeeyung Kim, Seoul National University
  • Bongjoon Park, Seoul National University
  • Sungzoon Cho, Seoul National University

Abstract

In automated material handling systems (AMHS), delivery time directly affects production cost and product quality. In this paper, we propose a dynamic routing strategy that reduces delivery time and delay. We set the control target by analyzing traffic flows and selecting the region with the highest flow rate and congestion frequency. We then impose a routing cost that dynamically reflects real-time changes in traffic states. Our deep reinforcement learning model combines a Q-learning step with a recurrent neural network, through which traffic states and action values are predicted. Experiment results show that the proposed method decreases manufacturing costs while increasing productivity. Additionally, we find evidence that the proposed reinforcement learning structure can autonomously and dynamically adapt to changes in traffic patterns.
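To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of a recurrent Q-network in PyTorch. It assumes that traffic states are encoded as fixed-length feature vectors, that routing actions are discrete, and that a GRU summarizes the recent state history before Q-values are estimated; all class names, dimensions, and hyperparameters here are hypothetical.

```python
# Hedged sketch of a recurrent Q-network for dynamic routing.
# Assumptions (not from the paper): traffic states as fixed-length vectors,
# a discrete set of routing actions, and a GRU over the recent state history.
import torch
import torch.nn as nn


class RecurrentQNetwork(nn.Module):
    """GRU encoder over a sequence of traffic states, followed by a
    fully connected head that outputs one Q-value per routing action."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, action_dim)

    def forward(self, state_seq: torch.Tensor) -> torch.Tensor:
        # state_seq: (batch, seq_len, state_dim)
        _, h_n = self.gru(state_seq)           # h_n: (1, batch, hidden_dim)
        return self.q_head(h_n.squeeze(0))     # (batch, action_dim)


# Example usage with hypothetical dimensions: 10 traffic features observed
# over the last 8 time steps, choosing among 4 candidate routes.
if __name__ == "__main__":
    net = RecurrentQNetwork(state_dim=10, action_dim=4)
    states = torch.randn(32, 8, 10)            # batch of state histories
    q_values = net(states)                     # (32, 4)
    greedy_actions = q_values.argmax(dim=1)    # epsilon-greedy would add exploration noise
    print(greedy_actions.shape)
```

In such a setup, the Q-learning step would train the network against a temporal-difference target, while the recurrent encoder lets the policy condition on recent traffic dynamics rather than a single snapshot.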

Published
2019-07-17
Section
Student Abstract Track