Constructing Temporal Abstractions Autonomously in Reinforcement Learning

Authors

  • Pierre-Luc Bacon, McGill University
  • Doina Precup, McGill University

DOI:

https://doi.org/10.1609/aimag.v39i1.2780

Abstract

The idea of temporal abstraction, i.e., learning, planning, and representing the world at multiple time scales, has been a constant thread in AI research, spanning subfields from classical planning and search to control and reinforcement learning. For example, programming a robot typically involves making decisions over a set of controllers, rather than working at the level of motor torques. While temporal abstraction is a very natural concept, learning such abstractions with no human input has proved quite daunting. In this paper, we present a general architecture, called option-critic, which learns temporal abstractions automatically, end-to-end, simply from the agent’s experience. This approach supports continual learning and yields interesting qualitative and quantitative results in several tasks.
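As a rough illustration of the options framework the abstract refers to (a sketch only, not the paper's implementation), an option can be modeled as an intra-option policy paired with a termination condition; the agent follows the option's policy until termination fires. All names below are illustrative assumptions.

```python
import random

class Option:
    """A temporally extended action: an intra-option policy plus a
    termination condition. Names are illustrative, not from the paper."""
    def __init__(self, policy, termination):
        self.policy = policy            # state -> primitive action
        self.termination = termination  # state -> probability of stopping

def run_option(env_step, state, option, max_steps=100):
    """Execute one option until its termination condition fires."""
    steps = 0
    while steps < max_steps:
        action = option.policy(state)
        state = env_step(state, action)
        steps += 1
        if random.random() < option.termination(state):
            break
    return state, steps

# Toy 1-D corridor: an option "move right until position >= 5".
env_step = lambda s, a: s + a
move_right = Option(policy=lambda s: 1,
                    termination=lambda s: 1.0 if s >= 5 else 0.0)
final_state, n = run_option(env_step, 0, move_right)  # reaches 5 in 5 steps
```

In the option-critic architecture, both the intra-option policies and the termination functions above are parameterized and learned by gradient descent from the agent's experience, rather than hand-specified as in this toy example.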

Published

2018-03-27

How to Cite

Bacon, P.-L., & Precup, D. (2018). Constructing Temporal Abstractions Autonomously in Reinforcement Learning. AI Magazine, 39(1), 39-50. https://doi.org/10.1609/aimag.v39i1.2780

Section

Articles