Deriving Subgoals Autonomously to Accelerate Learning in Sparse Reward Domains

  • Michael Dann, RMIT University
  • Fabio Zambetta, RMIT University
  • John Thangarajah, RMIT University

Abstract

Sparse reward games, such as the infamous Montezuma’s Revenge, pose a significant challenge for Reinforcement Learning (RL) agents. Hierarchical RL, which promotes efficient exploration via subgoals, has shown promise in these games. However, existing agents rely on either human domain knowledge or slow autonomous methods to derive suitable subgoals. In this work, we describe a new, autonomous approach for deriving subgoals from raw pixels that is more efficient than competing methods. We propose a novel intrinsic reward scheme for exploiting the derived subgoals, applying it to three Atari games with sparse rewards. Our agent’s performance is comparable to that of state-of-the-art methods, demonstrating the usefulness of the subgoals found.
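The abstract names an intrinsic reward scheme for exploiting derived subgoals but does not specify it. As a minimal illustrative sketch only (not the paper's actual formulation), one common pattern pays a bonus when the agent's state comes close to the active subgoal in some embedding space; here the embeddings, the distance threshold, and the mixing weight `beta` are all assumptions for illustration:

```python
import numpy as np

def intrinsic_reward(state_emb, subgoal_emb, threshold=0.1, bonus=1.0):
    """Bonus paid when the state embedding is near the active subgoal.

    Hypothetical shaping rule for illustration; the paper's actual
    scheme is not specified in the abstract.
    """
    distance = np.linalg.norm(np.asarray(state_emb) - np.asarray(subgoal_emb))
    return bonus if distance < threshold else 0.0

def shaped_reward(extrinsic, state_emb, subgoal_emb, beta=0.5):
    # Combine the sparse environment reward with the scaled intrinsic bonus.
    return extrinsic + beta * intrinsic_reward(state_emb, subgoal_emb)

# Example: a state close to a derived subgoal earns the bonus even
# when the environment reward is zero.
state = np.array([0.42, 0.10])
subgoal = np.array([0.45, 0.08])
print(shaped_reward(extrinsic=0.0, state_emb=state, subgoal_emb=subgoal))  # 0.5
```

Rewards of this shape let the agent make learning progress between the sparse extrinsic rewards, which is the general motivation the abstract gives for subgoal-based exploration.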

Published
2019-07-17
Section
AAAI Technical Track: Applications