The Utility of Sparse Representations for Control in Reinforcement Learning

Authors

  • Vincent Liu, University of Alberta
  • Raksha Kumaraswamy, University of Alberta
  • Lei Le, Indiana University Bloomington
  • Martha White, University of Alberta

DOI

https://doi.org/10.1609/aaai.v33i01.33014384

Abstract

We investigate sparse representations for control in reinforcement learning. While these representations are widely used in computer vision, their use in reinforcement learning has largely been limited to sparse coding, where extracting representations for new data can be computationally intensive. Here, we begin by demonstrating that learning a control policy incrementally with a representation from a standard neural network fails in classic control domains, whereas learning with a representation obtained from a neural network with enforced sparsity properties is effective. We provide evidence that the sparse representation succeeds because it provides locality: it avoids catastrophic interference and, in particular, maintains consistent, stable values for bootstrapping. We then discuss how to learn such sparse representations. We explore the idea of Distributional Regularizers, where the activation of hidden nodes is encouraged to match a particular distribution, resulting in sparse activation across time. We identify a simple but effective way to obtain sparse representations, not afforded by previously proposed strategies, making it more practical for further investigation into sparse representations for reinforcement learning.
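The Distributional Regularizer idea can be made concrete with a small sketch: penalize the divergence between each hidden unit's empirical activation level and a sparse target, so that every unit is active on only a small fraction of inputs. The PyTorch snippet below is a minimal illustration assuming a Bernoulli-style KL penalty on sigmoid activations, as in sparse autoencoders; the function name `kl_sparsity_penalty`, the target level `target_rho`, and the toy encoder are hypothetical and not taken from the paper, whose exact regularizer may differ.

```python
import torch
import torch.nn as nn

def kl_sparsity_penalty(activations, target_rho=0.05, eps=1e-8):
    # Per-unit average activation over the batch, kept inside (0, 1)
    # so the Bernoulli KL below is well defined.
    rho_hat = activations.mean(dim=0).clamp(eps, 1.0 - eps)
    rho = torch.full_like(rho_hat, target_rho)
    # KL(Bernoulli(rho) || Bernoulli(rho_hat)), summed over hidden units:
    # large whenever a unit's average activation strays from the sparse target.
    kl = rho * torch.log(rho / rho_hat) \
        + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Usage: penalize dense hidden activations of a small representation network.
torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(4, 64), nn.Sigmoid())
states = torch.randn(32, 4)   # batch of (hypothetical) state inputs
hidden = encoder(states)      # (batch, hidden) activations in (0, 1)
penalty = 0.01 * kl_sparsity_penalty(hidden)
penalty.backward()            # gradients flow into the encoder weights
```

Added to the main training loss, such a penalty drives each unit's average activation toward the small target level, which is the across-time sparsity the abstract describes.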

Published

2019-07-17

How to Cite

Liu, V., Kumaraswamy, R., Le, L., & White, M. (2019). The Utility of Sparse Representations for Control in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4384-4391. https://doi.org/10.1609/aaai.v33i01.33014384

Section

AAAI Technical Track: Machine Learning