Learning to Adaptively Scale Recurrent Neural Networks

Authors

  • Hao Hu University of Central Florida
  • Liqiang Wang University of Central Florida
  • Guo-Jun Qi Huawei Cloud

DOI:

https://doi.org/10.1609/aaai.v33i01.33013822

Abstract

Recent advancements in recurrent neural network (RNN) research have demonstrated the superiority of utilizing multiscale structures in learning temporal representations of time series. Currently, most multiscale RNNs use fixed scales, which do not comply with the dynamic nature of temporal patterns among sequences. In this paper, we propose Adaptively Scaled Recurrent Neural Networks (ASRNNs), a simple but efficient way to handle this problem. Instead of using predefined scales, ASRNNs are able to learn and adjust scales based on different temporal contexts, making them more flexible in modeling multiscale patterns. Compared with other multiscale RNNs, ASRNNs are endowed with dynamic scaling capabilities while having much simpler structures, and are easy to integrate with various RNN cells. Experiments on multiple sequence modeling tasks indicate that ASRNNs can efficiently adapt scales based on different sequence contexts and yield better performance than baselines without dynamic scaling abilities.
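To make the idea of input-dependent scaling concrete, the sketch below shows one generic way a recurrent cell can learn its own update rate: a gate `alpha_t` in (0, 1), computed from the current input and previous hidden state, interpolates between retaining the old state (slow timescale) and adopting a new candidate (fast timescale). All names and the gating form here are illustrative assumptions, not the paper's exact ASRNN formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveScaleRNNCell:
    """Vanilla RNN cell with a learned, input-dependent scale gate.

    NOTE: This is a generic sketch of dynamic timescale gating, assumed
    for illustration; it is not the ASRNN architecture from the paper.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Candidate-state parameters: c_t = tanh(W_h x_t + U_h h_{t-1} + b_h)
        self.W_h = rng.uniform(-s, s, (hidden_size, input_size))
        self.U_h = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.b_h = np.zeros(hidden_size)
        # Scale-gate parameters: alpha_t = sigmoid(W_a x_t + U_a h_{t-1} + b_a)
        self.W_a = rng.uniform(-s, s, (hidden_size, input_size))
        self.U_a = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.b_a = np.zeros(hidden_size)

    def step(self, x, h_prev):
        candidate = np.tanh(self.W_h @ x + self.U_h @ h_prev + self.b_h)
        alpha = sigmoid(self.W_a @ x + self.U_a @ h_prev + self.b_a)
        # alpha near 0 -> keep old state (slow scale); near 1 -> fast update.
        return (1.0 - alpha) * h_prev + alpha * candidate

    def run(self, xs):
        h = np.zeros(self.b_h.shape[0])
        for x in xs:
            h = self.step(x, h)
        return h
```

Because `alpha_t` is recomputed at every step, the effective timescale of each hidden unit varies with the sequence context, in contrast to fixed-scale multiscale RNNs where update rates are hard-coded per layer or group.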

Published

2019-07-17

How to Cite

Hu, H., Wang, L., & Qi, G.-J. (2019). Learning to Adaptively Scale Recurrent Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3822-3829. https://doi.org/10.1609/aaai.v33i01.33013822

Section

AAAI Technical Track: Machine Learning