Character-Level Language Modeling with Deeper Self-Attention

Authors

  • Rami Al-Rfou, Google Research
  • Dokook Choe, Google
  • Noah Constant, Google AI
  • Mandy Guo, Google AI
  • Llion Jones, Google AI

DOI:

https://doi.org/10.1609/aaai.v33i01.33013159

Abstract

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
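The auxiliary losses described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering (module names, dimensions, and the simple unweighted loss sum are illustrative assumptions, not the paper's exact configuration; the paper additionally schedules and decays these losses during training). It shows the two ideas named above: every layer gets its own next-character classifier (intermediate-layer losses), and the loss is computed at every sequence position rather than only the final one (intermediate-position losses).

```python
# Hypothetical sketch of a deep character-level transformer with auxiliary
# losses at intermediate layers and intermediate sequence positions.
# Dimensions and the unweighted loss sum are illustrative assumptions.
import torch
import torch.nn as nn

class DeepCharTransformer(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_heads=8, n_layers=64, context=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(context, d_model))
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        # One classifier per layer so intermediate layers also predict
        # the next character (intermediate-layer auxiliary losses).
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_layers)]
        )

    def forward(self, chars, targets):
        # chars, targets: (batch, T) integer character ids, targets shifted by one.
        T = chars.size(1)
        # Additive causal mask: position i may only attend to positions <= i.
        causal = torch.full((T, T), float('-inf'), device=chars.device).triu(1)
        h = self.embed(chars) + self.pos[:T]
        loss_fn = nn.CrossEntropyLoss()
        total_loss, final_logits = 0.0, None
        for layer, head in zip(self.layers, self.heads):
            h = layer(h, src_mask=causal)
            logits = head(h)  # predictions at every position, not just the last
            layer_loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                                 targets.reshape(-1))
            total_loss = total_loss + layer_loss  # accumulate per-layer auxiliary loss
            final_logits = logits
        return final_logits, total_loss
```

In this sketch the per-layer and per-position losses are simply summed; the point is only that gradient signal reaches every layer and every position directly, which the abstract identifies as important for training at 64-layer depth.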

Published

2019-07-17

How to Cite

Al-Rfou, R., Choe, D., Constant, N., Guo, M., & Jones, L. (2019). Character-Level Language Modeling with Deeper Self-Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3159-3166. https://doi.org/10.1609/aaai.v33i01.33013159

Section

AAAI Technical Track: Machine Learning