Sentence-Wise Smooth Regularization for Sequence to Sequence Learning

Authors

  • Chengyue Gong Peking University
  • Xu Tan Microsoft Research Asia
  • Di He Peking University
  • Tao Qin Microsoft Research Asia

DOI:

https://doi.org/10.1609/aaai.v33i01.33016449

Abstract

Maximum-likelihood estimation (MLE) is widely used for model training in sequence to sequence tasks. It uniformly treats the generation/prediction of each target token as multiclass classification and yields non-smooth prediction probabilities: in a target sequence, some tokens are predicted with small probabilities while others are predicted with large probabilities. Our empirical study finds that this non-smoothness of the prediction probabilities lowers the quality of the generated sequences. In this paper, we propose a sentence-wise regularization method that aims to output smooth prediction probabilities for all the tokens in the target sequence. Our proposed method automatically adjusts the weights and gradients of each token in a sentence to ensure that the predictions in a sequence are uniformly good. Experiments on three neural machine translation tasks and one text summarization task show that our method outperforms the conventional MLE loss on all these tasks and achieves promising BLEU scores on the WMT14 English-German and WMT17 Chinese-English translation tasks.
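The abstract does not give the exact form of the regularizer, so the sketch below is only a rough illustration of the idea in PyTorch: a standard token-averaged MLE term plus a penalty on the within-sentence variance of per-token log-probabilities, which pushes all tokens in a sentence toward uniformly confident predictions. The function name smooth_mle_loss, the variance penalty, and the weight alpha are illustrative assumptions of ours, not the paper's formulation.

    import torch
    import torch.nn.functional as F

    def smooth_mle_loss(logits, targets, pad_id, alpha=1.0):
        """logits: (batch, seq_len, vocab); targets: (batch, seq_len)."""
        log_probs = F.log_softmax(logits, dim=-1)
        # Log-probability assigned to each gold target token.
        tok_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        mask = (targets != pad_id).float()
        n_tok = mask.sum(dim=1).clamp(min=1.0)
        # Standard (token-averaged) MLE term per sentence.
        mle = -(tok_lp * mask).sum(dim=1) / n_tok
        # Smoothness term: variance of token log-probabilities within
        # each sentence, so no token is left far less confident than
        # the rest of the sequence.
        mean_lp = (tok_lp * mask).sum(dim=1, keepdim=True) / n_tok.unsqueeze(1)
        var = ((tok_lp - mean_lp) ** 2 * mask).sum(dim=1) / n_tok
        return (mle + alpha * var).mean()

    # Example: batch of 2 sentences, length 5, vocab of 100 (pad_id = 0).
    logits = torch.randn(2, 5, 100, requires_grad=True)
    targets = torch.randint(1, 100, (2, 5))
    loss = smooth_mle_loss(logits, targets, pad_id=0, alpha=0.5)
    loss.backward()

Because the variance term is differentiable, its gradient automatically up-weights tokens whose log-probability deviates from the sentence mean, which matches the abstract's description of per-token weight and gradient adjustment.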

Published

2019-07-17

How to Cite

Gong, C., Tan, X., He, D., & Qin, T. (2019). Sentence-Wise Smooth Regularization for Sequence to Sequence Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6449-6456. https://doi.org/10.1609/aaai.v33i01.33016449

Section

AAAI Technical Track: Natural Language Processing