Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation

Authors

  • Juntao Li, Peking University
  • Lisong Qiu, Peking University
  • Bo Tang, Southern University of Science and Technology
  • Dongmin Chen, Peking University
  • Dongyan Zhao, Peking University
  • Rui Yan, Peking University

DOI:

https://doi.org/10.1609/aaai.v33i01.33016698

Abstract

Recent successes in open-domain dialogue generation rely mainly on advances in deep neural networks, whose effectiveness depends on the amount of training data. Since acquiring a large amount of data is laborious and expensive in most scenarios, how to effectively utilize the existing data becomes the crux of the issue. In this paper, we use data augmentation techniques to improve the performance of neural dialogue models when training data are insufficient. Specifically, we propose a novel generative model to augment existing data, in which a conditional variational autoencoder (CVAE) is employed as the generator to produce additional training pairs with diversified expressions. To improve the correlation within each augmented training pair, we design a discriminator with adversarial training to supervise the augmentation process. Moreover, we thoroughly investigate various data augmentation schemes for neural dialogue systems with generative models, covering both GAN and CVAE. Experimental results on two open corpora, Weibo and Twitter, demonstrate the superiority of the proposed data augmentation model.
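The abstract only sketches the architecture, so as a rough illustration of the general idea (a query-conditioned CVAE that generates augmented responses, paired with a discriminator that scores query–response correlation for adversarial supervision), a minimal PyTorch sketch might look as follows. All class names (`CVAEAugmenter`, `PairDiscriminator`), layer choices, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of CVAE-based dialogue data augmentation with an
# adversarial pair discriminator; details are assumptions, not the paper's model.
import torch
import torch.nn as nn

class CVAEAugmenter(nn.Module):
    """Encodes a (query, response) pair into a latent z and decodes an
    augmented response conditioned on the query and z."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.query_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.resp_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Recognition network q(z | query, response)
        self.to_mu = nn.Linear(2 * hid_dim, z_dim)
        self.to_logvar = nn.Linear(2 * hid_dim, z_dim)
        # Prior network p(z | query)
        self.prior_mu = nn.Linear(hid_dim, z_dim)
        self.prior_logvar = nn.Linear(hid_dim, z_dim)
        self.decoder = nn.GRU(emb_dim + z_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, query, response):
        _, hq = self.query_enc(self.emb(query))   # hq: (1, B, H)
        _, hr = self.resp_enc(self.emb(response))
        hq, hr = hq.squeeze(0), hr.squeeze(0)
        pair = torch.cat([hq, hr], dim=-1)
        mu, logvar = self.to_mu(pair), self.to_logvar(pair)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        # Condition every decoding step on the query state and latent sample.
        ctx = torch.cat([z, hq], dim=-1).unsqueeze(1).expand(-1, response.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([self.emb(response), ctx], dim=-1))
        logits = self.out(dec_out)
        # KL between recognition posterior and the query-conditioned prior
        pmu, plogvar = self.prior_mu(hq), self.prior_logvar(hq)
        kl = 0.5 * (plogvar - logvar
                    + (logvar.exp() + (mu - pmu) ** 2) / plogvar.exp() - 1).sum(-1).mean()
        return logits, kl

class PairDiscriminator(nn.Module):
    """Scores how well a (possibly augmented) response matches its query;
    trained adversarially so the generator is pushed toward correlated pairs."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.q_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.r_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Sequential(
            nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, query, response):
        _, hq = self.q_enc(self.emb(query))
        _, hr = self.r_enc(self.emb(response))
        return self.score(torch.cat([hq.squeeze(0), hr.squeeze(0)], dim=-1))
```

In such a setup, the augmenter would be trained on the existing pairs with a reconstruction loss plus the KL term, sampled responses would be scored by the discriminator, and pairs judged sufficiently correlated would be added to the training set of the downstream dialogue model.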

Published

2019-07-17

How to Cite

Li, J., Qiu, L., Tang, B., Chen, D., Zhao, D., & Yan, R. (2019). Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6698-6705. https://doi.org/10.1609/aaai.v33i01.33016698

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Natural Language Processing