Leveraging BERT with Mixup for Sentence Classification (Student Abstract)

Authors

  • Amit Jindal, Manipal Institute of Technology
  • Dwaraknath Gnaneshwar, Manipal Institute of Technology
  • Ramit Sawhney, Netaji Subhas Institute of Technology
  • Rajiv Ratn Shah, Indraprastha Institute of Information Technology, Delhi

DOI

https://doi.org/10.1609/aaai.v34i10.7186

Abstract

Good generalization capability is an important quality of well-trained and robust neural networks. However, networks usually struggle when faced with samples outside the training distribution. Mixup is a technique that improves generalization, reduces memorization, and increases adversarial robustness. We apply a variant of Mixup called Manifold Mixup to the sentence classification problem, and present the results along with an ablation study. Our methodology outperforms CNN, LSTM, and vanilla BERT models in generalization.
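
The abstract does not give implementation details, but the core idea of Manifold Mixup on top of a BERT encoder can be sketched as follows. This is a minimal illustration only: the choice to mix at the pooled [CLS] representation, the Beta(0.4, 0.4) interpolation parameter, and the linear classifier head are assumptions for the sketch, not the authors' exact configuration.

```python
# Sketch of Manifold Mixup with a BERT encoder for sentence classification.
# Assumptions (not from the paper): mixing on the pooled [CLS] output,
# alpha = 0.4, and a single linear classification head.
import numpy as np
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertMixupClassifier(nn.Module):
    def __init__(self, num_labels: int, alpha: float = 0.4):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.alpha = alpha  # Beta-distribution parameter for sampling lambda

    def forward(self, input_ids, attention_mask, labels=None):
        # Pooled [CLS] representation of each sentence in the batch.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).pooler_output

        if self.training and labels is not None:
            # Manifold Mixup: interpolate hidden representations (and the
            # corresponding labels) of randomly paired examples, rather than
            # mixing the raw token inputs.
            lam = np.random.beta(self.alpha, self.alpha)
            perm = torch.randperm(h.size(0), device=h.device)
            h_mix = lam * h + (1.0 - lam) * h[perm]
            logits = self.classifier(h_mix)
            loss_fn = nn.CrossEntropyLoss()
            # Loss is the same convex combination of the two targets' losses.
            loss = (lam * loss_fn(logits, labels)
                    + (1.0 - lam) * loss_fn(logits, labels[perm]))
            return loss, logits

        return self.classifier(h)


# Illustrative usage on a toy two-sentence batch.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["great movie", "terrible plot"],
                  return_tensors="pt", padding=True)
model = BertMixupClassifier(num_labels=2).train()
loss, _ = model(batch["input_ids"], batch["attention_mask"],
                labels=torch.tensor([1, 0]))
loss.backward()
```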

Published

2020-04-03

How to Cite

Jindal, A., Gnaneshwar, D., Sawhney, R., & Shah, R. R. (2020). Leveraging BERT with Mixup for Sentence Classification (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13829-13830. https://doi.org/10.1609/aaai.v34i10.7186

Section

Student Abstract Track