AAAI Publications, Thirty-Second AAAI Conference on Artificial Intelligence

Adversarial Dropout for Supervised and Semi-Supervised Learning
Sungrae Park, JunKeon Park, Su-Jin Shin, Il-Chul Moon

Last modified: 2018-04-29

Abstract


Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has improved the generalization performance of neural networks. Whereas adversarial training perturbs individual inputs, this paper introduces adversarial dropout: a minimal set of dropouts that maximizes the divergence between 1) the training supervision and 2) the outputs of the network with those dropouts applied. The identified adversarial dropouts are used to automatically reconfigure the neural network during training, and we demonstrated that simultaneously training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained models to identify the reasons for the performance improvement, and we found that adversarial dropout increases the sparsity of neural networks more than standard dropout does. Finally, we also proved that adversarial dropout is a regularization term with a rank-valued hyper-parameter, in contrast to the continuous-valued parameter that specifies the strength of conventional regularization.
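To make the core idea concrete, here is a minimal NumPy sketch of the search the abstract describes: starting from an ordinary random dropout mask, flip at most a small budget of mask entries so as to maximize the divergence between the training target and the network output. This is an illustrative simplification, not the paper's method: the toy two-layer network, the KL divergence choice, and the brute-force greedy search (in place of any gradient-based approximation) are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, W1, W2, mask):
    # Toy two-layer network: ReLU hidden layer with a dropout mask applied.
    h = np.maximum(x @ W1, 0.0) * mask
    return softmax(h @ W2)

def kl(p, q, eps=1e-12):
    # KL divergence between the training target p and the network output q.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def adversarial_dropout(x, y, W1, W2, base_mask, budget=2):
    """Greedily flip at most `budget` entries of the dropout mask to
    maximize KL(target || output) -- a brute-force stand-in for a
    gradient-based search; the budget keeps the changed set minimal."""
    mask = base_mask.copy()
    for _ in range(budget):
        current = kl(y, forward(x, W1, W2, mask))
        best_gain, best_i = 0.0, None
        for i in range(mask.size):
            trial = mask.copy()
            trial[i] = 1.0 - trial[i]  # toggle one unit's dropout state
            gain = kl(y, forward(x, W1, W2, trial)) - current
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:  # no single flip increases the divergence
            break
        mask[best_i] = 1.0 - mask[best_i]
    return mask

# Toy example with random weights and a stand-in training target.
x = rng.normal(size=4)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
y = softmax(rng.normal(size=3))
base = (rng.random(8) > 0.5).astype(float)  # ordinary random dropout mask

adv = adversarial_dropout(x, y, W1, W2, base, budget=2)
```

In training, the network would then be optimized on both the original mask and the adversarial mask `adv`, encouraging outputs that stay close to the supervision even under the worst-case dropout configuration.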

Keywords


adversarial training; regularization; deep learning
