Generating Adversarial Examples for Holding Robustness of Source Code Processing Models

Authors

  • Huangzhao Zhang, Key Lab of High Confidence Software Technologies
  • Zhuo Li, Key Lab of High Confidence Software Technologies
  • Ge Li, Key Lab of High Confidence Software Technologies
  • Lei Ma, Kyushu University
  • Yang Liu, Nanyang Technological University
  • Zhi Jin, Key Lab of High Confidence Software Technologies

DOI:

https://doi.org/10.1609/aaai.v34i01.5469

Abstract

Automated processing, analysis, and generation of source code are among the key activities in the software and system lifecycle. While deep learning (DL) exhibits a certain level of capability in handling these tasks, current state-of-the-art DL models still suffer from robustness issues and can be easily fooled by adversarial attacks.

Unlike adversarial attacks on images, audio, and natural language, the structured nature of programming languages brings new challenges. In this paper, we propose a Metropolis-Hastings sampling-based identifier renaming technique, named Metropolis-Hastings Modifier (MHM), which generates adversarial examples for DL models specialized for source code processing. Our in-depth evaluation on a functionality classification benchmark demonstrates the effectiveness of MHM in generating adversarial examples of source code. The improved robustness and performance achieved through adversarial training with MHM further confirm the usefulness of DL-based methods for future fully automated source code processing.
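The abstract only outlines the attack at a high level. The following is a minimal Python sketch of how a Metropolis-Hastings identifier-renaming attack of this kind could look; the classifier interface (predict_proba), the regex-based rename_identifier helper, and all parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Metropolis-Hastings identifier renaming, reconstructed
# from the abstract alone. All interfaces below are hypothetical placeholders.
import random
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    # Naive whole-word substitution; a real attack would rename identifiers
    # via the parse tree so the program stays syntactically valid.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def mhm_attack(code, true_label, identifiers, candidates, predict_proba,
               n_iters=200):
    """Propose random identifier renamings and accept or reject each one
    with a Metropolis-Hastings criterion that favors lowering the model's
    probability on the true label, stopping once the prediction flips."""
    current = code
    p_true = predict_proba(current)[true_label]
    for _ in range(n_iters):
        # Propose: rename one randomly chosen identifier to a random candidate.
        old_name = random.choice(identifiers)
        new_name = random.choice(candidates)
        proposal = rename_identifier(current, old_name, new_name)
        q_true = predict_proba(proposal)[true_label]
        # Target distribution ~ 1 / p(true label): proposals that reduce the
        # true-label probability are always accepted; worse ones sometimes are.
        if random.random() < min(1.0, p_true / max(q_true, 1e-12)):
            current, p_true = proposal, q_true
            identifiers = [new_name if i == old_name else i for i in identifiers]
        probs = predict_proba(current)
        if max(range(len(probs)), key=probs.__getitem__) != true_label:
            break  # adversarial example found: model no longer predicts true label
    return current
```

Because the renamings are semantics-preserving, the returned program behaves identically to the original while (if the attack succeeds) being classified differently by the model.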

Published

2020-04-03

How to Cite

Zhang, H., Li, Z., Li, G., Ma, L., Liu, Y., & Jin, Z. (2020). Generating Adversarial Examples for Holding Robustness of Source Code Processing Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 1169-1176. https://doi.org/10.1609/aaai.v34i01.5469

Section

AAAI Technical Track: Applications