Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World

Authors

  • Kaichen Yang, University of Florida
  • Tzungyu Tsai, National Tsing Hua University
  • Honggang Yu, University of Florida
  • Tsung-Yi Ho, National Tsing Hua University
  • Yier Jin, University of Florida

DOI:

https://doi.org/10.1609/aaai.v34i01.5459

Abstract

Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. Current methods for launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs; the problems of attacking object detection models and of mounting adversarial attacks in the physical world are rarely touched. Some prior works have proposed physical adversarial attacks against object detection models, but they are limited in certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simply printing images, we manufacture real metal objects that achieve the adversarial effect. In both indoor and outdoor experiments we show that our physical adversarial objects can fool widely applied object detection models, including SSD, YOLO, and Faster R-CNN, in various environments. We also test our attack on a variety of commercial object detection platforms and demonstrate that it remains valid on these platforms. Considering the potential defense mechanisms our adversarial objects may encounter, we conduct a series of experiments to evaluate the effect of existing defense methods on our physical attack.
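For readers unfamiliar with the digital-domain attacks the abstract contrasts against, the following is a minimal background sketch of "adding noise to digital inputs" using the fast gradient sign method (FGSM). This is general background only, not the paper's physical attack; the model, input, and epsilon value are illustrative placeholders.

    # Background sketch only (FGSM digital perturbation), not the paper's
    # physical-object attack. Assumes a PyTorch image classifier `model`,
    # an input `image` of shape [1, C, H, W] in [0, 1], and a true `label`.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, eps=0.03):
        """Return a copy of `image` perturbed to increase classifier loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)  # loss on the true label
        loss.backward()
        # Step in the gradient-sign direction, then clamp to valid pixel range.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

Physical attacks such as the one proposed in this paper must survive printing or fabrication, viewpoint, and lighting changes, which is why pixel-level noise of this kind does not directly transfer to the physical world.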

Published

2020-04-03

How to Cite

Yang, K., Tsai, T., Yu, H., Ho, T.-Y., & Jin, Y. (2020). Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 1088-1095. https://doi.org/10.1609/aaai.v34i01.5459

Issue

Vol. 34 No. 01 (2020)

Section

AAAI Technical Track: Applications