A Novel Framework for Robustness Analysis of Visual QA Models

Authors

  • Jia-Hong Huang King Abdullah University of Science and Technology
  • Cuong Duc Dao King Abdullah University of Science and Technology
  • Modar Alfadly King Abdullah University of Science and Technology
  • Bernard Ghanem King Abdullah University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33018449

Abstract

Deep neural networks play an essential role in many computer vision tasks, including Visual Question Answering (VQA). Until recently, research focused mainly on their accuracy, but there is now a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying levels of noise. In VQA, adversarial attacks can target the image and/or the main question, yet the latter has received little proper analysis. In this work, we propose a flexible framework that focuses on the language part of VQA, using semantically relevant questions, dubbed basic questions, as controllable noise to evaluate the robustness of VQA models. We hypothesize that the noise level a basic question introduces is negatively correlated with its similarity to the main question. Hence, to apply noise to any given main question, we rank a pool of basic questions by their similarity to it, casting this ranking task as a LASSO optimization problem. We then propose a novel robustness measure, Rscore, and two large-scale basic question datasets (BQDs) in order to standardize robustness analysis for VQA models.
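The ranking step the abstract describes can be sketched as a LASSO problem: stack basic-question embeddings as the columns of a matrix A, take the main-question embedding b, and solve min_x ||Ax − b||² + λ||x||₁, ranking basic questions by the magnitude of their weights in x. The following is a minimal illustration, assuming toy hand-made embeddings and a plain iterative soft-thresholding (ISTA) solver; the paper's actual feature extractor, solver, and parameter choices are not reproduced here.

```python
# Hedged sketch of the basic-question ranking described in the abstract.
# Basic questions are scored by how well they reconstruct the main-question
# embedding under a LASSO objective, min_x ||Ax - b||^2 + lam * ||x||_1.
# The tiny hand-made "embeddings" below are illustrative only.

def soft_threshold(v, t):
    """Proximal operator of t * |.| (shrinks v toward zero by t)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_ista(A, b, lam=0.05, lr=0.05, iters=3000):
    """Solve min_x ||Ax - b||^2 + lam * ||x||_1 by iterative soft-thresholding.

    A is a d x n matrix (list of rows); its n columns are basic-question
    embeddings, and b is the d-dimensional main-question embedding.
    """
    d, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(d)]
        # gradient of the smooth part: 2 * A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(d)) for j in range(n)]
        # gradient step followed by the L1 proximal (soft-threshold) step
        x = [soft_threshold(x[j] - lr * g[j], lr * lam) for j in range(n)]
    return x

# Columns: three hypothetical basic-question embeddings; q0 and q2 are close
# to the main question b, while q1 is nearly orthogonal to it.
A = [[0.9, 0.0, 0.8],
     [0.1, 1.0, 0.2],
     [0.0, 0.0, 0.1]]
b = [1.0, 0.0, 0.0]  # main-question embedding

weights = lasso_ista(A, b)
# Rank basic questions by weight magnitude: a larger weight means higher
# similarity, i.e. a weaker source of noise when appended to the main question.
ranking = sorted(range(len(weights)), key=lambda j: -abs(weights[j]))
```

Under this objective, the L1 penalty drives the weights of dissimilar basic questions toward zero, so the sparsity pattern itself acts as a similarity filter before any ranking is done.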

Published

2019-07-17

How to Cite

Huang, J.-H., Dao, C. D., Alfadly, M., & Ghanem, B. (2019). A Novel Framework for Robustness Analysis of Visual QA Models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8449-8456. https://doi.org/10.1609/aaai.v33i01.33018449

Section

AAAI Technical Track: Vision