Sanity Checks for Saliency Metrics

Authors

  • Richard Tomsett, IBM Research
  • Dan Harborne, Cardiff University
  • Supriyo Chakraborty, IBM Research
  • Prudhvi Gurram, Booz Allen Hamilton
  • Alun Preece, Cardiff University

DOI:

https://doi.org/10.1609/aaai.v34i04.6064

Abstract

Saliency maps are a popular approach to creating post-hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights important pixels. Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their “fidelity”). We therefore investigate existing metrics for evaluating the fidelity of saliency methods (i.e. saliency metrics). We find that there is little consistency in the literature in how such metrics are calculated, and show that such inconsistencies can have a significant effect on the measured fidelity. Further, we apply measures of reliability developed in the psychometric testing literature to assess the consistency of saliency metrics when applied to individual saliency maps. Our results show that saliency metrics can be statistically unreliable and inconsistent, indicating that comparative rankings between saliency methods generated using such metrics can be untrustworthy.
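To make the abstract's notion of "fidelity" concrete, the sketch below shows one common family of saliency metrics: a deletion-style metric that occludes pixels in order of decreasing saliency and tracks how quickly the classifier's score drops. This is a minimal illustration, not the authors' implementation; the callable `model`, the function name `deletion_metric`, the step count, and the zero baseline are all assumptions made for this example.

```python
import numpy as np

def deletion_metric(model, image, saliency, class_idx, steps=20, baseline=0.0):
    """Hypothetical deletion-style fidelity metric (a sketch, not the paper's
    exact procedure): occlude pixels from most to least salient and record the
    class score after each step. A faithful saliency map should make the score
    fall quickly, yielding a low average score across deletion steps."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    perturbed = image.copy()
    scores = [model(perturbed)[class_idx]]       # score on the unmodified image
    chunk = max(1, (h * w) // steps)
    for start in range(0, h * w, chunk):
        rows, cols = np.unravel_index(order[start:start + chunk], (h, w))
        perturbed[rows, cols] = baseline         # occlude this batch of pixels
        scores.append(model(perturbed)[class_idx])
    return float(np.mean(scores))                # lower = higher measured fidelity
```

Note that the free parameters here (the baseline value, the number of steps, the perturbation order) are exactly the kind of calculation choices the abstract identifies as inconsistent across the literature, and which the paper shows can significantly change the measured fidelity.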

Published

2020-04-03

How to Cite

Tomsett, R., Harborne, D., Chakraborty, S., Gurram, P., & Preece, A. (2020). Sanity Checks for Saliency Metrics. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6021-6029. https://doi.org/10.1609/aaai.v34i04.6064

Section

AAAI Technical Track: Machine Learning