The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems

Authors

  • Mahsan Nourani, University of Florida
  • Samia Kabir, Texas A&M University
  • Sina Mohseni, Texas A&M University
  • Eric D. Ragan, University of Florida

DOI:

https://doi.org/10.1609/hcomp.v7i1.5284

Abstract

Machine learning and artificial intelligence algorithms can assist human decision-making and analysis tasks. While such technology shows promise, willingness to use and rely on intelligent systems may depend on whether people can trust and understand them. To address this issue, researchers have explored explainable interfaces that attempt to convey why or how a system produced its output for a given input. However, the effects of meaningful and meaningless explanations (determined by their alignment with human logic) are not well understood, especially for users who are not experts in data science. Additionally, we examine how the inclusion of explanations and their level of meaningfulness affect users' perception of system accuracy. We designed a controlled experiment using an image classification scenario with local explanations to evaluate and better understand these issues. Our results show that whether explanations are human-meaningful can significantly affect perception of a system's accuracy, independent of the actual accuracy observed during system usage. Participants significantly underestimated the system's accuracy when it provided weak, less human-meaningful explanations. For intelligent systems with explainable interfaces, this research therefore demonstrates that users are less likely to correctly judge the accuracy of algorithms that do not operate on human-understandable rationale.

Published

2019-10-28

How to Cite

Nourani, M., Kabir, S., Mohseni, S., & Ragan, E. D. (2019). The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 97-105. https://doi.org/10.1609/hcomp.v7i1.5284