Human Evaluation of Models Built for Interpretability

Authors

  • Isaac Lage, Harvard University
  • Emily Chen, Harvard University
  • Jeffrey He, Harvard University
  • Menaka Narayanan, Harvard University
  • Been Kim, Google
  • Samuel J. Gershman, Harvard University
  • Finale Doshi-Velez, Harvard University

DOI:

https://doi.org/10.1609/hcomp.v7i1.5280

Abstract

Recent years have seen a boom in interest in interpretable machine learning systems built on models that can be understood, at least to some degree, by domain experts. However, exactly what kinds of models are truly human-interpretable remains poorly understood. This work advances our understanding of precisely which factors make models interpretable in the context of decision sets, a specific class of logic-based model. We conduct carefully controlled human-subject experiments in two domains across three tasks based on human simulatability, through which we identify specific types of complexity that affect performance more heavily than others; these trends are consistent across tasks and domains. These results can inform the choice of regularizers during optimization to learn more interpretable models, and their consistency suggests that there may exist common design principles for interpretable machine learning systems.
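For readers unfamiliar with the model class, a decision set is a collection of independent IF-THEN rules, and its complexity is commonly quantified by measures such as the number of rules or the number of conditions per rule, which can then be penalized as regularizers during learning. The sketch below is a hypothetical illustration of such complexity measures, not code from the paper; the Rule class, the toy rules, and the metric functions are invented for exposition, and they do not reflect the specific complexity types studied in the experiments.

```python
# Hypothetical sketch: a decision set as a list of IF-THEN rules, with a few
# simple complexity measures that could serve as regularization penalties.

from dataclasses import dataclass
from typing import List


@dataclass
class Rule:
    literals: List[str]  # conjunction of conditions, e.g. ["age > 60", "smoker"]
    label: str           # prediction made when all literals hold


# A toy decision set with three rules (illustrative only).
decision_set = [
    Rule(["age > 60", "smoker"], "high risk"),
    Rule(["bmi > 30"], "medium risk"),
    Rule(["exercises regularly", "non-smoker"], "low risk"),
]


def num_rules(ds: List[Rule]) -> int:
    """Complexity as the number of rules in the set."""
    return len(ds)


def total_literals(ds: List[Rule]) -> int:
    """Complexity as the total number of conditions across all rules."""
    return sum(len(r.literals) for r in ds)


def max_rule_length(ds: List[Rule]) -> int:
    """Complexity as the length of the longest single rule."""
    return max(len(r.literals) for r in ds)


if __name__ == "__main__":
    print(num_rules(decision_set))        # 3
    print(total_literals(decision_set))   # 5
    print(max_rule_length(decision_set))  # 2
```

In an optimization-based learner, one or more of these measures would typically be added to the training objective with a weight, trading off accuracy against the kind of complexity the designer wants to limit.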

Published

2019-10-28

How to Cite

Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S. J., & Doshi-Velez, F. (2019). Human Evaluation of Models Built for Interpretability. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 59-67. https://doi.org/10.1609/hcomp.v7i1.5280