A Hybrid Approach to Identifying Unknown Unknowns of Predictive Models
When predictive models are deployed in the real world, the confidence of a given prediction is often used as a signal of how much it should be trusted. It is therefore critical to identify instances for which the model is highly confident yet incorrect, i.e., its unknown unknowns. We describe a hybrid approach to identifying unknown unknowns that combines previous crowdsourcing and algorithmic strategies and addresses some of their weaknesses. In particular, we propose learning a set of interpretable decision rules that approximate how the model makes high-confidence predictions. We devise a crowdsourcing task in which workers are presented with a rule and challenged to generate an instance that “contradicts” it. A bandit algorithm is used to select the most promising rules to present to workers. We evaluate our method in a user study on Amazon Mechanical Turk. Experimental results on three datasets indicate that our approach discovers unknown unknowns more efficiently than the state-of-the-art.
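To make the bandit step concrete, the sketch below shows one standard way such a selection loop could be realized, using the UCB1 strategy: each decision rule is treated as an arm, and a reward of 1 stands in for a worker successfully generating a contradicting instance (an unknown unknown) for the presented rule. The per-rule success probabilities and the use of UCB1 specifically are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the rule index maximizing the UCB1 score (mean reward + exploration bonus)."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # present every rule at least once before exploiting
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]),
    )

def run_bandit(success_probs, rounds, seed=0):
    """Simulate presenting rules to workers; a 'success' models a worker
    finding an instance that contradicts the shown rule.
    `success_probs` are hypothetical per-rule discovery rates."""
    rng = random.Random(seed)
    k = len(success_probs)
    counts = [0] * k        # times each rule was presented
    rewards = [0.0] * k     # total unknown unknowns discovered per rule
    for t in range(1, rounds + 1):
        arm = ucb1_select(counts, rewards, t)
        reward = 1.0 if rng.random() < success_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return counts

# Over many rounds, the rule with the highest discovery rate is presented most often.
counts = run_bandit([0.1, 0.6, 0.3], rounds=500)
```

In this simulation the bandit concentrates worker effort on the rules whose contradictions are found most readily, which is the efficiency gain the abstract attributes to adaptive rule selection.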