Why Question Machine Learning Evaluation Methods (An Illustrative Review of the Shortcomings of Current Methods)

Nathalie Japkowicz

The evaluation of classifiers or learning algorithms is not a topic that has generally been given much thought in the fields of Machine Learning and Data Mining. More often than not, common off-the-shelf metrics such as Accuracy, Precision/Recall, and ROC Analysis, as well as confidence estimation methods such as the t-test, are applied without much attention being paid to their meaning. The purpose of this paper is to give the reader an intuitive idea of what could go wrong with our commonly used evaluation methods. In particular, we show through examples that, since evaluation metrics and confidence estimation methods summarize a system's performance, they can at times obscure important behaviors of the hypotheses or algorithms under consideration. We hope that this very simple review of some of the problems surrounding evaluation will sensitize Machine Learning and Data Mining researchers to the issue and encourage us to think twice before selecting and applying an evaluation method.
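The abstract's central claim, that a summary metric can obscure important behavior, can be illustrated with a small sketch (not taken from the paper itself): on an imbalanced test set, two classifiers with very different behavior can receive the identical accuracy score, while precision and recall expose the difference. The scenario and numbers below are hypothetical.

```python
# Hypothetical illustration: two classifiers with identical accuracy but
# very different behavior on an imbalanced test set of 100 examples
# (90 negatives, 10 positives).

def metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Classifier A: labels every example negative, so it misses all positives.
acc_a, prec_a, rec_a = metrics(tp=0, fp=0, tn=90, fn=10)

# Classifier B: detects all positives but raises 10 false alarms.
acc_b, prec_b, rec_b = metrics(tp=10, fp=10, tn=80, fn=0)

print(f"A: accuracy={acc_a:.2f} precision={prec_a:.2f} recall={rec_a:.2f}")
print(f"B: accuracy={acc_b:.2f} precision={prec_b:.2f} recall={rec_b:.2f}")
# Both classifiers reach 0.90 accuracy, yet A never detects a positive
# while B detects every one: accuracy alone hides the distinction.
```

A reader comparing only the two accuracy figures would judge the classifiers equivalent, which is exactly the kind of obscured behavior the paper warns about.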

Subjects: 12. Machine Learning and Discovery; 9. Foundational Issues

Submitted: May 12, 2006
