Ranking -- Methods for Flexible Evaluation and Efficient Comparison of Classification Performance

Saharon Rosset

We present the notion of Ranking for the evaluation of two-class classifiers. Ranking is based on using the ordering information contained in the output of a scoring model, rather than just setting a classification threshold. Using this ordering information, we can evaluate the model's performance with regard to complex goal functions, such as correctly identifying the k customers most and/or least likely to respond out of a group of potential customers. Ranking also allows us to compare classifiers and select the better one more efficiently, even for the standard goal of minimizing misclassification rate. This feature of Ranking is illustrated by simulation results. We also discuss it theoretically, showing the similarity in structure between the reducible (model-dependent) parts of the Linear Ranking score and the standard Misclassification Rate score, and characterizing the situations in which we expect Linear Ranking to outperform Misclassification Rate as a method for model discrimination.
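To make the ranking-based evaluation concrete, the sketch below contrasts three ways of scoring a two-class model on the same held-out data: thresholded misclassification rate, a rank-based score, and a top-k responder count. The abstract does not give the exact form of the Linear Ranking score, so the linear-in-rank formulation here (a Wilcoxon/AUC-style statistic) and all function names and parameters are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch only: the paper's Linear Ranking score is not defined
# in this abstract, so the rank-sum formulation below (Wilcoxon/AUC-style)
# and the helper names are assumptions made for demonstration.
import numpy as np


def misclassification_rate(scores, labels, threshold=0.5):
    """Fraction of examples misclassified when thresholding the scores."""
    predictions = (scores >= threshold).astype(int)
    return float(np.mean(predictions != labels))


def rank_based_score(scores, labels):
    """Rank-sum statistic of the positives, scaled to [0, 1].

    Higher values mean positives are concentrated at the top of the
    score ordering; with no ties this equals the empirical AUC.
    """
    order = np.argsort(scores)                     # ascending by score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)


def top_k_hits(scores, labels, k):
    """How many of the k highest-scoring examples are true responders."""
    top_k = np.argsort(scores)[-k:]
    return int(labels[top_k].sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    # Two hypothetical scoring models of different quality
    scores_a = labels * 0.6 + rng.normal(0.0, 0.5, size=1000)
    scores_b = labels * 0.3 + rng.normal(0.0, 0.5, size=1000)
    for name, s in [("model A", scores_a), ("model B", scores_b)]:
        print(name,
              "misclassification:", round(misclassification_rate(s, labels), 3),
              "rank-based score:", round(rank_based_score(s, labels), 3),
              "top-50 responders:", top_k_hits(s, labels, 50))
```

The point of the comparison is that the rank-based score and the top-k count use the full ordering produced by each model, whereas the misclassification rate depends only on which side of a single threshold each score falls.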
