A Framework for Analyzing Skew in Evaluation Metrics

Alexander Liu, Joydeep Ghosh, Cheryl Martin

For many evaluation metrics used in classification problems, correctly classifying an additional point from one class has a different effect on the metric's value than correctly classifying an additional point from another class. In this paper, we describe a method for quantifying these effects in terms of "metric skew". After describing how to find the skew of each class under a given evaluation metric, we present the skews of several common evaluation metrics. In particular, we show that these skews offer a new viewpoint on metrics, from which both previously known and new properties of several popular metrics can be observed.
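The paper's formal definition of metric skew is not reproduced in this abstract, but the underlying phenomenon is easy to demonstrate. The sketch below, using hypothetical confusion-matrix counts, shows that under the F1 score, one additional correctly classified positive point changes the metric by a different amount than one additional correctly classified negative point:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score computed from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for an imbalanced problem:
# true positives, false positives, false negatives, true negatives.
tp, fp, fn, tn = 40, 10, 10, 940
base = f1_score(tp, fp, fn)

# Correctly classify one more positive point: a false negative becomes
# a true positive.
gain_pos = f1_score(tp + 1, fp, fn - 1) - base

# Correctly classify one more negative point: a false positive becomes
# a true negative.
gain_neg = f1_score(tp, fp - 1, fn) - base

print(f"baseline F1:                      {base:.4f}")
print(f"gain from one more correct positive: {gain_pos:.4f}")
print(f"gain from one more correct negative: {gain_neg:.4f}")
```

Both gains are positive, but they are unequal: with these counts, the positive class contributes more per correctly classified point, which is the kind of per-class asymmetry the paper's skew analysis quantifies.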

Subjects: 12. Machine Learning and Discovery; 9. Foundational Issues

Submitted: May 11, 2007
