Marc Sebban and Richard Nock, Université des Antilles-Guyane, France
Theoretically well founded, Support Vector Machines (SVMs) are well known to be suited to solving classification problems efficiently. Although improved generalization is the main goal of this type of learning machine, recent works have tried to use them differently. For instance, feature selection has recently been viewed as an indirect consequence of the SVM approach. In this paper, we also exploit SVMs differently from their original purpose: we investigate them as a data reduction technique, useful for improving case-based learning algorithms, which are sensitive to noise and computationally expensive. Adopting the margin maximization principle for reducing the structural risk, our strategy allows us not only to eliminate irrelevant instances but also to improve the performance of the standard k-Nearest-Neighbor classifier. A wide comparative study on several benchmarks from the UCI repository shows the utility of our approach.