Gholamreza Nakhaeizadeh, Charles Taylor, Carsten Lanquillon
This paper develops the concept of usefulness in the context of supervised learning. We argue that usefulness can be used to improve the performance of classification rules (as measured by error rate), as well as to reduce their storage requirements (or the cost of their derivation). We also indicate how usefulness can be applied in a dynamic setting, in which the distribution of at least one class changes over time. Three algorithms are used to exemplify our proposals. We first review a dynamic nearest neighbor classifier, and then develop dynamic versions of Learning Vector Quantization and a Radial Basis Function network. All three algorithms are adapted to capture dynamic aspects of real-world data sets by keeping a record of usefulness and by taking the age of the observations into account. The methods are evaluated on real data from the credit industry.
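To make the abstract's idea concrete, the following is a minimal illustrative sketch of an instance-based classifier that records a usefulness score for each stored observation and ages those scores over time; it is not the paper's algorithm, and the scoring rules, decay factor, thresholds, and class name below are all assumptions introduced for illustration:

```python
class DynamicNN:
    """Hypothetical 1-NN classifier whose stored examples carry a
    usefulness score that is rewarded on correct predictions,
    penalised on mistakes, and decayed with age."""

    def __init__(self, decay=0.95, penalty=0.5, min_usefulness=0.1,
                 max_size=50):
        self.decay = decay                    # age-based down-weighting
        self.penalty = penalty                # deduction for a wrong vote
        self.min_usefulness = min_usefulness  # pruning threshold
        self.max_size = max_size              # storage budget
        self.store = []                       # entries: [x, label, usefulness]

    def _nearest(self, x):
        # squared Euclidean distance to each stored example
        return min(self.store,
                   key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], x)))

    def predict(self, x):
        return self._nearest(x)[1]

    def learn(self, x, label):
        if self.store:
            nearest = self._nearest(x)
            # reward the neighbour that would classify x correctly,
            # penalise it otherwise
            if nearest[1] == label:
                nearest[2] += 1.0
            else:
                nearest[2] -= self.penalty
        # ageing: decay all usefulness scores, then prune examples
        # that have become useless
        for e in self.store:
            e[2] *= self.decay
        self.store = [e for e in self.store if e[2] >= self.min_usefulness]
        self.store.append([list(x), label, 1.0])
        # enforce the storage budget by dropping the least useful example
        if len(self.store) > self.max_size:
            self.store.remove(min(self.store, key=lambda e: e[2]))
```

Because decay and pruning continually remove old, unhelpful examples, storage stays bounded and the classifier can track a class whose distribution drifts over time, which is the dynamic setting the paper addresses.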