Instance-Based Prediction of Continuous Values

Tony Townsend-Weber and Dennis Kibler

Learning to predict a continuous value rather than a discrete class is an important problem in machine learning. Instance-based algorithms can be effectively used to solve this problem. Two methods of classification using weighted and unweighted averages, based on the k-nearest-neighbor algorithm, are presented and shown empirically to do equally well. Two methods for eliminating irrelevant attributes, 1-lookahead and sequential backward elimination, are presented and shown empirically to do equally well. In seven domains, choosing the best k for k-nearest-neighbor is shown to reduce the classification error by 1.3% over arbitrarily using k = 3, and eliminating irrelevant attributes reduces the error by 0.5%. Instance-based algorithms are compared to several other algorithms, including regression + instances, model trees + instances, neural networks + instances, regression trees, and regression rules. The instance-based approach is the best in a few domains, but overall is slightly less accurate than model trees + instances and neural networks + instances.
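The two averaging methods described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Euclidean distance over numeric attributes and inverse-distance weighting for the weighted variant, neither of which is specified in the abstract.

```python
import math

def knn_predict(train, query, k=3, weighted=False):
    """Predict a continuous value for `query` from the k nearest training instances.

    train: list of (feature_tuple, target_value) pairs.
    weighted=False -> unweighted mean of the k nearest targets.
    weighted=True  -> inverse-distance weighted mean (assumed scheme).
    """
    # Sort training instances by Euclidean distance to the query,
    # then keep the k nearest.
    nearest = sorted((math.dist(x, query), y) for x, y in train)[:k]
    if not weighted:
        # Unweighted average of the k nearest targets.
        return sum(y for _, y in nearest) / k
    # Inverse-distance weights; eps avoids division by zero on exact matches.
    eps = 1e-12
    weights = [1.0 / (d + eps) for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)
```

With k = 3, an exact match in the training set dominates the weighted average, while the unweighted average treats all three neighbors equally.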
