David L. Waltz
Classification learning applies to a wide range of tasks, from diagnosis and troubleshooting to pattern recognition and keyword assignment. Many methods have been used to build classification systems, including artificial neural networks, rule-based expert systems (both hand-built and inductively learned), fuzzy rule systems, memory-based and case-based systems and nearest neighbor systems, generalized radial basis functions, classifier systems, and others. Research subcommunities have tended to specialize in one or another of these mechanisms, and many papers have argued for the superiority of one method vis-a-vis others. I will argue that none of these methods is universal, nor does any one method have a priori superiority over all others. To support this argument, I show that all these methods are related, and in fact can be viewed as lying at points along a continuous spectrum, with memory-based methods occupying a pivotal position. I further argue that the selection of one or another of these methods should generally be seen as an engineering choice, even when the research goal is to explore the potential of some method for explaining aspects of cognition; methods and problem areas must be considered together. Finally, a set of properties is identified that can be used to characterize each of the classification methods, and to begin to build an engineering science for classification tasks.
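To make the pivotal case concrete, the following is a minimal sketch of a memory-based (nearest neighbor) classifier: training amounts to storing labeled examples, and classification defers all generalization to query time by returning the label of the closest stored case. The function name, the Euclidean distance choice, and the toy data are illustrative assumptions, not drawn from the text.

```python
import math

def nearest_neighbor_classify(query, examples):
    """Classify `query` by the label of its closest stored example.

    `examples` is a list of (feature_vector, label) pairs; distance is
    Euclidean (an illustrative choice). Storing raw cases and deferring
    generalization to query time is the hallmark of memory-based methods.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    _, label = min(examples, key=lambda ex: dist(query, ex[0]))
    return label

# Hypothetical toy data: two small clusters with made-up labels.
training = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
            ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
print(nearest_neighbor_classify((0.1, 0.0), training))  # → A
```

Varying the distance metric, the number of neighbors consulted, or how stored cases are compressed moves this scheme toward the other points on the spectrum the abstract describes (e.g., radial basis functions or induced rules).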