Decision Tree Rule Reduction Using Linear Classifiers in Multilayer Perceptron

DaeEun Kim, University of Edinburgh, United Kingdom and Sea Woo Kim, KAIST, Korea

It has been shown that neural networks model complex relations among input attributes in sample data better than a direct application of induction trees. We propose extracting concise rules that capture relations among continuous-valued input attributes. These relations, expressed as a set of linear classifiers, can be obtained from neural network modeling based on back-propagation: each linear classifier is derived from a linear combination of the input attributes and the neuron weights in the first hidden layer of the network. We show in this paper that building a decision tree over the linear classifiers extracted from a multilayer perceptron reduces the number of rules. We have tested this method on several data sets and compared the results with those of decision trees.
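The approach described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the dataset, network size, and thresholding choices below are all assumptions made for the example. Each first-hidden-layer neuron of a trained multilayer perceptron is treated as a linear classifier over the inputs, its thresholded output becomes a binary feature, and a decision tree is then induced over those features instead of the raw continuous attributes.

```python
# Sketch (assumed details, not the paper's exact method): extract linear
# classifiers from the first hidden layer of an MLP and build a decision
# tree over their binary outputs.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # example continuous-valued data
X = StandardScaler().fit_transform(X)

# Multilayer perceptron trained with back-propagation; 4 hidden units is an
# arbitrary illustrative choice.
mlp = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Each hidden neuron j defines a linear classifier sign(W[:, j] . x + b[j])
# from a linear combination of input attributes and first-layer weights.
W, b = mlp.coefs_[0], mlp.intercepts_[0]
Z = (X @ W + b > 0).astype(int)  # binary outputs of the linear classifiers

# Decision tree over the few binary classifier outputs: with 4 binary
# features it can have at most 2**4 = 16 leaves (rules), typically far
# fewer splits than a tree over the raw continuous attributes.
tree_on_classifiers = DecisionTreeClassifier(random_state=0).fit(Z, y)
tree_on_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree_on_classifiers.get_n_leaves(), tree_on_raw.get_n_leaves())
```

Because the tree over classifier outputs splits only on a handful of binary features, its rule count is bounded by the number of distinct feature combinations, which is the source of the rule reduction the paper reports.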

This page is copyrighted by AAAI. All rights reserved.