Discovery of Relevant New Features by Generating Non-Linear Decision Trees

Andreas Ittner, Michael Schlosser

Most decision tree algorithms based on selective induction use univariate, i.e. axis-parallel, tests at each internal node of a tree. Oblique decision trees use multivariate linear tests at each non-leaf node. One well-known limitation of selective induction algorithms, however, is their inadequate description of hypotheses in terms of the task-supplied original features. To overcome this limitation, this paper reports a novel approach to constructive induction, called non-linear decision trees. The crux of this method is the generation of new features and the augmentation of the original primitive features with these new ones. The method can be considered a powerful tool in KDD, because the constructed new features remain understandable and permit interpretation by experts. The resulting non-linear decision trees are more accurate than their axis-parallel or oblique counterparts. Experiments on several artificial and real-world data sets demonstrate this property.
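The constructive-induction idea described above can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the authors' exact algorithm): augment the original primitive features with understandable non-linear combinations, here degree-2 monomials, and fit an ordinary axis-parallel decision tree on the augmented set. A split on a constructed feature such as x0*x1 then acts as a non-linear test in the original feature space.

```python
# Illustrative sketch of constructive induction for decision trees.
# Assumptions: scikit-learn's PolynomialFeatures stands in for the
# feature-generation step; the data set and degree are hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
# Class is separable only non-linearly (by the sign of x0 * x1),
# so no axis-parallel test on x0 or x1 alone suffices.
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Constructive step: new features are interpretable combinations
# (degree-2 monomials) of the task-supplied originals.
poly = PolynomialFeatures(degree=2, include_bias=False)
X_aug = poly.fit_transform(X)  # columns: x0, x1, x0^2, x0*x1, x1^2

# A shallow tree on the augmented features can split on x0*x1 directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_aug, y)
acc = tree.score(X_aug, y)
```

In this toy setting a single split on the constructed x0*x1 column separates the classes, which a purely axis-parallel tree over the original two features cannot do with one test.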

Copyright © AAAI. All rights reserved.