On Proper Handling of Multi-collinear Inputs and Errors-in-Variables with Explicit and Implicit Neural Models

Seppo J. Karrila and Neil R. Euliano

When model input variables appear redundant, it is common practice to simply drop some of them until the redundancy is removed, prior to model identification. As a result, the final model has forgotten the interdependency in the original input data, which may be an essential condition for model validity. We provide a practical approach to neural network modeling in which the final model also incorporates a "memory" of the multi-collinearity in the training inputs, and provides a check on new input vectors for consistency with this pattern. We approach this problem stepwise, pointing out the benefits gained or lost at each step as model complexity is increased. The steps lead in a natural way to building implicit models, which also handle noise in the inputs in a manner closely resembling total least squares. The practical tool for this is a feedforward network with a specifically selected configuration.
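The consistency check mentioned above can be illustrated with a minimal sketch (this is not the authors' network-based method; it only demonstrates the underlying idea): the near-null directions of the training inputs, found here via SVD, encode the collinearity pattern, and a new input vector's component along those directions measures how strongly it violates that pattern.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: remember the
# multi-collinearity pattern of training inputs and check new inputs
# against it.

rng = np.random.default_rng(0)

# Training inputs with a built-in linear dependency: x3 = x1 + x2
X = rng.normal(size=(200, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1]])

# Center the data and find the directions with (near-)zero variance;
# these are the directions the training inputs never vary in.
mean = X.mean(axis=0)
_, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
tol = 1e-8 * s[0]
null_dirs = Vt[s < tol]

def consistency_residual(x_new):
    """Size of a new input's component along the 'forbidden' directions;
    near zero when x_new respects the training collinearity pattern."""
    return np.linalg.norm(null_dirs @ (x_new - mean))

consistent = np.array([1.0, 2.0, 3.0])     # respects x3 = x1 + x2
inconsistent = np.array([1.0, 2.0, 10.0])  # violates the pattern

print(consistency_residual(consistent))    # near zero
print(consistency_residual(inconsistent))  # clearly nonzero
```

A model that retains `mean` and `null_dirs` alongside its weights can thus flag inputs that fall outside the region where the model was identified, rather than silently extrapolating.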

This page is copyrighted by AAAI. All rights reserved.