Analysis of the Internal Representations in Neural Networks for Machine Intelligence

Lai-Wan Chan

We examined the internal representation of the training patterns in multi-layer perceptrons and demonstrated that the connection weights between layers effectively transform the representation of the information from one layer to the next in a meaningful way. The internal code, which can take analog or binary form, is found to depend on a number of factors, including the choice of an appropriate representation of the training patterns, the similarities between the patterns, and the network structure, i.e. the number of hidden layers and the number of hidden units in each layer.
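To make the notion of an internal code concrete, the following minimal sketch (not from the paper; the XOR task, the two-unit hidden layer, and all variable names are illustrative assumptions) trains a one-hidden-layer perceptron with backpropagation and then prints the hidden-layer activations, i.e. the internal representation each training pattern is mapped to:

    # Minimal illustrative sketch, not the paper's original code.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR training patterns: similar inputs must map to different outputs,
    # which forces the hidden layer to re-encode the input representation.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 2 units; W1 maps the input code to the hidden
    # code, W2 maps the hidden code to the output.
    W1 = rng.normal(scale=1.0, size=(2, 2))
    b1 = np.zeros(2)
    W2 = rng.normal(scale=1.0, size=(2, 1))
    b2 = np.zeros(1)

    lr = 0.5
    for _ in range(20000):
        H = sigmoid(X @ W1 + b1)    # hidden activations: the internal code
        out = sigmoid(H @ W2 + b2)

        # Backpropagation under the cross-entropy loss (the output delta
        # simplifies to out - y for a sigmoid output unit).
        d_out = out - y
        d_H = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_H
        b1 -= lr * d_H.sum(axis=0)

    # Print the internal code assigned to each training pattern.
    for pattern, code in zip(X, sigmoid(X @ W1 + b1)):
        print(pattern, "->", np.round(code, 2))

On a task like XOR the trained hidden activations typically saturate toward 0 or 1, giving an effectively binary internal code; with more hidden units than the task requires, intermediate (analog) values often remain, consistent with the dependence on network structure noted above.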
