Dana H. Ballard
In the development of large-scale knowledge networks, much recent progress has been inspired by connections to neurobiology. An important component of any "neural" network is an accompanying learning algorithm. To be biologically plausible, such an algorithm must work for very large numbers of units. Studies of large-scale systems have so far been restricted to systems without internal units, that is, units with no direct connections to the input or output. Internal units are crucial to such systems because they are the means by which a system can encode high-order regularities (or invariants) that are implicit in its inputs and outputs. Computer simulations of learning with internal units have been restricted to small-scale systems. This paper describes a way of coupling autoassociative learning modules into hierarchies; this coupling should greatly improve the performance of learning algorithms in large-scale systems. The idea has been tested experimentally with positive results.
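The coupling idea can be illustrated with a minimal sketch, not the paper's own algorithm: each module is a one-hidden-layer autoassociative (autoencoder) network trained to reproduce its input, and the hidden code of one module serves as the input to the next module in the hierarchy. All layer sizes, the learning rate, and the toy patterns below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Autoencoder:
    """One-hidden-layer autoassociative module trained by gradient descent."""

    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)

    def encode(self, x):
        return sigmoid(x @ self.W1 + self.b1)

    def reconstruct(self, x):
        return sigmoid(self.encode(x) @ self.W2 + self.b2)

    def train(self, X, lr=1.0, epochs=3000):
        for _ in range(epochs):
            h = sigmoid(X @ self.W1 + self.b1)   # hidden code
            y = sigmoid(h @ self.W2 + self.b2)   # reconstruction of input
            # backpropagate squared reconstruction error
            dy = (y - X) * y * (1.0 - y)
            dh = (dy @ self.W2.T) * h * (1.0 - h)
            self.W2 -= lr * h.T @ dy / len(X)
            self.b2 -= lr * dy.mean(axis=0)
            self.W1 -= lr * X.T @ dh / len(X)
            self.b1 -= lr * dh.mean(axis=0)

# Toy binary patterns with redundant (high-order) structure.
X = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 1],
              [0, 0, 0, 0]], dtype=float)

# Module 1 compresses the raw input; module 2 is trained on module 1's
# hidden codes, forming a two-level hierarchy of autoassociative modules.
m1 = Autoencoder(4, 3)
m1.train(X)
codes = m1.encode(X)
m2 = Autoencoder(3, 2)
m2.train(codes)

err1 = np.mean((m1.reconstruct(X) - X) ** 2)
err2 = np.mean((m2.reconstruct(codes) - codes) ** 2)
```

Each module learns locally on its own input, so no error signal has to traverse the whole hierarchy, which is one reason such a scheme scales more readily to large numbers of units than monolithic training.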