Interpreting Network Structure

Mozer and Smolensky (1989, p. 3) have noted that "one thing that connectionist networks have in common with brains is that if you open them up and peer inside, all you can see is a big pile of goo." For researchers interested in using ANNs to model cognitive or perceptual processes, this is unfortunate. For example, McCloskey (1991) makes a strong argument that an ANN cannot be viewed as a theory, a simulation of a theory, or even a demonstration of a specific theoretical point, because of the general inability to interpret the structure of a trained network. As a counterpoint to such claims, we have recently discovered that, for binary input patterns, the value unit architecture has an emergent characteristic that permits a rich interpretation of the internal structure of value unit networks.

Consider using a relatively large number of patterns to train an ANN. After training, one could present each pattern to the network once again and record the activity that it produced in each hidden unit. One could then use this information to create a jittered density plot for each hidden unit. In such a plot, the horizontal position of each plotted point represents the activation produced by one of the training patterns, and a random vertical jitter is introduced to prevent points from overlapping (Chambers, Cleveland, Kleiner & Tukey, 1983, pp. 19-21). The purpose of the density plot is to give some indication of how the activities that the training patterns produce in the unit are distributed.
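As an illustration, the following Python sketch produces such a plot from a vector of hidden unit activations. The function and variable names, and the fabricated activation values in the example, are ours and are meant only to show the idea, not to reproduce any particular network.

```python
import numpy as np
import matplotlib.pyplot as plt

def jittered_density_plot(activations, jitter=0.4, seed=0):
    """One point per training pattern: x = the activation that pattern
    produced in the hidden unit, y = random jitter so points don't overlap."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(-jitter, jitter, size=len(activations))
    plt.scatter(activations, y, s=8)
    plt.yticks([])                        # the vertical axis carries no information
    plt.xlabel("Hidden unit activation")
    plt.show()

# Fabricated activations standing in for a trained hidden value unit:
rng = np.random.default_rng(1)
acts = np.concatenate([rng.normal(0.05, 0.02, 200), rng.normal(0.95, 0.02, 150)])
jittered_density_plot(acts)
```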

[IMAGE OF DENSITY PLOT] The density plot for an integration device is often smeared.

Berkeley, Dawson, Medler and Schopflocher (1995) have found that while the jittered density plots for standard ANN processing units are typically smeared, the same plots for value units are organized into distinct bands or stripes. Furthermore, all the points that fall into a single band share a set of common properties. One can use these properties to identify the specific features in the input patterns that are being detected by the hidden units, and one can also identify the combinations of these features that are used to mediate the network's output responses.

[IMAGE OF DENSITY PLOT] The density plot for a value unit is often highly structured.
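To make the notion of a band concrete, the sketch below shows one simple way such bands might be recovered: sort the activations a hidden unit produces across the training set, start a new band wherever a sufficiently large gap appears, and then list the binary input features that are constant within each band. The gap threshold and the feature summary are our own illustrative choices, not the specific procedure used by Berkeley et al. (1995).

```python
import numpy as np

def find_bands(activations, gap=0.05):
    """Return a list of index arrays, one per band of similar activations."""
    activations = np.asarray(activations)
    order = np.argsort(activations)
    sorted_acts = activations[order]
    # Start a new band wherever neighbouring sorted activations differ by more than `gap`.
    breaks = np.where(np.diff(sorted_acts) > gap)[0] + 1
    return [order[chunk] for chunk in np.split(np.arange(len(order)), breaks)]

def describe_bands(activations, patterns, gap=0.05):
    """Report, for each band, which binary input features are constant
    across all of the training patterns that fall into it."""
    patterns = np.asarray(patterns)
    for i, members in enumerate(find_bands(activations, gap)):
        band = patterns[members]
        constant = np.all(band == band[0], axis=0)
        print(f"Band {i}: {len(members)} patterns; "
              f"features constant across the band: {np.flatnonzero(constant)}")
```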

For example, Berkeley et al. (1995) trained a network on a set of logic problems originally investigated by Bechtel and Abrahamsen (1991). The network was taught to identify the type of logic problem being presented, and to determine whether the problem was valid or invalid. A jittered density plot analysis of hidden value unit activities for this network revealed a tremendous degree of banding, and each band reflected a set of important logical properties (e.g., some bands represented the type of conjunction used in the logic problem; others represented balancing between variables in different parts of the problem). They went on to show how these bands could be interpreted to reveal formal rules of logic "in the network's head".

We have also been interested in exploring interpretive techniques that can be applied to standard ANN architectures. Within perceptual psychology, some researchers adhere to what is known as the neuron doctrine (e.g., Barlow, 1972). According to this doctrine, in order to determine what visual property a neuron is most sensitive to, one must identify the neuron's "trigger feature": the visual pattern that produces the most activity from the neuron. With the monotonic activation function that characterizes a standard processing unit in an ANN, it is very easy to identify the trigger feature for a hidden unit simply by inspecting the connection weights that lead into the unit. Dawson, Kremer and Gannon (1994) used this technique to demonstrate that, under certain conditions, the hidden units in an ANN model of the early visual pathway could evolve biologically relevant receptive fields.
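For binary inputs, this inspection can be made explicit: because a monotonic activation function increases with net input, the trigger feature simply turns on every input with a positive incoming weight and turns off every input with a negative one. The short sketch below illustrates this; the function name and the example weights are ours.

```python
import numpy as np

def trigger_feature(weights_into_unit):
    """Binary input pattern that maximizes a monotonic unit's net input:
    1 where the incoming weight is positive, 0 where it is negative."""
    return (np.asarray(weights_into_unit) > 0).astype(int)

print(trigger_feature([0.8, -1.2, 0.3, -0.1]))   # prints [1 0 1 0]
```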
