


One way to think about an artificial neural network is as a dynamic system that prefers to move to states of maximal stability. On this view the network descends to local minima of energy in an energy space, where the network's location in the space is given by its current settings (the values of connection weights, biases, and unit activities), and the height of that location indicates the energy associated with those states. Hopfield networks (Hopfield, 1982, 1984; Hopfield & Tank, 1985) are explicitly formulated in these terms: Hopfield defined an energy metric for such networks and proved that whenever units change their activity, energy necessarily moves downhill. On this view, learning new information is equivalent to inserting new local minima at particular locations in the energy space, which draws an analogy between network energy and recall error. It also draws a parallel between networks and other systems: Hopfield himself was inspired by the physical properties of spin glasses, and Ashby's (1956, 1960) homeostat can be considered a physical realization of a Hopfield network that changes its states to minimize electrical energy.
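The energy-descent behaviour described above can be sketched in a few lines of NumPy. This is a toy illustration, not Hopfield's original formulation in full: the pattern, network size, and update schedule are arbitrary choices, and biases are omitted for simplicity. A single bipolar pattern is stored with a Hebbian outer-product rule, the state is corrupted, and asynchronous unit updates are applied; the energy recorded after each update is non-increasing, and the network settles into the stored minimum.

```python
import numpy as np

# Store one bipolar pattern with a Hebbian outer-product rule
# (an illustrative toy example; pattern and size chosen arbitrarily).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = pattern.size
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # Hopfield networks have no self-connections

def energy(s):
    # Hopfield's energy for binary units, biases omitted: E = -1/2 * s^T W s
    return -0.5 * s @ W @ s

# Start from a corrupted copy of the stored pattern (three units flipped).
state = pattern.copy()
state[:3] *= -1

energies = [energy(state)]
for _ in range(2):          # a couple of sweeps suffice for this toy network
    for i in range(n):      # asynchronous updates: one unit at a time
        state[i] = 1 if W[i] @ state >= 0 else -1
        energies.append(energy(state))

# Each asynchronous update moves energy downhill or leaves it unchanged,
# so the recorded sequence of energies is non-increasing.
assert all(a >= b for a, b in zip(energies, energies[1:]))
print(np.array_equal(state, pattern))  # the network settled into the stored minimum
```

Because the weight matrix is symmetric with a zero diagonal, Hopfield's proof guarantees the monotone energy descent checked by the assertion; the corrupted units are repaired because the stored pattern sits at a local minimum of the energy space.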
References:
 Ashby, W. R. (1956). An Introduction To Cybernetics. London: Chapman & Hall.
 Ashby, W. R. (1960). Design For A Brain (2nd ed.). New York, NY: John Wiley & Sons.
 Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554-2558.
 Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81, 3008-3092.
 Hopfield, J. J., & Tank, D. W. (1985). "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52(3), 141-152.
(Added March 2010)



