Foundations Of Cognitive Science

Error Space

One way to think about an artificial neural network is as a dynamic system that prefers to move to states of maximal stability. On this view, a network moves toward local minima of error in an error space during learning: the location of the network in the space is given by its current settings (the values of its connection weights, biases, and unit activities), and the height of that location indicates the error associated with those settings. This analysis leads to two different notions: 1) the shape of the surface of the error space, and 2) the kinds of trajectories that a network can take along this surface as learning proceeds. For instance, the perceptron convergence theorem (Rosenblatt, 1962) can be viewed as showing that a linearly separable problem has a bowl-shaped error space with a single global minimum, and that the delta rule is guaranteed to guide a perceptron to the bottom of this bowl. That linearly nonseparable problems cannot be solved by perceptrons (Minsky & Papert, 1988) can be presented by pointing out that the error space for such problems has a much more complicated shape, and that the delta rule cannot guide a network along this surface to a global minimum.
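The contrast between the two cases can be illustrated with a minimal sketch, not part of the original entry: a single threshold unit trained with the delta rule (w += eta * (t - o) * x) on a linearly separable task (AND), where descent reaches the global minimum of the error surface, and on a linearly nonseparable task (XOR), where no weight setting yields zero error. The Python/NumPy code, the function name train_perceptron, and the choice of AND and XOR are illustrative assumptions.

    import numpy as np

    def train_perceptron(X, t, eta=0.1, epochs=100):
        """Delta-rule training of a single threshold unit: w += eta * (t - o) * x."""
        X = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            errors = 0
            for x, target in zip(X, t):
                o = 1.0 if x @ w > 0 else 0.0     # threshold activation
                w += eta * (target - o) * x       # step downhill in error space
                errors += int(o != target)
            if errors == 0:                       # reached a zero-error (global) minimum
                return w, True
        return w, False                           # never settled: no solution was found

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

    # Linearly separable task (AND): bowl-shaped error surface, convergence guaranteed.
    w_and, ok_and = train_perceptron(X, np.array([0, 0, 0, 1.0]))
    print("AND solved:", ok_and)

    # Linearly nonseparable task (XOR): no setting of the weights reaches zero error.
    w_xor, ok_xor = train_perceptron(X, np.array([0, 1, 1, 0.0]))
    print("XOR solved:", ok_xor)

Running the sketch, the AND task converges after a few epochs, while the XOR task exhausts the epoch budget without finding a zero-error state, mirroring the difference in the shape of the two error spaces described above.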

References:

  1. Minsky, M., & Papert, S. (1988). Perceptrons (3rd ed.). Cambridge, MA: MIT Press.
  2. Rosenblatt, F. (1962). Principles of Neurodynamics. Washington: Spartan Books.

(Added March 2010)
