


The delta rule is a very general rule for training PDP networks; it can be used to train distributed associative memories as well as perceptrons. Like the Hebb rule, it changes a weight by associating an input value with another value. However, unlike the Hebb rule – which associates raw inputs with raw outputs – the delta rule first sends an input through the network and measures the error in the network's response (the difference between the desired output and the output that was actually computed). It then changes the weight by associating the raw input with this error value. As a result, the delta rule is an improvement over the Hebb rule because it is error correcting: the amount of learning depends on the amount of error, and when the error reaches zero, no further weight changes occur. This solves many of the problems with the Hebb rule, such as its inability to learn associations between correlated inputs and its inability to stop changing weights once perfect recall has been achieved. The delta rule was developed by Rosenblatt for training perceptrons (e.g., Rosenblatt, 1962). Variations of it can easily be used to train perceptrons that use continuous activation functions, such as the logistic or the Gaussian (Dawson, 2008).
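As a rough sketch of the idea, the following Python fragment trains a single perceptron-style unit with the delta rule on the logical AND problem. The step activation, the learning rate, and the training patterns are illustrative assumptions, not part of the original entry; the essential line is the weight update, which associates the raw input with the computed error.

```python
def step(net):
    """Threshold activation, as in Rosenblatt-style perceptrons (an assumption here)."""
    return 1.0 if net >= 0.0 else 0.0

def train_delta_rule(patterns, eta=0.25, epochs=100):
    """Train one output unit with the delta rule.

    patterns: list of (inputs, target) pairs.
    Each weight changes by eta * (target - output) * input.
    """
    n = len(patterns[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        total_error = 0.0
        for inputs, target in patterns:
            net = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = step(net)
            error = target - output  # error-correcting term
            total_error += abs(error)
            # delta rule: associate the raw input with the computed error
            weights = [w + eta * error * x for w, x in zip(weights, inputs)]
            bias += eta * error
        if total_error == 0.0:  # zero error -> no further weight changes
            break
    return weights, bias

# logical AND is linearly separable, so training converges
and_patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_delta_rule(and_patterns)
```

Because the update is proportional to the error, learning slows as responses improve and stops entirely once every pattern is recalled correctly, which is exactly the property that distinguishes the delta rule from the Hebb rule.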
References:
 Dawson, M. R. W. (2008). Connectionism and classical conditioning. Comparative Cognition and Behavior Reviews, 3 (Monograph), 1-115.
 Rosenblatt, F. (1962). Principles of Neurodynamics. Washington, DC: Spartan Books.
(Added October 2009)




