


An activation function is a component of a processing unit in a connectionist or parallel distributed processing (PDP) network. It is a mathematical equation that converts the net input to a unit into that unit's internal level of activity. Most activation functions range from 0 to 1, although some range from −1 to +1. Most of the interesting behavior of networks arises when the activation function is nonlinear.
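As a minimal sketch (not from the source), the two ranges mentioned above correspond to two widely used activation functions: the logistic function, which squashes any net input into (0, 1), and the hyperbolic tangent, which squashes it into (−1, +1). Both are nonlinear; the function names below are illustrative.

```python
import math

def logistic(net):
    """Logistic activation: maps any net input into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def tanh_activation(net):
    """Hyperbolic tangent activation: maps any net input into (-1, +1)."""
    return math.tanh(net)

# A net input of 0 yields the midpoint of each range; large positive or
# negative net inputs saturate toward the range's extremes.
for net in (-4.0, 0.0, 4.0):
    print(net, round(logistic(net), 3), round(tanh_activation(net), 3))
```

Note that both curves are approximately linear near a net input of 0 and flatten out at the extremes; it is this saturating nonlinearity that gives networks built from such units their interesting behavior.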
Activation functions are important for several reasons. First, the activation function dictates the biological plausibility of the processor (Ballard, 1986). Second, changing the activation function can dramatically alter the behavior of a network (Dawson, 2004, 2005). Third, different activation functions can be used to distinguish various connectionist architectures; indeed, Duch and Jankowski (1999) documented over 640 different activation functions in use.
References:
Ballard, D. (1986). Cortical structures and parallel processing: Structure and function. The Behavioral and Brain Sciences, 9, 67–120.
Dawson, M. R. W. (2004). Minds and Machines: Connectionism and Psychological Modeling. Malden, MA: Blackwell Publishing.
Dawson, M. R. W. (2005). Connectionism: A Hands-on Approach. Malden, MA: Blackwell Publishing.
Duch, W., & Jankowski, N. (1999). Survey of neural transfer functions. Neural Computing Surveys, 2, 163–212.




