Foundations Of Cognitive Science

Activation Function

An activation function is a component of a processing unit in a connectionist or parallel distributed processing (PDP) network. It is a mathematical function that converts the net input to the unit into an internal level of activity. Most activation functions produce activity that ranges from 0 to 1, although some range from -1 to +1. Most of the interesting behavior of networks arises when the activation function is nonlinear.
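
As a rough illustration (my own sketch, not drawn from the sources cited below), the Python code that follows shows a hypothetical processing unit: it sums its weighted signals into a net input and then squashes that net input with a logistic or hyperbolic tangent activation function, producing activity in the (0, 1) or (-1, +1) range respectively.

    import math

    def net_input(inputs, weights, bias=0.0):
        """Weighted sum of the signals arriving at the unit (the net input)."""
        return sum(x * w for x, w in zip(inputs, weights)) + bias

    def logistic(net):
        """Logistic (sigmoid) activation: maps net input into the range (0, 1)."""
        return 1.0 / (1.0 + math.exp(-net))

    def tanh_activation(net):
        """Hyperbolic tangent activation: maps net input into the range (-1, +1)."""
        return math.tanh(net)

    # The same net input expressed in the two common activity ranges.
    net = net_input([1.0, 0.5, -0.3], [0.8, -0.4, 0.2], bias=0.1)
    print(logistic(net))         # activity between 0 and 1
    print(tanh_activation(net))  # activity between -1 and +1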

Activation functions are important for several reasons. First, the activation function determines the biological plausibility of the processing unit (Ballard, 1986). Second, changing the activation function can dramatically alter the behavior of a network (Dawson, 2004, 2005), as the brief sketch below illustrates. Third, different activation functions can be used to distinguish different connectionist architectures. Indeed, Duch and Jankowski (1999) documented over 640 different activation functions in use.
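
The following comparison (again my own illustration, not taken from the works cited) shows how swapping the activation function changes a unit's response to the same range of net inputs: a linear (identity) function passes the net input through unchanged, a step function gives an all-or-none response, and a logistic function gives a graded, nonlinear response.

    import math

    def linear(net):
        """Identity activation: output equals the net input."""
        return net

    def step(net, theta=0.0):
        """Threshold (step) activation: an all-or-none response."""
        return 1.0 if net > theta else 0.0

    def logistic(net):
        """Logistic activation: a graded, nonlinear response in (0, 1)."""
        return 1.0 / (1.0 + math.exp(-net))

    for net in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(f"net={net:+.1f}  linear={linear(net):+.2f}  "
              f"step={step(net):.0f}  logistic={logistic(net):.2f}")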

References:

  1. Ballard, D. (1986). Cortical structures and parallel processing: Structure and function. Behavioral and Brain Sciences, 9, 67-120.
  2. Dawson, M. R. W. (2004). Minds and machines: Connectionism and psychological modeling. Malden, MA: Blackwell Publishing.
  3. Dawson, M. R. W. (2005). Connectionism: A hands-on approach. Malden, MA: Blackwell Publishing.
  4. Duch, W., & Jankowski, N. (1999). Survey of neural transfer functions. Neural Computing Surveys, 2, 163-212.

(Added October 2009)
