Foundations Of Cognitive Science

Arbitrary Pattern Classifier

Connectionist networks are frequently used as pattern classifiers.  They accomplish this task by carving a pattern space, in which each pattern is represented as a point at a particular set of coordinates, into decision regions.  Every pattern that falls into the same decision region is assigned the pattern "name" associated with that region.  An arbitrary pattern classifier would be capable of solving any pattern recognition problem; that is, it could carve any pattern space into any set of decision regions that could ever be needed.

Could hidden units give a connectionist network the power, in principle, to be an arbitrary pattern classifier?  The answer is yes.  By considering the shape of the decision regions created by each additional layer of hidden units in systems that use "squashing" activation functions, Lippmann (1987) showed that a network with only two layers of hidden units (i.e., a three-layer perceptron) can carve a pattern space into arbitrary decision regions: "No more than three layers are required in perceptron-like feed-forward nets" (p. 16).  In other words, modern connectionist networks have far greater computational power than older systems such as the perceptron.  With no more than two layers of hidden units, such networks can carve a pattern space into arbitrarily shaped decision regions, and can therefore in principle perform any pattern classification task of interest.  Lippmann's proof that connectionist networks can be arbitrary pattern classifiers is one example of the formal results establishing the considerable in-principle computational power of such networks.
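
The point can be illustrated on a small scale.  The sketch below is hypothetical code, not from Lippmann (1987); it assumes only NumPy, and it trains a feed-forward network with a single layer of logistic "squashing" hidden units on the exclusive-or (XOR) problem, which no network without hidden units can solve.  Its success illustrates, in miniature, how hidden units let a network carve the pattern space into decision regions that are not linearly separable (Lippmann's stronger result concerns two hidden layers and arbitrarily shaped regions).

```python
import numpy as np

# Minimal sketch: one hidden layer of logistic ("squashing") units,
# trained by batch gradient descent on XOR.  The layer sizes, learning
# rate, and iteration count are arbitrary illustrative choices.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))  # input -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> 1 output unit
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass through the squashing units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error gradients through the logistic units.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

A perceptron with no hidden units would have to separate the four XOR patterns with a single straight boundary through the pattern space, which is impossible; the hidden units instead recode the inputs so that a separating boundary exists.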

References:

  1. Lippmann, R. P. (1987). An introduction to computing with neural nets. IEEE ASSP Magazine, 4(2), 4-22.

(Added November 2010)
