Foundations Of Cognitive Science

Extra Output Learning

One technique that has been proposed for improving the speed at which a network learns a classification task is called injection of hints or extra output learning (Abu-Mostafa, 1990; Suddarth & Kergosien, 1990).  In extra output learning, one output unit performs the primary classification task (e.g., turns on for members of a class and off for nonmembers).  Other – extra – output units are included to represent subcategories of the class.  Dividing a classification task into learning subcategories can sometimes improve learning speed.  Dawson et al. (2000) used extra output learning as a method to insert a classical theory into an artificial neural network, by using the extra outputs to indicate "reasons" (i.e., decision rules) that an overall classification was being made.
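As a minimal sketch of the idea (my own illustration, not the networks used in the cited studies), the following numpy program trains a small multilayer perceptron whose first output unit performs a primary task (here, XOR) while two extra output units mark the hypothetical subcategories that make the class true ("only the first input is on", "only the second input is on"):

```python
import numpy as np

# Illustrative sketch of extra output learning (assumed task and
# subcategories; not from the cited papers). Primary task: XOR.
rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
# Target columns: [primary XOR, subcategory "01 only", subcategory "10 only"]
T = np.array([[0, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 0, 0]], dtype=float)

n_hidden = 8
W1 = rng.normal(0.0, 1.0, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 3)); b2 = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)          # hidden unit activations
    Y = sigmoid(H @ W2 + b2)          # primary + extra outputs
    dY = (Y - T) * Y * (1 - Y)        # output deltas (squared error)
    dH = (dY @ W2.T) * H * (1 - H)    # backpropagated hidden deltas
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

Y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(Y.round(2))
```

After training, column 0 carries the overall classification, while columns 1 and 2 indicate which subcategory (which "reason", in the sense of Dawson et al.) applies to each pattern; the network is trained on all three targets at once, which is what distinguishes this from training on the primary output alone.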

References:

  1. Abu-Mostafa, Y. S. (1990). Learning from hints in neural networks. Journal of Complexity, 6, 192-198.
  2. Dawson, M. R. W., Medler, D. A., McCaughan, D. B., Willson, L., & Carbonaro, M. (2000). Using extra output learning to insert a symbolic theory into a connectionist network. Minds and Machines, 10, 171-201.
  3. Suddarth, S. C., & Kergosien, Y. L. (1990). Rule-injection hints as a means of improving network performance and learning time. In L. B. Almeida & C. J. Wellekens (Eds.), Neural Networks, Lecture Notes in Computer Science (Vol. 412, pp. 120-129). Berlin: Springer-Verlag.

(Added April 2011)
