Foundations Of Cognitive Science

Induction Learning

Inductive learning is essentially learning by example. The process ideally yields some method for drawing conclusions about previously unseen examples once learning is complete. More formally, one might state: given a set of training examples, develop a hypothesis that is as consistent as possible with the provided data [1]. It is worth noting that this is an imperfect technique. As Chalmers points out, "an inductive inference with true premises [can] lead to false conclusions" (Chalmers, 1976). The example set may be an incomplete representation of the true population, or rules may be derived that correctly fit the example set but do not apply beyond it.

A simple demonstration of this kind of learning is to consider the following set of bit-strings (each digit can take only the value 0 or 1), each labelled as either a positive or negative example of some concept. The task is to infer ("induce") from these data a rule that accounts for the given classification:

    - 1000101    + 1110100    + 0101
    + 1111       + 10010      + 1100110
    - 100        + 111111     - 00010
    - 1          - 1101       + 101101
    + 1010011    - 11111      - 001011

A rule one could induce from these data is that strings with an even number of 1s are "+", and those with an odd number of 1s are "-". Note that this rule would indeed allow us to classify previously unseen strings (e.g., 1001 is "+").
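The induced parity rule can be sketched in a few lines of Python; the function below is an illustration of the rule described above, checked against labelled strings drawn from the example set:

```python
# A minimal sketch of the induced rule: a string is "+" when it contains
# an even number of 1s, and "-" when the count of 1s is odd.

def classify(bits):
    """Classify a bit-string by the parity of its 1s."""
    return "+" if bits.count("1") % 2 == 0 else "-"

# Labelled training examples drawn from the text.
positives = ["0101", "1111", "10010", "1100110", "111111", "101101", "1010011"]
negatives = ["1000101", "100", "00010", "1", "1101", "11111", "001011"]

# The rule is consistent with the labelled data...
assert all(classify(s) == "+" for s in positives)
assert all(classify(s) == "-" for s in negatives)

# ...and also classifies previously unseen strings.
print(classify("1001"))  # -> "+"
```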

Techniques for modeling the inductive learning process include Quinlan's decision trees, in which results from information theory are used to partition data so as to maximize the "information content" of a given sub-classification (Quinlan, 1993); connectionism, in which most neural network models rely on training techniques that seek to infer a relationship from examples (Dawson, 2004); and decision list techniques (Rivest, 1987), among others.
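One of these representations, the decision list, can be sketched briefly: in the spirit of Rivest (1987), a decision list is an ordered sequence of (test, label) rules, and an example receives the label of the first rule whose test it satisfies. The particular features and rules below are hypothetical, chosen only for illustration:

```python
# A minimal sketch of classification with a decision list: the first
# matching rule determines the output; a default label covers the rest.
# The rules here are illustrative assumptions, not taken from the article.

def classify_dl(decision_list, default, example):
    """Return the label of the first rule whose test accepts the example."""
    for test, label in decision_list:
        if test(example):
            return label
    return default

# Hypothetical decision list over bit-strings.
rules = [
    (lambda s: s.endswith("0"), "-"),    # rule 1: strings ending in 0 -> "-"
    (lambda s: s.count("1") >= 3, "+"),  # rule 2: otherwise, three or more 1s -> "+"
]

print(classify_dl(rules, "-", "1011"))  # rule 1 fails, rule 2 fires -> "+"
print(classify_dl(rules, "-", "1010"))  # rule 1 fires -> "-"
```

Learning a decision list then amounts to greedily choosing rules that are consistent with the remaining training examples, in the same inductive spirit as the bit-string exercise above.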

Induction learning is important in cognitive science because many important learning situations -- such as learning a language -- are induction problems that must be solved even when the environment does not supply enough information (i.e., enough examples) to determine a unique solution (Pinker, 1979).

References

  1. Adapted from lectures in a graduate course in representation & reasoning given by Dr. Peter van Beek, Department of Computing Science, University of Alberta.
  2. Chalmers, A. F. (1976). What is this thing called science? University of Queensland Press, Australia.
  3. Dawson, M. R. W. (2004). Minds and Machines: Connectionism and Psychological Modeling. Malden, MA: Blackwell Pub.
  4. Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo.
  5. Pinker, S. (1979). Formal models of language learning. Cognition, 7, 217-283.
  6. Rivest, R. L. (1987). Learning decision lists. Machine Learning, 2(3), 229-246.

(Revised February, 2010)
