Foundations Of Cognitive Science

Machine Learning

The acquisition and application of knowledge play a central role in describing learning. For the most part, human beings perform this task quite well (for better or worse). It is under the banner of machine learning that researchers, particularly within artificial intelligence, attempt to develop methods for accomplishing this task algorithmically (i.e. on computers).

Dietterich differentiates between three types of learning a system can exhibit [1]:

  • Speed-up learning occurs when a system becomes more efficient at a task over time without external input.
  • Learning by being told occurs when a system acquires new knowledge explicitly from an external source.
  • Inductive learning occurs when a system acquires new knowledge that was neither explicitly nor implicitly available previously.

In order to evaluate the success (or failure) of machine learning techniques, it is important to define what is meant by "learning". Dietterich suggests that by defining "knowledge", we can simplify the specification of "learning" by defining it to be an increase in this "knowledge" [1]. It is debatable whether this makes the task any easier. A formalism often employed to judge the effectiveness of a learning system is Valiant's definition of what it means for a system to be probably approximately correct [2]: the system should, with high probability (i.e. probably), exhibit knowledge that is largely in agreement with the "true" information (i.e. approximately correct).
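
Valiant's criterion can be made concrete with a toy example. The sketch below (a hypothetical illustration, not drawn from the sources cited here) learns a threshold concept on [0, 1] from uniformly drawn examples: the hypothesis is the smallest positively labeled point seen, and the standard sample-size bound for this one-parameter class, m >= (1/ε) ln(1/δ), guarantees that with probability at least 1 − δ the hypothesis errs on at most an ε fraction of future examples.

```python
import math
import random

def learn_threshold(samples):
    """Hypothesis: the smallest positively labeled example seen.
    Labels are 1 iff x >= the hidden true threshold."""
    positives = [x for x, label in samples if label == 1]
    return min(positives) if positives else 1.0

def pac_failure_rate(true_t=0.5, eps=0.05, delta=0.05, trials=200):
    # Sample size from the standard bound for this concept class:
    # m >= (1/eps) * ln(1/delta) examples suffice.
    m = math.ceil((1 / eps) * math.log(1 / delta))
    failures = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(m)]
        data = [(x, int(x >= true_t)) for x in xs]
        h = learn_threshold(data)
        # Under the uniform distribution, the error of h is the mass
        # of the interval [true_t, h), i.e. h - true_t (the learned
        # threshold never undershoots the true one).
        if h - true_t > eps:
            failures += 1
    return failures / trials
```

Running many independent learning trials, the fraction in which the hypothesis is not approximately correct (error above ε) should stay at or below δ, which is exactly the "probably" half of the guarantee.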

A problem endemic to most machine learning techniques is a lack of generality. For example, a particular algorithm may perform well on discrete data, whereas application to continuous data is difficult. These issues are invariably task-specific: most learning formalisms handle some subset of tasks extremely well while performance on others is substandard. Major performance issues often revolve around the ability of a given system to generalize what it has learned to novel circumstances.
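
The generalization problem can be seen in miniature with a hypothetical rote learner (the names `RoteLearner` and `parity` below are illustrative, not from the literature cited here): it memorizes its training pairs and so scores perfectly on them, yet performs at chance on inputs it has never seen, because nothing it stored transfers to novel circumstances.

```python
class RoteLearner:
    """Memorizes training pairs; answers a fixed default on unseen inputs."""
    def __init__(self, default=0):
        self.table = {}
        self.default = default

    def fit(self, pairs):
        self.table = dict(pairs)

    def predict(self, x):
        return self.table.get(x, self.default)

def parity(bits):
    """Target concept: parity of a 4-bit string."""
    return bits.count("1") % 2

all_inputs = [format(i, "04b") for i in range(16)]
train = [(s, parity(s)) for s in all_inputs[:8]]

learner = RoteLearner()
learner.fit(train)

# Perfect on memorized examples, chance-level on the unseen half.
train_acc = sum(learner.predict(s) == y for s, y in train) / len(train)
test_acc = sum(learner.predict(s) == parity(s) for s in all_inputs[8:]) / 8
```

Here `train_acc` is 1.0 while `test_acc` sits at 0.5, the chance level for a binary target: an increase in stored "knowledge" with no increase in the kind of knowledge that generalizes.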


  1. T.G. Dietterich. Machine learning. Annual Review of Computer Science. Vol. 4, Spring 1990.
  2. L.G. Valiant. A theory of the learnable. Communications of the ACM. 27:1134-1142, 1984.