Foundations Of Cognitive Science

Vector Transformation

In general, a vector transformation is an operation applied to a vector that changes its orientation, its length, or both. The operation is defined in linear algebra, and can be described as premultiplying some column vector v by a transformation matrix T to produce a new column vector r. That is, r = T·v. Note that for this operation to be carried out, the transformation matrix must have as many columns as the vector v has rows; the resulting vector r has as many rows as T has rows.
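
As a concrete illustration of this definition (a minimal sketch added for clarity; the NumPy code and variable names are not part of the original entry), premultiplying a two-dimensional vector by a rotation matrix changes the vector's orientation while leaving its length unchanged:

```python
import numpy as np

# T rotates a 2-D vector by 90 degrees counterclockwise: this
# transformation changes the vector's orientation but not its length.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([1.0, 0.0])   # the to-be-transformed vector

r = T @ v                  # premultiplication: r = T*v
print(r)                   # [0. 1.] -- same length, rotated 90 degrees
```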

Algorithmically, the calculation of the transformed vector is straightforward: the i-th entry of r is the inner product of the i-th row of T with the column vector v, and this inner product is computed in turn for each entry of r.
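
A direct translation of this algorithm into code (a plain-loop sketch, independent of any library; the function name is illustrative) might look like the following:

```python
def transform(T, v):
    """Return r = T*v, where T is a list of rows and v is a list of entries.

    The i-th entry of r is the inner product of the i-th row of T with v.
    """
    assert all(len(row) == len(v) for row in T), \
        "T must have as many columns as v has rows"
    return [sum(t * x for t, x in zip(row, v)) for row in T]

# Example: each entry of the result is one row of T dotted with v.
T = [[1, 2],
     [3, 4],
     [5, 6]]
v = [1, 1]
print(transform(T, v))  # [3, 7, 11]
```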

Although this kind of operation is very general, it has some very specific interpretations in cognitive science. For instance, if the transformation matrix represents the set of connection weights in a distributed associative memory, and if the to-be-transformed vector is a set of input unit activities, then this operation defines how stored memories are retrieved from the network (Anderson et al., 1977; Dawson, 1991, 2004).
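
To make this interpretation concrete, here is a minimal sketch of retrieval in a linear associator, assuming Hebbian outer-product learning and orthonormal input patterns (the patterns and variable names are illustrative, not drawn from the cited papers):

```python
import numpy as np

# Two orthonormal input patterns (cue vectors of input unit activities)...
a1 = np.array([1.0, 0.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0, 0.0])

# ...and the output patterns (memories) to be associated with them.
b1 = np.array([1.0, -1.0])
b2 = np.array([-1.0, 1.0])

# Hebbian (outer-product) learning: the weight matrix is the sum of the
# outer products of each output pattern with its paired input pattern.
W = np.outer(b1, a1) + np.outer(b2, a2)

# Retrieval is exactly the vector transformation r = W*a: presenting a
# cue's input activities recovers the associated memory.
print(W @ a1)  # [ 1. -1.]  -> b1
print(W @ a2)  # [-1.  1.]  -> b2
```

Because the input patterns here are orthonormal, each cue retrieves its associated memory exactly; with correlated cues, retrieval would instead be a blend of the stored associations.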

References:

  1. Anderson, J. A., Silverstein, J. W., Ritz, S. A., & Jones, R. S. (1977). Distinctive features, categorical perception and probability learning: Some applications of a neural model. Psychological Review, 84, 413-451.
  2. Dawson, M. R. W. (1991). The how and why of what went where in apparent motion: Modeling solutions to the motion correspondence process. Psychological Review, 98, 569-603.
  3. Dawson, M. R. W. (2004). Minds and machines: Connectionism and psychological modeling. Malden, MA: Blackwell Publishing.

(Added January 2010)
