Vehicle 7: Concepts


The natural selection innovation detailed in the description of Vehicle 6 is well-suited for survival in an environment that is relatively stable over a very long period of time. But it is not particularly flexible. You need flexibility in the wiring of a vehicle -- the ability of a single vehicle to learn from its unique experience -- for it to succeed in an environment that is quite volatile.

The innovation that Braitenberg describes for Vehicle 7 is designed to let an individual machine learn about its changing environment. Basically, Braitenberg proposes that the connections between the vehicle's components become modifiable. This modification instantiates a particular form of associative learning, which will decay over time if it is not refreshed.

Braitenberg's thought experiment proceeds as follows: "First, we buy a roll of a special wire, called Mnemotrix, which has the following interesting property: its resistance is at first very high and stays high unless the two components that it connects are at the same time traversed by an electric current. When this happens, the resistance of Mnemotrix decreases and remains low for a while, little by little returning to its initial value." This wire, with this magical property, will neatly produce the associative learning of what Braitenberg calls CONCEPTS. For instance, he provides an example of Vehicle 7 moving in an environment in which aggressive vehicles are painted red. Vehicle 7 will turn away from such a vehicle (because some sensors detect the aggressive movements), but at the same time its red sensors will be signalling as well. If Mnemotrix connects the red sensors with the others, an association between the two will be made, so that signals in one will tend to produce signals in the other (even if the signal for the second is not really there). Braitenberg gives the *very* cute example of Vehicle 7 "seeing red" when confronted with a green or blue aggressive vehicle because of this learned association.
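The quoted property can be sketched as a tiny simulation. Everything here -- the class name, the resistance values, the recovery rate -- is a hypothetical choice of this sketch, not anything Braitenberg specifies; only the qualitative behaviour (resistance drops on co-activity and creeps back up otherwise) follows the passage:

```python
class MnemotrixWire:
    """Toy model of Braitenberg's Mnemotrix wire.

    All constants are invented for illustration; only the qualitative
    behaviour matches the quoted description.
    """

    def __init__(self, high_resistance=100.0, low_resistance=1.0,
                 recovery_rate=0.05):
        self.high = high_resistance
        self.low = low_resistance
        self.recovery = recovery_rate       # how fast resistance creeps back up
        self.resistance = high_resistance   # "at first very high and stays high"

    def step(self, current_a, current_b):
        """One time step of the wire connecting components A and B."""
        if current_a > 0 and current_b > 0:
            # both components traversed by current at the same time:
            # resistance decreases (the association is made, or refreshed)
            self.resistance = self.low
        else:
            # "little by little returning to its initial value"
            self.resistance = min(
                self.high,
                self.resistance + self.recovery * (self.high - self.resistance))
        return self.resistance
```

Running such a wire, co-activity of the red sensor and the movement sensor pins the resistance low; once the pairing stops, the resistance (and hence the learned association) gradually fades, just as the story requires.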

Braitenberg goes on to describe how very complex CONCEPTS might be learned via the associative properties of Mnemotrix. "The straightness of a line in different parts of the visual field, for example, may come to signify the dangerous cliff at the side of the table. And the movement of many objects in different directions may come to represent the concept `region crowded with vehicles'." The complex behaviors observed in machines with this associative ability might also invite very sophisticated interpretations, such as ABSTRACTION and GENERALIZATION.

Before getting lost in the physical absurdity of Mnemotrix, rest assured that this type of associative learning is at the heart of modern research in artificial neural networks, as well as of the biological and biochemical study of learning in real brains.

The type of learning that Braitenberg is describing here is called Hebb learning by modern connectionists. Hebb's name is used because of the following passage from his 1949 book "The Organization of Behavior" -- "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (p. 62).
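Hebb's verbal rule is usually formalized as a weight change proportional to the product of pre- and postsynaptic activity. The sketch below is one common textbook form of that rule, not Braitenberg's exact mechanism; the passive decay term is added here to mimic Mnemotrix's fading, and the learning rate `eta` and `decay` constants are invented for illustration:

```python
import numpy as np

def hebb_update(weights, pre, post, eta=0.1, decay=0.01):
    """One step of a simple Hebbian rule with passive decay:
    dw = eta * post * pre - decay * w
    (a textbook form; the decay term stands in for Mnemotrix fading)."""
    return weights + eta * np.outer(post, pre) - decay * weights

# Two sensors feeding one "avoidance" unit, as in the red-vehicle story:
w = np.zeros((1, 2))             # one output unit, two input sensors
pre = np.array([1.0, 1.0])       # [red sensor, movement sensor] co-active
post = np.array([1.0])           # avoidance unit firing

for _ in range(10):              # repeated pairings strengthen both weights
    w = hebb_update(w, pre, post)
```

After these pairings, activity on the red sensor alone drives the avoidance unit through its strengthened weight -- the "seeing red" association; with `pre` silent, the decay term alone shrinks the weights back toward zero.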

Physical accounts of this type of learning in the brain come from the study of long-term potentiation (LTP), in which a synapse becomes much stronger (for a very long period of time) when there is activity in the presynaptic neuron, and at least the beginnings of activity in the postsynaptic neuron. The mechanisms for such associative learning, the NMDA receptor sites, are beginning to be fairly well-understood.

Given all of this, Braitenberg's story is far less fanciful than he himself presents it. However, keep in mind one very important difference between the Hebb rule as described by modern connectionists and the version of the rule that Braitenberg describes -- Braitenberg's rule modifies connections between processing units that have nonlinear activation functions! (NB: Exactly why is this an important point to make??)
