Vehicle 13: Foresight


The innovation that Braitenberg introduces here is the prediction of future events. (NB: To my mind, this recalls two related connectionist concepts -- first, the recurrent connections in Grossberg's ART models, in which one bank of units attempts to predict, and thereby cancel out, activity in another bank; second, the idea of a "folded encoder network" trained by backpropagation, in which a network given an input at time t generates a response intended to match its input at time t+1.)
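
(NB: Here is a minimal Python sketch of that second idea, a one-step predictor. Everything concrete in it -- the one-layer linear weights, the delta rule, the random stand-in data -- is my own illustration; a real folded encoder would use hidden units trained by backpropagation.)

    import numpy as np

    # A one-layer linear network trained so that its response to the
    # input at time t matches the input at time t + 1.
    rng = np.random.default_rng(0)
    T, n = 200, 4                        # sequence length, input size
    seq = rng.random((T, n))             # stand-in sensory sequence
    W = np.zeros((n, n))                 # predictive weights
    lr = 0.05                            # learning rate

    for t in range(T - 1):
        pred = W @ seq[t]                # response to input at time t
        err = seq[t + 1] - pred          # mismatch with input at t + 1
        W += lr * np.outer(err, seq[t])  # delta-rule update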

The reason for this innovation is Braitenberg's admission that the kind of idea chaining described for Vehicle 12 does not necessarily provide the most suitable account of "thinking". Thought is not an aimless succession of ideas; it is goal-directed. "Move towards desirable future events" is a reasonable associationist view of planning! The idea here is that expectations are extremely important for the survival of an organism. "All that we need is a mechanism to predict future events fast enough so that they will be known before they actually happen."

How is this to be achieved? "Stored sequences of events are all we need for prediction, together with a mechanism forcing them to speed up in the reproduction when necessary, for example, in dangerous situations." This should also involve being able to predict several possible alternative future events, and to keep all of these predictions in mind in parallel.

The move that Braitenberg makes now concerns how fast the Ergotrix wires work. "The Ergotrix wires could work faster, or slower, than the sequences that are impressed upon them. Let them reproduce the sequences at a more rapid pace and you will have a brain that works as a predictor."
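
(NB: A tiny sketch of how this might look in Python, under my own reading of the passage: store transitions as they happen, then replay them at a faster clock than real events, keeping every alternative future in mind in parallel as a set. The state names are invented.)

    from collections import defaultdict

    transitions = defaultdict(set)       # state -> observed successors

    def observe(prev_state, next_state):
        """Impress a sequence on the Ergotrix wires."""
        transitions[prev_state].add(next_state)

    def predict(state, horizon):
        """Replay stored sequences at a more rapid pace than events occur."""
        frontier = {state}
        for _ in range(horizon):
            frontier = {s for f in frontier for s in transitions[f]}
            if not frontier:
                break
        return frontier

    observe('a', 'b'); observe('a', 'c'); observe('b', 'd')
    print(predict('a', 2))               # {'d'}; nothing yet follows 'c'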

Braitenberg now turns to considering the changes in state of a machine that is "in quiet contemplation of the world, the threshold control at rest, and the thresholds set high enough so that only a few ideas stand out over the background."

One way that the internal states of the vehicle might be affected is via "meditation". The idea here is that as some processors remain on, the Mnemotrix connections among them will grow stronger, and thus their activity will increase. This has the potential to upset the equilibrium (i.e., the static nature) of the internal state, particularly if at some point other processors get recruited.
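
(NB: A small simulation sketch of such "meditation". The Hebbian dynamics, thresholds, and constants are entirely my assumptions -- Braitenberg gives no equations -- but it shows a few clamped-on processors binding together until background fluctuation recruits new ones and the equilibrium breaks.)

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    external = np.zeros(n); external[:3] = 1.0  # a few ideas stand out
    active = external.copy()
    M = rng.random((n, n)) * 0.1                # weak Mnemotrix couplings
    np.fill_diagonal(M, 0.0)
    threshold = 0.5

    for step in range(50):
        M += 0.02 * np.outer(active, active)    # co-active units bind
        np.fill_diagonal(M, 0.0)
        noise = rng.normal(0.0, 0.2, n)         # background fluctuation
        active = ((external + M @ active + noise) > threshold).astype(float)

    print(int(active.sum()), "processors active, versus 3 at the start")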

A second thing that could occur is a change in the environment, which will result in internal changes to the Vehicle. A transition from one state to another will be aided by Ergotrix connections if the sequence has occurred before, but these connections by themselves will not be sufficient to produce the change.

A third possibility is that the sensors signal an environmental condition that has been encountered frequently in the past, and is strongly encoded in the Ergotrix wiring. This will cause a state change driven almost exclusively by the wiring. Indeed, this effect might be so strong that the machine will be guided by its internal wiring, and will actually be blind to actual sensory input. (NB: This is the hallmark of the so-called New Look in perception, a cognitive approach that got lots of scientific press in the 1950s.)
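
(NB: One way to picture this in code. The states, weights, and the additive support scheme below are invented for illustration; the point is only that once an Ergotrix link grows strong enough, it outvotes the sensors entirely.)

    # The next state is whichever candidate gets the most combined
    # support from the sensors and the Ergotrix wiring.
    ergotrix = {('calm', 'alarm'): 5.0,  # sequence impressed many times
                ('calm', 'feed'): 0.5}   # sequence seen only rarely

    def next_state(current, sensed, sensor_gain=1.0):
        def support(s):
            wiring = ergotrix.get((current, s), 0.0)
            sensory = sensor_gain if s == sensed else 0.0
            return wiring + sensory
        return max(('alarm', 'feed'), key=support)

    print(next_state('calm', sensed='feed'))  # 'alarm', despite the senses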

With this picture in mind, Braitenberg now faces the possibility that real and predicted environmental situations clash. In this case, error correction mechanisms are required. (NB: Again, this reminds me a bit of ART. It also reminds me of the problems that I've faced in trying to design an autonomous pattern associator that learns by using the delta rule.) "That's why in the case of conflicting information we want to take the information from the realistic half brain more seriously than that from the predictor. We may incorporate a rule: when in doubt, believe the sensors."

As a design issue, though, when this rule is followed, we don't want to merely shut the predictors off. We want to save the prediction for a bit, so that we can learn from the mistake(s) that it embodies. This requires a mental echo, or a short-term memory buffer. Braitenberg feels that short-term memory is easily added to his machines, using the notion of delay lines or chains of interprocessors.
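
(NB: A minimal sketch of these two design points together, assuming a simple fixed-length buffer in place of Braitenberg's delay lines or chain of interprocessors: act on the sensors, but keep the failed prediction in the buffer so the mistake can be learned from later.)

    from collections import deque

    stm = deque(maxlen=5)                # short-term memory buffer

    def step(sensed, predicted):
        if predicted is not None and predicted != sensed:
            stm.append(('mispredicted', predicted, sensed))  # mental echo
        stm.append(('state', sensed))    # when in doubt, believe the sensors
        return sensed

    step(sensed='b', predicted='c')
    print(list(stm))                     # the wrong prediction is preserved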

Braitenberg's final refinement comes with the notion of how to learn about exceptions to statistical rules, like the case where all but one of the green vehicles are peaceful. "Sooner or later an encounter with the green maverick is bound to take place and the victim must be on the defensive. It is better then to give special weight to the rare but decisive experience and to consider green vehicles as generally bad." (NB: I'm not sure that I agree with this approach. Surely you would expect a more interesting behavior to emerge from vehicles as sophisticated as those that B. has described -- vehicles that ignore green others, but are capable of recognizing and escaping the attack of the maverick!)
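
(NB: Whatever one thinks of the approach, it is easy to sketch. The update rule and the salience weights below are my own stand-ins for "special weight to the rare but decisive experience"; the point is that one bad encounter can outweigh many good ones.)

    value_of_green = 0.0                 # > 0 good, < 0 bad

    def encounter(pleasant, salience):
        global value_of_green
        target = 1.0 if pleasant else -1.0
        value_of_green += salience * (target - value_of_green)

    for _ in range(20):
        encounter(pleasant=True, salience=0.05)   # many harmless greens
    encounter(pleasant=False, salience=0.9)       # one decisive attack
    print(value_of_green)                         # about -0.84: green is bad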

Braitenberg suggests that the sensory system of a less sophisticated vehicle (Vehicle 6), which has evolved a strong sense of "good" and "bad" as it learns to survive in the environment of other machines, could be added to Vehicle 13 to serve as an appropriate good/bad detector. (NB: Ugh!)

How do all of the ideas explored in this chapter fit into one machine?

"Whenever the Darwinian evaluator D signals an unpleasant turn in the real course of events, or a very pleasant one, the predicting half brain P is disconnected from the input it normally receives from the realistic (sensory) half brain, R. Instead the predicting half brain receives its input from the short-term memory tow steps back. So it will go again through the two instants preceding the important happening. At the same time its output is connected to the input of the short-term memory. So it will receive over and over again vie the short-term memory the succession of the two events, a and b, until the Darwinian evaluator D has calmed down and everything is switched back to normal."

The result? Strong learning of highly emotional consequences, even if they are encountered rarely. (NB: I wonder how this might relate, for instance, to studies of learned, one-shot taste aversion in animals?)

The behavior of Vehicle 13 will be very sophisticated -- e.g., prediction might drive some sort of object permanence -- but also very idiosyncratic, because of the powerful learning of unique but important events. This provides, of course, further support for Braitenberg's law of uphill analysis and downhill synthesis.

