Dawson Margin Notes On

Pfeifer and Scheier, Understanding Intelligence

Chapter 4: Embodied Cognitive Science: Basic Concepts

 

Purpose of this chapter is to introduce a framework for embodied cognitive science.  “We provide a characterization of what we mean by complete agents, and we show that if we want to model, to synthesize such agents, we must take into account some special considerations relating to the idea of emergence, that is, to the fact that behavior emerges from the agent-environment interaction.”  Key concept here is the “complete agent” – we must recognize that behavior cannot be reduced to an internal mechanism.

 

4.1 Complete Autonomous Agents

 

What is a task?  Something an agent needs to get done.  Biological agents have lots of tasks to do.  “The ability to survive in complex environments is a given for all biological systems.  Achieving this ability in artificial agents turns out to be an extremely hard problem.”

 

Notion of complete autonomous agent is illustrated with the “fungus eater” example. Basic message: don’t analyze – instead, study complete systems even if they are simple.  Some basic properties of agents turn out to be core concepts of the book and the approach: self-sufficiency, autonomy, situatedness, embodiment, adaptivity, and design for a particular ecological niche.

Bottom line: nature has always produced autonomous agents.  We should study their sensory and motor principles, and not the more recent (evolutionarily speaking) cognitive developments. The remainder of this section of the book looks in more detail at the core concepts listed above.

 

“Self-sufficiency means an agent’s ability to sustain itself over extended periods of time.  This implies that the agent must maintain its energy supply.”  Energy collection must be balanced against energy expenditure; if expenditure consistently exceeds collection, the agent eventually incurs an irrecoverable deficit.  “Another way of defining self-sufficiency, then, is as follows: An agent is self-sufficient if it can avoid irrecoverable deficits.”  Environments always have cycles – therefore a self-sufficient agent should not incur a deficit over one cycle.
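
(NB: A minimal sketch of the energy-balance idea, with toy numbers of my own – nothing here is from the book.  The agent counts as self-sufficient only if its energy never hits the irrecoverable floor and it ends one environmental cycle no worse off than it started.)

# Toy energy budget: self-sufficiency as "no irrecoverable deficit" over one cycle.
def self_sufficient(gains, costs, start=10.0, floor=0.0):
    """True if energy never hits the irrecoverable floor during the cycle
    and the cycle ends with no net loss."""
    energy = start
    for gain, cost in zip(gains, costs):
        energy += gain - cost
        if energy <= floor:        # irrecoverable deficit
            return False
    return energy >= start         # no deficit over the whole cycle

# One day-long cycle: forage while food is available, rest (no intake) at night.
print(self_sufficient(gains=[5, 4, 0, 0], costs=[2, 2, 2, 2]))   # True
print(self_sufficient(gains=[1, 1, 0, 0], costs=[2, 2, 2, 2]))   # False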

 

Complete systems have to control behavior.  This means choosing what behavior to perform from a possible repertoire.  (NB: This reminds me of the classical view of intelligence as search, from the perspective of researchers like Newell and Simon.)  The text calls this the “action selection problem”.  The embodied approach argues that there are mechanisms for behavior control that do not require internal representations.
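
(NB: A toy sketch of what representation-free behavior control might look like – a purely reactive, priority-ordered repertoire in which current sensor readings map directly onto a behavior.  Names and thresholds are mine, for illustration only.)

# Reactive action selection: behaviors triggered directly by current sensor
# values, highest-priority match wins; no internal model of the world.
def select_action(sensors):
    if sensors["bump"]:                 # highest priority: avoid collisions
        return "reverse_and_turn"
    if sensors["battery"] < 0.2:        # then: keep the energy supply topped up
        return "seek_charger"
    if sensors["light"] > 0.7:          # then: move toward stimuli of interest
        return "approach_light"
    return "wander"                     # default exploratory behavior

print(select_action({"bump": False, "battery": 0.9, "light": 0.8}))  # approach_light
print(select_action({"bump": True,  "battery": 0.1, "light": 0.0}))  # reverse_and_turn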

 

No agent is totally autonomous.  “Autonomy generally means freedom from external control.  Autonomy is not an all-or-nothing issue, but a matter of degree.”  Autonomy is governed by two factors: dependence on the environment and dependence on other agents.  Self-sufficiency increases an agent’s degree of autonomy.  “The extent to which one agent can control another depends on the controlling agent’s knowledge of the state and the internal mechanism of the agent to be controlled.”  (NB – how do you achieve this kind of control without representations??) Therefore autonomy is really a property of the relationship between agents, and learning is a key ingredient of autonomy – because the more you learn, the less you can be predicted/controlled/”understood” by other agents.  Learning leads to new behaviors when a situation is repeated.

 

“Autonomous agents are real physical agents; in other words, they are embodied.”  Therefore they interact with their environment, and are subject to the laws of physics.  This can lead to simplifications.  “The focus on embodied agents often leads to surprising insights, and throughout the book, we provide examples of such insights.”

 

Adaptivity is a consequence of self-sufficiency.  There are many meanings of “adaptivity” – evolutionary adaptation, physiological adaptation, sensory adaptation, adaptation by learning.

 

“Animals (and humans) are always designed by evolution for a particular niche.”  Agents require a particular environment for survival – therefore there is no universal animal, and no universal robot.  This non-universality can be contrasted with the universal notion that comes from computation.  (NB: can you really make this contrast?  This doesn’t seem right to me!)  It is not trivial to come up with a taxonomy of niches.  This is because the taxonomy must be made with respect to the agent.

 

4.2 Biological and Artificial Agents

 

Biological agents “are self-sufficient, autonomous, situated, embodied, and they are designed for a particular ecological niche.”  Ideally we should investigate complete agents, but practically we must carve this goal into manageable chunks.  “Our methodology for studying naturally intelligent systems is synthetic, meaning that we have to build artificial agents to mimic natural ones.”  Let’s develop a basic framework for doing this.

 

This approach has three goals:

  1. Build an agent for a particular task
  2. Study general principles of intelligence
  3. Model certain aspects of natural systems

 

There are two types of agents that can be designed to accomplish one or more of these goals: robotic agents and simulations.  One example of a robotic agent is the Mars Sojourner, which is embodied, self-sufficient, and fairly autonomous.  Another example is Webb’s cricket robot.  Note that for the latter, wheels are used instead of legs.  This is an example of a necessary abstraction.  (NB: No discussion here of theory neutrality of this…when are too many abstractions made to prevent any of the goals listed above from being accomplished?) Can robots be used to model human intelligence?  Some are skeptical, but Cog is an example of one attempt.

 

We can simulate any robot that we desire.  For example, simulations exist of insect walking, ant navigation, and fish locomotion in schools.  View of text – both robots and simulations are needed to study intelligence.

 

“Whenever we are making a model, robot, or simulation, we have to make abstractions. … In building a model, we have to choose a level of abstraction, a level at which we are comparing the biological system and the robot system.”  (NB: The key assumption here is that abstractions are theory neutral.  Is this assumption ever tested??)  Agent simulation is not classical simulation.  Classical simulation models aspects of an agent’s behavior in isolation.  This is used to criticize connectionist models.

 

4.3 Designing for Emergence – Logic-Based and Embodied Systems

 

Whenever an agent is designed, the frame of reference problem must be kept in mind.  For this problem, there are three issues to consider: the perspective issue (descriptions made from the observer’s point of view must not be confused with what goes on inside the agent), the behavior-versus-mechanism issue (behavior emerges from the agent-environment interaction and cannot be reduced to an internal mechanism), and the complexity issue (apparently complex behavior need not require a complex internal mechanism).

These points are illustrated in the text by using Simon’s parable of the ant.  “The complexity of the environment is a prerequisite for the complexity of the ant’s behavior”.  To fully explain the ant’s behavior, one must take the ant’s internal mechanisms, the environment, and the interaction between the two into account.

 

“Whenever we design a system, we have to define the basic concepts or components, the primitives, that the system will use.”  The primitives are defined in an ontology, which can be either high-level or low-level.  It is argued that classical systems always require a high-level ontology.  (NB: This strikes me as being a pretty arbitrary attack, and one that cannot be easily defended.)  In a high-level ontology, the primitives are abstract categories fixed by the designer, and the relations among them are spelled out explicitly rather than left to emerge from the agent-environment interaction.

Robots and computational systems, it is claimed, use different primitives.  (NB: Why? Is this really true?)

 

A tough issue is picking a level for a robot’s primitives.  This is because lower levels of specification are least subject to interpretation.  Some aspects of an architecture are not explicit.  “If a six-legged robot lifts one of its legs, this changes the forces on all the other legs instantaneously, even though no explicit connection needs to be specified.  The connections are implicit:  They are enforced through the environment, because of the robot’s weight, the stiffness of its body, and the surface on which it stands.”  (NB: We saw this with the robot walkers that we built.)  “Because robots, bodies, sensor systems, and motor systems are real physical entities, it is not possible to define neatly what belongs into a low-level specification, certainly not as neatly as we can define the components of a high-level ontology.”  In general, much more is implicit in a low-level specification than in a high-level one.
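
(NB: A crude toy-statics illustration of the “implicit connection” point – the only link between the legs is that the stance legs jointly carry the body’s weight, so lifting one leg changes the load on all the others even though no connection between legs appears anywhere in the code.  Numbers are mine.)

# Toy statics: distribute body weight evenly over the legs currently on the ground.
def leg_loads(weight, stance):
    n_down = sum(stance)
    return [weight / n_down if down else 0.0 for down in stance]

weight = 12.0                                  # total body weight, arbitrary units
print(leg_loads(weight, [1, 1, 1, 1, 1, 1]))   # all six legs down: 2.0 each
print(leg_loads(weight, [0, 1, 1, 1, 1, 1]))   # lift one leg: 2.4 on each of the rest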

 

Part of all of this is captured in the notion of spaces.  Sensory space is the set of all possible sensory states for an agent.  A large number of these states is a prerequisite for diversity, and therefore for adaptivity.  Motor space and sensory-motor space can be similarly defined.  The input space for a classical system is usually much smaller.  (NB: This can be tied into the notion of selectionism!).
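
(NB: A back-of-the-envelope version of the “spaces” idea, with made-up numbers – even a small robot’s sensory space dwarfs the input space of a classical system fed a handful of binary features.)

# Sensory space = all combinations of sensor readings; likewise for motor space.
n_sensors, sensor_levels = 8, 16      # e.g. 8 proximity sensors, 16 readings each
n_motors, motor_levels = 2, 32        # e.g. 2 wheel motors, 32 speed settings

sensory_space = sensor_levels ** n_sensors
motor_space = motor_levels ** n_motors
sensorimotor_space = sensory_space * motor_space
classical_input_space = 2 ** 10       # say, 10 binary features in a symbolic system

print(f"sensory: {sensory_space:.2e}, motor: {motor_space}, "
      f"sensory-motor: {sensorimotor_space:.2e}, classical: {classical_input_space}")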

 

There are three different meanings of the term “emergent”; the sense that matters for this book is behavior that emerges from the agent-environment interaction rather than being explicitly programmed in.

High-level ontologies leave no room for emergence!  Why not combine both levels of ontology?  The problem is that to do this, you have to make both levels compatible.  Also, you have to face the symbol grounding problem. (NB: This strikes me as an arbitrary criticism again!)

 

4.4 Explaining Behavior

 

Intelligence must be explained at four different levels or time perspectives:

  1. Short-term – why a particular behavior is displayed by an agent right now
  2. Ontogenetic – how events in the agent’s more distant past (its development over its lifetime) contribute to the current behavior
  3. Phylogenetic – how did the behavior evolve in the history of the species
  4. Functional – how does the behavior contribute to the overall fitness of the agent

These are called the four whys in biology.

 

How do we study intelligence from the synthetic perspective?  Pfeifer and Scheier outline a generic research program:

  1. Decide on research goal
  2. Define the tasks, desired behavior, ecological niche
  3. Define the low-level specifications
  4. Choose a platform
  5. Define the control architecture
  6. Define the concrete experimental setup and the experiments to be run
  7. Generate predictions, hypotheses etc. before the experiments are run
  8. Perform the experiments, collecting data on agent behavior, internal states, and sensory-motor states
  9. Describe the agent’s behavior and perform various kinds of statistical analyses
  10. Formulate explanations of the agent’s behavior.  Analyze the model’s limitations, reporting on failures.

(NB: Three things to note about this paradigm.  First, it requires analysis as well as synthesis.  Second, it is goal directed – do we really want to perform step 2 in the synthetic approach?  Third, it is largely qualitative, particularly in step 9.)