Dawson Margin Notes On

Pfeifer and Scheier, Understanding Intelligence

Chapter 3: The Fundamental Problems Of Classical Artificial Intelligence and Cognitive Science

 

The purpose of this chapter is to provide a closer examination of classical problems.  “The cognitivistic paradigm’s neglect of the fact that intelligent agents, humans, animals, and robots are embodied agents that live in a real physical world leads to significant shortcomings in explaining intelligence.”  The main implication of this claim is that a new approach is needed.  NB: We should be very even-handed about evaluating this claim, and also about evaluating the new approach that is to be championed in the book.

 

3.1 Real Worlds versus Virtual Worlds

 

One main problem with the classical approach is its focus on virtual worlds.  These worlds lead to ideal problem spaces, quite unlike the real world.  For example, compare chess with soccer.  In soccer, one has only limited information about the overall situation, and one is under time pressure and subject to the laws of physics.  In other words, the real world subjects agents to noise and other (nonlinear) complications.  “Real worlds differ significantly from virtual ones.  The problems of classical AI and cognitive science have their origin largely in a neglect of these differences.”

 

3.2 Some Well-Known Problems with Classical Systems

 

The view presented here:  Classical or symbolic systems are poorly suited for behaving in the real world.  A number of standard problems with these systems have been documented.

NB: These claims are very reminiscent of the critique of the classical approach by connectionism.

 

3.3 The Fundamental Problems

 

“All the fundamental problems of classical AI concern the relation of an agent and the real world, in particular its interaction with it.”

 

For instance, one key problem is the frame problem – how do you model change in a changing world, when that changing world is internally represented?  The difficulty is that after any action most facts remain unchanged, yet a purely representational system must still decide, for every stored fact, whether it changed; picking out only the relevant changes is hard.  A number of solutions to this problem have been proposed.  The embodied approach will make this problem disappear: “In the real world it is not necessary for us to build a representation of the situation in the first place:  We can simply look at it, which relieves us of the need for cumbersome updating processes.”  (NB: Shades of ecological perception here!)
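The cost of representational updating can be made concrete in a toy sketch (my illustration, not from the book).  A world model is a table of facts; after one action, the system must touch every fact, even though almost all are unaffected – the per-fact "no change" assertions play the role of frame axioms.  The situated alternative is simply to query the world through a sensor.

```python
# Toy illustration of the frame problem (hypothetical example, not from the book).

world_model = {
    "cup_on_table": True,
    "door_open": False,
    "light_on": True,
    # a realistic model would hold thousands of such facts
}

def apply_action_representational(model, action):
    """Update an internal world model after an action.

    Every stored fact must be considered, even though the action
    affects almost none of them.
    """
    updated = {}
    for fact, value in model.items():
        if fact in action["changes"]:
            # the rare relevant case: the action changed this fact
            updated[fact] = action["changes"][fact]
        else:
            # the common case, a "frame axiom": assert the fact is unchanged
            updated[fact] = value
    return updated

def query_environment(sensor, fact):
    """The situated alternative: 'simply look at' the world instead of
    maintaining and updating an internal model."""
    return sensor(fact)

action = {"name": "open_door", "changes": {"door_open": True}}
new_model = apply_action_representational(world_model, action)
print(new_model["door_open"])     # True
print(new_model["cup_on_table"])  # True, but only because it was re-asserted
```

The loop touches every entry of the model for a single action; a situated agent avoids that bookkeeping entirely by sensing on demand, which is the point the embodied approach presses.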

 

A second fundamental problem is the symbol grounding problem – how do you relate symbols to the real world?  (NB: This is Cummins’ problem of representation.)  Symbols must be grounded in the world by means of the system’s interactions with the world, not via an ever-present human interpreter.

 

The problems of embodiment and situatedness emerge because intelligence requires a body – if not embodied, then a system suffers from the symbol grounding problem.  Situated systems are systems that acquire information about the world through their own sensors; embodied systems are systems that exist as physical bodies in the real world.  NB: Lego robots are both embodied and situated.  Embodiment does not imply situatedness.

 

Other fundamental problems include the homunculus problem and the problem of the underlying substrate.  The latter is the claim that intelligence requires a substrate of a particular type (e.g., the brain).

 

3.4 Remedies and Alternatives

 

The chapter lists a number of options.

NB: We should return to this list at the end of the book, and re-evaluate where we fall in it.  Also, do we agree that this list is necessary?  Are the problems with the classical approach as serious as Pfeifer and Scheier would have us believe?