"The basic idea is that the mind is the program of the brain and that the mechanisms of the mind involve the same sorts of computations over representations that occur in computers."
There are two goals of machine intelligence: to develop useful technologies, and to develop rigorous theories of human mentality. The chapter starts by looking at what intelligence is.
The Turing test.
"One approach to the mind has been to avoid its mysteries by simply defining the mental in terms of the behavioural". This is a behaviourist approach to mind. The Turing test is described -- appropriately, I think -- as a behaviourist definition of intelligence. "Whatever Truing really intended his test to do, it has been widely taken as a proposal concerning the meaning of intelligence in nonmentalistic terms." One problem with the Turing test is that we are not told about how to choose a judge. Defining the abilities of a judge is difficult to do in nonmentalistic terms, and thus this is a weakness in the approach.
A second weakness of the test is that you can be fooled in a Turing test situation by a program that is not intelligent. The example given in the chapter (and in the lectures) is ELIZA. "If the program was just a bundle of tricks like ELIZA, with question types all thought of in advance, and canned responses placed in the machine, the machine would not be intelligent". (NB: Can you argue why the machine would not be intelligent?) "The point against the Turing test conception of intelligence is not that the Aunt Bubbles machine wouldn't process information the way we do, but rather that the way it does process information is unintelligent despite its performance in the Turing test". The point of both the ELIZA and the Aunt Bubbles examples in the chapter is that the Turing test is too weak a definition of intelligence, because systems can pass the test and not be intelligent. Or, more simply, they can give the right responses for the wrong reasons.
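The "bundle of tricks" point can be made concrete with a minimal ELIZA-style sketch. The patterns and canned replies below are invented for illustration (they are not Weizenbaum's originals); the point is that every question type is anticipated in advance, so the right responses come out for the wrong reasons.

```python
import re

# A toy ELIZA-style "bundle of tricks": each rule pairs an anticipated
# pattern with a canned reply template. Patterns and replies are invented
# here for illustration.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT = "Please go on."

def respond(sentence):
    """Return a canned reply: match the first rule, fill in the captured text."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am sad about my exams"))  # "Why do you say you are sad about my exams?"
print(respond("The weather is nice"))      # no rule matches: "Please go on."
```

Nothing here models the subject matter of the conversation; the program only shuffles the user's own words into pre-written templates, which is why its conversational success says little about intelligence.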
Two kinds of definitions of intelligence.
Word definition versus defining something by investigating its true nature in the world. The problem with the Turing test is that it is merely a "word definition" of intelligence. In order for us to say that some system passed this test for the right reasons, we (as cognitive scientists) have to come up with a deeper understanding of intelligence. We have to investigate its true nature, so that we can say what the "right reasons" are!
Cognitive science defines or explicates intelligence via functional analysis. The idea of functional analysis is to decompose complicated things into organized patterns of primitive processes. A primitive process is fundamental, bottom-level, "wired into" the system. A primitive process cannot be broken down any further into simpler functions.
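Functional analysis can be illustrated in miniature with binary addition: a complicated capacity is decomposed into an organized pattern of primitive processors (logic gates). This sketch is my own illustration, not from the chapter.

```python
# Functional analysis in miniature: binary addition decomposed into an
# organized pattern of primitive processors (logic gates). The gates are
# the primitives: the analysis says what they do, not how they work.

def AND(a, b):  # primitive processor
    return a & b

def XOR(a, b):  # primitive processor
    return a ^ b

def OR(a, b):   # primitive processor
    return a | b

def half_adder(a, b):
    # A non-primitive process, decomposed into two primitives.
    return XOR(a, b), AND(a, b)          # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    # Decomposed further: two half adders and an OR gate.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)                # (sum bit, carry out)

# 1 + 1 + carry 1 = 3 = binary 11
print(full_adder(1, 1, 1))  # (1, 1)
```

The decomposition bottoms out at the gates: within this analysis, "How does AND work?" is not asked; only its input-output function matters.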
"What makes a processor primitive? One answer is that for primitive processors, the question 'How does the processor work?' is not a question for cognitive science to answer". "The question of what a processor does is part of cognitive science, but the question of how it does is not". (Editorial Note: I don't agree with this point. This is a typical Classical position, in which the functional nature of cognition reigns supreme, and as a result implementational issues get ignored. My own view is that in order to know that a processor is primitive, we are likely going to be forced to come up with accounts of how they work, and this is why neuroscientists should play an important role within cognitive science).
How things work isn't important for functional accounts. "Primitive processors are the only computational devices for which behaviourism is true. Two primitive processes (such as gates) count as computationally equivalent if they have the same input-output function (that is, the same behaviour), even if one works hydraulically and the other electrically." (NB: The same point can be put another way: in functional accounts, what matters is not what things are built of, but what they do!)
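Computational equivalence of primitive processors can be shown with two differently built "gates" that share an input-output function. This is my own sketch; the two implementations stand in for the chapter's hydraulic and electrical devices.

```python
# Two "gates" that count as computationally equivalent: same input-output
# function, different inner workings (arithmetic vs. table lookup, standing
# in for hydraulic vs. electrical hardware).

def and_gate_arithmetic(a, b):
    return a * b                      # computes AND by multiplying the bits

AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def and_gate_lookup(a, b):
    return AND_TABLE[(a, b)]          # computes AND by looking the answer up

# Behaviourally identical across all inputs, hence the same computation:
for a in (0, 1):
    for b in (0, 1):
        assert and_gate_arithmetic(a, b) == and_gate_lookup(a, b)
print("same input-output function")
```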
The Mental and the Biological.
The note above makes the key point of this subsection -- in functional accounts, hardware considerations are irrelevant. "This reveals a sense in which the computer model of the mind is profoundly unbiological. We are beings who have a useful and interesting biological level of description, but the computer model of the mind aims for a level of description of the mind that abstracts away from the biological realizations of cognitive structures." "If the computer model is right, we should be able to create intelligent machines in our image -- our computational image, that is. If we can do this, we will naturally feel that the most compelling theory of the mind is one that is general enough to apply to both them and us, and this will be a computational theory, not a biological one." (NB: The key point from this section, and the preceding ones, is that the functional nature of algorithmic descriptions makes artificial intelligence a distinct possibility. The functional nature of these descriptions also makes computer simulation a viable and plausible methodology for studying psychological issues.)
Intentionality is aboutness -- the intentionality of symbols lets them represent particular meanings. What is the difference between intelligence and intentionality? You can have intentionality without having intelligence (e.g., thermostat, painting).
"The method of functional analysis that explains intelligent processes by reducing them to unintelligent mechanical processes does not explain intentionality. The parts of an intentional system can be jsut as intentional as the whole system." (NB: In other words, functional approach focusses on the algorithmic level, and ignores the semantic level. This is a big research problem for cognitive science -- how does one get intentionality, or how does one bring meaning into a formal system?)
"There is, however, an important relation between intentionality and functional decomposition. The level of primitive processors is the lowest intentional level. That is, though the inputs and outputs of primitive processors are about things, primitive processors do not contain any parts that have states that are themselves about anything."
The Brain as a Syntactic Engine Driving a Semantic Engine.
"Typically, as we functionally decompose a computational system, we reach a point where there is a shift of subject matter from tings in the world to the symbols themselves." The point of this section is to illustrate that there must be some relationship between the physical states of the brain that implement syntactic (or algorithmic or formal) properties and the semantic interpretation of these syntactic states. Of course, we know this has to be the case because it falls out of the tri-level hypothesis.
Is A Wall A Computer?
Searle has argued that everything is a computer, because you can define simple isomorphisms between states of things like walls and computational states. "The problem with this reasoning is that the isomorphism that makes a syntactic engine drive a semantic engine is more full-bodied than Searle acknowledges. In particular, the isomorphism has to include not just a particular computation that the machine does perform, but also all the computations that the machine could have performed."
The computer model of the mind can give a natural account of intelligence, but has problems with intentionality. "It is time to admit that although the computer model of the mind has a natural and straightforward account of intelligence, there is nothing natural or straightforward about its account of intentionality." The Classical approach to dealing with this problem is to assume that intentional contents are simply meanings of internal representations -- the "language of thought". "Fodor's (1975) doctrine...the meaning of external language derives from the content of thought and...the content of thought derives from the meanings of elements of the language of thought." (NB: Note that this assumption simply boils down to (1) ideas are sentences written in the language of thought, and (2) the meanings of these sentences are obtained by identifying the meanings of the simple components of such sentences. This is, in essence, the rule-governed assumption once again.) The key issue from this approach is this: how do the basic mental symbols -- the "words" used to build sentences in the "language of thought" -- get their meaning in the first place? Two approaches: (1) the meaning of a symbol is just its relation to the real world; (2) functionalism -- "what gives internal symbols their meanings is how they function...The picture is that the internal representations in our heads have a function in our deciding, deliberating, problem solving -- indeed, in our thought in general -- and that is what their meanings consist in."
"The emerging picture of how cognitive science can handle intentionality should be becoming clear. Transducers at the periphery and internal primitive processors produce and operate on symbols so as to give them their functional roles. In virtue of their functional roles, these symbols have meanings. The functional role perspective explains the mysterious correlation between the symbols and their meanings."
Objections to the Language of Thought Theory.
(1) We have an infinity of beliefs. (2) Emergent meanings aren't explicitly represented. (3) Language of thought theory must be false, because you can't create a belief just by inserting it into the "belief box".
(1) Productivity or creativity of thought. (2) Thought is systematic and combinatorial.
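Productivity and systematicity can be demonstrated with a tiny combinatorial sketch: a finite stock of symbols plus recursive rules yields an unbounded set of "thoughts", and any rule that builds one thought automatically builds its systematic variants. The vocabulary below is invented for illustration.

```python
import itertools

# Productivity and systematicity in miniature: a finite vocabulary plus
# combinatorial rules generates unboundedly many "thoughts". Vocabulary
# is invented for illustration.
NAMES = ['John', 'Mary']
VERBS = ['loves', 'fears']

def sentences(depth):
    """Generate simple thoughts (depth 0) or recursively embedded ones."""
    if depth == 0:
        return [f"{a} {v} {b}" for a, v, b in itertools.product(NAMES, VERBS, NAMES)]
    return [f"{n} believes that {s}" for n in NAMES for s in sentences(depth - 1)]

# Systematicity: the same rules that produce "John loves Mary"
# automatically produce "Mary loves John".
assert "John loves Mary" in sentences(0) and "Mary loves John" in sentences(0)
# Productivity: each level of embedding multiplies the thoughts available.
print(len(sentences(0)), len(sentences(1)), len(sentences(2)))  # 8 16 32
```

Nothing caps the depth of embedding, so the set of generable sentences is unbounded even though the vocabulary and rules are finite -- the combinatorial structure the language-of-thought theory posits for thought itself.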
"Should cognitive-science explanations appeal only to the syntactic elements in the language of thought..., or should they also appeal to the content of these symbols?" Stich favours syntax-only approach. Stich's argument is that in some respects syntax theory is better (more general, more fine grained) than semantics theory. "But there is a fatal flaw in this argument, one that applies to many reductionist arguments. The fact that syntactic explanations are better than content explanations in some respects says nothing about whether content explanations are not also better than syntactic explanations in some respects." Indeed, if this is not the case, the argument succumbs to what Block calls "the Reductionist Cruncher" -- if your believe Stich, then you must also believe (against syntax!) that even better theories exist at more basic levels. "In sum, if we could refute the content approach by showing that the syntactic approach is more general and fine-grained than the content approach, then we could also refute the syntactic approach by exhibitng the same deficiency in it relative to a still deeper theory."
Basic move -- higher-level accounts are indispensable, because they add their own kind of very real predictive power. This means that the content level cannot be tossed away a la Stich.
NB: The point of all of the chapter's work on the language of thought theory is that cognitive science has lots of difficulty accounting for meaning. In fact, the Chinese room argument -- which we go into in detail in class -- essentially points this out. When Searle asks "Where is the understanding of Chinese?", he is really asking "How does meaning enter a formal system?". In class, we will see that one problem with Searle's argument is that it relies on the Turing test. However, the basic theme -- that meaning is difficult for cognitive science -- can't be ignored.