"How Many Routes In Reading?"

by John Morton

Relevance To Lectures

In the lecture, we emphasize the use of functional decomposition to come up with an algorithmic account of a cognitive phenomenon. This chapter provides a very nice example of this approach in action. Pay attention to the increasing sophistication of the model as it is developed through the chapter. Notice that the model is represented as a "boxology". Also note the kind of evidence that is being used to support the claims that are represented in the model.


Margin Notes

Introduction

Reading is the process of converting print to meaning and to sound. The focus in this chapter is on the processing of single words.

The Nature Of The Problem

Do we have to go through sound to get to meaning? This has been the subject of a great debate for over 100 years. How do we deal with this problem? Morton takes a functional, algorithmic approach. "What we have to do, then, is to start thinking about the kinds of representation and kinds of process necessary to do the job we want to do."

The first postulate is an input lexicon and an output lexicon. The input lexicon specifies words by their visual appearance only. The output lexicon specifies the words in terms of the way they are spoken. A third component, semantics, is then added to this model. The issue -- with respect to the title of the chapter and the title of this section -- is the relation of semantics to the two lexicons. One possibility is an indirect model in which an inner voice is required to access meaning. The other possibility is a direct model in which meaning can be accessed without the inner voice. Morton prefers the direct model. Why?
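The two possibilities above can be made concrete with a minimal sketch (all names and entries here are hypothetical illustrations, not Morton's actual notation). The input lexicon maps visual word forms to word entries, the output lexicon maps entries to spoken forms, and semantics maps entries to meanings; the two models differ only in whether meaning is reached through the spoken form:

```python
# Toy "boxology" of the three components (hypothetical entries).
VISUAL_INPUT_LEXICON = {"cat": "CAT"}             # visual form -> word entry
PHONOLOGY = {"CAT": "/kat/"}                      # entry -> spoken form (output lexicon)
MEANING = {"CAT": "a small domesticated feline"}  # entry -> semantics
MEANING_FROM_SOUND = {"/kat/": "a small domesticated feline"}

def meaning_indirect(printed):
    """Indirect model: the inner voice is obligatory -- meaning can only
    be reached from the phonological form."""
    sound = PHONOLOGY[VISUAL_INPUT_LEXICON[printed]]
    return MEANING_FROM_SOUND[sound]

def meaning_direct(printed):
    """Direct model: semantics is reached straight from the input
    lexicon; no pronunciation step is required."""
    return MEANING[VISUAL_INPUT_LEXICON[printed]]
```

Both functions give the same answer for a normal reader, which is why behavioral evidence is needed to decide between them: Morton's three arguments all describe cases where the phonological step is unavailable, yet meaning is still accessed, which only the direct function allows.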

First, "it is possible to understand the meaning of words we cannot pronounce." Second, priming can occur when subjects are not aware of the priming stimulus -- so subjects can't be using the inner voice. Third, deep dyslexics appear to access semantic information independently of phonology.

"What assumption has been made about the cognitive architecture without justification?" The unjustified assumption is the number of lexicons. It could be that there is only one lexicon. This is the central idea in the logogen model, in which there is only one entry for every word, and this one entry has the visual, phonological, and semantic information all associated with it.

The aspects of this model are studied (and justified) using priming studies. Identity priming: give the subject a stimulus, then repeat it immediately after; it will be easier to process the second time. Perceptual priming: a prime helps identification of a visually presented word. But a name produced by asking subjects to name a word associated with a definition does not prime performance. "What this means is that the perceptual priming effect takes place in the input lexicon and that this lexicon is not affected when you produce the name in response to a definition." So priming must be occurring in the visual input lexicon -- not in the semantic system nor in the output lexicon. "If we do not separate the two lexicons, we cannot account for the data."

We can read strings of letters which are not words. So, independent of "wordness," there must be a mapping from letters to sound. It would be nice to have a simple set of rules to do this. But... 1) a number of sounds are represented by more than one letter, and sometimes a sound is represented by one letter, sometimes by two; 2) a particular letter or digraph can represent more than one sound; 3) some words have irregular mappings from letters to sound. Bottom line: a separate letter identification process has to be added to the model. This suggests that there are two routes involved in reading. "We will sometimes refer to the conversion to speech via the input lexicon as the lexical route, and the grapheme-phoneme route as the non-lexical route."
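The two routes just named can be sketched as a toy program (the rules, phoneme codes, and lexicon entries below are illustrative assumptions, not Morton's actual correspondences): the lexical route looks whole words up in the lexicon, while the non-lexical route applies grapheme-phoneme rules, trying digraphs before single letters.

```python
# Lexical route: whole-word lookup; handles irregular words like "yacht".
LEXICON = {"yacht": "jot", "have": "hav"}

# Non-lexical route: grapheme-phoneme rules, digraphs listed first so
# that two-letter graphemes are matched before their component letters.
GPC_RULES = [("sh", "S"), ("th", "T"),
             ("a", "a"), ("c", "k"), ("e", "i"),
             ("h", "h"), ("t", "t"), ("v", "v")]

def nonlexical_route(letters):
    """Letter-to-sound conversion; works even for nonwords like 'shat'."""
    phonemes, i = [], 0
    while i < len(letters):
        for grapheme, phoneme in GPC_RULES:
            if letters.startswith(grapheme, i):
                phonemes.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1  # no rule for this letter: skip it
    return "".join(phonemes)

def read_aloud(word):
    """Known words go through the lexical route; nonwords fall through
    to the grapheme-phoneme rules."""
    return LEXICON.get(word, nonlexical_route(word))
```

Note what happens if the non-lexical route is applied to an irregular word: `nonlexical_route("yacht")` regularizes it to the wrong sound, which is exactly why the model needs the lexical route for irregular words, and why nonwords need the rule route.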

Lexical Decision

Lexical decision task: the subject is presented a stimulus, and has to decide if it is a word or not. Reaction time is the dependent variable of interest. This is different from the word identification discussed above. "The lexical decision task seems to depend on a wider variety of process, and there are some experiments which suggest that the visual input lexicon might not be the crucial factor in the decision making. For example, cross-modal priming is found in lexical decision experiments. The cross modal priming in this task leads us to conclude that the lexical decision task relies on more central information."

Lexical decision tasks suggest that the two routes in the model operate in parallel. That is, regular words are judged to be words faster than irregular words are. Presumably, this is because regular words can be recognized by both routes, while irregular words can only be recognized via the lexical route.
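The parallel-routes idea amounts to a "race": both routes start at once, and the decision is made by whichever finishes first. A hedged sketch (the timings below are made-up illustrative values, not measured data) shows why regular words come out faster on average:

```python
# Hypothetical finishing times in ms. "mint" is regular; "pint" is
# irregular, so the grapheme-phoneme rules never succeed for it.
LEXICAL_TIME = {"mint": 600, "pint": 600}
NONLEXICAL_TIME = {"mint": 450}   # rules finish only for the regular word

def decision_time(word):
    """Race model: the response is given by the first route to finish."""
    times = [LEXICAL_TIME[word]]
    if word in NONLEXICAL_TIME:   # regular words get a second runner
        times.append(NONLEXICAL_TIME[word])
    return min(times)
```

Because the regular word has two chances to finish and takes the minimum of the two times, it is never slower than the irregular word with identical lexical access time, which is the qualitative pattern the lexical decision data show.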

Simulation Of Word Processing

Main modeling issue: one route or two? In a number of successful PDP models, there are not dual routes. In contrast, non-PDP models like that proposed by Coltheart have two routes, have very good performance, and can be viewed as challenges to the PDP approach. A second challenge to PDP models involves experiments in which pseudohomophones are used as primes -- the networks do not predict the right results.

The Cognitive Neuropsychology Of Reading

Models of the type discussed to this point in the chapter have been used to account for reading deficits. For example, there are a number of different types of dyslexias -- deep dyslexia, surface dyslexia, and others. "The accounts given of the different kinds of patient have usually been that one or more of the processing elements in the model have been damaged, or that the connections between elements have been cut." NB: The bottom line is that these different kinds of patients provide neuropsychological evidence that supports the dual route model.

How Do We Learn To Read?

The acquisition of reading proceeds through overlapping stages. The first is the logographic stage, in which words are read more as pictures. This stage has not been studied much, because it lasts only a short time and involves a small vocabulary. Writing begins at this stage, and is logographic too -- words as pictures. The second stage is the alphabetic stage. "What is happening during this stage is that the child gains access to her own phonological representations and is able to isolate individual phonemes within them." The isolation of phonemes is the most important single aspect of learning to read. This second stage is divided into two phases. In the first phase, words are segmented into component letters. In the second phase, words come to be recognized as wholes. The third stage is the orthographic stage, which "is a simple consequence of the interaction of the activity of reading, linguistic knowledge, and general cognitive-abstractive processes."

A key stumbling block in learning to read is establishing grapheme-phoneme correspondences. "While the child will have a full phonological system by this time, the phonological knowledge will be implicit, not explicit, and representations of the individual phonemes will not at first be available to enter into the mapping process." Evidence suggests that this problem is overcome by changing the nature of the phoneme representations. This is revealed by studies of "phonological awareness." The issue is the relation between reading acquisition and phonological awareness. Some research on Down's syndrome children suggests that there is no causal relation between the two. Morton argues against this, though, claiming that these studies confound competence with performance. He proposes a general view of capabilities that indicates that there is a link between the two, but which can still account for the Down's syndrome findings.
