Deductive inferences have a "self-evident" quality that makes them hard to explain. "Similarly, explaining how children learn the concepts expressed by words such as `and' and `or' quickly runs into the problem of how such learning could ever get started." Recognizing the correctness of simple inferences appears to be a primitive cognitive ability. "In fact, one strategy for cognitive science is to posit that people have a basic repertoire of inferences of this sort, which they use to deal with more complicated problems." But not all cognitive scientists believe this, because, for instance, human reasoning is prone to error, even for "simple" problems. "Certain inferences that also appear quite simple give people real difficulty, and this threatens to undermine the idea that logical inferences play a significant role in human thinking."
The central concept in deduction is the entailment relation. "The entailment relation is a very strong one, in the sense that the entailing statements provide maximal support for the entailed statement. If a set of statements X entails a statement y, then the support that X lends y is equivalent to the support that y lends itself." This idea of "maximal support" is the defining characteristic of the entailment relation. But logic can provide a more rigorous account of entailment.
Proof And Deducibility
"If it is possible to prove sentence y on the basis of the sentences inX, then y is said to be deducible from X, and deducibility is one criterion of entailment." But what counts as a proof? One example is the natural deduction method, in which logic defines a set of legal rules to be used in deduction (e.g., AND Insertion, AND Elimination, IF Elimination). (NB: Think of this in terms of being an example of another formal system!)
Truth And Semantic Entailment
The second approach to defining entailment uses relations among the truth values of statements. "In general, a set of statements X semantically entails a statement y if y is true in all possible circumstances in which each statement in X is true. Notice that this means it is possible for X to semantically entail y even if y is false in our current circumstance." But what is a possible circumstance? The answer to this question involves proposing a mathematical structure called a model. "A set of statements X semantically entails a statement y if and only if y is true in all models in which each statement in X is true. For the sorts of entailments [in simple systems], a model might consist of the set of all true atomic sentences."
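For a simple propositional system, this definition can be checked mechanically: enumerate every model (every set of true atomic sentences) and test whether y holds wherever all of X holds. A sketch under the chapter's simple-system assumption (my own illustration, not from the text):

```python
from itertools import chain, combinations

# A model is just the set of atomic sentences that are true in it.
# Sentences are atoms (strings) or tuples ("AND",p,q), ("OR",p,q),
# ("IF",p,q), ("NOT",p).

def true_in(sentence, model):
    if isinstance(sentence, str):
        return sentence in model
    op = sentence[0]
    if op == "NOT":
        return not true_in(sentence[1], model)
    p, q = true_in(sentence[1], model), true_in(sentence[2], model)
    return {"AND": p and q, "OR": p or q, "IF": (not p) or q}[op]

def atoms(sentence):
    if isinstance(sentence, str):
        return {sentence}
    return set().union(*(atoms(s) for s in sentence[1:]))

def semantically_entails(X, y):
    # y is entailed by X iff y is true in every model making all of X true.
    vocab = sorted(set().union(atoms(y), *(atoms(s) for s in X)))
    models = chain.from_iterable(combinations(vocab, r)
                                 for r in range(len(vocab) + 1))
    return all(true_in(y, set(m)) for m in models
               if all(true_in(s, set(m)) for s in X))

# "p AND q" entails p -- even though p may be false in our actual circumstance.
print(semantically_entails([("AND", "p", "q")], "p"))    # True
print(semantically_entails(["p"], ("AND", "p", "q")))    # False
```

Note how the check never asks whether y is true in the *actual* model, only in the models that satisfy X.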
"For simple systems, such as those discussed in most elementary logic textbooks, these two criteria conincide, and y will be deducible from X if and only if X semantically entails y. In more complex logical systems, however, the number of semantic entailments outstrip the possiblity of proving them all, even in theory. Such logical systems are said to be incomplete."
People have two different notions of OR -- inclusive vs. exclusive. Grice argued that the normal use of "OR" in conversation indicates uncertainty about which of the two disjoined statements is true, which affects how "OR" is typically interpreted by people in more artificial settings.
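The two readings differ in exactly one cell of the truth table, which is why ordinary conversation rarely forces a choice between them. A quick illustration (my own, not from the text):

```python
def inclusive_or(p, q):
    return p or q     # true when either disjunct holds, including both

def exclusive_or(p, q):
    return p != q     # true when exactly one disjunct holds

# The readings disagree only when both disjuncts are true:
for p in (False, True):
    for q in (False, True):
        print(p, q, inclusive_or(p, q), exclusive_or(p, q))
```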
The entailment relation has no clear psychological counterpart. How might entailment fit into psychology?
Deduction As Heuristics
The first view is that "there are no mental processes devoted exclusively to deductive reasoning. ... Instead, what passes for deductive reasoning may be the result of simple cognitive assessments based on the similarity or the salience of the entailing and the entailed statements." E.g., deduction on the basis of previous beliefs, not on the basis of logical form. However, some studies have shown that entailment can predict some component of judgement behavior, in contrast to this view.
Deduction As A Limiting Case Of Other Inference Forms
The second view is that "people are able to recognize entailments, but they do so by means of a general mechanism that covers entailment as a special or limiting case. Imagine a mental process, for example, that is able to assess the degree of support that the premises of an argument give to its conclusion." With such a mechanism, a special-purpose entailment process would not be necessary. Studies show that people are indeed sensitive to "degree of support."
"If we can account for people's judgements on both deduction and probabilistic tasks by positing just one mental mechanism, then parsimony suggests that we prefer such an explanation to one that requires two or more different processes." But there are obstacles to such a parsimonious theory. First, entailment is not the same as a conditional probability that is equal to 1.00. Second, evidence does not support the claim that all probabilistic inferences are based upon a single mechanism.
Deduction As A Special-Purpose Cognitive Component
Third view: "A more common way to locate deduction in the space of mental processes is to view it as a relatively self-contained system for checking or producing entailments." I.e., according to this view, we have special deductive mechanisms. This is consistent with experimental results, but causes problems for cognitive theory. Lots of higher cognitive processes have a deductive component. "We can regard the deduction component as supplying logical processes, such as instantination, to other mental operations; in doing so, though, we have elevated the component to a role that psychologists have usually ignored." Can we provide a general theory for deduction?
According to this approach, mental models are analogous to models from logic, and are used for deduction. But Rips wants to pursue another approach to a general model.
Assume the standard decomposition of memory, in particular a limited-capacity working memory. "Limited capacity implies that if a reasoning problem becomes too complicated, information may be forgotten and reasoning performance will degrade." Pointers from item to item in memory represent a wide variety of meaningful relations between items. Processes can write new sentences and create new pointers. "The idea is that the rules create mental proofs by producing newly deduced sentences in working memory, and stringing them together with entailment pointers. The proofs might be solutions to logical or mathematical problems, but they will more usually concern everyday tasks that the system is undertaking."
The basic unit in Rips' system is a mental predicate. Operations can be used to make more complex sentences out of basic predicates. Memory uses labeled entailment pointers to say that one sentence has been deduced from another. "We can also label an entailment pointer with the name of the inference rule that is responsible for the new sentence." So, the proof literally gets written out, step by step, in working memory!
Rips' system uses basic types of AND Introduction, AND Elimination, and IF Elimination, but requires additional constraints. "Some of the connective rules create massive inefficiencies if we allow them to produce all possible entailments from a set of premises." Solution -- distinguish two types of sentences, assertions vs. subgoals. "The trick to constraining AND Introduction and similar rules is to use them only when they can produce a subgoal." (NB: Basic theme here -- Rips needs to convert the logical or computational form of a rule into an algorithmic version that is relatively efficient!)
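The assertion/subgoal distinction can be sketched as a toy prover (hypothetical code of my own, not Rips' PSYCOP): IF Elimination runs forward, adding new assertions, while AND Introduction fires backward only when its conclusion matches a subgoal, so it never spews every possible conjunction of assertions. Each deduced sentence records a labeled entailment pointer (rule name plus premises) back to what produced it.

```python
def prove(goal, assertions, proof):
    """Try to establish the subgoal `goal` from the set of assertions."""
    if goal in assertions:
        return True
    # Forward IF Elimination: modus ponens grows the assertion set.
    for s in list(assertions):
        if isinstance(s, tuple) and s[0] == "IF" and s[1] in assertions:
            if s[2] not in assertions:
                assertions.add(s[2])
                proof.append(("IF Elimination", (s, s[1]), s[2]))
                return prove(goal, assertions, proof)
    # Backward AND Introduction: fires only because "p AND q" is a subgoal.
    if isinstance(goal, tuple) and goal[0] == "AND":
        if prove(goal[1], assertions, proof) and prove(goal[2], assertions, proof):
            assertions.add(goal)
            proof.append(("AND Introduction", (goal[1], goal[2]), goal))
            return True
    return False

proof = []
ok = prove(("AND", "q", "r"),
           {("IF", "p", "q"), ("IF", "q", "r"), "p"}, proof)
print(ok)                      # True
for rule, premises, conclusion in proof:
    print(rule, "->", conclusion)
```

The `proof` list is the analogue of the proof written out step by step in working memory, with each entry serving as a labeled entailment pointer.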
The system also needs rules for comparing names, variables, and constants, and connective rules that can handle variables and constants.
Example of the system in action -- the Tower of Hanoi puzzle. Previous work has shown that there are lots of strategies for solving this problem, such as goal recursion. This strategy can easily be expressed in terms that Rips' model can handle. The result: the model solves the problem. "This example should help clarify how it is that a system based on deduction rules can guide cognition in tasks that aren't specifically deductive. ... This does not mean that cognition is nothing but deductive reasoning. It is simply that a deduction system is rich enough to support and to coordinate a variety of procedures, some of which may not directly deal with entailments."
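The goal-recursion strategy itself is easy to state directly (Rips expresses the same strategy as deduction rules operating over subgoals; this plain-Python sketch is my own illustration):

```python
def hanoi(n, source, target, spare, moves):
    # Goal recursion: to move n disks from source to target,
    # first clear the top n-1 onto the spare peg, then move the
    # largest free disk, then restack the n-1 onto the target.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # subgoal 1
    moves.append((source, target))               # move largest disk
    hanoi(n - 1, spare, target, source, moves)   # subgoal 2

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))   # 7 -- the minimum, 2**3 - 1
```

Each recursive call corresponds to a subgoal in the deduction system's working memory; the moves fall out of discharging the subgoals in order.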
What evidence supports Rips' model? (1) Argument evaluation results -- ratings of problem easiness reflect the number of steps that the model would require to solve the problem. Ditto for ease of actually solving the problem! (2) RT differences are consistent with the model's prediction that forward rules are easier to use than backward ones. (3) Verbal protocols are consistent with the model. (NB: How do these three types of evidence relate to the kind of evidence that we talked about being required to establish the strong equivalence of a model??).
People make mistakes, though, which seems to be evidence inconsistent with the model. But errors might be due to resource limitations, and not to errors in reasoning. What about effects of bias, or nonlogical reasoning? Rips' move is to say that there is a fall-back mechanism: you use biases or other strategies when, for whatever reason, the deductive system fails.
"How can we reconcile the primitive character of the entailment relaton with evidence for human mistakes on deduction problems? ... We've seen...that there are many ways in which deduction `errors' can materialize even in a system taht operates strictly in accord with logical rules. Although the individual parts of a problem might be obvious, the correct judgement may fail to occur,due to the number or the variety of component steps, to memory or time limits, to interference from related information in memory, and to many other factors."
Pearl Street | "An Invitation To Cognitive Science" Home Page | Dawson Home Page |