How We Solve Problems
by David Green
Relevance To Lectures
In the lecture, we emphasize the general notion of problem solving being a search through a problem space. Green's chapter de-emphasizes this point, and instead considers a more specific notion - problem solving via mental models. Note that many of the ideas expressed early in the chapter can easily be viewed from the "search perspective". For instance, choosing the right mental model can be construed as a search through a space of possible mental models. Another theme that arises later in Green's chapter is the difference between novice and expert problem solvers. Note here that when he moves to a modeling account of these differences, the architectures that he describes (ACT* and SOAR) are based on production systems.
What is a problem? One exists when obstacles lie between us and a desired goal. "If thought and problem solving are in the service of needs, some of which can be met by acting in the world, there must be a relationship between thought, perception, and action." What is this relationship? Green argues that it is mediated via mental models.
General notion: human thought constructs models of reality. "The basic idea is that a mental model has a similar relation-structure to the situation that it represents." World entities are represented by tokens in the mental model. Relations among world entities are represented as relations among tokens. The conclusions that one draws depend crucially on the model that is created. Different models - leading to very different conclusions - can be created for a single state of affairs in the world. "A problem is more difficult if more than one model has to be considered. Additional models impose a load on limited working memory."
Mental models give a nice account of how one can solve problems involving spatial relations. An alternative approach to problem solving would be rule-based; mental model theorists claim that their data do not support rule-based accounts. Mental models can also be used to solve syllogisms. "Model theory supposes that individuals understand the premises and construct a model of the situation to which they refer." There are various ideas about the form of these models. The general idea is that each statement in the problem is represented with a model, and then the models are combined. Some of the models are "implicit" - they model possible situations not explicitly stated in the syllogism. One issue that model theory has to deal with is how many models must be considered, and when people stop considering alternative models. For example, people might stop when they come up with a believable conclusion. (NB: This goes beyond the logical/syntactic properties of the problem!) "The apparent willingness to accept believable but invalid conclusions is consistent with model theory, but does it mean that human beings are irrational?" Model theory also predicts that working memory constraints affect problem solving.
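The combine-and-check idea can be sketched in a few lines of Python (a toy representation of my own, far simpler than Johnson-Laird's actual implementation): each premise yields a model of token individuals, premise models are merged, and a conclusion is accepted only if it holds in the combined model.

```python
# A toy combine-and-check model for syllogisms (my own representation,
# not Johnson-Laird's code). Individuals are sets of properties; each
# premise contributes a model, and premise models are then combined.

def model_for_all(a, b):
    """A minimal model of 'All A are B': an a-token that is also a b-token."""
    return [{a, b}]

def combine(m1, m2):
    """Merge premise models, unifying tokens that share a property."""
    combined = []
    for ind1 in m1:
        merged = set(ind1)
        for ind2 in m2:
            if merged & ind2:          # tokens refer to the same individual
                merged |= ind2
        combined.append(merged)
    return combined

def holds_all(model, a, b):
    """'All A are B' holds if every individual with a also has b."""
    return all(b in ind for ind in model if a in ind)

m = combine(model_for_all("artist", "beekeeper"),
            model_for_all("beekeeper", "chemist"))
print(holds_all(m, "artist", "chemist"))   # True: the valid conclusion
```

Note that this one-model syllogism is exactly the easy case: only when premises support several distinct combined models does the working-memory load the chapter mentions arise.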
Analogical Problem Solving
Another approach to problem solving is using analogy. "In seeking to solve a problem individuals can use a solution to a problem they already know." Analogies may be used when technical knowledge is lacking. Choice of model upon which analogy is based determines the success of this approach. What other factors affect the use of analogy? Subjects can be prompted to notice an analogy. "The spontaneous retrieval of an analogy presumably depends on its similarity with the target problem. ... Individuals may be unable to solve a problem because of an inability to access a suitable analogy. ... Indeed, there may be no effective and practical procedure for the discovery of a suitable analogy, unless one can severely constrain the number of potential sources."
Three models have been proposed for how analogical reasoning proceeds. The first is the structure mapping engine (SME). It is based on the systematicity principle: there should be a preference to map coherent sets of relations. How does it work? First, it identifies identical relations between the source and target domains. Second, it selects those matches consistent with the systematicity principle. So, it considers many matches and then selects the best from those considered. SME is domain general - it operates on relations, not on kinds of content. Mappings from a source domain will depend on the structure of the target domain. SME operates serially.
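The two-step process can be illustrated with a toy matcher (hypothetical code, much simpler than the real SME): relations are predicate tuples, step one pairs identical predicates, and a consistency filter stands in for the systematicity principle by keeping the largest coherent match set.

```python
# A toy structure-mapping sketch (illustrative only, not the actual SME).
# Relations are (predicate, arg1, arg2) tuples; the example domains are
# the classic solar-system/atom analogy.

from itertools import combinations

source = [("revolves", "planet", "sun"), ("attracts", "sun", "planet")]
target = [("revolves", "electron", "nucleus"), ("attracts", "nucleus", "electron")]

def candidate_matches(src, tgt):
    """Step 1: pair relations with identical predicates."""
    return [(s, t) for s in src for t in tgt if s[0] == t[0]]

def consistent(matches):
    """A match set is consistent if it implies a one-to-one object mapping."""
    mapping = {}
    for s, t in matches:
        for a, b in zip(s[1:], t[1:]):
            if mapping.setdefault(a, b) != b:
                return False
    return len(set(mapping.values())) == len(mapping)

def best_mapping(src, tgt):
    """Step 2: keep the largest consistent match set (systematicity proxy)."""
    cands = candidate_matches(src, tgt)
    for size in range(len(cands), 0, -1):
        for subset in combinations(cands, size):
            if consistent(list(subset)):
                return list(subset)
    return []

print(len(best_mapping(source, target)))   # 2: both relations map coherently
```

The exhaustive subset search makes the "consider lots of matches, then select" character of SME explicit, though the real engine is far more efficient.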
The analogical constraint mapping engine (ACME) is unlike SME in that it operates in parallel. It also allows "pragmatic factors (the goals and purposes of the analogical process) to affect the mapping process directly." It is based on the notion of parallel constraint satisfaction. Nodes in the model represent relations and objects. Matching nodes are linked via excitatory connections. "When the network is constructed it is run until the nodes settle into a stable state. The optimal match between the two domains is reflected by those nodes whose activation exceeds a given threshold." Alternative matches to the same relation or object are given inhibitory links. Pragmatic factors are worked in by giving important nodes extra activation.
The incremental analogy machine (IAM) builds an optimal mapping in an incremental fashion, so it is as though it takes human working memory limitations into account. "The algorithm first finds a best guess or seed group by finding the part of the source domain with the highest degree of structure. ... At the next step, those relations in the seed-group for which there is no one-to-one mapping are transferred to the target domain as candidate inferences." The mapping is then evaluated. If successful, it stops. Otherwise "it considers alternatives serially and undoes previous mappings."
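The serial, undo-on-failure flavor of IAM can be sketched as follows (my simplification; the relation groups and domains are invented, and the candidate-inference step is omitted): the most structured source group seeds the mapping, and a failed seed is abandoned before the next is tried.

```python
# An incremental-mapping sketch loosely following the IAM description
# above (my own toy, not the published algorithm). The example maps a
# water-flow source onto an electrical-circuit target.

source_groups = [
    [("causes", "heat", "expansion")],                                  # small
    [("causes", "pressure", "flow"), ("greater", "pressure", "flow")],  # seed
]
target = [("causes", "voltage", "current"), ("greater", "voltage", "current")]

def try_mapping(group, target):
    """Map group objects onto target objects relation-by-relation, or fail."""
    mapping = {}
    for (pred, a, b) in group:
        match = next(((p, x, y) for (p, x, y) in target if p == pred), None)
        if match is None:
            return None                 # undo: abandon this seed entirely
        _, x, y = match
        if mapping.setdefault(a, x) != x or mapping.setdefault(b, y) != y:
            return None                 # inconsistent extension: undo
    return mapping

# Seed with the most structured (largest) group first, then fall back serially.
for group in sorted(source_groups, key=len, reverse=True):
    mapping = try_mapping(group, target)
    if mapping:
        break

print(mapping)   # {'pressure': 'voltage', 'flow': 'current'}
```

Because only one seed group is mapped at a time, the sketch never holds more than one partial mapping in memory, which is the working-memory point the chapter makes.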
One can compare these three models with respect to the number of different mappings considered before deciding upon a solution. (NB: This is akin to viewing each model as performing a search for the best analogy!) IAM looks most like people, but this result is far from conclusive. Furthermore, detailed comparisons between the models and people have not been performed.
The mental model view is consistent with the notion of problem solving as search. "On this view, a mental model is a state within a problem space and operations on the model are equivalent to traversing this space." One very general approach to search is means-end analysis. In this approach, the subject looks at the goal and attempts to find a production that will move directly from the initial state to the goal state. If such a production cannot be found, then there must be some obstacle in the way. The obstacle is identified, and eliminating it becomes a subgoal - the subject looks for a production that will get it out of the way. "These subgoals are stored on a stack. ... As soon as a production is found that matches the conditions of the subgoal, the specified action is carried out and the subgoal is removed from the stack and the next one is retrieved and so on." This is called a goal recursion strategy.
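The goal-recursion strategy can be sketched directly with a stack (the door-and-key domain and production names here are hypothetical, chosen only to make the subgoaling visible):

```python
# A toy means-ends analyzer for the goal-recursion strategy described
# above. Each production has preconditions ("pre") and effects ("add").

productions = {
    "get-key":    {"pre": set(),         "add": {"have-key"}},
    "open-door":  {"pre": {"have-key"},  "add": {"door-open"}},
    "enter-room": {"pre": {"door-open"}, "add": {"in-room"}},
}

def solve(state, goal):
    """Means-ends analysis: unmet preconditions become subgoals on a stack."""
    stack, plan = [goal], []
    while stack:
        g = stack[-1]
        if g in state:
            stack.pop()                  # (sub)goal already satisfied
            continue
        # find a production whose action achieves the current (sub)goal
        name, prod = next((n, p) for n, p in productions.items()
                          if g in p["add"])
        missing = prod["pre"] - state
        if missing:
            stack.extend(missing)        # obstacle: eliminate it first
        else:
            state |= prod["add"]         # fire the production
            plan.append(name)
            stack.pop()
    return plan

print(solve(set(), "in-room"))   # ['get-key', 'open-door', 'enter-room']
```

Note how "enter-room" cannot fire at first, so its blocked precondition becomes a subgoal, which in turn pushes another subgoal - exactly the recursion the quotation describes.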
So how does all of this relate to expertise? The earliest work on expertise found that expert chess players look at better possible moves than do novices. With respect to memory, "more highly skilled players were better at reconstructing briefly presented board positions when these made sense in chess terms. They did not differ from less skilled players when the same number of pieces were not in a sensible configuration." Why? Because of knowledge of chess stored in LTM - experts have the same memory capacity as novices, but experts have more sophisticated "chunks" of chess information.
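The chunking explanation can be made concrete with a toy simulation (the chunk inventory, piece labels, and capacity figure are my inventions, not the actual chess studies): both players get the same number of working-memory slots, but the expert's LTM chunks cover whole configurations.

```python
# A toy illustration of chunking in chess memory (hypothetical chunks
# and numbers). Both "players" share the same working-memory capacity.

# Configurations the expert has stored in LTM as single chunks.
KNOWN_CHUNKS = {("Ke1", "Rh1"), ("Ke8", "Rh8"), ("Pf2", "Pg2", "Ph2")}

def recall(position, known_chunks, capacity=4):
    """Encode known configurations as single chunks; isolated pieces cost
    one working-memory slot each. Return how many pieces come back."""
    remaining, stored = set(position), []
    for chunk in known_chunks:
        if set(chunk) <= remaining:
            stored.append(chunk)             # whole configuration, one slot
            remaining -= set(chunk)
    stored.extend((piece,) for piece in remaining)   # one slot per piece
    return sum(len(chunk) for chunk in stored[:capacity])

sensible  = ["Ke1", "Rh1", "Pf2", "Pg2", "Ph2", "Ke8", "Rh8"]
scrambled = ["Ka3", "Rb5", "Pc6", "Pd1", "Ph5", "Kg6", "Rf4"]

print(recall(sensible, KNOWN_CHUNKS))    # 7: three chunks cover every piece
print(recall(scrambled, KNOWN_CHUNKS))   # 4: no chunk applies, capacity binds
```

With no applicable chunks, the same function also models the novice: performance on sensible and scrambled positions is identical, matching the finding quoted above.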
Such differences can be modeled with production systems. For example, ACT* has a long-term declarative memory, a long-term production memory, and a working memory, and it can learn. When it starts, information about a new domain is provided in declarative form, and general problem-solving strategies are used to find appropriate actions. If these actions are successful, they are compiled into productions. "In the final state, these productions are strengthened (through successful use); generalized (by replacing specific instances with a relevant variable); and discriminated (by blocking the use of a production unless certain conditions are met)." So, according to this theory, experts differ from novices with respect to the use of productions.
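The three learning mechanisms in the quotation can be illustrated with a toy production representation (the dict format, "?" variables, and "requires" set are my invention, not ACT*'s actual notation):

```python
# A toy illustration of strengthening, generalization, and discrimination
# (my own representation, not ACT* itself).

def matches(production, goal, context):
    """A production applies if its condition matches the goal (with '?'
    wildcards) and its discrimination conditions are present in context."""
    cond_ok = all(c.startswith("?") or c == g
                  for c, g in zip(production["condition"], goal))
    return cond_ok and production["requires"] <= context

# A production compiled from one successful declarative episode.
p = {"condition": ("add", "3", "4"), "requires": set(), "strength": 1.0}

p["strength"] += 0.5                     # strengthening: successful use
p["condition"] = ("add", "?x", "?y")     # generalization: instance -> variable
p["requires"] = {"numbers-in-range"}     # discrimination: block unless met

print(matches(p, ("add", "7", "2"), {"numbers-in-range"}))   # True
print(matches(p, ("add", "7", "2"), set()))                  # False: blocked
```

After generalization the production applies to any addition goal, and after discrimination it is blocked unless the required condition holds, which is the novice-to-expert trajectory the chapter describes.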
SOAR is another approach, in which all of LTM is a production system. SOAR also learns, and models expertise in terms of the development of appropriate productions. "The notion that expertise is the result of the automatic firing of compiled or chunked productions (habits) captures the idea that experts perceive patterns rather than isolated elements." But experts are probably also flexible in their use of pre-established routines.