Computer Models of Mind: Computational Approaches in Theoretical Psychology
Margaret A. Boden, 1988

Chapter 1. Introduction
The book begins with a statement describing computational psychology, followed by a brief history of the centuries during which humans have attempted to create models of humans and animals. Most of these efforts concentrated on modeling the physical aspects of living things. Only for about the last half-century have such models been made with the intent of learning how the mind works. The result is much more than the usual rehash of "What Shakey couldn't do, and why". Her summary captures much of the feeling I had when, after reading a few Jackendoff books (1983, 1997), I set out to write an English text parser. It didn't take long to get a good sense of what it was I didn't yet understand about phrase structure. As she says, "AI's emphasis on rigour ... often points to theoretical lacunae".

Finally, we are treated to Boden's typically insightful summary of the many controversies surrounding the use of computers in the study of mind.

Chapter 2. Patterns, Polyhedra, Imagery

The Philosophical Background

Computer Vision: The First Three Phases

Images and Analogues: Conceptual Preliminaries

Imagery in Experiment and Theory

Chapter 3. Connectionist Models of Vision

The Connectionist Core

Computational Psychology According to Marr

Outline of a Theory of Vision

Marr's Methodology in Practice: Representation of the Intensity-Array

Some Objections

Further Examples of Connectionism

Chapter 4. Parsing Natural Language
Like much of the brain, the system for language understanding is not understood in more than the barest outlines. In spite of significant progress in the theoretical basis of language and a great deal of interest from the psychological point of view, the parsing strategies used by the brain are unknown.

Historical Background
Early computer software for language processing had little to say about brain mechanisms. Probably the best known of such programs, ELIZA, did no syntactic analysis at all, but merely searched for preset patterns of words and constructed a pre-programmed reply for particular patterns.
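ELIZA's method can be conveyed in a few lines: no syntactic analysis, just a search for preset word patterns, each paired with a canned reply template. The patterns and replies below are illustrative inventions in ELIZA's general style, not Weizenbaum's originals.

```python
import re

# Each rule pairs a preset word pattern with a pre-programmed reply
# template; matched text is echoed back into the reply.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the canned reply for the first matching pattern."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am unhappy"))  # Why do you say you are unhappy?
```

The point of the sketch is how little is going on: the program never builds a parse tree, which is exactly why ELIZA had nothing to say about brain mechanisms.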

A revolution in linguistic theory started with the publication of Chomsky's Syntactic Structures in 1957. For the first time, the underlying structure of language came into view, differing from earlier descriptions of the superficial patterns of words. However, not only did Chomsky's ideas on the nature of syntax sweep over the linguistics community, but his philosophical views took center stage in linguistic thought as well. Those views offered nothing but disdain for the reality of the workings of the brain, with Chomsky concentrating instead on a disembodied, descriptive approach to the theoretical structure. The result dampened the spread of the new ideas to psychology and AI.

Some interesting parsing software did begin to appear in the 1970s based on the new syntactic structures being described by the linguists. The last paragraph of the section includes a number of references to that work.

Augmented Transition-Networks
An early parser designed explicitly to capture the psychological effects of sentence understanding was the Augmented Transition Network model of Thorne in 1968. A context-free state diagram having words or phrase types as arc labels was augmented in two ways. First, the entire network state at any node could be pushed onto a stack, and parsing would proceed in an embedded sub-network; when the inner network terminated, it was discarded and the outer network popped from the stack. Second, a set of registers allowed current parsing information to be saved for use at later points in the parse.
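The two augmentations can be sketched in miniature. In this toy version, CAT arcs consume a word of a given category, PUSH arcs suspend the current network and descend into a sub-network (recursion standing in for the explicit stack), and registers record the constituents found so far. The two-network grammar and the tiny lexicon are invented for illustration, not taken from Boden's text.

```python
# Lexicon mapping words to categories (illustrative only).
LEXICON = {
    "the": "det", "a": "det",
    "horse": "noun", "barn": "noun",
    "raced": "verb", "fell": "verb",
}

# Each network is a sequence of arcs: ("CAT", category, register)
# consumes one word; ("PUSH", subnetwork, register) recurses into a
# sub-network. All arcs must succeed in order.
NETWORKS = {
    "S":  [("PUSH", "NP", "subject"), ("CAT", "verb", "verb")],
    "NP": [("CAT", "det", "det"), ("CAT", "noun", "head")],
}

def parse(net, words, pos):
    """Traverse `net` starting at `pos`; return (registers, new_pos) or None."""
    registers = {}
    for kind, target, reg in NETWORKS[net]:
        if kind == "CAT":
            if pos < len(words) and LEXICON.get(words[pos]) == target:
                registers[reg] = words[pos]
                pos += 1
            else:
                return None
        else:  # PUSH: recursion plays the role of the network stack
            result = parse(target, words, pos)
            if result is None:
                return None
            registers[reg], pos = result
    return registers, pos

print(parse("S", "the horse fell".split(), 0))
```

A real ATN adds arbitrary tests and actions on the arcs and register contents, which is what lifts its power beyond context-free; this sketch shows only the stack-and-registers skeleton.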

The psychological model represented by these networks held that sentence processing did not involve any form of backtracking, as was often employed by various computer language parsers. A number of psychologists became interested in pursuing questions such as the effort involved in processing various types of sentence structures. Of particular interest were embedded clause patterns, and especially the "garden path" sentences, where the first words led to a false interpretation of the actual sentence structure.

  • The horse raced past the barn fell.
The actual structure can be easily seen by forcing exposure of the clause pattern.
  • The horse (that was) raced past the barn fell.
One type of revealing psychological experiment consisted of asking the subject to note the location of a click superimposed on various types of sentences. It was found that listeners tended to "hear" the click as misplaced toward a syntactic structure boundary. Other means were also devised for measuring the effort spent on sentence processing. It was clear that extra effort was being spent in the garden path sentences at the point where the clause structure evidently conflicted with the preliminary parse as constructed up to that point.

The ATN parser, per se, is not inherently a non-backtracking model. Various versions with differing psychological consequences were explored. Wanner and Maratsos introduced a model claimed to be closer to psychological reality by adding a "hold" function: up to three incomplete clause structures could be saved in a buffer to be completed at a later point in the parse. The approach was criticized as being of no interest to theoretical linguistics.

The Autonomy of Syntax
In Chomsky's early grammars, syntax was autonomous. What does that mean? Two possibilities are that (1) it is possible to define syntactic structure independently of semantic considerations, and (2) a listener might be able to assign syntactic interpretations independently of semantic information. (Two other senses will be described in chapter 5, pp. 132, 134.)

A basic psychological question, then, is whether (or to what extent) semantic information really is used in parsing sentences.

How Our Minds Might Determine Our Syntax

Grammar Liberated from Context

The Generation of Syntax

Chapter 5. Meaning and Messages

Semantic Primitives, and Compounds Thereof

Psychological Semantics

Ill-Behaved Sentences, Well-Conducted Conversations

Computer Models of Speech

Chapter 6. Reasoning and Rationality

From GPS to Production Systems

Critiques of Newell and Simon

Can There Be a Theory of Problem-Solving?

Mental Models versus Logical Rules

Chapter 7. Learning and Development

The Poverty of Empiricism

Skills and Task-Analysis

Meta-Epistemology and General Principles of Learning

Is Development Different?

Connectionist Approaches to Learning

Chapter 8. Is Computational Psychology Possible?

Competence and Task-Definition

Formalism: For and Against

Are Programs Pure Syntax?

Computation and Connectionism

Chapter 9. Conclusion

