The Simulation of Human Intelligence
Donald Broadbent (ed), 1993

Preface
This series of Wolfson College lectures was given at Oxford in 1991 with the aim of being accessible to the non-specialist. The rapid developments in computing techniques during the past decade have "stimulated new thinking among psychologists, physiologists and philosophers, about the nature of our minds and brains".

Chapter 1. Setting the Scene: The Claim and the Issues
Roger Penrose

Penrose opens his talk with a summary of some of the philosophical positions currently in vogue on the topic of the mind/brain question. He lists the following four alternative viewpoints:

1. All thinking is computation, and the carrying out of appropriate computations will evoke feelings of conscious awareness.
2. It is the brain's physical action that evokes awareness and any physical action can in principle be simulated computationally, but computational simulation by itself cannot evoke awareness.
3. Appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally.
4. Awareness cannot be explained in physical, computational, or any other scientific terms.

The first position he identifies as Strong AI or functionalism and describes in fairly unbiased terms. He correctly points out that this is the view of a great many scientifically inclined thinkers. (It is also my position.) Point 2 is redefined on the following pages in behavioral terms, and the concept of computer simulation is expanded. Penrose identifies point 3 as the one he finds closest to the truth; it, again, is described with respect to computational simulation. Point 4 is not discussed further.

Chapter 2. The Approach Through Symbols
Allen Newell, Richard Young and Thad Polk

Why did the organizers of this lecture series include the word "simulation" in the name? I have previously discussed Penrose's reaction. Here, the authors contemplate that question, contrasting simulations based on existing theoretical grounds with simulations intended to map out the territory for the determination of new theories. They suggest that the lecture series title has the unfortunate side-effect of placing an unintended distance between the programmers and the theorists.

Newell, Young and Polk propose a few simple questions to pin down what it is they are looking for in intelligence and how they will look for it, and they set out a few definitions to fix the goals of the search. They further reduce the task to a single question and consider what form of theory of intelligence would be satisfied by the behavior of a system designed to exhibit intelligence. The theory will, in fact, be precisely the architecture of the system they design to exhibit the expected behavior. They do make the additional, if somewhat cryptic, stipulation that the theory should "exhibit its intelligence in human ways, and definitely not in ways that are far from how humans exercise their intelligence".

Newell, Young and Polk then proceed by forming a hypothesis about the kind of mechanism they will need to accomplish these functions. For this, they decide to use a computational system, which they also characterize as a symbol system, adding a footnote warning that not everyone will be happy to include neural networks within the domain of symbol systems.

The rest of the chapter is a fairly detailed description of how one might construct a program capable of solving syllogisms, along with copious observations about the sort of problems which might arise and differences in the ways humans might approach the task as compared to the behavior of the computational system.
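
Their worked example invites a concrete illustration of what "solving syllogisms" computationally can mean. The sketch below is emphatically not the authors' program (theirs is built within their own cognitive architecture); it is only my own brute-force illustration, which decides a classical syllogism by enumerating every small model over the three terms and checking whether any model satisfies the premises while falsifying the conclusion. The statement forms and the example arguments are my own choices, and the reading is the modern one in which empty terms are allowed.

    from itertools import product

    # An "individual" is a triple of booleans (in_A, in_B, in_C); a model is the
    # set of individual types that actually occur.  For three monadic terms,
    # checking all 2**8 possible models is enough to decide syllogistic validity.
    TYPES = list(product([False, True], repeat=3))

    def holds(statement, model):
        """Evaluate a categorical statement such as ('all', 0, 1) = 'All A are B'
        over a model (a set of occupied types).  Terms are indexed 0=A, 1=B, 2=C."""
        form, s, p = statement
        if form == 'all':        # All S are P
            return all(t[p] for t in model if t[s])
        if form == 'no':         # No S are P
            return not any(t[s] and t[p] for t in model)
        if form == 'some':       # Some S are P
            return any(t[s] and t[p] for t in model)
        if form == 'some_not':   # Some S are not P
            return any(t[s] and not t[p] for t in model)
        raise ValueError(form)

    def valid(premises, conclusion):
        """Valid iff no model makes every premise true and the conclusion false."""
        for occupied in product([False, True], repeat=len(TYPES)):
            model = {t for t, occ in zip(TYPES, occupied) if occ}
            if all(holds(p, model) for p in premises) and not holds(conclusion, model):
                return False
        return True

    # Barbara: All A are B, All B are C, therefore All A are C  -> True
    print(valid([('all', 0, 1), ('all', 1, 2)], ('all', 0, 2)))
    # Invalid: All A are B, All C are B, therefore All A are C  -> False
    print(valid([('all', 0, 1), ('all', 2, 1)], ('all', 0, 2)))

Part of the interest of the chapter, of course, is precisely that humans do not behave like this exhaustive checker, and the authors dwell on those differences.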

Chapter 3. Sub-Symbolic Modelling of Hand-Eye Co-ordination
Dana H. Ballard

Ballard explores a model of visual analysis which leads to simpler models for hand-eye coordination. The central issue is the use of a "fixation point" coordinate system, centered at the point of convergence of the two eye axes and oriented toward the dominant eye, rather than the typical observer-centered coordinate system proposed by Marr and others. The system is used to control a robot arm equipped with a "head-mounted" pair of cameras. The cameras tilt together but pan independently, allowing fixation at a specific distance.

Objects in the visual field are "indicated" by a symbolic pointer which Ballard calls a marker. The difference between markers and the usual symbolic references to objects is that a marker is a dynamic reference, associated with whatever object is currently at the fixation point, not with a specific, identified object. There are only a very few markers; Ballard limits his model to two. A marker may be of either of two types: an overt marker, associated with an action that can affect the external world, and a perceptual marker, with no such action association, used for internal purposes only.
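
The idea can be made concrete with a small data-structure sketch. The class name, fields, and example bindings below are my own invention rather than Ballard's notation; the point is only that a marker binds an action or an internal routine to "whatever is currently at fixation" instead of to a persistently identified object.

    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple

    @dataclass
    class Marker:
        """A deictic pointer: it refers to whatever lies at the bound fixation
        point, not to a persistently identified object."""
        kind: str                                              # 'overt' or 'perceptual'
        location: Optional[Tuple[float, float, float]] = None  # fixation-centered coordinates
        action: Optional[Callable[['Marker'], None]] = None    # meaningful only for overt markers

        def bind(self, fixation_xyz):
            """(Re)bind the marker to the current fixation point."""
            self.location = fixation_xyz

        def act(self):
            if self.kind == 'overt' and self.action and self.location is not None:
                self.action(self)

    # Ballard limits the model to two markers: here, one to act with, one to remember with.
    grasp_target = Marker(kind='overt', action=lambda m: print('reach toward', m.location))
    reference    = Marker(kind='perceptual')

    grasp_target.bind((0.0, 0.1, 0.45))   # fixate the object to be picked up
    reference.bind((0.2, 0.0, 0.50))      # note where it should be put down
    grasp_target.act()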

This fixation-point based marker system allows a very efficient use of resources for observing enough of the external world to be able to manipulate objects. Several human experiments which measured the time for tasks requiring eye saccades and object manipulation appear to support the marker and fixation-point model.

The crowning conclusion Ballard draws from working with this system is that observing and manipulating objects does not require constructing any sort of world model. All that exists as a model is the temporary and dynamic reference to the object currently at the point of eye fixation. What seems to us to be a unified, coherent internal model of the external world is really no more than fleeting, fragmentary models of a very limited number of objects.

An expanded version of this paper is available in the Behavioral and Brain Sciences archive.

Chapter 4. Networks in the Brain
Edmund T. Rolls

In this chapter, Rolls presents a concise and readable account of the nature of the neural connections to, from, and within the hippocampus, and of the way those neural structures work to produce our factual memories (memories that psychologists call episodic and semantic). These brain structures are also put into perspective with some additional material about how certain brain pathologies relate to the parts of the brain covered here.

The chapter ends with some discussion about how the hippocampal structures differ from other brain structures and what sorts of things we might expect to see as the neurologists venture forward and begin to uncover comparable levels of detail in other brain areas.

Chapter 5. Computational Vision
Michael Brady

This chapter is a brief summary of some of the many operators and algorithms which have recently been applied to the task of visual analysis. Much of the emphasis is on operations, often performed on small regions of the image, that can be shown to yield interesting, and sometimes even useful, properties of the objects represented.
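
As an illustration of the kind of local operator Brady surveys, the sketch below applies a Sobel gradient operator over 3x3 neighbourhoods of a tiny grayscale image. The Sobel operator is a standard edge detector, chosen here by me as a representative example rather than taken from the chapter.

    # Sobel gradient magnitude over the interior pixels of a small grayscale image.
    SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

    def sobel(image):
        h, w = len(image), len(image[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                out[y][x] = (gx * gx + gy * gy) ** 0.5
        return out

    # A 5x5 image with a vertical step edge: the operator responds along the edge.
    img = [[0, 0, 10, 10, 10]] * 5
    for row in sobel(img):
        print([round(v, 1) for v in row])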

Chapter 6. The Handling of Natural Language
Gerald Gazdar

Gazdar distinguishes between computer handling of text and processing of spoken language, and makes it clear that the chapter deals only with text processing. The remainder of the chapter is essentially a summary of work done during the 1980s in the field of natural language processing (NLP), and of how the emphasis has changed for the 1990s.

Gazdar argues that one of the big changes from the 70s to the 80s was the acceptance of the idea that a grammar was necessary as a basis for any significant work with natural language. The major developments in such processing were unification grammars and chart parsing. Another large advance was the growth of interest in morphological processing: analyzing words into their constituent morphemes.
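
A minimal flavour of chart parsing (though not of the unification grammars Gazdar has in mind) can be given with a CKY-style recognizer for a toy context-free grammar in Chomsky normal form. The grammar, lexicon, and sentences below are mine, chosen only to show how a chart of partial analyses over word spans is filled in.

    # CKY chart recognition for a toy grammar in Chomsky normal form.
    GRAMMAR = {            # A -> B C  (binary rules)
        ('NP', 'VP'): {'S'},
        ('Det', 'N'): {'NP'},
        ('V', 'NP'): {'VP'},
    }
    LEXICON = {            # A -> word  (lexical rules)
        'the': {'Det'}, 'dog': {'N'}, 'cat': {'N'}, 'saw': {'V'},
    }

    def cky_recognize(words, start='S'):
        n = len(words)
        # chart[i][j] holds the categories spanning words[i:j]
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            chart[i][i + 1] = set(LEXICON.get(w, set()))
        for span in range(2, n + 1):
            for i in range(0, n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for b in chart[i][k]:
                        for c in chart[k][j]:
                            chart[i][j] |= GRAMMAR.get((b, c), set())
        return start in chart[0][n]

    print(cky_recognize('the dog saw the cat'.split()))   # True
    print(cky_recognize('dog the saw'.split()))           # False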

The big change during this period, however, was a shift from procedural semantics to declarative semantics. If somebody walks up to you on the street and asks how to get to the train station, you can either recite a sequence of directions, "Go to the corner, turn left, go 5 blocks, etc." (a procedural description), or you can hand them a map of the city (a declarative description). Declarative representations have the advantage of being more portable; that is, they are not so closely bound to the details of the present situation.
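
The map analogy translates fairly directly into code. In the hypothetical sketch below, which is my own illustration rather than anything from the chapter, the procedural version hard-codes one route from one starting point, while the declarative version stores the street map as a graph and lets a general-purpose search answer the same question from anywhere; the street names are invented.

    from collections import deque

    # Procedural: a canned sequence of directions, valid only from one starting point.
    def directions_to_station():
        return ["go to the corner", "turn left", "go 5 blocks", "station is on the right"]

    # Declarative: the knowledge is the map (a graph); a generic search procedure uses it.
    CITY_MAP = {
        'market square': ['corner', 'river walk'],
        'corner': ['market square', 'high street'],
        'high street': ['corner', 'station'],
        'river walk': ['market square'],
        'station': ['high street'],
    }

    def route(start, goal, city_map):
        """Breadth-first search over the map; works for any start and goal."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in city_map.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(route('market square', 'station', CITY_MAP))
    # ['market square', 'corner', 'high street', 'station']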

Perhaps the most interesting subtopic, as Gazdar presents it, is a problem which, in his view, remains one of the major issues of NLP into the 1990s: ambiguity. He lists some examples of the exponential explosion of possible meanings of a moderate-length sentence when only a few alternative interpretations of some of the words are considered.
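
The arithmetic behind that explosion is simple to reproduce; the sense counts below are illustrative figures of my own, not Gazdar's examples.

    from math import prod

    # Word-sense choices for each word of a ten-word sentence (invented figures).
    senses_per_word = [1, 3, 2, 1, 4, 2, 1, 3, 2, 2]
    readings = prod(senses_per_word)
    print(readings)   # 576 candidate readings, before any structural ambiguity multiplies it further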

Gazdar ends the chapter with some descriptive examples of natural language processing systems which are in use in real-world applications.

Chapter 7. The Impact on Philosophy
Margaret A. Boden

As always, Boden offers her insightful comments on this volume of papers on intelligence. In response to Penrose's position, she seems to have more to say to Searle than to Penrose. This reflects the nature of the arguments posed by these two philosophers, as those arguments resonate with the purposes of the present volume.

Chapter 8. Comparison with Human Experiments
Donald Broadbent

