The Simulation of Human Intelligence
Donald Broadbent (ed), 1993

Thought Provoking
I found this book to be an extremely interesting collection of thoughtful and relevant papers concerning the nature of intelligence and the application of various AI techniques in exhibiting such intelligence.

What is Strong AI?
Note that Penrose's definition of Strong AI, as given in point 1, that computation can account for all thinking and feelings, is not the same as Searle's definition, that the mind is software running in the hardware brain. It was Searle, I believe, who coined the term.

What is Computational Simulation?
In point 2, things begin to get a little slippery. (See also Chapter 2.) In fairness, it should be noted that John Searle is frequently quoted throughout this section and surely must share some of the brunt of any such charges. Point 2 is stated in terms of computational simulation, and Penrose uses this terminology throughout the discussion, but with at least two distinct meanings. In the computer science sense, a simulation would be a computation in which numerical values are substituted for real-world physical quantities, such as simulating the economy by setting up sequences of interest rate changes, productivity, etc. This is clearly the meaning he has in mind in quoting Searle's apparently obvious counterexample, that "A computer simulation of a hurricane, for example, is certainly no hurricane!".

In the rest of the discussion, however, he uses the term "simulation" in a way that seems to be synonymous with any type of computation itself. To make the point specific with a simple device, a thermostat can clearly be implemented by almost any type of computer. It is a completely separate question whether you write a program to simulate the action of the thermostat, by reading a list of recorded temperature changes and printing out the times at which the heating would be adjusted, or a program which reads a real temperature sensor located within the room in question and actually turns the heater on and off, in fact controlling the room temperature. Only the first would be a simulation in the computer science sense. Only the second, which I will call running a thermostat program, can have real-world effects of the intended type.
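
To make that distinction concrete, here is a minimal sketch in Python (my illustration; the sensor and heater interfaces are hypothetical). Both programs apply the same decision rule, but only the second touches the world:

    SETPOINT = 20.0  # target room temperature, degrees C

    def simulate_thermostat(recorded_temps):
        # Simulation in the computer science sense: numbers stand in for
        # physical quantities, and nothing in the world changes.
        for minute, temp in enumerate(recorded_temps):
            action = "ON" if temp < SETPOINT else "OFF"
            print(f"t={minute}: heating would switch {action}")

    def run_thermostat(sensor, heater):
        # Running a thermostat program: the same rule, but the input comes
        # from a real sensor and the output drives a real heater, so the
        # program actually controls the room temperature.
        while True:
            if sensor.read() < SETPOINT:
                heater.on()
            else:
                heater.off()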

So the question is not whether a computer simulating a brain could possibly be conscious, but whether a computer running a (sufficiently complex) brain program could possibly be conscious. This is quite a different matter, completely unscathed by the "hurricane" counterexample.

Formal Mathematical Arguments
Penrose repeats here in full detail the argument first presented, I believe, in The Emperor's New Mind, that the brain cannot be based on formal mathematical computational systems, such as exemplified by a Turing machine. Although this argument has been refuted in several places, Penrose continues to argue that it proves the brain cannot be a computer-based system.

He asks here how a mathematician could possibly construct a complicated proof if that person's brain were limited to the computational power of an "unsound" algorithm. I think Penrose grossly underestimates the power of such approximate "algorithmic" procedures. I use the term "algorithm" here in the same sense as does Dennett in his portrayal of evolution as an algorithm. (Also see below.) Penrose does briefly mention the methods generally lumped as "heuristics" and the largely unknown potential of learning networks. One extremely important class of approximate algorithms he does not mention includes those capable of stepwise refinement of the data. This is surely how mathematical arguments are typically constructed, as well as how proofs are refined and accepted by the mathematical community. These methods become even more powerful when combined with learning capabilities.

Penrose could argue that such stepwise refining processes could be lumped together and considered as a single algorithm, but to do so serves little purpose other than to make the result easier to sweep under the corner of his carpet of disregard. And even this strategy is not available to him when one considers the cooperation among multiple, independent processes, such as community contributions to a mathematical proof or a scientific argument. Furthermore, Penrose's argument that approximate algorithms (he mentions heuristic methods) necessarily give inferior results is not applicable to these stepwise refinement methods. It is true that they do not always give accurate results on the first pass. Their accuracy is a function of the effort expended on the problem.
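
As a minimal illustration of this point (my example, not Penrose's), consider Newton's method for square roots: each pass refines the previous estimate, so the accuracy of the answer is precisely a function of the effort expended:

    def refine_sqrt(x, estimate=1.0, passes=5):
        # Stepwise refinement: each iteration improves on the last,
        # with no guarantee of exactness after any finite number of passes.
        for _ in range(passes):
            estimate = 0.5 * (estimate + x / estimate)
        return estimate

    print(refine_sqrt(2.0, passes=1))  # 1.5 -- a rough first approximation
    print(refine_sqrt(2.0, passes=5))  # approx. 1.41421356..., close to sqrt(2)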

Penrose's entire tack in this section is to show that the brain does things which cannot be described as algorithmic (in his formal sense). What he claims to have shown with the formal argument is that the brain is able to do things which cannot be proven to be mathematically correct. He then proceeds to say that since mathematicians can in fact produce elaborate proofs, the brain must be capable of doing something which cannot be expressed as an algorithm. I believe I have shown this to be wrong.

What Is an Algorithm?
Penrose uses the term algorithm in a very limited way that pertains only to provable correctness, whereas Dennett and others would use the term to cover a wider variety of processes. Penrose says that "an 'algorithmic action' is one which could be carried out in principle by a modern general-purpose electronic computer (with an unlimited store) -- which, technically, means a Turing machine". He proceeds to search through the various methods of computation for some "non-algorithmic" influence which could save the day for the brain. Having looked at heuristics, learning systems, a changing environment, randomness, chaos, and even mysticism for that special ingredient, he finally concludes that the answer must reside in another little-understood domain, quantum mechanics.

The question is, of course, whether a mathematician really needs a provably correct algorithmic brain, in the Gödelian or Turing sense, in order to accomplish elaborate mathematical results. My reply, and, I believe, that of many others, is that the formal requirements are way overkill. There are many ways to do computationally fantastic things without requiring access to a provably formally correct computing system. Our brains are sloppy computers, but we manage, nevertheless. And I see nothing (except our rate of scientific progress) to stand in our way of building equivalent (and thus, conscious) mechanical systems.

But What, Then, is Consciousness?
At the root of the problem is that Penrose, like many other philosophers, cannot imagine that a computational system could in any manner of design be conscious. (Some of the same philosophers claim to be able to imagine a brain, functioning just like yours or mine, which is not conscious). Since Penrose agrees with most of us that we are indeed conscious, he then needs something else besides computation to explain the brain.

I will argue, instead, that consciousness arises from purely mechanical actions by the brain that will be completely obvious once we understand those mechanical systems, in much the same way that the chemical bases of life are now considered by most to be obvious, if rather complicated, in the sense that no outside "elan vital" is needed by way of explanation. I am quite sure that those mechanical actions will be found to have a lot to do with the integration of sensory modalities for purposes of memory facilitation, in order for the organism to use that sensory input efficiently in planning a future course of action. Big deal. We knew that. But the clincher is that consciousness results. How and why is the subject of this web site. Stay tuned.

A Curious Contrast
In setting out the requirements of their theory of intelligence, Newell, Young and Polk make the curious contrast that "Intelligence is a functional capability of the human mind. Emotions, pains and qualia are not". Surely, pains are qualia. But what are emotions? Emotions seem to have two sides, the internal and the external, or as Damasio would put it, the personal response side and the public display side. Now, I would expect that the authors N, Y & P would agree that the public display parts of an emotion are functional capabilities, so they must be talking about the internal, personal parts. But those are surely also qualia. So they are really making the contrast between intelligence (as functional) and consciousness (as non-functional). We see in the next sentence why they want to make this contrast; specifically, that they may be able to meet their goals by producing the function " -- by means other than the original, perhaps -- but nevertheless to produce the function". Presumably, they recognize that it may not be so easy to "produce the function" of consciousness.

Although I would argue, along with Dennett I think, that consciousness is in fact functional, this does not detract from the original goal of N, Y & P to use functionality to explore intelligence.

Markers are Symbols
Contrary to the title of the piece, Ballard's system does seem to make use of a kind of symbol in the form of the pointer he calls a marker. However, I would agree that this is not the same category of symbol as is often used in a typical computer world model, in which each object encountered is assigned its own symbolic reference. Here, the reference is temporary and has as its referent certain object properties, rather than the object itself.
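
A speculative sketch of the contrast, as I read it (this is my own gloss in Python, not Ballard's code):

    # Classical world model: every object encountered gets its own
    # persistent symbolic reference.
    world_model = {
        "cup_17": {"color": "red", "shape": "cylinder"},
        "cup_18": {"color": "blue", "shape": "cylinder"},
    }

    # A marker is instead a temporary pointer, rebound as attention moves;
    # it refers to properties at the current fixation, not to an object
    # identity that persists across the scene.
    markers = {}

    def bind(name, fixation_properties):
        markers[name] = fixation_properties  # overwritten at the next fixation

    bind("target", {"color": "red", "shape": "cylinder"})
    # After the next fixation, "target" may point at something else entirely.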

Solution by a Change of Coordinate System
I find it very interesting that so much is gained in manipulative power, and at the same time much is lost in reference precision, all due to a change of the coordinate system. It makes me wonder whether there might be some such alternative "coordinate system" suitable for use in forming linguistic concepts. Some such change might be the answer to the apparent failure of self-contained symbol networks, sometimes called "ontologies" (see Sensu), to be any more than glorified, interconnected dictionaries.

No World Model!
Ballard's conclusion about the lack of a unified world model seems to agree with Dennett's conclusion that there is no single, unified consciousness -- that what we are aware of is, in fact, nothing but a series of fleeting glimpses of reality. I do generally agree with this position, but I have one caution to offer. It will not do to say that something which we perceive is merely an illusion and is not really represented in our brains, because whatever we perceive must somehow be represented in our brains. Whatever form of binding may be required, the various objects referenced by the fleeting markers must be connected in some way, simply because we perceive them to be connected.

Many Hippocampal Details
Rolls provides a lot of detail here, maybe more than you want to take in at one reading. I certainly did not absorb it all at once. On the other hand, I did not find that any of the details were spurious or unnecessary to make the point. The material includes things you need to know about neurotransmitters and about neuron firing characteristics in order to get the full picture of the way the hippocampus facilitates the formation of memories. I have recently started an essay on this topic.

Visual Analysis by Formal Math
I found this chapter, more so than the rest of the book, to be in the spirit of GOFAI (good old-fashioned AI). Perhaps it was not Brady's intent for it to come out that way, but the overall sense I got was of a lot of formal mathematical operations which you would perform on the image, resulting in proofs of the presence of various object properties. I did not get the sense of any hint of neat ways to get the same or equally useful information by some elegant, but simple, twist on the way you are thinking about the problem.

Semantic Ambiguity
In chapter 6, Gazdar argues that semantic ambiguity remains a major problem in natural language processing. He is confident that the problem is "not about to go away". I find this to be an excellent example of the fundamental advancements which are possible as a result of a seemingly simple shift in processing techniques. In the GOFAI tradition, ambiguity was a concern because the methods of processing required that all alternate pathways through the logical structure be evaluated. The use of network semantic systems fundamentally changes this pattern by automatically providing the solution which best fits the descriptors of the current situation. It is not necessary to explicitly process the alternatives. This is just the kind of shift which Ballard advocates in changing from a fixed coordinate system to a gaze-centered coordinate system.
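
A toy sketch of the best-fit idea (my illustration in Python, not anything from the chapter): each sense of an ambiguous word carries feature weights, and the sense scoring highest against the current context simply wins, with no explicit enumeration of alternative pathways:

    SENSES = {
        "bank/river":   {"water": 1.0, "fishing": 0.8, "money": -0.5},
        "bank/finance": {"money": 1.0, "loan": 0.9, "water": -0.5},
    }

    def best_fit(context_features):
        # Score every sense against the descriptors of the current
        # situation; the best fit emerges without explicit search
        # through alternative interpretations.
        def score(sense):
            return sum(SENSES[sense].get(f, 0.0) for f in context_features)
        return max(SENSES, key=score)

    print(best_fit({"water", "fishing"}))  # -> bank/river
    print(best_fit({"money", "loan"}))     # -> bank/finance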

