Android Epistemology
Kenneth M. Ford et al. (eds.), 1995

Looking Backward
In general, I find this book somewhat of a disappointment. With notable exceptions, the chapters by the individual authors, many of them well-known and influential thinkers, seem to have been hastily assembled from previously existing and often dated material.

That complaint does not fit well with the fact that the book arose out of a conference on androids; if the chapters grew out of that conference, they must reflect the authors' current thinking.

Understand that I have no problem with robotics, machine intelligence, or (up to a point) philosophical pondering of these issues. What I do not go wild over is everything that has been attempted or claimed with formal logic.

Most of the chapters, in the best of philosophical traditions, present formal logical arguments for one or another aspect of an android's mentality, all based on vaguely understood operations applied to vaguely understood objects. I rarely find the characterization of those underlying objects and operations to be without flaw, and it is rarer still to find a chain of reasoning built on them that I am willing to follow through to its conclusion. (You might say I am not keen on formal logic.) Jackendoff suggests that logic is devoid of empirical content, in part because it fails to account for natural-language semantics.

In the Introduction, the editors state their intention to present an updated view of current thinking.

If I had to describe the book in one word, it would be hidebound: constrained by the very methodology of the traditional AI camp that it professes to have outgrown.




Creativity
Boden's contribution (chapter 4) is thoughtful and still relevant today, though it was written in the early 1990s.




Robot Concepts and Embodiment
This paper (chapter 8) by Ronald Chrisley illustrates the point I tried to make above about the book in general. Chrisley pursues the distinction between thinking and non-thinking forms of perception, making a case for the latter, which he calls non-conceptual content (NCC). Being non-conceptual in nature, NCC is not bound by the usual rules of formal logic. The chapter is thus a formal argument for something that is by nature informal. Give me a break!

Robotic Imagination
Chapter 9 is a proposal for integrating the traditional AI "conceptual" component with the real-world demands placed on a "robotic" component. The core idea, to my knowledge original with Stein, is that an "imagination" component, which allows internal manipulation (via memory recall) of sensory constructs, is an efficient way of implementing an internal world model. This avoids having to maintain a vast database of "common sense"; such information is instead computed by massaging the world model.
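To make the idea concrete, here is a minimal toy sketch of how such an "imagination" component might be organized, assuming a trivially simple scene representation. The class names, the snapshot structure, and the collision query are my own illustrative inventions, not Stein's design.

```python
# A toy sketch of the "imagination" idea as I read it: instead of storing
# common-sense facts explicitly, the agent recalls remembered sensory scenes
# and manipulates them internally to answer queries. The classes, the
# object->position scene format, and the query below are my own inventions.
from dataclasses import dataclass

@dataclass
class Snapshot:
    """A remembered sensory scene: object name -> (x, y) position."""
    objects: dict

class Imagination:
    def __init__(self):
        self.memory = []                       # stored sensory snapshots

    def perceive(self, scene):
        """Record a new sensory snapshot."""
        self.memory.append(Snapshot(dict(scene)))

    def recall(self, index=-1):
        """Bring a stored snapshot back as a manipulable copy."""
        return Snapshot(dict(self.memory[index].objects))

    def imagine_move(self, snapshot, obj, dx, dy):
        """Internally manipulate the recalled scene: move one object."""
        x, y = snapshot.objects[obj]
        snapshot.objects[obj] = (x + dx, y + dy)
        return snapshot

    def same_place(self, snapshot, a, b):
        """Answer a query by inspecting the imagined scene, not a fact table."""
        return snapshot.objects[a] == snapshot.objects[b]

# Usage: remember a scene, then imagine sliding the cup toward the wall.
imag = Imagination()
imag.perceive({"cup": (2, 3), "wall": (5, 3)})
imagined = imag.imagine_move(imag.recall(), "cup", 3, 0)
print(imag.same_place(imagined, "cup", "wall"))    # True: they would meet
```

The point is only the shape of the design: nothing like "the cup will reach the wall" is stored as a fact; the answer is computed by manipulating a remembered scene.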

Whether this point really belongs here is another question; it raises further issues about how inferences are to be drawn from such massaging of the world model, and I do not recall whether Stein addresses them.

Jackendoff discusses a similar idea, referring to the human brain and individual concepts rather than a full world model. Bickerton takes the idea a step further, maintaining two separate concept representations: a primary representation, which receives direct sensory input and leads directly to motor output, and a secondary representation, which is isolated from direct I/O and operates on memory contents.
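For contrast, here is an equally toy sketch of that two-representation split as I read it: one system wired straight from sensors to motors, the other cut off from direct I/O and working only over stored memories. The class names and the stimulus/response vocabulary are my own assumptions, purely for illustration.

```python
# A toy illustration of the two-representation scheme: a primary system wired
# directly between sensors and motors, and a secondary system that never sees
# current input and works only on memory contents. Names and behavior here are
# my own assumptions, for illustration only.
class PrimaryRep:
    """On-line: sensory input in, motor output out, no reflection."""
    def react(self, stimulus):
        return "withdraw" if stimulus == "hot" else "approach"

class SecondaryRep:
    """Off-line: isolated from direct I/O, operates only on stored episodes."""
    def __init__(self):
        self.memory = []

    def store(self, episode):
        self.memory.append(episode)

    def reflect(self):
        # Generalize over past episodes without any current stimulus present.
        return {ep["stimulus"]: ep["action"] for ep in self.memory}

primary, secondary = PrimaryRep(), SecondaryRep()
action = primary.react("hot")                       # direct stimulus -> response
secondary.store({"stimulus": "hot", "action": action})
print(secondary.reflect())                          # {'hot': 'withdraw'}
```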






Stereo Vision

Churchland (chapter 15) describes a neural-net solution to stereoscopic vision. The basic architecture was first discussed by Marr & Poggio in 1976 (as referenced here), but Churchland may be the first to demonstrate a working model. This version passes many, but not all, of the tests comparing it to human depth perception. He discusses the pros and cons in a refreshingly direct style.
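For readers who have not met it, here is a rough sketch of a cooperative stereo network in the spirit of Marr & Poggio's 1976 proposal, which (on my reading) is the architecture Churchland's network builds on. The parameter values, the matching criterion, and the simplified inhibition (competing disparities at the same pixel, rather than along both lines of sight) are my own assumptions, not taken from the chapter.

```python
# A rough sketch of a cooperative stereo network in the spirit of Marr & Poggio
# (1976). Window sizes, weights, thresholds, and the exact-match criterion are
# illustrative choices of mine; the inhibition is simplified to "other
# disparities at the same pixel" rather than the full lines-of-sight version.
import numpy as np

def cooperative_stereo(left, right, max_disp=8, iters=10,
                       excite_radius=2, inhibit_weight=2.0, threshold=8.0):
    """Estimate a disparity map for two rectified grayscale images (2-D arrays)."""
    h, w = left.shape
    # Initial match network: C0[y, x, d] = 1 where the left pixel matches the
    # right pixel shifted by disparity d (np.roll wraps at the border, which
    # is good enough for a sketch).
    C0 = np.zeros((h, w, max_disp + 1))
    for d in range(max_disp + 1):
        C0[:, :, d] = (np.abs(left - np.roll(right, d, axis=1)) < 0.1).astype(float)

    C = C0.copy()
    for _ in range(iters):
        # Excitatory support: nearby positions at the SAME disparity (continuity).
        excite = np.zeros_like(C)
        for dy in range(-excite_radius, excite_radius + 1):
            for dx in range(-excite_radius, excite_radius + 1):
                excite += np.roll(np.roll(C, dy, axis=0), dx, axis=1)
        # Inhibitory support: competing disparities at the same pixel (uniqueness).
        inhibit = C.sum(axis=2, keepdims=True) - C
        # Binary threshold update, as in the original cooperative network.
        C = ((excite - inhibit_weight * inhibit + C0) > threshold).astype(float)

    return C.argmax(axis=2)    # winning disparity at each pixel

# Usage on a random-dot stereogram: the recovered disparity should be ~3.
left = (np.random.rand(64, 64) > 0.5).astype(float)
right = np.roll(left, -3, axis=1)
print(np.median(cooperative_stereo(left, right)))
```

In the original formulation, the excitatory neighborhood enforces the continuity constraint and the inhibitory one enforces uniqueness along each line of sight; the sketch keeps just enough of that to show the iterative, network-style character of the computation.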

