Why We Feel: The Science of Human Emotions
Victor S. Johnston, 1999

Everybody Uses Different Words
An interesting thing about writing on emotions is that, while many authors agree on the basic issues of what emotions are and how they work, nearly every author uses different words in different ways. The word "emotion" itself sometimes refers only to the internal feeling, sometimes only to the external display, and sometimes to either. Johnston uses "emotion" to refer to either the internal feeling or the external display. In contrast, Damasio, for example, uses the word "emotion" only for the external display; "feelings," for him, are only the internal sensation. Johnston introduces another distinction, affects versus feelings: feelings are sensations with an associated hedonic tone, a sense of pleasure or displeasure, while affects are sensations without hedonic tone.

Wet or Dry
Though perhaps a bit more generous than the average evaluation of "dry" cognitivists, Johnston's appraisal is still rather harsh. He does not convince me that "most" computational cognitivists think of the brain as hardware which implements software. Even if we assume that "many" such cognitivists would accept a neural network model of brain computations (and it is not clear how many would), nearly everyone would agree that the node weights (the real software) are surely coded as synaptic strengths (which surely must be considered "hardware"). As the neural configurations assume ever more complex arrangements, their structure becomes ever more describable as hardware. Johnston makes the usual error of thinking in terms of today's computers; I doubt that many computational cognitivists are so limited.
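To make that point concrete, here is a minimal sketch (in Python with NumPy; the illustration is mine, not Johnston's) of a tiny feed-forward network. The only "program" beyond the fixed wiring is the set of numeric weights, the analogue of synaptic strengths: change the numbers and you change the behavior, with no separable software layer to point to.

    import numpy as np

    # A minimal two-layer perceptron. Beyond the fixed wiring below, the
    # only "program" is the set of numeric weights -- the analogue of
    # synaptic strengths. Change the weights and you change the behavior,
    # without touching any separable "software" layer.

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 3))   # input -> hidden connection strengths
    W2 = rng.normal(size=(3, 1))   # hidden -> output connection strengths

    def forward(x):
        hidden = np.tanh(x @ W1)   # hidden-layer activations
        return np.tanh(hidden @ W2)

    x = np.array([0.5, -1.0])
    print(forward(x))  # the output is fully determined by W1 and W2

Whether one calls those stored numbers software or hardware is exactly the distinction that blurs in connectionist models.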

When it comes to consciousness, without directly saying so, Johnston seems quite clearly to impart his belief that any sort of computational device would simply be incapable of experiencing a conscious sensation. But once we have constructed systems capable not only of manipulating symbols which represent objects in the world, but also of using those symbols in ways relevant to the machine's well-being, and of contemplating and discussing them, I believe that such systems will also claim to have conscious experiences. Whether we will believe them is another story, but thinkers such as Ray Kurzweil seem convinced that we will.

And whether we believe them or not, it seems quite clear that we will be able to explore their program code and understand exactly why it is that they think they experience consciousness. But, of course, I shouldn't say "they think". I should say that they produce for us a verbal report which sounds for all the world like one of our friends talking about his conscious experiences. After all, machines can't think, can they? And don't expect to hear about dreams until we have built in all of the mechanisms to do whatever it is that dreams do for us, physiologically.

Johnston's final note in the "Wet CS" section leaves the question of mind/body interaction in Cartesian terms. But he has just previously discussed emergent properties, and he seems to miss the obvious point that emergent abstractions, such as a car's cornering ability, can and do interact directly with the physical world. We shall see in the next section, however, that he didn't miss the point at all.

The Hard Problem
Johnston says that David Chalmers defines the "hard problem" as explaining the function of our inner experiences. If I understand Chalmers, what he refers to as the hard problem is rather the question of what consciousness is in the first place: how it is that the brain can do such a thing at all. If we knew that, its function would no doubt be obvious.

As most critics do, Johnston misses the point about zombies. A zombie is, by definition, a being whose functional and structural properties are identical to those of some person. Therefore, whatever purpose consciousness serves that person, it must serve the zombie in the same way. Johnston, like Chalmers and many other naysayers, only pretends he can actually imagine a zombie, that is, a being who is identical to a person except that it does not experience consciousness. Just because I can imagine a unicorn doesn't make such a beast possible, nor does it allow me to explore the properties of such a hypothetical beast (except as I might imagine those properties).

And what might the purpose be that consciousness serves us? If there is a hard problem, that is it. And it is a problem for which we will surely have an answer as we learn more about the brain. I suspect, as many others do, that the purpose has a great deal to do with the manner in which we process and manipulate the properties of objects that we perceive in the world. As we better understand that symbol-manipulation process, the nature of consciousness will, I am convinced, become apparent. This is the reason that other thinkers, such as Dennett and Edelman, say that the "hard problem" may not be so hard after all.

But, just as we saw above, Johnston has led us on a merry chase. Again, as we proceed, we will see that he has not at all overlooked these seemingly difficult questions.

