Semantics and Cognition
Ray Jackendoff, 1983

Part I. Basic Issues

Chapter 1. Semantic Structure and Conceptual Structure
Meaning in language and the nature of thought are two questions about the same thing. This book will argue that both psychological and grammatical evidence must be considered in order to approach a theory of semantics.

Cognitive psychology makes use of five modes of description.

Linguistics is the study of grammatical structure (Chomsky's linguistic competence). Psycholinguistics is the study of grammatical processing in real time (Chomsky's linguistic performance). Section 1.3 gives a quick overview of the ways in which semantics has been treated by various linguistic theories. A semantic theory must be
  • expressive (account for observed linguistic facts),
  • universal (provide an account for inter-language translation),
  • compositional (account for sentence meaning from word meanings),
  • able to assign "semantic properties", such as synonymy, anomaly, presupposition, and valid inference.
The theory need not be reducible to software, although attempting to code a version of the theory may well prove beneficial.

The Grammatical Constraint
The purpose of language is to transmit information; thus, a theory of semantics should provide explanations for syntax and the lexicon, both in how these are learned and in how they are used.

The Cognitive Constraint
Further, a semantic theory must be consistent with known cognitive facts, facts about vision, nonverbal hearing, smell, kinesthesia, and so forth. Jackendoff proposes the Conceptual Structure hypothesis, that there is a single level of mental representation, Conceptual Structure, at which linguistic, sensory, and motor information are compatible. (Compare this to
Bickerton's view that there exist separate levels of conceptual and sensory representation.) He assumes that concepts are generated by a finite set of innate, universal rules. But concepts must be able to encompass at least everything which can be expressed in language, as well as having "the expressive power to deal with the nature of all of the other modalities of experience". For example, a child must learn to understand measurements and amounts, but the relevant conceptual dimensions must already be present.

Is conceptual structure separate from semantic structure, and connected to it by pragmatic rules, as argued by Katz, Fodor, and Jackendoff (1972), or is semantic structure a subset of conceptual structure, as held by most work in Artificial Intelligence, as well as by Fodor, Garrett, and Chomsky (1975)? Chapters 3 to 6 will provide an argument for the latter case.

Chapter 2. Sense and Reference
Here, Jackendoff presents a cogent discussion of the real issues forced upon semantics via the Cognitive Constraint by the fact that we know the world only through the senses, never directly. The extent of the "problem" is clarified through a number of examples from visual processing. I particularly like the question, "Where is Beethoven's Fifth Symphony?" It cannot be in the score. It cannot be any particular performance. It must be an abstract structure that the listener constructs upon hearing a performance.

If we cannot know the real world, but only its projections into our minds, then statements in language cannot be about the real world. Truth is as we perceive it. But then, how can we communicate at all? Katz (1972) answers that we have a common basis for communication because we share the same perceptual projectors. On the other hand, there are indeed wide variations in interpretation and understanding between individuals. (Jackendoff later expands on the effects of this variation.)

A metalanguage is introduced for discussing such matters. The metanotation distinguishes between real-world entities, #projected-world entities#, and MENTAL REPRESENTATIONS. There are electromagnetic effects, tissue damage, #light#, #color#, #pain#, LIGHT, COLOR, and PAIN. And we can construct linguistic entities separate from, but about, any of these entities.

The projected world is made up of experiences, that is, of conscious awareness. Admitting a complete lack of knowledge of what the projected world is really like, Jackendoff assumes a one-to-one correspondence between #projections# and REPRESENTATIONS of objects, sensations, etc. MENTAL INFORMATION consists of propositional information, such as conceptual structures, and non-propositional information, such as an internally constructed model of visual input.

Language conveys expressions of conceptual structure. The reference is not to the physical world, but to the projected world. Chapter 3 will present a series of ontological presuppositions, which are what linguistic expressions are about. This avoids the philosophical riddle of assuming that abstract entities such as propositions, sets, and predicates somehow exist in the real world so that we can talk about them. We talk about the real world only indirectly.

Part II. Cognitive Foundations of Semantics

Chapter 3. Individuation
Pragmatic anaphora requires the listener to form an association between a perceived #thing# and a syntactic element such as a demonstrative pronoun (this, that), an adverbial modifier (this way, that way), etc.

Seven ontological categories are listed. They are (in the metanotation) [THING], [PLACE], [DIRECTION], [ACTION], [EVENT], [MANNER], and [AMOUNT]. These categories relate perceptual abilities to grammatical structure. Jackendoff notes that this list is not to be taken as exhaustive; however, "the total set ... must be universal". He does not speculate on how many categories might exist, but supposes that particular languages can choose from the available set.
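For concreteness, the category inventory and its link to pragmatic anaphora can be sketched in code. The Python names below (the Enum, the demonstrative table, and its entries) are illustrative inventions, not Jackendoff's notation:

```python
from enum import Enum

# The seven ontological categories named in the chapter.
class OntologicalCategory(Enum):
    THING = 1
    PLACE = 2
    DIRECTION = 3
    ACTION = 4
    EVENT = 5
    MANNER = 6
    AMOUNT = 7

# Pragmatic anaphora pairs a demonstrative expression with a category;
# this mapping is a toy illustration, not a linguistic analysis.
DEMONSTRATIVES = {
    "that": OntologicalCategory.THING,
    "there": OntologicalCategory.PLACE,
    "that way": OntologicalCategory.MANNER,
}
```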

Similar syntactic structures can be applied in a more or less parallel manner to each of the seven ontological categories. This holds for several different kinds of syntactic structure: stating and asking about individual identification, pairwise identity or non-identity, and quantification (including negative quantification).

Another type of syntactic parallel across some, but not all, of the ontological categories is mentioned in a note. [THING], [EVENT], and to some extent, [PATH] can have subcategories in a bounded/unbounded dimension. [THING]s also show a further singular/plural dimension. The remaining categories do not seem to share these parallelisms. It is noted that an unbounded [PATH] is essentially the same as a [DIRECTION].

Chapter 4. The Syntax of Conceptual Structure
This chapter attempts to make semantic generalizations based on the syntactic generalizations discussed in the last chapter. Predicate logic has difficulty representing some of these structures, and furthermore, often makes the wrong interpretation of phrases.

A particular case where predicate logic is inadequate is its handling of reference. A specific instantiation is typically required, which is not reflected in the syntactic structure. Jackendoff introduces the Referentiality Principle, that all phrases with conceptual constituents are referential unless there is a specific linguistic marking to the contrary.

A fundamental pattern in syntactic structure is that of modifiers. Modifiers considered here include

Each of these occurs in a variety of phrase categories, Ss, NPs, APs and PPs. Jackendoff offers only a few hints as to how any of these might be treated semantically. He notes, for example, that logical modifiers encompass a range of structures far broader than has been previously discussed.

Conceptual structure appears to involve a conceptual constituent for each major phrasal constituent in the syntax. With the exception of ACTION, single- and double-primed syntactic categories do not correspond to conceptual constituents. The case of ACTION will be revisited in section 9.4.

Chapter 5. Categorization
The opening sentence defines categorization as judging whether a thing is or is not a member of a particular category. It does not say, but seems to imply, that the category is already known, rather than being created at the time of the categorization. Categorization is clearly more general than language. An ant can categorize whether or not an object it has just found is food, to be taken back to the colony. Nevertheless, Jackendoff's treatment is interwoven with the issue of conceptualization, an ability available to a much smaller number of species. Thus, it seems fair to say that the only categorization of interest to Jackendoff is the categorization of concepts.

Later, in Architecture of the Language Faculty, Jackendoff will be more explicit about conceptual structures and how they relate to consciousness and attention.

A basic division of concepts, relevant to categorization, is TYPEs vs TOKENs. When a concept is formed of an object (or of any of the ontological categories), a [TOKEN] concept is created. When you decide what kind of thing that concept represents, a [TYPE] concept is created. A reference to a [TYPE] can be added to a [TOKEN] concept with the operator INSTANCE OF. A reference to a [TOKEN] can be added to a [TYPE] concept with the operator EXEMPLIFIED BY.
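This pairing of concepts can be sketched as two cross-referencing record types. The class and field names below are my own, not Jackendoff's notation; the sketch follows the usage in which a [TOKEN] such as MAX is an INSTANCE OF a [TYPE] such as DOG:

```python
from dataclasses import dataclass, field

@dataclass
class TypeConcept:
    name: str
    # EXEMPLIFIED BY references to known [TOKEN]s of this [TYPE]
    exemplified_by: list = field(default_factory=list)

@dataclass
class TokenConcept:
    name: str
    # INSTANCE OF references to [TYPE]s this [TOKEN] exemplifies
    instance_of: list = field(default_factory=list)

def categorize(token: TokenConcept, type_: TypeConcept) -> None:
    """Record the judgement that `token` is an instance of `type_`."""
    token.instance_of.append(type_)
    type_.exemplified_by.append(token)

max_token = TokenConcept("MAX")
dog_type = TypeConcept("DOG")
categorize(max_token, dog_type)
```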

Based on the argument that you can arbitrarily create a new [TYPE] of the form [THINGS LIKE [TOKEN]], Jackendoff concludes that the array of available [TYPE]s is not fixed, or even innate, as some have argued, but that new [TYPE] concepts can be arbitrarily created as stated above. It is clear that the total set of possible [TOKEN]s is infinite.

The creation of a [TYPE] concept must involve a system of rules which state how to recognize a [TOKEN] of that type rather than simply a list of all of the known [TOKEN]s of that type, although the [TYPE] may well include a list of some such [TOKEN]s. These [TOKEN] recognition rules are not generally available to awareness. Similarly, there must be a set of rules which state how to create a [TYPE] concept, based on a given collection of [TOKEN]s.

Labov's experiments are described in which he asked subjects to indicate whether an open-topped container should be classified as a glass, a cup, or a bowl. The results show that such judgements are graded, not all-or-none.
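The graded character of such judgements can be sketched as a score rather than a yes/no answer. The features, weights, and numbers below are invented purely for illustration; only the gradedness itself reflects Labov's finding:

```python
# A toy graded categorization judgement in the spirit of Labov's
# container experiments: "cup-ness" accrues from several cues and
# admits intermediate values, rather than being all-or-none.
def cup_score(width_to_height: float, has_handle: bool) -> float:
    score = 0.5 if has_handle else 0.0
    # ratios near 1 (about as wide as tall) look most cup-like
    score += max(0.0, 0.5 - 0.25 * abs(width_to_height - 1.0))
    return score  # higher = more cup-like; there is no sharp cutoff

# A wide, handleless container scores lower than a classic cup shape,
# but the transition between "cup" and "bowl" is gradual.
```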

The syntactic evidence shows that both [TYPE]s and [TOKEN]s can be used in a variety of similar sentence structures. Similar sentences can state that a [TOKEN] is an instance of a [TYPE] or that a [TOKEN] is identical to another [TOKEN]. Whereas formal logic would have to use radically different mechanisms to state these relationships, the syntactic parallel can be easily captured if we assume that [TYPE]s and [TOKEN]s in fact have quite similar concept structures.

One difference between [TYPE]s and [TOKEN]s is that you can call into awareness (or "project") the concept represented by a [TOKEN], but you cannot do this for a [TYPE] concept. The claim is that phrases expressing [TYPE]s do not refer to anything. We have no direct experience of [TYPE]s.

This prompts a revision of the Referentiality Principle, in that we must distinguish between [TYPE]s and [TOKEN]s. The new Referentiality Principle II is that all phrases that express [TOKEN]s are referential unless marked otherwise. All phrases that express [TYPE]s are non-referential.

Chapter 6. Semantic Structure is Conceptual Structure
Each of the three sentences

expresses a relationship between two concepts. In the first, a generic categorization, both concepts are [TYPE]s: [TYPE DOG] is claimed to be a hyponym of [TYPE REPTILE] (i.e., [TYPE DOG] is INCLUDED IN [TYPE REPTILE]). The second expresses IDENTITY between two [TOKEN]s and, in the third, [TOKEN MAX] is said to be an INSTANCE OF [TYPE DOG]. The parallelism in the use of the verb BE, not brought out by any of the various logical treatments, is captured if INCLUDED IN, IDENTITY, and INSTANCE OF are all considered as cases of BE.
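The idea that one verb BE resolves to three relations depending on the kinds of concept flanking it can be sketched as a small dispatch table. The encoding is my own, not Jackendoff's formalism, and the [TOKEN]-[TOKEN] example sentence in the comment is an invented illustration:

```python
# The reading of BE is determined by the kind of concept on each side.
def be_relation(left_kind: str, right_kind: str):
    table = {
        ("TYPE", "TYPE"): "INCLUDED IN",    # "A dog is a reptile"
        ("TOKEN", "TOKEN"): "IDENTITY",     # e.g. "Clark Kent is Superman"
        ("TOKEN", "TYPE"): "INSTANCE OF",   # "Max is a dog"
    }
    return table.get((left_kind, right_kind))
```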

For this to work, there must be cues available to indicate which kind of concept, [TOKEN] or [TYPE], is intended on each side of the verb. This section explores the nature of such cues.

Rules of deduction and induction allow the computation of various conclusions from the relationships among groups of concepts. In general, such conclusions take the form of newly created concepts. The similarity of form between [TOKEN] and [TYPE] concepts allows these rules to capture a substantial chunk of semantic theory, including subordination, superordination, synonymy, antonymy, entailment, inconsistency, semantic redundancy and semantic similarity.
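One such rule can be made concrete: from [TOKEN INSTANCE OF A] and [A INCLUDED IN B], derive [TOKEN INSTANCE OF B]. The encoding below (string names, a dictionary of INCLUDED IN links) is an invented illustration of this taxonomic inference, not Jackendoff's notation:

```python
# All supertypes of `type_name` under the INCLUDED IN relation; a token
# that is an INSTANCE OF `type_name` is thereby an instance of each.
def included_in_closure(type_name: str, inclusions: dict) -> set:
    seen = set()
    stack = [type_name]
    while stack:
        current = stack.pop()
        for supertype in inclusions.get(current, []):
            if supertype not in seen:
                seen.add(supertype)
                stack.append(supertype)
    return seen

inclusions = {"DOG": ["ANIMAL"], "ANIMAL": ["PHYSICAL OBJECT"]}
# MAX INSTANCE OF DOG entails MAX INSTANCE OF ANIMAL and PHYSICAL OBJECT
supertypes = included_in_closure("DOG", inclusions)
```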

Part III. Word Meanings

Chapter 7. Problems of Lexical Analysis
Recapping his position, Jackendoff begins the discussion of word meanings by stating that all meaning is considered to be grounded internally, within the mind. The only reference to the external world consists of the output of the various sensory analysis processes, as made available to semantic processing. This position insists that there is no external aspect of meaning, as has often been claimed.

The meaning of a sentence is captured in a concept constructed on the basis of concepts representing the meanings of the constituent phrases, which, in turn, are constructed on the basis of the word concepts and, in each case, the syntactic structure of the sentence and the phrases.

The meaning of a lexical item of one of the various syntactic categories (noun, verb, adjective, adverb and preposition) is a function of zero or more arguments, mapped into a concept of one of the major ontological categories. The arguments are also concepts, filled in by the readings of the phrases strictly subcategorized by the lexical item (if any).
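As a sketch, a verb entry can be modeled as a function from argument concepts to an [EVENT] concept. The dictionary encoding and key names below are my own; the GO/TO/IN decomposition of "enter" follows the style of Jackendoff's spatial semantics (chapter 9):

```python
# A verb's lexical entry as a function: its subcategorized arguments
# are concepts, and its value is a concept of a major ontological
# category, here an [EVENT].
def lex_enter(theme_concept: dict, reference_object: dict) -> dict:
    """'x enters y': map a [THING] and a reference object to an [EVENT]."""
    return {
        "category": "EVENT",
        "function": "GO",
        "theme": theme_concept,
        "path": {"function": "TO",
                 "place": {"function": "IN", "thing": reference_object}},
    }

event = lex_enter({"token": "MAX"}, {"token": "THE ROOM"})
```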

Decompositional theories of word meanings, such as Schank's Conceptual Dependency Analysis, are inadequate, first, because noun and verb meanings are treated differently and, second, because a list of necessary and sufficient conditions for a word meaning does not adequately capture the creative aspect of meaning. In each case in which a linguist or philosopher has attempted to set forth the "full and complete" semantic structure of some particular lexical item, some residue of unexpressed meaning always remains. As Jackendoff puts it, "little descriptive inadequacies ooze around the edges of the decomposition".

Reviewing a number of the arguments which have been proposed for the description of meaning, together with a number of cases in which the proposed meanings do not quite work, Jackendoff concludes that meanings are computed as overlapping and sometimes contradictory categorizations, based on the related linguistic and non-linguistic conceptual structures. The rules by which such categorizations are computed are the same rules as described earlier for the computation of categorical judgements. The next chapter takes up a detailed inspection of these preference rule systems.

Chapter 8. Preference Rule Systems

What is missing here is a physiologically plausible mechanism.

Part IV. Applications

Chapter 9. Semantics of Spatial Expressions

Chapter 10. Nonspatial Semantic Fields and the Thematic Relations Hypothesis

Chapter 11. Theory of #Representation#
