Chomsky 1992 (part 1)

Chomsky begins by describing an argument that he takes Hilary Putnam to have made:

In his John Locke lectures, Hilary Putnam argues “that certain human abilities – language speaking is the paradigm example – may not be theoretically explicable in isolation,” apart from a full model of “human functional organization,” which “may well be unintelligible to humans when stated in any detail.” The problem is that “we are not, realistically, going to get a detailed explanatory model for the natural kind ‘human being’,” not because of “mere complexity” but because “we are partially opaque to ourselves, in the sense of not having the ability to understand one another as we understand hydrogen atoms.” This is a “constitutive fact” about “human beings in the present period,” though perhaps not in a few hundred years (Putnam 1978).

On my reading, Putnam is arguing that our behaviour is the product of very many interacting causal powers (in English: that there are a whole lot of significant factors involved in what we do), and that this interacting mess is just hopelessly complex. This strikes me as something few people would want to disagree with. It seems likely, therefore, that Putnam also thinks that the organization of language speaking is not informationally encapsulated in Fodor’s sense - namely, that to the extent that there are rules or processes involved, these might need to take into account global properties of the mind/brain (i.e. a merge operation in syntax might not just need to worry about features, but also about beliefs, mental states, and so on). This kind of position about language seems also to have been held by the late great cognitive scientist David Marr, who supplied the helpful terminology of Type 1 and Type 2 theories. Here is Marr describing a Type 2 theory:

[That there is no type 1 theory] can happen when a problem is solved by the simultaneous action of a considerable number of processes, whose interaction is its own simplest description, and I shall refer to such a situation as a Type 2 theory. One promising candidate for a Type 2 theory is the problem of predicting how a protein will fold. A large number of influences act on a large polypeptide chain as it flaps and flails in a medium. At each moment only a few of the possible interactions will be important, but the importance of those few is decisive. Attempts to construct a simplified theory must ignore some interactions; but if most interactions are crucial at some stage during the folding, a simplified theory will prove inadequate. Interestingly, the most promising studies of protein folding are currently those that take a brute force approach, setting up a rather detailed model of the amino acids, the geometry associated with their sequence, hydrophobic interactions with the circumambient fluid, random thermal perturbations etc., and letting the whole set of processes run until a stable configuration is achieved (Levitt and Warshel).

A Type 1 theory is one where this does not obtain, i.e. where there is a single most significant influence on the phenomenon, which can then be studied in isolation.
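To make the Type 1 / Type 2 contrast a bit more concrete, here is a minimal Python sketch of the kind of brute-force simulation Marr gestures at (my own toy, not from Marr, Chomsky, or Levitt and Warshel, and nothing like a real protein model): every pairwise interaction in a small chain of beads is computed at every step, and the system is simply allowed to run until it settles. The point is that the ‘theory’ of the final shape is nothing over and above the simulation itself.

```python
# Toy "Type 2" simulation (illustrative only): compute every pairwise
# interaction at every step and let the whole system run until stable.
import math, random

random.seed(0)
N = 12                                                            # beads in a toy chain
pos = [[i * 1.0, random.uniform(-0.1, 0.1)] for i in range(N)]    # initial positions

def forces(pos):
    f = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            d = math.hypot(dx, dy) + 1e-9
            if abs(i - j) == 1:           # bonded neighbours: spring toward length 1
                k = 1.0 * (d - 1.0)
            else:                         # all other pairs: weak attraction plus
                k = 0.05 - 0.3 / d**2     # short-range repulsion ("hydrophobic"-ish)
            f[i][0] += k * dx / d
            f[i][1] += k * dy / d
    return f

step = 0.05
for t in range(5000):                     # let the whole mess run
    f = forces(pos)
    moved = 0.0
    for i in range(N):
        pos[i][0] += step * f[i][0]
        pos[i][1] += step * f[i][1]
        moved += abs(f[i][0]) + abs(f[i][1])
    if moved < 1e-3:                      # "a stable configuration is achieved"
        break

print("converged" if moved < 1e-3 else "still jiggling", "after", t + 1, "steps")
```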

Chomsky goes on to report on Putnam’s lectures:

The “natural kinds” human being and hydrogen atom thus call for different kinds of inquiry, one leading to “detailed explanatory models,” the other not, at least for now. The first category is scientific inquiry, in which we seek intelligible explanatory theories and look forward to eventual integration with the core natural sciences; call this mode of inquiry “naturalistic,” focusing on the character of work and reasonable goals, in abstraction from actual achievement. Beyond its scope, there are issues of the scale of full “human functional organization,” not a serious topic for (current) naturalistic inquiry but more like the study of everything, like attempts to answer such pseudo-questions as “how do things work?” or “why do they happen?” Many questions – including those of greatest human significance, one might argue – do not fall within naturalistic inquiry; we approach them in other ways. As Putnam stresses, the distinctions are not sharp, but they are useful nonetheless.

Chomsky here introduces the term ‘naturalistic inquiry’ and seems to define it as just scientific inquiry; this, according to Putnam, is the kind of inquiry appropriate for studying the hydrogen atom. We cannot study ‘the theory of human beings’, because there is no such theory: our behaviour is, like Marr’s example of protein folding, relevantly influenced by everything.

In a critical discussion of “sophisticated mentalism of the MIT variety” (specifically, Jerry Fodor’s “language of thought”; Fodor 1975), Putnam adds some complementary observations on theoretical inquiry that would not help to explain language speaking. He considers the possibility that the brain sciences might discover that when we “think the word cat” (or a Thai speaker thinks the equivalent), a configuration C is formed in the brain. “This is fascinating if true,” he concludes, perhaps a significant contribution to psychology and the brain sciences, “but what is its relevance to a discussion of the meaning of cat” (or of the Thai equivalent, or of C)? – the implication being that there is no relevance (Putnam 1988a).

This is not quite what I think Putnam is talking about in this passage. Putnam is talking about reference, the mysterious relation between language and things in the world, and he is claiming that Fodor’s theory of meaning - namely, that the meanings of words or sentences in diverse languages are represented by a common mental language, ‘Mentalese’ - does not help us with this. Putnam notes that even if Fodor is right, so that a Thai speaker’s mental representation of the Thai word meew (‘cat’) is the same as the English speaker’s mental representation of the English word cat, this does not seem to help us at all with the question of how these words (or our common mental representation of them) relate to the world.

We thus have two related theses. First, “language speaking” and other human abilities do not currently fall within naturalistic inquiry. Second, nothing could be learned about meaning (hence about a fundamental aspect of language speaking) from the study of configurations and processes of the brain (at least of the kind illustrated). The first conclusion seems to me understated and not quite properly formulated; the second, too strong. Let’s consider them in turn.

Remember that ‘naturalistic inquiry’ is just ‘scientific inquiry.’ I think that we need to be careful about the meaning of ‘meaning’ in the statement of the second thesis. Putnam is using it, I think, to talk about how words (or sentences) relate to the world.

Chomsky now turns to the ‘understated’ and ‘not quite properly formulated’ conclusion, namely, that “‘language speaking’ and other human abilities cannot be studied scientifically.”

The concept human being is part of our common-sense understanding, with properties of individuation, psychic persistence, and so on, reflecting particular human concerns, attitudes, and perspectives. The same is true of language speaking. Apart from improbable accident, such concepts will not fall within explanatory theories of the naturalistic variety; not just now, but ever. This is not because of cultural or even intrinsically human limitations (though these surely exist), but because of their nature. We may have a good deal to say about people, so conceived; even low-level accounts that provide weak explanation. But such accounts cannot be integrated into the natural sciences alongside of explanatory models for hydrogen atoms, cells, or other entities that we posit in seeking a coherent and intelligible explanatory model of the naturalistic variety. There is no reason to suppose that there is a “natural kind ‘human being’"; at least if natural kinds are the kinds of nature, the categories discovered in naturalistic inquiry.

Chomsky is here (and below) saying that we shouldn’t expect our common-sense categories to line up perfectly with the real world. Since science aims to discover and describe the things that really exist, we should not expect there to be scientific theories of our common-sense categories.1

He gives some concrete examples:

The same is true of common-sense concepts generally. Such notions as desk or book or house, let alone more “abstract” ones, are not appropriate for naturalistic inquiry. Whether something is properly described as a desk, rather than a table or a hard bed, depends on its designer’s intentions and the ways we and others (intend to) use it, among other factors. Books are concrete objects. We can refer to them as such (“the book weighs five pounds”), or from an abstract perspective (“who wrote the book?”; “he wrote the book in his head, but then forgot about it”); or from both perspectives simultaneously (“the book he wrote weighed five pounds,” “the book he is writing will weigh at least five pounds if it is ever published”). If I say “that deck of cards, which is missing a Queen, is too worn to use,” that deck of cards is simultaneously taken to be a defective set and a strange sort of scattered “concrete object,” surely not a mereological sum. The term house is used to refer to concrete objects, but from the standpoint of special human interests and goals and with curious properties. A house can be destroyed and rebuilt, like a city; London could be completely destroyed and rebuilt up the Thames in 1,000 years and still be London, under some circumstances. It is hard to imagine how these could be fit concepts for theoretical study of things, events, and processes in the natural world. Uncontroversially, the same is true of matter, motion, energy, work, liquid, and other common-sense notions that are abandoned as naturalistic inquiry proceeds; a physicist asking whether a pile of sand is a solid, liquid, or gas – or some other kind of substance – spends no time asking how the terms are used in ordinary discourse, and would not expect the answer to the latter question to have anything to do with natural kinds, if these are the kinds in nature (Jaeger and Nagel 1992).

What holds these examples together is their mind- (or attitude-) dependence. What about purely mind-dependent things, like beliefs?

It is only reasonable to expect that the same will be true of belief, desire, meaning, and sound of words, intent, etc., insofar as aspects of human thought and action can be addressed within naturalistic inquiry. To be an Intentional Realist, it would seem, is about as reasonable as being a Desk- or Sound-of-Language- or Cat- or Matter-Realist; not that there are no such things as desks, etc., but that in the domain where questions of realism arise in a serious way, in the context of the search for laws of nature, objects are not conceived from the peculiar perspectives provided by concepts of common-sense. It is widely held that “mentalistic talk and mental entities should eventually lose their place in our attempts to describe and explain the world” (Burge 1992). True enough, but it is hard to see the significance of the doctrine, since the same holds true, uncontroversially, for “physicalistic talk and physical entities” (to whatever extent the “mental”–“physical” distinction is intelligible).

Note that Chomsky is not denying that we can have scientific theories of belief etc., but only that what we end up talking about as ‘belief’ after we have come up with such a theory will not be identical to our current pre-theoretical conception of belief. Interestingly, in the last sentence, he addresses the metaphysical question of mental/spiritual things - is everything ultimately physical (i.e. just quarks/strings), or is there mental ‘stuff’ too? He notes that, following the same reasoning as above, we should expect that the ultimate answer to this question will not be about our common-sense notions of physical and mental, but about different notions obtained by starting from these and doing science.

We may speculate that certain components of the mind (call them the “science-forming faculty,” to dignify ignorance with a title) enter into naturalistic inquiry, much as the language faculty (about which we know a fair amount) enters into the acquisition and use of language. The products of the science-forming faculty are fragments of theoretical understanding, naturalistic theories of varying degrees of power and plausibility involving concepts constructed and assigned meaning in a considered and determinate fashion, as far as possible, with the intent of sharpening or otherwise modifying them as more comes to be understood. Other faculties of the mind yield the concepts of common-sense understanding, which enter into natural-language semantics and belief systems. These simply “grow in the mind,” much in the way that the embryo grows into a person. How sharp the distinctions may be is an open question, but they appear to be real nevertheless.

What on earth?! Here he seems to be saying that ‘doing science’ and ‘learning about the world as babies do’ are usefully (if provisionally) thought of as two different things, different modes of interacting with our world. This weird passage is the start of an explication of the idea that we should not expect common-sense categories to line up with scientific categories. I think I would have written this passage in the following way:

It is true that we can do science, and it is true that we end up with a common-sense understanding of our world. While a parsimonious story would perhaps postulate that these two are manifestations of the same cognitive ability, it is prudent not to assume this to be true at the outset. Let us then suppose that there are two components of the mind, one responsible for each ability, and allow that further investigation might reveal them to be one and the same.

Sometimes there is a resemblance between concepts that arise in these different ways; possibly naturalistic inquiry might construct some counterpart to the common-sense notion human being, as H_2O has a rough correspondence to water (though earth, air, and fire, on a par with water for the ancients, lack such counterparts). It is a commonplace that any similarities to common-sense notions are of no consequence for science. It is, for example, no requirement for biochemistry to determine at what point in the transition from simple gases to bacteria we find the “essence of life”; and if some such categorization were imposed, the correspondence to some common-sense notion would matter no more than for (topological) neighborhood, energy, or fish.

Given our assumption (that science and common sense are supported by different cognitive modules), we shouldn’t expect the outputs of the two modules to always overlap. I don’t think this assumption is really necessary for these conclusions, and I find that making it distracts from the argument. Here is how I would have tried to justify the same thing:

Look, we can try to understand the world in lots of ways: our almost instinctive common-sense understanding, our favorite scientific theory, our second favorite scientific theory, etc. Let’s just think of them as theories of the world without worrying how we get them. If I have two theories of ‘the same thing’, there is just no guarantee (or even reason to think) that the one is going to cut up the pie in the same way as the other.

He gives us the example of water, where there is overlap, but notes that the other three categories of antiquity (the rest of the Avatar’s elements, if you like) do not correspond to natural kinds in our current physics (the element ‘air’ corresponds to, if anything, a mixture of N_2, O_2, Ar, and CO_2, to name the major players in order of quantity), and this does not worry anybody. Given that this does not worry us, he continues, why should it worry us that ‘life’, a concept we have that helps us distinguish rocks and water from rabbits and trees, might not cleave the scientific world neatly in two?

This seemingly off-topic foray into science vs common sense comes back to the point here:

Similarly, it is no concern of the psychology-biology of organisms to deal with such technical notions of philosophical discourse as perceptual content, with its stipulated properties (sometimes dubiously attributed to “folk psychology,” a construct that appears to derive in part from parochial cultural conventions and traditions of academic discourse). Nor must these inquiries assign a special status to veridical perception under “normal” conditions. Thus, in the study of determination of structure from motion, it is immaterial whether the external event is successive arrays of flashes on a tachistoscope that yield the visual experience of a cube rotating in space, or an actual rotating cube, or stimulation of the retina, or optic nerve, or visual cortex. In any case, “the computational investigation concerns the nature of the internal representations used by the visual system and the processes by which they are derived” (Ullman 1979: 3), as does the study of algorithms and mechanisms in this and other work along lines pioneered by David Marr (1982). It is also immaterial whether people might accept the nonveridical cases as “seeing a cube” (taking “seeing” to be having an experience, whether “as if” or veridical); or whether concerns of philosophical theories of intentional attribution are addressed. A “psychology” dealing with the latter concerns would doubtless not be individualistic, as Martin Davies (1991) argues, but it would also depart from naturalistic inquiry into the nature of organisms, and possibly from authentic folk psychology as well. To take another standard example, on the (rather implausible) assumption that a naturalistic approach to, say, jealousy were feasible, it is hardly likely that it would distinguish between states involving real or imagined objects. If “cognitive science” is taken to be concerned with intentional attribution, it may turn out to be an interesting pursuit (as literature is), but it is not likely to provide explanatory theory or to be integrated into the natural sciences.

I think of philosophy as doing something like science: philosophy is trying to come up with coherent theories of (various aspects of) the world. While it would be awesome if the philosophical theories of perception and the biological theories of ‘perception’ were compatible, this needn’t be the case. Moreover, if they aren’t, it is not automatically the fault of biology. If biologists decide that frogs accurately target flies with their tongues not by integrating perception with their belief systems, but rather because the tongue-shooting neurons are coupled with the neurons that activate when a certain kind of light differential impinges on the retina, we do not need to contort ourselves to recast this description in the philosophical language of representations, knowledge, and belief (this ‘anti-representationalism’ has been adopted quite generally by the dynamical systems theory of cognition).
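Purely as an illustration of what such a belief-free coupling might look like (my own toy sketch, not anything from Chomsky or the actual frog literature), here are a few lines of Python in which a retinal ‘darkening detector’ is wired directly to a tongue command, with nothing anywhere that represents flies or checks whether a fly is really there:

```python
# Toy illustration of a direct sensorimotor coupling: no beliefs,
# no world model, just a detector wired to a motor command.

def light_differential(retina_prev, retina_now):
    """Per-cell change in light intensity between two 'retinal' frames."""
    return [now - prev for prev, now in zip(retina_prev, retina_now)]

def tongue_command(diff, threshold=0.5):
    """Strike at the cell with the biggest darkening, if it is big enough.
    Returns the target cell index, or None. Nothing here asks whether a
    fly (rather than a pellet or a shadow) caused the change."""
    darkest = min(range(len(diff)), key=lambda i: diff[i])
    return darkest if diff[darkest] < -threshold else None

# A dark spot appears over a 5-cell retina; the "frog" strikes at cell 2,
# regardless of whether the stimulus is a fly, a lure, or a lab apparatus.
frame1 = [1.0, 1.0, 1.0, 1.0, 1.0]
frame2 = [1.0, 1.0, 0.2, 1.0, 1.0]
print(tongue_command(light_differential(frame1, frame2)))   # -> 2
```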

More relevant for us is the attack on ‘normal conditions,’ which might be familiar to us as an argument about how to weight corpus (naturally occurring) vs elicited data. Here Chomsky is saying that if we want to understand the inner workings of something, we don’t have to limit ourselves to observing how it behaves in its regular environment; we can (and do) try to ‘stress-test’ it as much as possible. He keeps coming back to ‘content’ because he is going to be arguing that reference (and, in general, mind-world relations) should not be a worry of cognitive science/linguistics. Jealousy is a good illustration of this. Assume that we had a (scientific) theory of (our feelings of) jealousy. As we all know, whether we feel jealous does not depend on whether our partner actually loves someone else, but only on whether we believe they do. Making a distinction between true jealousy (where our partner does in fact love another) and fake jealousy (where they do not) would (I presume) not be useful in our scientific theory, as the way the world actually is does not matter for how we feel.

As understanding progresses and concepts are sharpened, the course of naturalistic inquiry tends towards theories in which terms are divested of distorting residues of common-sense understanding, and are assigned a relation to posited entities and a place in a matrix of principles: real number, electron, and so on. The divergence from natural language is two-fold: the constructed terms abstract from the intricate properties of natural-language expressions; they are assigned semantic properties that may well not hold for natural language, such as reference (we must beware of what Strawson once called “the myth of the logically proper name,” in natural language, and related myths concerning indexicals and pronouns; P. Strawson 1952: 216). As this course is pursued, the divergence from natural language increases; and with it, the divergence between the ways we understand hydrogen atom, on the one hand, and human being (desk, liquid, heavens, fall, chase, London, this, etc.), on the other.

As science progresses, our scientific understanding of phenomena can change the way we use our words. When a chemist talks about water as a chemist, they mean something different from when they talk about water in the pub, or in the desert.

But even a strengthened version of Putnam’s first thesis does not entitle us to move on to the second, more generally, to conclude that naturalistic theories of the brain are of no relevance to understanding what people do. Under certain conditions, people see tachistoscopic presentations as a rotating cube or light moving in a straight line. A study of the visual cortex might provide understanding of why this happens, or why perception proceeds as it does in ordinary circumstances. And comparable inquiries might have a good deal to say about “language speaking” and other human activities.

Recall that Putnam’s first thesis was that “‘language speaking’ and other human abilities do not currently fall within naturalistic inquiry.” Chomsky’s response to this was that it was formulated the wrong way, because of course our common-sense concept of ‘language speaking’ is not going to be the thing science studies. Is this relevant for Putnam’s argument? Recall that he seemed to be saying that we couldn’t expect to have a scientific theory of language speaking because there’s just too much going on to study it (it requires a Type 2 theory). Chomsky responds by saying that language speaking, as a common-sense concept, is not what we should expect to have a theory of. This is true, but it doesn’t seem to me to address Putnam’s worry.

At any rate, we now turn to the second claim, which Chomsky said was too strong. (My reading of) Putnam’s claim was that studying the brain is not going to be helpful in understanding how language relates to the world. Chomsky has reformulated it as: ‘studying the brain is not going to be helpful in understanding how people behave.’ This is not at all the same claim! But it is going to be relevant for us as linguists, so we will dig into it.


  1. Another way of expressing this idea might be: There is no reason to think that the categories that we have evolved to see and think about are going to line up with the theoretical terms our best scientific theories will come to use. ↩︎