
Vagueness, Semantics, and the Language of Thought

Richard DeWitt
Department of Philosophy Fairfield University North Benson Road Fairfield, CT 06430 USA

rdewitt@fair1.fairfield.edu

Copyright (c) Richard DeWitt 1993

PSYCHE, 1(1), December 1993

http://psyche.cs.monash.edu.au/v2/psyche-1-1-dewitt.html

Keywords: intentionality, mental causation, language of thought, vagueness, semantics, mental representation

Abstract: In recent years, a number of well-known intentional realists have focused their energy on attempts to provide a naturalized theory of mental representation. What tends to be overlooked, however, is that a naturalized theory of mental representation will not, by itself, salvage intentional realism. Since most naturalistic properties play no interesting causal role, intentional realists must also solve the problem of showing how intentional properties (such as representational properties), even if naturalized, could be causally efficacious. Because of certain commitments, this problem is especially difficult for intentional realists such as Fodor. In the current paper I focus on the problem as it arises for such realists, and I argue that the best-known solution proposed to date is inadequate. If what I say is correct, then such intentional realists are left with an additional and substantial problem, and one that has generally not been sufficiently appreciated.


0.1 A number of well-known researchers working in the philosophy of mind have tended to focus, over the past ten or twelve years, on attempts to provide a naturalized theory of mental representation. For certain theorists (e.g., Fodor), this focus is easy to understand: if one's overall project depends on the view that representational properties are causally efficacious, and one accepts the view that causally-efficacious properties are restricted to naturalistic properties, then one's project requires a naturalized theory of mental representation. Or in other words, the success of such a project will depend on showing that, in spite of appearances to the contrary, representational properties really are naturalistic properties.

0.2 In discussions concerning attempts to provide a naturalized theory of mental representation, I think an important point is being overlooked, and that point provides the motivation for this paper. The point is this: if the goal is to salvage intentional realism---that is, the view that representational properties in particular, and intentional properties in general, are causally efficacious---then a naturalized theory of mental representation is not enough. A naturalized theory of mental representation will show only that representational properties are naturalistic properties. It will not, by itself, show that such properties play any interesting causal role in cognitive functions. Consider the fact that, in the context of understanding the functioning of a system, most naturalistic properties are not causally interesting. For example, the amount of dust on my stereo, the mass of the transistors, the color, the shape, and so on, are all naturalistic properties of the stereo. And while such properties can be causes (a particle of dust on my stereo might cause, within a limited area, certain wavelengths of light to be absorbed, it might at some point cause an allergic reaction in people in the vicinity, and so on), these are not causes that figure in the functional understanding of the stereo. If the goal is to understand how my stereo works, many (and probably most) of the naturalistic properties of the stereo will not be of interest. Likewise, even if we find an acceptable naturalized theory of mental representation, this will not, by itself, achieve the desired end of showing that such properties play an interesting causal role.

0.3 The problem of finding an acceptable naturalized theory of mental representation is proving difficult enough. But this additional problem of showing that such properties can play an interesting causal role has all the appearances of being at least as difficult. However, there is one possible solution that has floated in and out of the literature in recent years, and much of what I say in this paper focuses on this proposed solution. I will take some pains to show that this solution is inadequate. If what I say is correct, then intentional realists such as Fodor are left with an additional and substantial problem, and one that has generally not been sufficiently appreciated.

0.4 I begin, in Section One, with some background material: what exactly the problem is, who it is a problem for, why it needs to be addressed, and what solution has been hinted at in previous discussions. The solution will be seen to rest on a particular analogy, and in Section Two I turn to an analysis of this analogy. The strength of the analogy, as with any, depends on the characteristics of the items claimed to be analogous. Given this, I will be particularly concerned with identifying the characteristics that are needed if the analogy is to be a strong one. In Sections Three through Six I argue that the requisite characteristics are not to be found, and so the analogy is in fact not compelling. In Section Seven I consider possible objections to my argument and show that none of the objections are convincing. Section Eight provides some concluding remarks.

1. The Problem and Proposed Solution

The brain is a physical entity, and as such, transformations between brain states are causal transformations governed by causal laws. Given this, it would appear that transformations among brain states occur because of things like the electrical and chemical properties of those states. So there would seem, at first glance, to be a problem in claiming that such transformations occur because of any representational properties, or intentional properties in general, that may be associated with those states. That is, even if a particular brain state of mine can be said to have intentional properties---for example, even if it represents my grandmother or corresponds to my belief that I am about to run out of gas---the fact that it does so does not appear to be the sort of factor that can figure in that state's causal interactions.

In sharp contrast to this, consider the sort of theory envisioned by intentional realists such as Fodor (see, e.g., Fodor 1975; 1981; 1987). One of the defining characteristics of such theories is that they are committed to generalizations that are to hold precisely because of the representational content of those generalizations. By way of example, consider the (presumably overly simplified) generalization "if x desires not to run out of gas, fears she/he is about to, and believes that gas can be acquired at the gas station, then, all else equal, x will go to the gas station." This is the sort of generalization to which these researchers are committed, and such a generalization, if true, is true largely in virtue of the subject's beliefs and desires about the representational content of the generalization.

So there seems to be an immediate, and fundamental, problem: namely, how could a theory committed to generalizations that are to hold in virtue of representational content explain behavior produced by a device whose causal transformations are blind to that content? At first glance, it looks as if such theories are exactly the wrong sorts of theories to be pursuing.

For convenience, I will refer to this problem as the content problem. The problem is, of course, a puzzle for intentional realists in general, but it is more problematic for some than for others. For example (and speaking rather roughly), Dennett appears to have few naturalistic, causal, or ontological qualms about saying that if the attribution of intentional properties leads to predictive success, then such properties are real enough and causally efficacious (see, e.g., Dennett 1991). Thus, for a "realist" of Dennett's persuasion, with rather liberal views on issues such as natural properties, causation, and ontology, the content problem need not be particularly worrisome. In contrast, consider a researcher such as Fodor, who holds conservative views on such issues. Such realists require much more than mere predictive success in order to consider intentional properties to be real and causally efficacious. For a realist with such commitments, a solution to the content problem will be much more difficult.

The primary focus of the remainder of the paper will be on realists of this latter sort. Fodor is the best-known member of this camp, and it is no accident that the best-known proposed solution to the content problem evolves out of Fodor's Language of Thought (LOT) hypothesis. The details of this proposed solution are rarely specified, and part of what follows will be intended to clarify some of these details. In brief, the idea at the heart of the proposed solution is that the contentful generalizations of a mature psychological theory and the contentless transformations of the brain might mirror one another in something like the way in which, in modern logic, a semantics and an appropriate syntax can mirror one another. For the sake of the analysis to follow later in this paper, this idea will need to be fleshed out a bit.

We know that it is possible for a syntax and semantics to mirror one another---this is what completeness and soundness results in logic demonstrate. Moreover, we know from our experience with computers that it is possible to implement this mirroring in a causal system. For example, imagine we write a program to perform predicate logic derivations. (Given the memory limitations of actual computers, we will have to content ourselves with working with only a fragment of predicate logic, but that will suffice for the sake of the example.) The language, formation rules, axiom schema, rules governing substitutions and derivations, and so on, are all relatively straightforward to implement in a program of the sort we are imagining. Suppose we set up the program to grind away at derivations, so that as it goes about its business, it proves things such as that A is a theorem or that B is derivable from a particular set of sentences.

There is an important sense in which a computer running this program will, at the causal level, be doing syntactic derivations in predicate logic. That is, this will be a syntactic, "contentless" level. But the computer can be described, perfectly accurately, in semantic terms as well. It would not be difficult to specify a semantics with respect to which the syntactic system implemented in the computer will be complete and sound. Then in the running of the program, a sentence A will be (syntactically/causally) provable as a theorem if and only if A is a (semantic) tautology, and a sentence B will be (syntactically/causally) derivable from a particular set of sentences if and only if, at the semantic level, B is true whenever every member of that set of sentences is true. In this sense, the syntactic and semantic levels will mirror one another.
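The mirroring just described can be made concrete in a few lines of code. The sketch below (in Python, and restricted to propositional logic for brevity, rather than the predicate-logic fragment of the example) pairs a purely semantic check--truth-table evaluation--with a purely syntactic one--a tableau procedure that manipulates formulas without ever consulting truth-values. Soundness and completeness guarantee that the two agree:

```python
import itertools

# Formulas: an atom is a string; compounds are ('not', f), ('and', f, g), ('or', f, g).

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def value(f, v):
    """Semantic level: the truth-value of f under valuation v (a dict atom -> bool)."""
    if isinstance(f, str): return v[f]
    if f[0] == 'not':      return not value(f[1], v)
    if f[0] == 'and':      return value(f[1], v) and value(f[2], v)
    if f[0] == 'or':       return value(f[1], v) or value(f[2], v)

def tautology(f):
    """Semantic check: f is true under every valuation of its atoms."""
    names = sorted(atoms(f))
    return all(value(f, dict(zip(names, vals)))
               for vals in itertools.product([True, False], repeat=len(names)))

def nnf(f, neg=False):
    """Push negations down to atoms (De Morgan) -- a purely syntactic rewrite."""
    if isinstance(f, str):
        return ('not', f) if neg else f
    if f[0] == 'not':
        return nnf(f[1], not neg)
    op = {('and', False): 'and', ('and', True): 'or',
          ('or', False): 'or',  ('or', True): 'and'}[(f[0], neg)]
    return (op, nnf(f[1], neg), nnf(f[2], neg))

def closes(todo, lits=frozenset()):
    """Syntactic check: does every tableau branch through these NNF formulas
    close on a contradictory pair of literals? No truth-values are consulted."""
    if not todo:
        return False                      # an open, contradiction-free branch survives
    f, rest = todo[0], todo[1:]
    if isinstance(f, str) or f[0] == 'not':                    # literal
        comp = ('not', f) if isinstance(f, str) else f[1]      # its complement
        return True if comp in lits else closes(rest, lits | {f})
    if f[0] == 'and':                     # both conjuncts go on this branch
        return closes((f[1], f[2]) + rest, lits)
    if f[0] == 'or':                      # the branch splits; both halves must close
        return closes((f[1],) + rest, lits) and closes((f[2],) + rest, lits)

def provable(f):
    """f is provable iff the tableau for its negation closes."""
    return closes((nnf(f, neg=True),))

# The syntactic and semantic levels mirror one another:
for f in [('or', 'p', ('not', 'p')), ('and', 'p', ('not', 'p')), ('or', 'p', 'q')]:
    assert provable(f) == tautology(f)
```

Here `provable` and `tautology` coincide on every formula--the propositional analogue of the completeness and soundness results the example appeals to, implemented in a causal device.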

To tie this back in with the solution to the content problem, the proposal on the table is that, at the causal level, the brain is conducting its business in a LOT. That is, in something like the way that the computer described above provides a causal implementation of a syntactic system (namely, a fragment of the predicate calculus), so also might the brain provide a causal implementation of something that is like a syntactic system (namely, the LOT). Now suppose we envision, as intentional realists such as Fodor often do, a mature psychological theory as something like a semantics for the LOT (see, e.g., Fodor 1987: 16-21 and 97-99). If the LOT and a mature psychological theory can mirror one another in something like the way a traditional syntactic system and semantic system can mirror one another, then this would go a long way toward solving the content problem. For example, as the semantic level of the computer described above could be of substantial interest in predicting and explaining the behavior of a computer running the predicate logic program, so might a mature psychological theory be of substantial interest in predicting and explaining behavior produced by a syntactic/causal device like the brain.

As should be clear, the mirroring relation between the syntactic and semantic levels is an important component in the proposed solution, for it is precisely this relation that makes the semantic level of interest in predicting and explaining the behavior of the computer running the predicate logic program. If the proposed solution to the content problem is to work, it will likewise presumably be some sort of mirroring relation that will make a mature psychological theory of interest in predicting and explaining behavior that results from the workings of a syntactic device like the brain. I want to turn now to the question of whether there is any reason to be optimistic that the LOT and a mature psychological theory might mirror one another.

As I see it, the only reason to be optimistic stems from the analogy with modern logic. For my purposes, it is important to emphasize the analogy that is at the heart of this solution to the content problem. The LOT is being viewed as analogous to a syntactic system, a mature psychological theory is viewed as analogous to a semantics corresponding to that system, and the relationship between the LOT and the mature psychological theory is viewed as analogous to the relationship between a syntactic system and associated semantics with respect to which that syntactic system is complete and sound. (The completeness and soundness are required, of course, for the mirroring.) In the next two sections, it will be convenient to have a name for this analogy, so I will hereafter refer to it as the Tarskian analogy. The name stems from the fact that when certain intentional realists speak of a mature psychological theory being like a semantics, the sort of semantics they generally describe are Tarski-style semantics (see, e.g., Fodor 1987: 97-99). Also, when I speak of Tarski-style semantics, I will have in mind the sort of semantics that proceeds by giving an interpretation for the primitive non-logical vocabulary of the language in question, together with a theory of truth. But in addition, I will typically have in mind extensional, set-theoretic semantics, since these are the most common types of semantics for which we have completeness and soundness results.

I have thus far argued that intentional realists in general, and especially those with commitments similar to Fodor, need a solution to the content problem. Moreover, I have emphasized that the solution suggested by the LOT hypothesis rests on the Tarskian analogy. The strength of any analogy largely depends, of course, on the characteristics of the items claimed to be analogous. My next task is to identify certain characteristics that would be needed in order for this analogy to be compelling.

2. Needed Characteristics

2.1 A mature psychological theory will presumably not be a trivial theory. For example, such a theory should not endorse every possible contentful generalization. So if the theory sanctions a generalization such as that discussed above, "if x desires not to run out of gas, ... , then x will go to the gas station," the theory should not also sanction the generalization "if x desires not to run out of gas, ... , then x will not go to the gas station." For the purposes of the Tarskian analogy, we can safely conclude that we are looking for a Tarski-style semantics that is also non-trivial in the sense being discussed. So at a minimum, a semantics that serves the purposes of the Tarskian analogy should be a consistent semantics. This characteristic should not be controversial, and I mention it primarily to illustrate the point that not just any Tarski-style semantics will serve the purposes of this analogy. Let me turn now to fleshing out some other characteristics that would be required for the Tarskian analogy to be compelling.

Remember that a crucial part of the story involves a mature psychological theory mirroring the syntactic/causal transformations of the LOT. When dealing with systems of logic, the way to show that a semantics and syntax mirror one another is by providing (a) a completeness result, that is, a result showing that if a sentence A is (semantically) true whenever every member of a particular set of sentences is true, then A is (syntactically) derivable from that set of sentences, and (b) a soundness result showing that if a sentence A is derivable from a particular set of sentences, then A is true whenever every member of that set of sentences is true.[1] It would appear, then, that the Tarskian analogy needs a semantics for which there are appropriate completeness and soundness results.

For the sake of clarity, two issues need to be addressed at this point: (a) what does this completeness and soundness requirement entail--in particular, what exactly is required to be complete and sound? And, (b) how important is this requirement? I address the first question first.

In the current context, the completeness and soundness requirement can be interpreted in either a stronger (and rather uncharitable) sense, or in a weaker, more charitable sense. In the stronger interpretation (stronger because it makes the requirement much more difficult), the LOT is interpreted as literally being a syntactic system, and the mature psychology is interpreted as literally being a semantics for the LOT. That is, instead of viewing the LOT/mature psychology as being analogous to a syntactic system/semantic system, the LOT/mature psychology are viewed as literally being a syntactic system/semantic system. Under such an interpretation, the completeness and soundness requirement would also be interpreted literally. That is, this interpretation literally requires completeness and soundness results for the syntactic system (the LOT) and the semantic system (the mature psychology).

Some years ago this interpretation was perhaps more common, and even now realists such as Fodor occasionally speak in such terms. In spite of this, it is likely that few if any believe that the LOT is literally a syntactic system and that a mature psychology will literally be a semantics for the LOT. Moreover, there are a variety of reasons for not interpreting the Tarskian analogy in this way. To point out just one reason, consider the fact that completeness and soundness are, by definition, characteristics of systems of mathematics and logic. In contrast, the LOT (as embodied in a physical system like the brain) and a corresponding mature psychology (and one at best some years in the future) are presumably not envisioned as literally being systems of mathematics or logic. As such, with respect to the LOT/mature psychology, it is far from clear what exactly completeness and soundness could amount to. Given such concerns, and in the interest of reading the Tarskian analogy in a more charitable light, it is best that my analysis of the Tarskian analogy not be based on this overly-strong interpretation.

The more charitable interpretation stresses that the Tarskian analogy is, after all, an analogy. As with any analogy, the central claim is that A is like B. Here, of course, A is the pair consisting of the LOT/mature psychology and B is a pair consisting of some appropriate syntactic system/semantic system of mathematics or logic. At this point, then, our question should be clear: are completeness and soundness results required for A, or for B? On the more charitable interpretation, the completeness and soundness results are not required for A, for this would presuppose the overly-strong interpretation. But--and this is the next point I shall argue--completeness and soundness results are absolutely required for B.

Why are completeness and soundness results important? Completeness and soundness results make for a close mirroring between syntactic and semantic levels, and such close mirroring is needed in order for the Tarskian analogy to provide a solution to the content problem. To help make my case, consider a situation in which a contentful, semantic level only loosely corresponds to a syntactic/causal level. For example, consider the chess-playing program Deep Thought. Deep Thought's behavior can be at least partly predicted and explained using a semantic, contentful level--Deep Thought believes it is better to castle before deploying the queen, wants to keep its knights off the wings, desires to control the center of the board, and so on. However, this semantic level only very loosely corresponds to the syntactic/causal level of Deep Thought's program, as evidenced by the fact that such semantic generalizations about Deep Thought's behavior are at best only roughly accurate.

To continue, notice that intentional realists such as Fodor need a mature psychological theory to carry an ontological commitment to beliefs and desires (Fodor 1987: 24-27). The reason for this is clear: to vindicate intentional realism, beliefs, desires and the like must be causally efficacious, and as such, they must be, in some important sense, real. But for realists such as Fodor, the sort of loose correspondence between the semantic and syntactic/causal levels associated with Deep Thought carries no such ontological commitment. So even though beliefs, desires, and other contentful notions might be useful in (roughly) describing Deep Thought's behavior, the realists in question would not accept that such talk carries an ontological commitment to Deep Thought actually having beliefs and desires. (This is, of course, one of the areas in which there are differences between realists such as Fodor and "realists" such as Dennett. Again, the argument of this paper is directed toward realists of the former persuasion.) In general, a loose correspondence between the syntactic and semantic levels seems incapable of providing the ontological commitments necessary to vindicate Fodor's brand of intentional realism. As such, if these intentional realists are to use the Tarskian analogy as part of a solution to the content problem, then the mirroring relationship--that is, the completeness and soundness results--are an essential ingredient of the analogy.

These considerations on Deep Thought are meant to provide at least a prima facie case for the completeness and soundness requirement. I will turn now to a more direct argument. Suppose, for the sake of a reductio argument, that the LOT/mature psychology are viewed as analogous to a syntactic system/semantic system, but such that the syntactic system is not complete with respect to the accompanying semantics. To say that there is a lack of completeness is just to say that there will be goings-on in the semantic system for which there are no corresponding goings-on in the syntactic system. Since the LOT is supposed to be analogous to the syntactic system and the mature psychology to the semantic system, in this scenario the analogy suggests that there will be goings-on in the mature psychology for which there are no corresponding goings-on in the LOT. This in turn implies that the mature psychology says things that, with respect to the LOT, are not accurate, and such a psychology would simply be an incorrect theory. This, I take it, is an unacceptable scenario for intentional realists such as Fodor. So for the Tarskian analogy to provide an acceptable solution to the content problem, such realists need to appeal to a syntactic system that is complete with respect to its accompanying semantics.

2.10 On the other hand, suppose, again for reductio, that the LOT/mature psychology are viewed as analogous to a syntactic system/semantic system, such that the syntactic system is not sound with respect to its accompanying semantics. To say that there is a lack of soundness is just to say that there will be goings-on in the syntactic system for which there are no corresponding goings-on in the semantic system. Again bearing in mind the analogy, this scenario suggests that there will be goings-on in the LOT which are not reflected in the mature psychology. In other words, the mature psychology will at best be only roughly accurate. Such a scenario would not be unlike the Deep Thought scenario discussed above, and it carries with it much the same problems. Most notably, a mature psychology that is at best only roughly accurate is unlikely to carry with it the ontological commitments required by realists such as Fodor. This, I take it, is also an unacceptable scenario for intentional realists of Fodor's persuasion. In conclusion, to be of use to the intentional realists under consideration, the Tarskian analogy needs a syntactic system that is both complete and sound with respect to its accompanying semantics.[2]

2.11 There is one last characteristic to be considered. Recall that the sort of mature psychology of interest here is committed to generalizations involving ordinary propositional attitudes about ordinary objects and ordinary characteristics of objects. Given this, such a theory is envisioned as being something like a semantics for a language in which some of the terms are names of ordinary objects, and in which some of the predicates correspond to ordinary predicates of natural language. But most, and maybe all, natural language predicates are vague--that is, they have objects in their domain of application such that the predicate neither clearly applies nor clearly fails to apply to those objects. Since a mature psychology is envisioned as being something like a semantics for the LOT, this suggests that the LOT is a vague language. And in fact, there are other reasons to believe this as well. Although the tie between vagueness and the LOT has not been discussed much in the literature--in fact, Sorensen (1991) is the only author to discuss the issue at any length--it is a straightforward point. For example, Fodor often stresses the similarities between the LOT and natural language (see, e.g., Fodor 1975; 1987), and at least some of the syntactic/causal transformations of the LOT are envisioned as involving representations equivalent in content to natural language predicates. By way of example, the LOT presumably includes representations that have the same content as ordinary color predicates (Fodor 1975 suggests that this is required to explain, among other things, our ability to distinguish ordinary projectible predicates from grue-like predicates). Color predicates, of course, are paradigmatic cases of vague predicates. Given the need for some of the syntactic/causal transformations of the LOT to involve representations equivalent in content to ordinary predicates, and given that most, and maybe all, ordinary predicates are vague, the LOT must be a vague language.[3] It follows, then, that a syntactic system/semantic system that serves the purposes of the Tarskian analogy should be suitable for a language with vague predicates.

To summarize: the Tarskian analogy is at the heart of the proposed solution to the content problem. The Tarskian analogy claims that A is like B, where A is the pair consisting of the LOT/mature psychology, and B is the pair consisting of an appropriate syntactic system/semantic system. In this section we have seen that, for the analogy to be compelling, whatever systems go in place of B must have certain characteristics. In particular, the analogy needs a syntactic system/semantic system appropriate for a language containing vague predicates, and for which we have appropriate consistency, completeness, and soundness results.

The goal of the following sections is to assess the prospects for finding a syntactic system/semantic system suitable for the Tarskian analogy. Although I begin with a discussion of classical logic, much of what follows focuses on various non-classical systems. Discussions of such systems have tended to focus on their semantics, and in the ensuing discussion it will likewise prove more convenient to focus on semantics.

3. Classical Logic

Consider first the most common Tarski-style semantics for which we have appropriate completeness and soundness results, namely, a set-theoretic, two-valued extensional semantics for classical logic. In such a semantics, every predicate is assigned an extension, such that each object is either in that extension or not in that extension. Such semantics can be shown to be consistent, and there are the right sorts of completeness and soundness results. However, given any predicate and any object, the object is either in or not in the extension of that predicate, and so the predicate either clearly applies or clearly fails to apply to that object. Thus, in such semantics, predicates are not vague. So our most common semantics, that is, two-valued Tarski-style semantics, are not appropriate semantics for the purposes of the Tarskian analogy.
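The crispness at issue can be seen in a toy model. In the sketch below (Python; the card names and the extension assignment are hypothetical illustrations, not from the paper), a two-valued extensional semantics interprets a predicate simply as a set, so application is a yes/no matter for every object:

```python
# Two-valued extensional semantics in miniature: a predicate's interpretation
# is simply a set of objects (its extension).
domain = {"card1", "card2", "card750", "card1500"}
extension = {"green": {"card1", "card2"}}   # hypothetical assignment

def applies(pred, obj):
    # Membership is always determinate: True or False, never borderline.
    return obj in extension[pred]

assert applies("green", "card1")
assert not applies("green", "card750")      # a would-be borderline card gets no special status
```

Every object is either in the extension or not, so on this semantics no predicate is vague--precisely the mismatch identified above.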

In the present context, the problem with these sorts of two-valued semantics stems, of course, from their unsuitability as semantics for languages with vague predicates. However, in recent years a number of alternative semantics have been argued to be suitable as semantics for vague languages. In particular, finitely-many valued semantics, infinite valued semantics (including fuzzy logic), and supervaluation semantics have all been argued to be appropriate as semantics of vagueness. Let us see if any of these options give us reason to be optimistic about finding a semantics appropriate for the Tarskian analogy.

4. Many-Valued Semantics

Although a three-valued approach is often the first type of semantics to come to mind, such semantics are clearly not of much interest. In typical three-valued semantics, each predicate is assigned an extension and an anti-extension, but such that the two need not exhaust the domain. The idea, of course, is to let an object that is borderline with respect to a predicate be in neither the extension nor anti-extension of that predicate. A sentence asserting that the predicate applies to the object, then, will be considered indeterminate in truth-value.

The obvious objection to such a semantics is that the treatment of vagueness is superficial. It remains the case that given any object and any predicate, the object determinately will be in the extension of the predicate or determinately will not be in the extension of that predicate. In fact, instead of the two precise categories of two-valued semantics, we now have three equally-precise categories, namely, the set of clear cases, clear non-cases, and borderline cases. And this is exactly what fails to be the case with vague ordinary-language predicates. With ordinary language predicates, the clear cases, clear non-cases, and borderline cases do not form precise sets.

To see this, suppose we have before us a series of 1500 colored cards, such that the first reflects monochromatic light of wavelength 5000 angstrom units, with each of the other cards reflecting light of one angstrom unit greater than its predecessor. Suppose we view the cards in ordinary sunlight against a neutral gray background. The first card will be a clear case of a green card, the last card will be a clear case of a red card, and any two consecutive cards will be observationally indistinguishable in color (a typical perceiver will need to travel ten to thirty cards along the series before finding one whose color is observationally distinguishable from a given card). With respect to the predicate 'green', there will be cards that are clear cases, cards that are clear non-cases, and cards that are clearly borderline. But these divisions will not be precise--there will not be any sharp division between, say, the clear cases and the borderline cases. Rather, what we find are cards that are borderline-green, borderline-borderline-green, and so on, but never do we find sharp boundaries between any of these categories.

This lack of sharp boundaries is typical of vague predicates. Thus, a semantics that ignores this characteristic is not a semantics suitable for a language with vague predicates. As such, typical three-valued semantics will not serve the purposes of the Tarskian analogy.

In this type of semantics, nothing important changes if we add four, five, six or any number of truth-values to the semantics. Suppose we let 1 represent definite truth, 0 represent definite falsity, and let the reals between 0 and 1 represent the truth-values between definite truth and definite falsity. In a typical many-valued semantics, regardless of how many truth-values are employed, the objects in the domain still fall into precise sets. Consider, for example, a six-valued semantics in which the set of truth-values consists of {0,.2,.4,.6,.8,1}. Suppose 'F' is some predicate. With respect to this predicate, each object in the domain will fall into one of six precise sets. In one set will be those objects for which sentences expressing that 'F' applies to them are assigned a truth-value of 1. In another set will be those objects for which sentences expressing that 'F' applies to them are assigned a truth-value of 0. And likewise for each of the remaining truth-values. In general, there will be as many precise sets as there are truth-values. Again, this is uncharacteristic of vague ordinary language predicates and strongly suggests that such semantics are not appropriate as semantics for languages with vague predicates.4> For this reason, these semantics likewise seem inappropriate for the purposes of the Tarskian analogy.
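The partition into precise sets can be made concrete with a small sketch. The valuation below is hypothetical, chosen only to illustrate the six-valued case:

```python
# Toy many-valued semantics: each object in the domain is assigned one
# truth-value for 'F' from {0, .2, .4, .6, .8, 1}. However many values
# the semantics employs, the domain falls into that many precise sets.
def partition_by_value(valuation):
    """Group objects into precise sets, one per truth-value assigned."""
    sets = {}
    for obj, value in valuation.items():
        sets.setdefault(value, set()).add(obj)
    return sets

# A hypothetical valuation of 'F' over a six-object domain.
valuation = {"a": 1, "b": 1, "c": 0.8, "d": 0.4, "e": 0, "f": 0}
groups = partition_by_value(valuation)
print(len(groups))        # 4 -- one precise set per value actually used
print(sorted(groups[1]))  # ['a', 'b'] -- the precise set of clear cases
```

The boundaries between the sets are sharp no matter how finely the interval between 0 and 1 is subdivided, which is just the feature vague ordinary-language predicates lack.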

Given the characteristics needed for the Tarskian analogy, this problem appears intractable. The only way to avoid it is for the semantics to allow at least some sentences that have no determinate truth-value. But in such a semantics, the truth-conditions for the connectives will no longer be well-defined. For example, in many-valued semantics a disjunction typically takes the maximum value of its disjuncts. But if a sentence does not have a definite truth-value, then the truth-conditions will not be defined for any case in which that sentence is one of the disjuncts. Similar considerations hold for the other connectives. So a semantics that withholds truth-values from certain sentences will not be a well-defined semantics.
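The difficulty can be made vivid with a sketch of the standard max clause for disjunction, using None to represent a withheld truth-value (the None representation is my own; the max rule is the usual many-valued clause):

```python
# Many-valued disjunction: a disjunction takes the maximum value of its
# disjuncts. If either disjunct lacks a truth-value (None), the clause
# simply fails to assign a value to the disjunction.
def disjunction(v1, v2):
    if v1 is None or v2 is None:
        return None  # truth-conditions not defined for this case
    return max(v1, v2)

print(disjunction(0.4, 0.8))  # 0.8 -- the well-defined case
print(disjunction(1, None))   # None -- undefined even though one
                              # disjunct is definitely true
```

Note that the disjunction comes out undefined even when one disjunct is definitely true, which is exactly the gap exploited in the soundness argument that follows.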

Of course, there is nothing that requires a semantics be well-defined in the sense being discussed. However, it is easy to see that a semantics that is not well-defined will not serve the needs of the Tarskian analogy. The problem is simply that a semantics must be well-defined for there to be the needed soundness result.

To see this, suppose a sentence B is (syntactically) derivable from a consistent set of sentences A1, A2, ... , An. Then B v C will also be derivable from A1, A2, ... , An.5> But suppose that C is a sentence whose truth-value is not defined. Then B v C will also not be defined. So even though B v C is derivable from A1, A2, ... , An, B v C will not be true every time A1, A2, ... , An are all true. And so such a semantics loses the needed soundness result. In summary, then, a well-defined many-valued semantics will not be of interest because it will not be appropriate as a semantics for a language with vague predicates, and an ill-defined many-valued semantics will not have the soundness result needed to be of interest.

5. Fuzzy Logic and other Infinite-Valued Semantics

5.1 Although nothing in the preceding discussion hinged on whether the range of truth-values was finite or infinite, given the recent interest in fuzzy logic and other infinite-valued semantics, a few extra words might be in order about these sorts of systems. Such semantics might include either the older infinite-valued systems, such as that of Lukasiewicz and Tarski (1930), or the fuzzy logics discussed in, for example, Zadeh (1975). There is a straightforward reason why such semantics are not interesting with respect to the Tarskian analogy. As shown by Scarpellini (1962) and refined by Morgan and Pelletier (1977), there can be no completeness or soundness results, of the sort needed to be of interest in this context, for infinite-valued semantics. So although infinite-valued semantics might be interesting in certain contexts--and I think they are interesting in certain respects--they are not of interest for the purposes of the Tarskian analogy.

6. Supervaluation Semantics

The only other semantics that has been argued to be appropriate for languages with vague predicates is supervaluation (SV) semantics. SV semantics was first described by van Fraassen (1968) and tailored as a semantics of vagueness by Fine (1975).6> Only a brief overview of Fine's semantics is needed to make the point that it, too, does not give us reason to be optimistic about finding an appropriate semantics for the Tarskian analogy.

On Fine's SV semantics, a vague sentence--that is, a sentence containing vague predicates--is true if that sentence is true on every way of making its vague predicates completely precise, false if it is false on every way of making the vague predicates completely precise, and the truth-value of the sentence is indeterminate (or undefined) otherwise. It is not difficult to show that such a semantics will validate exactly the classical truths and retain the classical theory of deducibility.7> Therefore, the standard consistency, completeness, and soundness results from classical logic apply to Fine's SV approach as well.
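Fine's truth-conditions can be sketched as a quantification over precisifications. In this toy model, invented purely for illustration, each precisification of 'tall' is a classical cutoff, and a sentence 'Tall(x)' is represented by the height of x:

```python
# Supervaluation sketch: a sentence is (super)true if true on every
# precisification, (super)false if false on every one, and
# indeterminate otherwise.
def supervaluate(sentence, precisifications):
    """Each precisification maps the sentence to a classical True/False."""
    values = [p(sentence) for p in precisifications]
    if all(values):
        return "true"
    if not any(values):
        return "false"
    return "indeterminate"

# Two hypothetical ways of making 'tall' completely precise: a cutoff
# at 180 cm and a cutoff at 185 cm.
p1 = lambda height: height >= 180
p2 = lambda height: height >= 185
print(supervaluate(190, [p1, p2]))  # true on every precisification
print(supervaluate(182, [p1, p2]))  # indeterminate
print(supervaluate(170, [p1, p2]))  # false
```

Since each precisification is classical, a sentence of the form 'Fa v -Fa' comes out true on every one of them, which is why the classical results carry over.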

The problem with SV semantics is in its appeal (in the truth-conditions) to making predicates completely precise. This is not just an idle part of Fine's SV semantics, for the consistency result depends upon the assumption that predicates can be made completely precise. To see this, suppose we have a sentence whose predicates cannot be made completely precise. Given the truth-conditions for Fine's SV semantics, for that sentence not to be true there must be a way of making the predicates completely precise which results in the sentence not being true. But since the predicates cannot be made completely precise, on SV semantics the sentence comes out true. For exactly parallel reasons, the sentence is also false. This is, of course, just to say that the semantics will be inconsistent. So if predicates cannot be made completely precise, then Fine's SV semantics is inconsistent and hence not of interest for our purposes.8>

6.4 It is worth noting that this inconsistency will arise if even a single predicate cannot be made completely precise. So the question arises as to whether the assumption that every predicate can be made completely precise is a reasonable assumption. I think it is clear that it is not. We do, of course, regularly make predicates more precise. To borrow Alston's (1967) example, we might make the vague term 'city' more precise by redefining it as 'community with more than 50,000 inhabitants.' 'City' will now be more precise, but by no means completely precise, because the term now inherits all the vagueness of 'community' and 'inhabitant.' Who exactly is to count as an inhabitant of a community? Do people who keep their summer homes there count as inhabitants? How about college students? Visiting professors? And what exactly defines the boundaries of a community?
We could go on to redefine terms like 'community' and 'inhabitant' in order to make them more precise, but in doing so we will have to employ other terms, and our new definitions likely will inherit the vagueness of those terms. And no matter how long we continue at the game of making the terms more precise, it seems unlikely that we could make any ordinary predicate completely precise.

Vagueness generally is not problematic in our ordinary discourse simply because we usually can make terms precise enough for the purposes at hand. For example, if for reasons of assessing taxes we need to make 'city' more precise, we need not make it completely precise; rather, we need merely make it precise enough for the particular situation. But the fact that we can make predicates precise enough for the purpose at hand should not be confused with the assumption that we can make predicates completely precise. We do not, and probably could not, do this for even a single predicate. And the assumption in Fine's SV semantics that we can make every predicate completely precise certainly is not a plausible assumption. But as mentioned, without this assumption, Fine's SV semantics is inconsistent and hence not of interest in this context.

It is straightforward enough to modify SV semantics to remove the assumption that predicates can be made completely precise, thereby bringing SV semantics more in line with the characteristics of ordinary predicates. The most natural (and seemingly the most appropriate) way to modify SV semantics is to let a vague sentence be true if true on every way of making the predicates more precise (without requiring that the predicates be made completely precise), let the sentence be false if false on every way of making the predicates more precise, and let that sentence be indeterminate otherwise.

However, such a semantics can no longer appeal to the classical soundness result. To see this, suppose 'F' is a predicate that cannot be made completely precise and 'a' is an object such that, on some way of making 'F' more precise, 'F' neither clearly applies nor clearly fails to apply to a (since 'F' cannot be made completely precise, we know there is such an object). On this modified SV semantics, 'Fa v -Fa' will be indeterminate in truth-value. So such a semantics no longer validates the law of excluded middle, and so the classical soundness result will not apply. So while this modified SV semantics avoids the major problem with Fine-style SV semantics, it does so at the cost of losing the soundness result to which Fine-style SV semantics appealed. And given the lack of a soundness result, this semantics also fails to serve the purposes of the Tarskian analogy.
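The failure of excluded middle can be sketched by letting precisifications be partial, so that a precisification may leave 'Fa' unsettled (None). This is a toy model of the modified semantics, not Fine's own:

```python
# Modified SV sketch: predicates need not be made completely precise,
# so a precisification may leave 'Fa' unsettled (None).
def classical_or(v1, v2):
    """'Fa v -Fa' on one precisification; unsettled if 'Fa' is."""
    if v1 is None or v2 is None:
        return None
    return v1 or v2

def modified_sv(values):
    """Supertrue/superfalse only if the sentence is settled the same
    way on every way of making the predicates more precise."""
    if all(v is True for v in values):
        return "true"
    if all(v is False for v in values):
        return "false"
    return "indeterminate"

# 'Fa' on three precisifications: settled true, settled false, unsettled.
fa = [True, False, None]
lem = [classical_or(v, (not v) if v is not None else None) for v in fa]
print(lem)               # [True, True, None]
print(modified_sv(lem))  # indeterminate: excluded middle not supertrue
```

A single precisification that leaves 'Fa' open is enough to keep 'Fa v -Fa' from coming out supertrue, which is where the classical soundness result is lost.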

This completes the survey of semantics that, over the past twenty years, have been argued to be appropriate as semantics for languages with vague predicates. That is, these are the semantics that might have served the Tarskian analogy. The survey turned up no semantics that will serve the purposes of the analogy. By itself, this of course does not prove that no such semantics can exist. But semantics for vague languages has been a popular topic in the literature for over two decades, and out of that discussion no semantics suitable for the Tarskian analogy has emerged. Given this, there seems little reason to be optimistic that such a semantics will be found.

7. Objections

In this section, I briefly consider various objections that have been offered in response to some of the arguments of this paper. I also provide brief responses to those objections.

Objection One: The root of the Tarskian analogy, as with any analogy, is a claim that A is like B. In this particular analogy, A is, again, the LOT/mature psychology and B is some appropriate syntactic system/semantic system. As emphasized in preceding sections, completeness and soundness are important ingredients in this analogy. With respect to syntactic systems/semantic systems in math and logic, it is generally clear what completeness and soundness are. But with respect to the LOT as embodied in a physical system such as the brain, and with respect to a mature psychology envisioned as forthcoming sometime in the future, it is far from clear what completeness and soundness could amount to. In short, the unclarity of important ingredients of the analogy undermines the analysis of that analogy given in this paper.

Reply: There is something to be said for the points made in this objection. But notice that the objection is not so much an objection to the argument of this paper as it is an objection to the Tarskian analogy itself. The objection is emphasizing certain relevant differences between the LOT/mature psychology and a syntactic system/semantic system (for example, we are clear on what completeness and soundness would be for the latter, but not for the former). Of course, any relevant differences between the items claimed to be analogous will undermine the analogy. And since my goal has been to show that the Tarskian analogy is an uncompelling solution to the content problem, the objection presented in the preceding paragraph actually contributes to that goal.

Objection Two: Even if one accepts that the analogy in question is worth considering, requiring completeness and soundness of the syntactic system/semantic system is too strong a requirement. It should be sufficient if the LOT/mature psychology roughly correspond to each other. Hence, for the analogy to be acceptable, we do not really need results as strong as completeness and soundness for the syntactic system/semantic system.

Reply: This issue was discussed at length in Section Two. If the contentful generalizations of a mature psychology only roughly correspond to the syntactic/causal level of the LOT, then, for intentional realists such as Fodor, the ontological commitments necessary to vindicate that brand of intentional realism will be lost. So although completeness and soundness are strong requirements, they are necessary if the Tarskian analogy is to salvage Fodor's brand of intentional realism.

Objection Three: The Tarskian analogy puts too much emphasis on deductive soundness and completeness, whereas much of mental processing is bound to be inductive.

Reply: True enough, but this points to yet another problem with these sorts of attempts to vindicate intentional realism. With respect to the Tarskian analogy, we have restricted our attention to just a deductive, monotonic fragment of reasoning. Even with this restriction, we were unable to find a syntactic system and corresponding semantics that would serve the purposes of the analogy. If we expand our attention to include, for example, inductive and non-monotonic reasoning, it is even less likely that suitable syntactic and semantic systems will be forthcoming. So instead of providing a problem for the central argument of this paper, this objection reinforces that argument.

Objection Five: Much of the problem with finding a semantics suitable for the Tarskian analogy stemmed from issues concerning vagueness and logic. But Tarski himself suggested how vagueness can be handled, simply by matching object language vagueness with metalanguage vagueness. So there is no real problem with finding a suitable semantics for vague languages. One simply needs to use a vague metalanguage.

Reply: This objection overlooks the fact that respecting vagueness is only one of several requirements needed for the Tarskian analogy. True, one can indeed have a semantics that respects vagueness simply by incorporating vagueness into the metalanguage. But no known such semantics preserves the completeness, soundness, and consistency results necessary for the Tarskian analogy. Alternatively, one can preserve the completeness, soundness, and consistency results--for example, by switching to a set-theoretic metalanguage--but as I argued in the preceding sections, doing so comes at the cost of no longer respecting the vagueness necessary for the analogy. The Tarskian analogy needs a semantics that respects all of the requirements of consistency, completeness, soundness, and vagueness. And no known semantics does this.

Objection Six: Vagueness is a general problem, not one specific to the LOT hypothesis or to intentional realists. And a problem for all is a problem for none.

Reply: Vagueness is indeed a general puzzle. But the problem vagueness poses in the current context is a quite specific problem: it undermines the Tarskian analogy and hence undermines attempts to use the Tarskian analogy as a way to vindicate Fodor's style of intentional realism. So, for example, those who are not intentional realists, or those who are willing to give up certain views to which realists such as Fodor are committed (e.g., that the LOT involves representations equivalent in content to ordinary language predicates), will not be touched by this particular problem posed by vagueness. Likewise, those who doubt the LOT hypothesis (e.g., most of those in the connectionist camp) will not be bothered by the problems discussed in this paper. So while vagueness is a general puzzle, it is not a problem for all.

8. Concluding Remarks

8.1 In Section One, I argued that the content problem--the problem of showing how intentional properties could play an interesting causal role--was a substantial puzzle, and that intentional realists such as Fodor especially need a solution to this problem. I went on, in that section, to consider the suggestion that the LOT hypothesis provides a solution to the content problem. But as we saw, the solution rests on the Tarskian analogy. The strength of this analogy, as with any, depends on the characteristics of the items claimed to be analogous, and the concern of Section Two was to identify the characteristics needed to make the analogy a strong one. Having identified certain required characteristics, the central argument of Sections Three through Seven was that no known semantics has those characteristics. Moreover, there seems little reason to be optimistic about the prospects of finding a new semantics that will serve the purposes of the Tarskian analogy. The overall conclusion of the paper, then, is that the Tarskian analogy is not a strong analogy, and thus the LOT hypothesis, in spite of appearances to the contrary, does not provide a satisfying solution to the content problem. As such, intentional realists such as Fodor have not just the problem of finding a naturalized theory of mental representation, but also the problem of showing how intentional properties, even if naturalized, could play an interesting causal role.

8.2 The content problem is an intriguing puzzle. The usual intuitions about the role of content in explanations of human behavior, together with the apparently conflicting intuitions about the syntactic/causal nature of the brain, make the content problem unlike any problem thus far encountered in Western science. Thus it is not surprising that the problem is resisting easy solution. The general solution of interest in this paper, and the Tarskian analogy in particular, were interesting ideas, and they were certainly ideas worth exploring. Philosophical ideas, as Fodor once pointed out, often take the form of "let's try looking over here." Having looked over here and not found what we wanted, perhaps now is a good time to focus attention elsewhere.

Notes

Much of the work on this paper was accomplished during a NEH Summer Seminar on Mental Representation. I would like to thank the members of that seminar for many fruitful discussions. An earlier version of this paper was presented at a Philosophy of Mind Colloquium at Wesleyan University, and I would also like to thank those in attendance for their comments. Finally, I would like specifically to acknowledge Rob Cummins, David Gilboa, Steve Horst, editor Kevin Korb, Michael Losonsky, Bill Robinson, George Schumm, and two anonymous referees for helpful comments on earlier drafts of this paper.

1> For many-valued semantics, the needed completeness and soundness results would amount to showing that A is derivable from a particular set of sentences if and only if A receives a designated value whenever every member of that set receives a designated value. See Rescher (1969) for more on deducibility and designation in many-valued semantics.

2> It is worth noting that although the completeness and soundness requirement is necessary for the Tarskian analogy, it may not be sufficient. To see this, consider an example suggested to me by editor Kevin Korb. Return again to the example of the computer running the predicate logic program. Now imagine another program doing predicate logic derivations, but such that this program uses different axiom schemata, different rules of inference, and so on. It might well be that both of these systems are complete and sound with respect to the same semantics, even though they differ from one another in important ways. In such a scenario, the semantics will likely be of relatively little explanatory value, insofar as the goings-on at the semantic level would have to differ importantly from the goings-on of at least one of the two programs. What this suggests is that the completeness and soundness requirement may not be sufficient for the purposes of the Tarskian analogy. However, this is all compatible with what I am arguing in the current paper, since my argument requires only that the completeness and soundness requirement be necessary for the analogy. And in fact, these considerations help my overall argument, insofar as they provide one more reason to be suspicious of the Tarskian analogy.

3> See Sorensen (1991) for further arguments to the effect that the LOT must be a vague language.

4> See DeWitt (1992) for a more thorough treatment of the adequacy of many-valued semantics as semantics of vagueness, as well as a discussion of the adequacy of fuzzy logic and supervaluation semantics.

5> This assumes that the syntax is not a bizarre syntax. This seems a safe enough assumption, given what Fodor (1987, ch. 1) suggests about his desire that the entire account be fairly normal.

6> Dummett (1975) and Pinkal (1983) have also suggested that SV semantics might be appropriate as a semantics of vagueness. What I say here about Fine's SV approach applies to these other SV approaches as well.

7> See Fine (1975) for elaboration.

8> This point was first brought to my attention by George Schumm.

References

Alston, William (1967). Vagueness. In The Encyclopedia of Philosophy: Vol. 8. New York: MacMillan Publishing Co.

Block, Ned (1990). The computer model of the mind. In D. Osherson & E. Smith (Eds.), Thinking. Cambridge: MIT Press.

Dennett, Daniel (1991). Real patterns. Journal of Philosophy, 88, 27-51.

DeWitt, Richard (1992). Remarks on the current status of the sorites paradox. Journal of Philosophical Research, 17, 93-118.

Dummett, Michael (1975). Wang's paradox. Synthese, 30, 301-324.

Fine, Kit (1975). Vagueness, truth and logic. Synthese, 30, 265-300.

Fodor, Jerry (1975). The language of thought. New York: Thomas Y. Crowell.

Fodor, Jerry (1981). Representations. Cambridge: MIT Press.

Fodor, Jerry (1987). Psychosemantics. Cambridge: MIT Press.

Haugeland, John (Ed.) (1981). Mind design. Cambridge: MIT Press.

Lukasiewicz, Jan & Tarski, Alfred (1930). Investigations into the sentential calculus. In L. Borkowski (Ed.) Jan Lukasiewicz: Selected works. North-Holland.

Morgan, C. G. & Pelletier, F. J. (1977). Some notes concerning fuzzy logics. Linguistics and Philosophy, 1, 79-97.

Pinkal, Manfred (1983). Toward a semantics of precization. In Ballmer & Pinkal (Eds.) Approaching vagueness. North-Holland.

Rescher, Nicholas (1969). Many-valued logic. McGraw-Hill Publishing Co.

Scarpellini, B. (1962). Die Nichtaxiomatisierbarkeit des unendlichwertigen Praedikatenkalkuls von Lukasiewicz. Journal of Symbolic Logic, 27, 159-170.

Sorensen, Roy (1991). Vagueness within the language of thought. The Philosophical Quarterly, 41, 389-413.

van Fraassen, Bas C. (1968). Presuppositions, implications and self-reference. Journal of Philosophy, 65.

Zadeh, Lotfi (1975). Fuzzy logic and approximate reasoning. Synthese, 30, 407-428.

