Signs, Concepts, Things

Aristotle, Augustine, and Boethius each in their own way discuss signification as a triadic relation, with the soul or concepts standing between signs and things. The 13th-century Franciscan Roger Bacon diverges sharply from this older view, arguing instead that signs refer directly to things. Bacon, who along with Albert the Great was among the first Europeans to lecture publicly on the major works of Aristotle, is said to have initiated the study of Greek and Arabic optics (perspectiva) in the Latin-speaking world. The 1978 discovery of a manuscript of his lost work De Signis (On Signs) has raised scholarly awareness of his semiotics.

Boulnois has previously mentioned that Bacon treats concepts as a kind of sign. Here he contrasts Augustine and Aristotle with Bacon.

“Augustine thinks signification as a triadic relation between a thing, the sensible species perceived by the senses; another, the signified; and an interpreter, the mediating thought…. From the outset, Augustine distinguishes ‘natural’ signs from ‘given’ signs (data). Natural signs do not involve a voluntary production, but correspond to a natural causality: footprints recall the passage of an animal that produced them, smoke the fire that caused it. But the ‘given’ signs presuppose the intentional activity of a living being” (L’Être et représentation, p. 18, my translation throughout).

I like the idea that there is always need for interpretation.

The idea of natural signs is fascinating. These would have to be distinct from the sensible and intelligible “species” whose existence and role were debated by medieval authors. Whereas species are a kind of image or likeness of things, smoke is not an image of fire, and tracks are not an image of the animal. Natural signs are each interpretable as effects of a particular kind that point to a particular kind of natural cause. This implies the existence of a natural causality that is real in the sense of being in the things and not imposed by us, even if its particulars require interpretation.

Meanwhile, “given” signs do reflect a sort of imposition, even if the imposition is not the act of an individual. In contrast with the natural signs, they are said to be voluntary. The main example seems to be the words and expressions of a language. Relative to an individual, they are pre-given; but relative to a historical community, they mean what the community in fact takes them to mean.

“Augustine articulates this theory of the sign to that of language and intellection, notably with the idea of the verbum cordis [word of the heart], mental language, interior word, thought fixed on a word, definitional image of the thing in thought: ‘Even without sounding words, the one who thinks speaks in his heart’ ” (p. 19).

On this view, thought is understood as a kind of speaking in one’s heart. Subjective meanings attributable to speakers of spoken language are to be explained in terms of a “mental language” that is different from, but analogous to, any particular spoken language. This is different from the view that speaking in one’s heart is enabled by an interiorization of spoken language, without the need to posit a separate mental language.

Boulnois contrasts Augustine’s view with Aristotle’s “semiotics of inference”.

“But a completely different definition of the sign, of Aristotelian origin, interferes with this…. Here the sign is a proposition, the point of departure for reasoning by inference, such that it founds a demonstration…. The sign is the antecedent of a conditional proposition or of an inference” (ibid). “The sign, which in Augustine grounds a relation between two things, in Aristotle founds induction between two propositions” (p. 20).

Neither of these is equivalent to the simple view that signs stand for things directly, which is closer to what Bacon will defend. Boulnois is reading Augustine as saying that a sign is or grounds a real relation between two things, and Aristotle as saying it is or grounds a relation of implication between two assertions. But for both Aristotle and Augustine, the sign refers primarily to some kind of relation, rather than simply to a thing.

“Besides this semiotics of inference, Aristotle develops a complex semantics at the beginning of the treatise On Interpretation…. The symbolic relation is constitutive of language, but it can also be expressed in the vocabulary of the semeion [sign], of logical inference, which allows a passage from sensible expressions to concepts…. But by the intermediary of the concept, indirectly, signs refer to the thing” (pp. 20-21).

Aristotle and Augustine each develop their own kind of indirect or mediated or “moderate” realism.

“The Aristotelian definition of the sign as a principle of inference is reprised by Peter of Spain…. Whereas Augustine only envisages signs as presenting sensible species, Bacon wants to account for the intelligibles evoked by Aristotle — the concepts. But he makes them representing signs” (pp. 22-23).

Here Boulnois does connect signs with species in Augustine’s case, but their relation is still not one of identity. Many of Augustine’s medieval readers would likely have interpolated a notion of species (e.g., a sensible species of smoke, for smoke) into their understanding of Augustine’s account. In this way we might say that a sensible species of smoke is a sign of fire (“is” of predication, not “is” of identity). But smoke as a sign of fire is not the same as the sensible species of the smoke.

“This reorganization rests on the concept of representation, already used by Peter of Spain: when a sign represents, it constitutes a term in a proposition, and recalls many intentional objects (the signifieds), or it ‘supposes for’ them. With the concept of representation, expressing a theory of supposition (or of reference), Peter of Spain gives himself the means to unify the general relation between sign and signified (signification in Augustine), and the conventional relation between the vocal sound and the thing named. Avoiding here the mediation of the concept, he brings together under a single vocable the natural relation of the concept to the thing and the conventional relation of the vocal sound to the concept. In reprising this vocabulary, Bacon integrates in the same term of representation the relation of the sensible sign to the thing signified and of the concept to the thing known. He takes sides at the same time against Boethius, in posing that the signified of the concept is the thing itself and not an intermediary concept. Thus while Boethius ordered semantics by noetics, the theory of representation puts them on the same plane” (p. 23).

Direct realism was actually a radical innovation, as Boulnois points out.

“Bacon thus can unify all the relations, natural and conventional, between vocal sounds, intellections, and things, under the general concept of the sign. Even though he recognizes that Aristotle concentrates in the treatise On Interpretation on conventional signs, vocal sounds, it is necessary to produce a universal theory of signs, including intellections, vocal sounds, and writing” (pp. 23-24).

One abstract theory of signs and things signified is used to cover both natural and linguistic cases.

“Starting from this Baconian innovation, it will be necessary to examine the challenges of this response to the great semantic controversy over the sign. If the concept is a sign and if the sign represents the thing itself, in what way do the great semantic questions play out based on this fundamental decision? From this foyer can be explained the natural character of the concept, the convention of the linguistic sign, and the importance of an imposition inscribed in a juridical and political order” (p. 24).

From this standpoint, concepts are assimilated to natural signs, whereas linguistic signs are arbitrary and depend on convention. Concepts on this view are individually self-contained. They are what they are independently of any articulation by us. It remains that they must be naturally or supernaturally given to us. The implicit notion of any concept in Aristotle, on the other hand, depends not only on its form, but also more generally on what is (or would be) well said by us, which is to say on its articulation in language; and that articulation must be understood against a background of other articulations in language.

Logical Judgment?

It seems to me that “logical judgment” comes in a wide range of forms, from the preconscious syntheses of our evolved common sense that appear to us ready-made, to the most elaborately explicit works of interpretation. I see judgment as referring principally to a process, and only secondarily to the outcome of the process — to the deliberation more than to the verdict, as it were.

There is a traditional use of “judgment” as a synonym for “logical proposition”. I find this a bit odd; it would make more sense to think of a judgment as at the very least an assertion or denial of a proposition, even in contexts where the connotation of interpretive, deliberative process is suppressed, and the focus is only on an outcome.

In combination with traditional ideas about predication, this identification of judgments with propositions led to a notion of acts of logical judgment in which acts of grammatical predication such as construction of the sentence “Socrates is a human” were viewed as prototypical.

Even Kant’s discussion of the application of concepts in the first Critique bears noticeable traces of this predicative analysis of logical judgment. I think Kant across the larger body of his work played a major role in developing alternatives to the predicative approach that narrowly construed “judgment” as the application of a predicate to a subject. Indeed Brandom argues that Kantian concepts are only intelligible in terms of their contribution to the activity of judging. Nonetheless, when Kant talks about subsuming particulars under universals, the discussion still recalls the predicative approach. Certainly the application of universals to particulars is important, but it is only one of several dimensions that come into play in the constitution of meaning, and it is not the most fundamental.

In referring to the constitution of meaning, I have already implicitly moved beyond the predicative analysis. The problem with the predicative analysis is that it takes meanings for granted, and really only addresses their syntactic combination as pre-existing units. We need to address the broader territory of judgments about meaning and value that go below the level of pre-existing units and preconceived identities. Meanings of terms in context turn out to depend on judgments, which in turn depend on other judgments, and it is the ties of mutual dependency binding together this open-endedly expanding network that give relative definiteness to our determinations.

Judgments

I usually think of judgment as a process of interpretation or a related kind of wisdom, but at least since early modern reformulations of Aristotelian logic, “a” judgment has also traditionally meant a logical proposition, or an assertion of a proposition.

An older but still post-Aristotelian notion is that what the early moderns called a judgment “A is B” should be understood, on the model of its surface grammar, as a potentially arbitrary predication of “B” of “A”. Such a potentially arbitrary predication by itself does not contain enough information for us to assess whether it is good or bad. The predication model was associated with a non-Aristotelian notion of truth as simple correspondence to supposed fact.

L. M. De Rijk, arguably the 20th century’s leading scholar on medieval Latin logic, developed a very detailed textual argument that the understanding of logical “judgments” in such grammatical terms is actually an unhistorical misreading of Aristotle. In the first volume of his Aristotle: Semantics and Ontology, De Rijk concluded that Aristotle’s own logical or semantic use of “is” or “is not” should be understood not in the traditionally accepted way as a “copula” or binary operator of predication, but rather as a unary operator of assertion on a compound expression — i.e., on the pair (A, B), as opposed to its two elements A and B.
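
On this reading, the structural point can be put in quasi-formal terms. As a purely illustrative sketch (the type and constructor names below are mine, not De Rijk’s), the traditional reading treats “is” as a binary copula joining two separate terms, while De Rijk’s reading treats “is” and “is not” as unary assertion operators applied to an already-formed pair of terms.

```haskell
-- Illustrative sketch only; all names here are hypothetical, not from De Rijk.

-- Traditional reading: "is" / "is not" as a binary copula joining two terms.
data Copula term = Is term term | IsNot term term

-- De Rijk's reading, as summarized above: the pair (A, B) is formed first,
-- and "is" / "is not" are unary assertion operators applied to that pair.
newtype TermPair term = TermPair (term, term)

data Assertion term = Affirm (TermPair term) | Deny (TermPair term)
```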

I also want to emphasize that Aristotle himself did not admit simple, potentially arbitrary predications as “judgments”. The special form of Aristotelian propositions makes them express not arbitrary atomic claims, as is the case with propositions in the standard modern sense, but two specific ways of compounding subclaims. Aristotle’s two truth-value-forming operations of combination and separation (expressed by “is” and “is not”) limit the scope of what qualifies as a proper Aristotelian “judgment” to cases that are effectively equivalent to what Brandom would call judgments of material consequence or material incompatibility (see Aristotelian Propositions). What the moderns would call Aristotelian “judgments” thus end up more specifically reflecting judgments of what Brandom would call goodness of material inference.

Proper Aristotelian “judgments” thus turn out to express not just arbitrary predications constructed without regard to meaning, but particular kinds of compound claims that can in principle be rationally evaluated for material well-formedness as compound thoughts, based on the actual content of the claims being compounded. (Non-compound claims are just claims, and do not have enough content to be subject to such intrinsic rational evaluation, but as soon as there is some compounding, internal criteria for well-formedness come into play.)

So, fortuitously, modern use of the term “judgment” for these ends up having more substance than it would for arbitrary predications. For Aristotle, truth and falsity only apply to what are actually compound thoughts, because truth and falsity express assessments of material well-formedness, and only compound thoughts can be assessed for such well-formedness. The case for the fundamental role of concerns of normativity rather than simple surface-level predication in Aristotelian truth-valued propositions is further supported by the ways Aristotle uses “said of” relations.

Independently of this sort of improved reading of Aristotle, Brandom in the first of his 2007 Woodbridge lectures points out that Kant also strongly rejected the traditional analysis of judgment in terms of predication. Brandom goes on to argue that for Kant, “what makes an act or episode a judging in the first place is just its being subject to the normative demand that it be integrated” [emphasis in original] into a unity of apperception. This holistic, integrative view of Kantian judgment seems to me to be strongly supported by Kant’s discussion of unities of apperception in the second edition of the Critique of Pure Reason, as well as by the broad thrust of the Critique of Judgment.

Thus, a Kantian judgment also has more substance than the standard logical notion, but while an Aristotelian “judgment” gets its substantive, rational character from intra-propositional structure, a Kantian judgment gets it from inter-propositional structure.

Aristotelian Propositions

Every canonical Aristotelian proposition can be interpreted as expressing a judgment of material consequence or material incompatibility. This may seem surprising. First, a bit of background…

At the beginning of On Interpretation, Aristotle says that “falsity and truth have to do with combination and separation” (Ch. 1). On its face, the combination or separation at issue has to do not with propositions but with terms. But it is not quite so simple. The terms in question are canonically “universals” or types or higher-order terms, each of which is therefore convertible with a mentioned proposition to the effect that the higher-order term is or is not instantiated, or does or does not apply. (We can read, e.g., “human” as the mentioned proposition “x human”.) Thus a canonical Aristotelian proposition is formed by “combining” or “separating” a pair of things that are each interpretable as an implicit proposition in the modern sense.

Propositions in the modern sense are treated as atomic. They are often associated with merely stipulated truth values, and in any case it makes no sense to ask for internal criteria that would help validate or invalidate a modern proposition. But we can always ask whether the combination or separation in a canonical Aristotelian proposition is reasonable for the arguments to which it is applied. Therefore, unlike a proposition in the modern sense, an Aristotelian proposition always implicitly carries with it a suggestion of criteria for its validation.

The only available criteria for critically assessing correctness of such elementary proposition-forming combination or separation are material in the sense that Sellars and Brandom have discussed. A judgment of “combination” in effect just is a judgment of material consequence; a judgment of “separation” in effect just is a judgment of material incompatibility. (This also helps clarify why it is essential to mention both combination and separation affirmatively, since, e.g., “human combines with mortal” canonically means not just that human and mortal are not incompatible, but that if one is said to be human, one is thereby also said to be mortal.)

This means that Aristotle’s concept of the elementary truth and falsity of propositions can be understood as grounded in criteria for goodness of material inference, not some kind of correspondence with naively conceived facts. It also means that every Aristotelian proposition can be understood as expressing a judgment of material consequence or incompatibility, and that truth for Aristotle can therefore be understood as primarily said of good judgments of material consequence or incompatibility. Aristotle thus would seem to anticipate Brandom on truth.

This is the deeper meaning of Aristotle’s statement that a proposition in his sense does not just “say something” but “says something about something”. Such aboutness is not just grammatical, but material-inferential. This is in accordance with Aristotle’s logical uses of “said of”, which would themselves be well explained by a material-inferential interpretation.

The principle behind Aristotelian syllogism is a form of composition, formally interpretable as an instance of the composition of mathematical functions, where composition operates on the combination or separation of pairs of terms in each proposition. Aristotelian logic thus combines a kind of material inference in proposition formation and its validation with a kind of formal inference by composition. This is what Kant and Hegel meant by “logic”, apart from their own innovations.
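
To make the compositional reading concrete, here is a minimal sketch (the type and function names are my own hypothetical illustrations, not anything from the texts discussed): if each universal affirmative premise is modeled as a function from one type to another, then the classic Barbara syllogism is literally function composition.

```haskell
-- Minimal sketch of "syllogism as composition"; names are hypothetical.
data Greek  = Greek
data Human  = Human
data Mortal = Mortal

-- "All Greeks are human": modeled as a (trivial) function from Greek to Human.
allGreeksAreHuman :: Greek -> Human
allGreeksAreHuman _ = Human

-- "All humans are mortal": a function from Human to Mortal.
allHumansAreMortal :: Human -> Mortal
allHumansAreMortal _ = Mortal

-- The conclusion "all Greeks are mortal" is just the composite function.
allGreeksAreMortal :: Greek -> Mortal
allGreeksAreMortal = allHumansAreMortal . allGreeksAreHuman
```

The compositional step here is purely formal; whether each premise-function is a good one to begin with is the material question about the content of the terms.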

Propositions, Terms

Brandom puts significant emphasis on Kant and Frege’s focus on whole judgments — contrasted with simple first-order terms, corresponding to natural-language words or subsentential phrases — as the appropriate units of logical analysis. The important part of this is that a judgment is the minimal unit that can be given inferential meaning.

All this looks quite different from a higher-order perspective. Mid-20th century logical orthodoxy was severely biased toward first-order logic, due to foundationalist worries about completeness. In a first-order context, logical terms are expected to correspond to subsentential elements that cannot be given inferential meaning by themselves. But in a higher-order context, this is not the case. One of the most important ideas in contemporary computer science is the correspondence between propositions and types. Generalized terms are interpretable as types, and thus also as propositions. This means that (higher-order) terms can represent instances of arbitrarily complex propositions. Higher-order terms can thus be given inferential meaning, just like sentential variables. This is all in a formal context rather than a natural-language one, but so was Frege’s work; and for what it’s worth, some linguists have also been using typed lambda calculus in the analysis of natural language semantics.
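
As a rough sketch of the propositions-as-types correspondence (the function names below are mine, chosen for illustration under the usual Curry-Howard reading), a type can be read as a proposition and a term of that type as evidence for it; familiar logical rules then show up as ordinary typed operations.

```haskell
-- Rough Curry-Howard sketch; names are illustrative, not from any source text.
-- Read each type as a proposition and each term of that type as evidence for it.

-- Implication corresponds to the function type; modus ponens to application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Conjunction corresponds to the pair type.
conjIntro :: a -> b -> (a, b)
conjIntro x y = (x, y)

conjElimLeft :: (a, b) -> a
conjElimLeft (x, _) = x

-- Disjunction corresponds to Either; reasoning by cases to case analysis.
disjElim :: (a -> c) -> (b -> c) -> Either a b -> c
disjElim f _ (Left x)  = f x
disjElim _ g (Right y) = g y
```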

Suitably typed terms compose, just like functions or category-theoretic morphisms and functors. I understand the syllogistic principle on which Aristotle based a kind of simultaneously formal and material term inference (see Aristotelian Propositions) to be just a form of composition of things that can be thought of as functions or typed terms. Proof theory, category theory, and many other technical developments explicitly work with composition as a basic form of abstract inference. Aristotle developed the original compositional logic, and it was not Aristotle but mid-20th century logical orthodoxy that insisted on the centrality of the first-order case. Higher-order, compositionally oriented logics can interpret classic syllogistic inference, first-order logic, and much else, while supporting more inferentially oriented semantics on the formal side, with types potentially taking pieces of developed material-inferential content into the formal context. We can also use natural-language words to refer to higher-order terms and their inferential significance, just as we can capture a whole complex argument in an appropriately framed definition. Accordingly, there should be no stigma associated with reasoning about terms, or even just about words.

In computer-assisted theorem-proving, there is an important distinction between results that can be proved directly by something like algebraic substitution for individual variables, and those that require a more global rewriting of the context in terms of some previously proven equivalence(s). At a high enough level of simultaneous abstraction and detail, such rewriting could perhaps constructively model the revision of commitments and concepts from one well-defined context to another.
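
For a tiny concrete illustration of the distinction (a sketch in Lean 4, with a made-up goal of my own), the result below is not obtained by plugging a value in for a variable; instead a previously established equivalence, here the hypothesis h : a = b, is used to rewrite the goal globally before it can be closed.

```lean
-- Minimal Lean 4 sketch; the goal is a made-up example, not from the text.
-- The hypothesis h : a = b is used by `rw` to rewrite the goal globally,
-- rather than by direct substitution for an individual variable.
example (f : Nat → Nat) (a b : Nat) (h : a = b) : f a = f b := by
  rw [h]
```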

The potential issue would be that global rewriting still works within a higher-order context that is itself expected to be statically consistent, whereas revision of commitments and concepts, taken simply, implies a change of higher-level context. I think this just means that a careful distinction of levels would be needed. After all, any new, revised genealogical recollection of our best thoughts will in principle be representable as a new static higher-order structure, and that structure will include something that can be read as an explanation of the transition. It may itself be subject to future revision, but in the static context that does not matter.

The limitation of such an approach is that it requires all the details of the transition to be set up statically, which can be a lot of work, and it would also be far more brittle than Brandom’s informal material inference. (See also Categorical “Evil”; Definition.)

I am fascinated by the fact that typed terms can begin to capture material as well as purely formal significance. How complete or adequate this is would depend on the implementation.