Brandom on Reason

In the introduction to Reason and Philosophy (2009), Brandom identifies with “a venerable tradition that distinguishes us as rational animals, and philosophy by its concern to understand, articulate, and explain the notion of reason….  Kant and Hegel showed us a way forward for a rationalism that is not objectionably Cartesian, intellectualist, or anti- (or super-) naturalist.  Nor need it treat the ‘light of reason’ as unacquired or innate” (pp. 1-2; emphasis in original throughout).

“Rational beings are ones that ought to have reasons for what they do, and ought to act as they have reason to” (p. 3).

“Taking something to be subject to appraisals of its reasons, holding it rationally responsible, is treating it as someone: as one of us (rational beings).  This normative attitude toward others is recognition, in the sense of Hegel’s central notion of Anerkennung” (p. 3).

The role of recognition makes things like authority and responsibility into social statuses.  These “are in principle unintelligible apart from consideration of the practical attitudes of those who hold each other responsible, acknowledge each other’s authority, attribute commitments and entitlements to each other” (pp. 3-4).

If we take meaning seriously, we cannot take it for granted.  Inferential articulation is involved not only in determining what is true, but also in the understanding of meanings.  What we mean and what we believe are actually interdependent.  He refers to Wilfrid Sellars’ thesis that no description can be understood apart from the “space of implications” in which the terminology used in the description is embedded.  “Discursive activity, applying concepts paradigmatically in describing how things are, is inseparable from the inferential activity of giving and asking for reasons” (p. 8).  

“[T]he acts or statuses that are givings of reasons and for which reasons are given – are judgings, claimings, assertings, or believings.  They are the undertakings or acknowledgements of commitments” (p. 9).  “[R]ationality is a normative concept.  The space of reasons is a normative space” (p. 12).  Philosophy should be concerned not just with pure logic and semantics, but with “the acknowledgement and attribution of… statuses such as responsibility and authority, commitment and entitlement” (p. 13).

Searching for a Middle Term

“But nothing, I think, prevents one from in a sense understanding and in a sense being ignorant of what one is learning” (Aristotle, Posterior Analytics; Complete Works revised Oxford edition vol. 1, p. 115). The kind of understanding spoken of here involves awareness “both that the explanation because of which the object is is its explanation, and that it is not possible for this to be otherwise” (ibid). To speak of the “explanation because of which” something is suggests that the concern is with states of affairs being some way, and the “not… otherwise” language further confirms this.

Following this is the famous criterion that demonstrative understanding depends on “things that are true and primitive and immediate and more familiar than and prior to and explanatory of the conclusion…. [T]here will be deduction even without these conditions, but there will not be demonstration, for it will not produce understanding” (ibid). The “more familiar than” part has sometimes been mistranslated as “better known than”, confusing what Aristotle carefully distinguishes as gnosis (personal acquaintance) and episteme (knowledge in a strong sense). I think this phrase is the key to the whole larger clause, giving it a pragmatic rather than foundationalist meaning. (Foundationalist claims only emerged later, with the Stoics and Descartes.) The pedagogical aim of demonstration is to use things that are more familiar to us — which for practical purposes we take to be true and primitive and immediate and prior and explanatory — to showcase reasons for things that are slightly less obvious.

Independently of these criteria for demonstration, the whole point of the syllogistic form is that the conclusion very “obviously” and necessarily follows, by a simple operation of composition on the premises (A => B and B => C, so A => C). Once we have accepted both premises of a syllogism, the conclusion is already implicit, and that in an especially clear way. We will not reach any novel or unexpected conclusions by syllogism. It is a kind of canonical minimal inferential step, intended not to be profound but to be as simple and clear as possible.
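As a minimal sketch of this point (in Lean 4, reading the universal premises as implications; “barbara” is just an illustrative label), the conclusion of the classic first-figure syllogism is literally the composite of its two premises:

```lean
-- Barbara: every A is B, every B is C, therefore every A is C.
-- The conclusion is nothing but the composition of the premises.
theorem barbara {A B C : Prop} (h₁ : A → B) (h₂ : B → C) : A → C :=
  fun a => h₂ (h₁ a)
```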

(Contemporary category theory grounds all of mathematics on the notion of composable abstract dependencies, expressing complex dependencies as compositions of simpler ones. Its power depends on the fact that under a few carefully specified conditions expressing the properties of good composition, the composition of higher-order functions with internal conditional logic — and other even more general constructions — works in exactly the same way as composition of simple predications like “A is B”.)
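To make the aside concrete, here is a small Lean sketch (collatzStep is only a hypothetical example of a function with internal conditional logic): composition treats it no differently than a simple arrow, and associativity of composition holds by definition either way.

```lean
-- A function with internal conditional logic...
def collatzStep (n : Nat) : Nat :=
  if n % 2 = 0 then n / 2 else 3 * n + 1

-- ...composes exactly like any other arrow, and composition is
-- associative regardless of the components' internal complexity.
example (f g h : Nat → Nat) : (f ∘ g) ∘ h = f ∘ (g ∘ h) := rfl

#eval (collatzStep ∘ collatzStep) 6  -- 10
```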

Since a syllogism is designed to be a minimal inferential step, there is never a question of “searching” for the right conclusion. Rather, Aristotle speaks of searching for a “middle term” before an appropriate pair of premises is identified for syllogistic use. A middle term like B in the example above is the key ingredient in a syllogism, appearing both in the syntactically dependent position in one premise, and in the syntactically depended-upon position in the other premise, thus allowing the two to be composed together. This is a very simple example of mediation. Existence of a middle term B is what makes composition of the premises possible, and is therefore what makes pairings of premises appropriate for syllogistic use.

In many contexts, searching for a middle term can be understood as inventing an appropriate intermediate abstraction from available materials. If an existing abstraction is too broad to fit the case, we can add specifications until it does, and then optionally give the result a new name. All Aristotelian terms are essentially implied specifications; the names are just for convenience. Aristotle sometimes uses pure specifications as “nameless terms”.
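A small Lean sketch of this movement (Animal and Biped are hypothetical illustrations): a pure specification can be used “nameless”, and only optionally christened afterward.

```lean
structure Animal where
  legs : Nat

-- A pure specification, usable as a "nameless term":
example : {a : Animal // a.legs = 2} := ⟨⟨2⟩, rfl⟩

-- The same specification, optionally given a name for convenience:
def Biped := {a : Animal // a.legs = 2}
```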

Named abstractions function as shorthand for the potential inferences that they embody, enabling simple common-sense reasoning in ordinary language. We can become more clear about our thinking by using dialectic to unpack the implications of the abstractions embodied in our use of words. (See also Free Play; Practical Judgment.)

Constructive

Brandom’s inferentialist alternative to representationalism stresses material, meaning-oriented inference over formal, syntactic inference. Prior to the development of mathematical logic, philosophers typically mixed reasoning about meanings with natural-language analogues of simple formal reasoning. People in ordinary life still do this.

Where Brandom’s approach is distinctive is in its unprecedentedly thorough commitment to the reciprocal determination of meaning and inference. We do not just make inferences based on meanings grasped as ready-to-hand, together with syntactic cues to argument structure; we simultaneously question and explicitate those very meanings, by bracketing what is ready-to-hand and instead working out recursive material-inferential expansions of what would really be meant by application of the inferential proprieties in question.

For Brandom, the question of which logic to use in this explicitation does not really arise, because the astounding multiplication of logics — each with different expressive resources — is all in the formal domain. It is nonetheless important to note that formal logics vary profoundly in the degrees of support they offer for broad representationalist or inferentialist commitments.

Michael Dummett in The Logical Basis of Metaphysics argued strongly for the importance of constructive varieties of formal logic for philosophy. Constructive logics are inherently inference-centered, because construction basically just is a form of inference. (Dummett is concerned to reject varieties of realism that I would call naive, but seems to believe the taxonomy of realisms is exhausted at this point. This leads him to advocate a form of anti-realism. His book is part of a rather polarized debate in recent decades about realism and anti-realism. I see significant overlap between non-naive realisms and nonsubjective idealisms, so I would want to weaken his strong anti-realist conclusions, and I think Brandom helps us to do that.)

Without endorsing Dummett’s anti-realism in its strong form, I appreciate his argument for the philosophical preferability of constructive over classical logic. It seems to me that one cannot use modern “classical” formal logic without substantial representationalist assumptions, and a lot of assumed truth as well. If and when we do move into a formal domain, this becomes important.
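One concrete marker of the difference, sketched in Lean 4 (whose core logic is constructive): double-negation elimination has no constructive proof, and goes through only when a classical axiom is invoked by name, which makes the extra assumed truth visible.

```lean
-- Constructively, ¬¬P → P is not derivable on its own; it needs
-- an explicit appeal to the classical axiom.
example (P : Prop) : ¬¬P → P :=
  fun h => Classical.byContradiction h
```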

As used in today’s computer science, constructive logic looks in some ways extremely different in its philosophical implications from Brouwer’s original presentation. Brouwer clouded the matter by mixing good mathematics with philosophical positions on intuition and subjectivity that were both questionable and not nearly as intrinsic to the mathematics as he seemed to believe. The formal parts of his argument now have a much wider audience and much greater interest than his philosophizing.

Constructive logic puts proof or evidence before truth, and eschews appeals to self-evidence. Expressive genealogy puts the material-inferential explicitation of meaning before truth, and eschews appeals to self-evidence. Both strongly emphasize justification, but one is concerned with proof, the other with well-founded interpretation. Each has its place, and they fit well together.
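For a Lean-flavored illustration of “proof or evidence before truth”: even a trivially decidable disjunction is asserted by exhibiting evidence for a particular disjunct, never by appeal to self-evidence.

```lean
-- Evidence first: the disjunction is established by producing a
-- proof of its left disjunct, not by declaring the whole "true".
example : 2 + 2 = 4 ∨ 2 + 2 = 5 := Or.inl rfl
```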

Propositions, Terms

Brandom puts significant emphasis on Kant and Frege’s focus on whole judgments — contrasted with simple first-order terms, corresponding to natural-language words or subsentential phrases — as the appropriate units of logical analysis. The important part of this is that a judgment is the minimal unit that can be given inferential meaning.

All this looks quite different from a higher-order perspective. Mid-20th century logical orthodoxy was severely biased toward first-order logic, due to foundationalist worries about completeness. In a first-order context, logical terms are expected to correspond to subsentential elements that cannot be given inferential meaning by themselves. But in a higher-order context, this is not the case. One of the most important ideas in contemporary computer science is the correspondence between propositions and types. Generalized terms are interpretable as types, and thus also as propositions. This means that (higher-order) terms can represent instances of arbitrarily complex propositions. Higher-order terms can thus be given inferential meaning, just like sentential variables. This is all in a formal context rather than a natural-language one, but so was Frege’s work; and for what it’s worth, some linguists have also been using typed lambda calculus in the analysis of natural language semantics.
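The correspondence can be seen directly in a dependently typed language like Lean, where the same term shape serves at once as a program on data and as evidence for a proposition (apply' and modusPonens are hypothetical names for illustration):

```lean
-- One shape, two readings: a function on data...
def apply' {α β : Type} (f : α → β) (a : α) : β := f a
-- ...and a proof of a conditional proposition.
def modusPonens {A B : Prop} (f : A → B) (a : A) : B := f a

-- A higher-order term inhabiting an arbitrarily complex
-- proposition: contraposition, witnessed by an explicit term.
example {A B : Prop} : (A → B) → ¬B → ¬A :=
  fun f nb a => nb (f a)
```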

Suitably typed terms compose, just like functions or category-theoretic morphisms and functors. I understand the syllogistic principle on which Aristotle based a kind of simultaneously formal and material term inference (see Aristotelian Propositions) to be just a form of composition of things that can be thought of as functions or typed terms. Proof theory, category theory, and many other technical developments explicitly work with composition as a basic form of abstract inference. Aristotle developed the original compositional logic, and it was not Aristotle but mid-20th century logical orthodoxy that insisted on the centrality of the first-order case. Higher-order, compositionally oriented logics can interpret classic syllogistic inference, first-order logic, and much else, while supporting more inferentially oriented semantics on the formal side, with types potentially taking pieces of developed material-inferential content into the formal context. We can also use natural-language words to refer to higher-order terms and their inferential significance, just as we can capture a whole complex argument in an appropriately framed definition. Accordingly, there should be no stigma associated with reasoning about terms, or even just about words.
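As a sketch of that last point in Lean (chain and the term names are hypothetical): a single word can name a whole inferential pattern, and using the word deploys the argument it abbreviates.

```lean
-- A named higher-order term packaging a complete pattern of
-- inference; the name then functions like an ordinary word.
def chain {A B C : Prop} (ab : A → B) (bc : B → C) : A → C := bc ∘ ab

example {Socrates Human Mortal : Prop}
    (h₁ : Socrates → Human) (h₂ : Human → Mortal) :
    Socrates → Mortal :=
  chain h₁ h₂
```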

In computer-assisted theorem-proving, there is an important distinction between results that can be proved directly by something like algebraic substitution for individual variables, and those that require a more global rewriting of the context in terms of some previously proven equivalence(s). At a high enough level of simultaneous abstraction and detail, such rewriting could perhaps constructively model the revision of commitments and concepts from one well-defined context to another.
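A minimal Lean illustration of the contrast (Nat.add_comm is a previously proven equivalence from Lean’s library): the first equation falls out directly by computation, while the second is closed only after a global rewrite of the goal.

```lean
-- Direct: holds by mere computation/substitution.
example (n : Nat) : n + 0 = n := rfl

-- Indirect: the goal must first be rewritten using a previously
-- proven equivalence before it becomes trivial.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```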

The potential issue would be that global rewriting still works in a higher-order context that is expected to itself be statically consistent, whereas revision of commitments and concepts, taken simply, implies a change of higher-level context. I think this just means a careful distinction of levels would be needed. After all, any new, revised genealogical recollection of our best thoughts will in principle be representable as a new static higher-order structure, and that structure will include something that can be read as an explanation of the transition. It may itself be subject to future revision, but in the static context that does not matter.

The limitation of such an approach is that it requires all the details of the transition to be set up statically, which can be a lot of work, and it would also be far more brittle than Brandom’s informal material inference. (See also Categorical “Evil”; Definition.)

I am fascinated by the fact that typed terms can begin to capture material as well as purely formal significance. How complete or adequate this is would depend on the implementation.