Logic for Expression

In recent times, Robert Brandom has pioneered the idea that the role of logic is primarily expressive. In his 2018 essay “From Logical Expressivism to Expressivist Logic”, he says this means its purpose is “to make explicit the inferential relations that articulate the semantic contents of the concepts expressed by the use of ordinary, nonlogical vocabulary” (p. 70).

In my humble opinion, this is what logic was really supposed to be about in Aristotle, but the later tradition did not follow him on this point. Aristotle insisted that logic is a “tool”, not a science, but most later authors have assumed the contrary — that logic is the “science” of correct reasoning, or perhaps the science of consequence relations. Several scholars have nonetheless rediscovered the idea that the purpose of logical demonstration in Aristotle is not to prove truths, but to express reasoned arguments as clearly as possible.

Brandom says that “the task of logic is to provide mathematical tools for articulating the structure of reasoning” (p. 71). People were reasoning in ordinary life long before logic was invented, and continue to do so. But the immensely fertile further development of logic in the late 19th and early 20th centuries was mostly geared toward the formalization of mathematics. Reasoning in most specialized disciplines — such as the empirical sciences, medicine, and law — actually resembles reasoning in ordinary life more than it does specifically mathematical reasoning.

According to Brandom, “The normative center of reasoning is the practice of assessing reasons for and against conclusions. Reasons for conclusions are normatively governed by relations of consequence or implication. Reasons against conclusions are normatively governed by relations of incompatibility. These relations of implication and incompatibility, which constrain normative assessment of giving reasons for and against claims, amount to the first significant level of structure of the practice of giving reasons for and against claims.”

“These are, in the first instance, what Sellars called ‘material’ relations of implication and incompatibility. That is, they do not depend on the presence of logical vocabulary or concepts, but only on the contents of non- or prelogical concepts. According to semantic inferentialism, these are the relations that articulate the conceptual contents expressed by the prelogical vocabulary that plays an essential role in formulating the premises and conclusions of inferences” (pp. 71-72).

“Material” relations of consequence and incompatibility have a different structure from formal ones. Formal consequence is monotonic, which means that adding new premises never undoes consequences that already follow from existing premises. Formal contradiction is “explosive”, in the sense that any contradiction whatsoever makes it possible to “prove” anything whatsoever (both true statements and their negations), thereby invalidating the very applicability of proof. But as Brandom reminds us, “outside of mathematics, almost all our actual reasoning is defeasible” (p. 72). Material consequence is nonmonotonic, which means that adding new premises can defeat consequences that previously held. Material incompatibilities can often be “fixed” by adding new, specialized premises. (As Aquinas is somewhere reported to have said, “When faced with a contradiction, introduce a distinction”.)
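To make the contrast concrete, the two structural principles at issue can be stated schematically (a rough sketch in standard notation, my formulation rather than Brandom’s): monotonicity says that if \(\Gamma \vdash \varphi\), then \(\Gamma \cup \Delta \vdash \varphi\) for any further premises \(\Delta\); explosion says that \(\varphi, \neg\varphi \vdash \psi\) for arbitrary \(\psi\). A nonmonotonic, defeasible consequence relation is one for which the first principle fails: there can be cases where \(\Gamma \vdash \varphi\) and yet \(\Gamma \cup \Delta \nvdash \varphi\).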

Brandom notes that “Ceteris paribus [“other things being equal”] clauses do not magically turn nonmonotonic implications into monotonic ones. (The proper term for a Latin phrase whose recitation can do that is ‘magic spell’.) The expressive function characteristic of ceteris paribus clauses is rather explicitly to mark and acknowledge the defeasibility, hence nonmonotonicity, of an implication codified in a conditional, not to cure it by fiat” (p. 73).

“There is no good reason to restrict the expressive ambitions with which we introduce logical vocabulary to making explicit the rare material relations of implication and incompatibility that are monotonic. Comfort with such impoverished ambition is a historical artifact of the contingent origins of modern logic in logicist and formalist programs aimed at codifying specifically mathematical reasoning. It is to be explained by appeal to historical causes, not good philosophical reasons” (ibid). On the other hand, making things explicit should be conservative in the sense of not changing existing implications.

“…[W]e should not emulate the drunk who looks for his lost keys under the lamp-post rather than where he actually dropped them, just because the light is better there. We should look to shine light where we need it most” (ibid).

For relations of material consequence, the classical principle of “explosion” should be replaced with the weaker principle that “if [something] is not only materially incoherent (in the sense of explicitly containing incompatible premises) but persistently so, that is incurably, indefeasibly incoherent, in that all of its supersets are also incoherent, then it implies everything” (p. 77).
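Schematically, and again in my own paraphrase rather than Brandom’s notation: only when every superset \(\Delta \supseteq \Gamma\) of a premise set \(\Gamma\) remains incoherent does \(\Gamma \vdash \psi\) hold for every \(\psi\). An incompatibility that could still be cured by adding further distinctions or collateral premises no longer licenses arbitrary conclusions.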

“The logic of nonmonotonic consequence relations is itself monotonic. Yet it can express, in the logically extended object language, the nonmonotonic relations of implication and incompatibility that structure both the material, prelogical base language, and the logically compound sentences formed from them” (p. 82).

Material consequence relations themselves may or may not be monotonic. Instead of being required globally, monotonicity can be declared locally by means of a modal operator. “Logical expressivists want to introduce logical vocabulary that explicitly marks the difference between those implications and incompatibilities that are persistent under the addition of arbitrary auxiliary hypotheses or collateral commitments, and those that are not. Such vocabulary lets us draw explicit boundaries around the islands of monotonicity to be found surrounded by the sea of nonmonotonic material consequences and incompatibilities” (p. 83).
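One natural way to gloss such an operator (a sketch of the idea only, not Brandom’s official definition): an implication from premises \(\Gamma\) to conclusion \(\varphi\) marked in this way would hold just in case \(\Delta \vdash \varphi\) for every \(\Delta \supseteq \Gamma\), that is, just in case the implication survives the addition of arbitrary collateral premises.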

Ranges of subjunctive robustness can also be explicitly declared. “The underlying thought is that the most important information about a material implication is not whether or not it is monotonic — though that is something we indeed might want to know. It is rather under what circumstances it is robust and under what collateral circumstances it would be defeated” (p. 85).

“The space of material implications that articulates the contents of the nonlogical concepts those implications essentially depend upon has an intricate localized structure of subjunctive robustness and defeasibility. That is the structure we want our logical expressive tools to help us characterize. It is obscured by commitment to global structural monotonicity—however appropriate such a commitment might be for purely logical relations of implication and incompatibility” (pp. 85-86).

“Logic does not supply a canon of right reasoning, nor a standard of rationality. Rather, logic takes its place in the context of an already up-and-running rational enterprise of making claims and giving reasons for and against claims. Logic provides a distinctive organ of self-consciousness for such a rational practice. It provides expressive tools for talking and thinking, making claims, about the relations of implication and incompatibility that structure the giving of reasons for and against claims” (p. 87).

Pure Difference?

A common theme here is the conceptual priority of difference over identity. I think that identity is a derived concept, and not a primitive one (see also Aristotelian Identity).

The French philosopher Gilles Deleuze (1925-1995) in Difference and Repetition and other works argued that a pure notion of difference is by itself sufficient for a general account of things. In information theory, information is explained as expressing difference. In Saussurean structural linguistics, we are said to recognize spoken words by recognizing elementary differences between sounds. In both cases, the idea is that we get to meaning by distinguishing and relating.

Deleuze initially cites both of these notions of difference, but goes on to develop arguments grounded largely in Nietzsche and Kierkegaard, whom he uses to argue against Plato and Hegel. His very interesting early work Nietzsche and Philosophy was marred by a rather extreme polemic against Hegel, and in Difference and Repetition he announces a program of “anti-Platonism” that reproduces Nietzsche’s intemperate hostility to Plato. Nietzsche blamed Plato for what I regard as later developments. Neither Plato nor Aristotle made the kind of overly strong assertions about identity that became common later on.

In The Sophist and elsewhere, Plato had his characters speak of Same, Other, and the mixing of the two as equally primordial. Hegel took great pains to elaborate the notion of a “difference that makes a difference”. But Deleuze wants to argue that Plato and Hegel both illegitimately subordinate difference to identity. His alternative is to argue that what is truly fundamental is a primitive notion of difference that does not necessarily “make a difference”, and that comes before any “making a difference”. (I prefer the thesis of Leibniz that indiscernibility of any difference is just what identity consists in.)

This is related to Deleuze’s very questionable use of Duns Scotus’ notion of the univocity of being, both in general and more particularly in his interpretation of Spinoza. For Deleuze, pure difference interprets Scotist univocal being.

I frankly have no idea what led to Deleuze’s valorization of Scotus. Deleuze is quite extreme in his opposition to any kind of representationalism, while Scotus made representability the defining criterion of his newly invented univocal being. It is hard to imagine views that are further apart. I can only speculate that Deleuze too hastily picked out Scotus because he wanted to provocatively oppose the 20th-century neo-Thomism that had considerable prominence in France, and Scotus is a leading medieval figure standing outside the Thomist tradition.

For Deleuze, univocal being is pure difference without any identity. Difference that doesn’t make a difference seems to take over the functional role that identity plays in theories that treat identity as something underlying, exceeding any discernibility based on criteria. I don’t see why we need either of these.

I think Deleuze’s bête noire Hegel actually did a better job of articulating the priority of difference over identity. Hegel did this not by appealing to a putative monism of difference and nothing else, but by developing correlative notions of “difference that makes a difference” and of a kind of logical consequence or entailment that we attribute to real things as we interpret them, independent of and prior to any elaboration of logic in a formal sense.

In Hegel’s analysis as explicated by Brandom, any difference that makes a difference expresses a kind of “material” incompatibility of meaning that rules out some possible assertions. This is just what “making a difference” means. Meanwhile, all positive assertions can be more specifically analyzed as assertions of some consequence or entailment or other at the level of meaning (see Material Consequence). Every predication is analyzable as an assertion of consequence or entailment between subject and predicate, as Leibniz might remind us. It is always valid to interpret, e.g., “a cat is a mammal” as an inference rule for generating conclusions like “if Garfield is a cat, then Garfield is a mammal”.
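As a minimal schematic rendering (mine, not Brandom’s or Hegel’s formalism): treating “a cat is a mammal” as the rule \(\mathrm{Cat}(x) \Rightarrow \mathrm{Mammal}(x)\) licenses the transition from \(\mathrm{Cat}(\mathrm{Garfield})\) to \(\mathrm{Mammal}(\mathrm{Garfield})\); the predication functions as a license for inference.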

What is missing from Deleuze’s account is anything like entailment, the idea of something following from something else. This notion of “following”, I am convinced, is prior to any notion of identity applicable to real things. Without presupposing any pre-existing identities of things, we can build up an account of the world based on the combination of differences that make a difference, on the one hand, and real-world entailments, on the other. Identity is then a result rather than an assumption. Meanings (and anything like identity) emerge from the interplay of practical real-world entailments and distinctions; it is this interplay that gives them definition in terms of one another.

Deleuze was a sort of ontological anarchist who wanted being to be free of any pre-existing principles. While I agree that we can’t legitimately just assume such principles, I think this is very far from meaning that principles are irrelevant or even harmful. On the contrary, as Kant might remind us, principles are all-important. They aren’t just “given”. We have to do actual work to develop them. But if we have no principles — if nothing truly follows from anything else, or is ruled out by anything else — then we cannot meaningfully say anything at all.