I think most people most of the time are more influenced by apprehended or assumed meanings than by formal logic. What makes us rational animals is first of all the simple fact that we have commitments articulated in language. The interplay of language and commitment opens us to dialogue and the possibility of mutual recognition, which simultaneously ground both values and objectivity. This opening, I’d like to suggest, is what Hegel called Spirit. (See also Interpretation.)
It is extremely common to see references to “predication” as if it were a central concept of Aristotelian logic. We are so used to a grammatical interpretation in terms of relations between subjects and predicates that it is hard to disengage from that. However, historically it was Aristotelian logic that influenced ancient Greek accounts of grammar, not vice versa.
Modern logicians distinguish between a neutral proposition — which might be merely mentioned, rather than asserted — and the assertion of the proposition. Grammatical predication in itself does not imply any logical assertion, only a normatively neutral syntactic relation between sentence components. But “said of” in Aristotle always refers to some kind of meaningful assertion that has a normative character, not to grammatical predication.
Aristotle talks about what we might call kinds of “sayables” (“categories”). He famously says that we can only have truth or falsity when one kind of sayable is “said of” another. Mere words or phrases by themselves don’t assert anything, and hence cannot be true or false; for that we need what modern writers have referred to as a “complete thought”.
The ordinary meaning of “to categorize” in ancient Greek was “to accuse in a court of law”. Aristotle used it to talk about assertions. It didn’t originally connote a classification. The modern connotation of classification seems to stem from the accident that, independently of what “category” meant in his usage, Aristotle famously developed a classification of “categories”.
Aristotle also talks about logical “judgment” (apophansis, a different word from practical judgment or phronesis). Husserl for instance transliterated this to German, and followed the traditional association of logical judgment with “predication”. But the ordinary Greek verb apophainein just means to show or make known. Aristotle’s usage suggests a kind of definite assertion or expressive clarification related to demonstration, which makes sense, because demonstrations work by interrelating logical judgments.
All of Aristotle’s words and phrases that get translated with connotations of “predication” actually have to do with normative logical assertion, not any connecting of a grammatical subject with a grammatical predicate. Nietzsche and others have complained about the metaphysical status foisted on grammatical subjects, implicitly blaming Aristotle, but all these connotations are of later date.
The great 20th century scholar of ancient and medieval logic and semantics L. M. de Rijk in his Aristotle: Semantics and Ontology (2002) argued at length that Aristotle’s logical “is” and “is not” should be understood not as binary operators connecting subjects and predicates, but as unary operators of assertion and negation on whole propositions formed from pairs of terms. (See also Aristotelian Propositions.)
As in similar cases, by no means do I wish to suggest that all the work done on the basis of the common translation of “predication” is valueless; far from it. But I think we can get additional clarity by carefully distinguishing the views and modes of expression of Aristotle himself from those of later commentators and logicians, and I think Aristotle’s own distinctive perspectives are far fresher and more interesting than even good traditional readings would allow.
Human reasoning has two sides, which could be called formal and material. Any reasoning applicable to the real world necessarily involves the “material” side that is concerned with actual meaning “content”. It may also involve the “formal” side, which aims to express reasoning in terms of mechanically repeatable operations that are completely agnostic to the actual meanings they are used to operate on. Reasoning in some abstract contexts may rely entirely on the formal side.
Aristotle is usually credited with inventing formal logic, but he paid a lot of attention to the material side as well. In the Latin middle ages both sides were recognized, but the formal side was generally emphasized.
Formal mathematical logic underwent an immense development in the 20th century, somewhat like the earlier success story of mathematical physics. The syntactic devices of mathematical logic seemed so powerful that its rise led to a great neglect of the material, interpretive side of logic. Husserl was one of the few 20th century authors who questioned this from the start. More recently, Brandom has argued that Kant and Hegel were both fundamentally concerned with the material, interpretive side of logic, and that this is what Kant meant by “transcendental” logic (and what Hegel meant by “dialectic”).
Generally when I mention interpretation here, I have the material side in mind, but there is also such a thing as formal interpretation. Formal interpretation or “evaluation” of expressions in terms of other expressions is the most fundamental thing that interpreters and compilers for programming languages do. As with material interpretation, formal interpretation makes meanings explicit by expressing them in terms of more elementary distinctions and entailments, but it uses purely syntactic substitution and rewriting to do so.
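As a minimal sketch of the point about interpreters, here is a toy evaluator (in Python, with an invented nested-tuple expression format, not any particular compiler’s internals) that reduces expressions by pure syntactic substitution, with no appeal to meaning beyond the rewrite rules themselves:

```python
# Formal interpretation as syntactic rewriting: a nested tuple like
# ("+", 2, ("*", 3, 4)) stands for the expression 2 + (3 * 4), and
# evaluation replaces each subexpression with its reduced form.

def evaluate(expr):
    """Recursively rewrite a nested tuple expression down to a number."""
    if isinstance(expr, (int, float)):
        return expr  # already fully reduced
    op, left, right = expr
    l, r = evaluate(left), evaluate(right)  # reduce subexpressions first
    if op == "+":
        return l + r
    if op == "*":
        return l * r
    raise ValueError(f"unknown operator: {op}")

# ("+", 2, ("*", 3, 4)) rewrites to 2 + (3 * 4) = 14
print(evaluate(("+", 2, ("*", 3, 4))))  # 14
```

The evaluator never “knows” what addition means in the world; it only applies rules that match shapes of expressions, which is just what makes this interpretation formal rather than material.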
Material interpretation can always potentially go on indefinitely, explaining real-world meanings by relating them to other meanings, and those in terms of others, and so on. In practice, we always cut it short at some point, once we achieve a relatively stable network of dependencies.
Formal interpretation on the other hand is usually engineered to be decidable, so that it actually does reach an end at some point. The fact that it reaches an end is closely related to the fact that precise formal models are always in some sense only approximations of a determination of reality that is actually open-ended. Formal models are a sort of syntactic reification of open-ended material interpretation. We may think we have taken them as far as they can go, but in real life it is always possible that some new case will come up that requires new detail in the model.
We also use a kind of formal interpretation alongside material interpretation in our spontaneous understanding of natural language. Natural language syntax helps us understand natural language meaning. It provides cues for how different clauses are intended to relate to one another. Is what is meant in this clause an exception? A consequence? A presupposition? A fact? A recommendation? Something being criticized? (See also Formal and Informal Language.)
One of Edmund Husserl’s works that I had not looked at before is Formal and Transcendental Logic (German ed. 1929). This will be a very shallow first impression.
Although he goes on to argue for the importance of a “transcendental” logic, Husserl is far from denigrating purely formal logic. He explores developments in 19th century mathematics that have some relation to logic, like Riemann’s theory of abstract multiplicities. Formal logic itself comprises both a theory of objects and a theory of forms of judgment; Husserl aims to give a deeper meaning to both. Ultimately, he wants to give a “radical” account of sense, or meaning as distinguished from reference. For Husserl, we get to objects only indirectly, through the long detour of examining sense.
Having previously severely criticized the “psychologistic” account of logic made popular by John Stuart Mill, here he is at some pains to establish the difference between transcendental and psychological views of subjectivity. Husserl often seems overly charitable to Descartes, but here he writes, “At once this Cartesian beginning, with the great but only partial discovery of transcendental subjectivity, is obscured by that most fateful and, up to this day, ineradicable error which has given us the ‘realism’ that finds in the idealisms of a Berkeley and a Hume its equally wrong counterparts. Even for Descartes, an absolute evidence makes sure of the ego (mens sive animus, substantia cogitans [mind or soul, thinking substance]) as a first, indubitably existing, bit of the world…. Even Descartes operates here with a naive apriori heritage…. Thus he misses the proper transcendental sense of the ego he has discovered…. Likewise he misses the properly transcendental sense of the questions that must be asked of experience and of scientific thinking and therefore, with absolute universality, of a logic itself.”
“This unclarity is a heritage latent in the pseudo-clarities that characterize all relapses of epistemology into natural naivete and, accordingly, in the pseudo-clear scientificalness of contemporary realism. It is an epistemology that, in league with a naively isolated logic, serves to prove to the scientist… that therefore he can properly dispense with epistemology, just as he has for centuries been getting along well enough without it anyway.”
“… A realism like that of Descartes, which believes that, in the ego to which transcendental self-examination leads back in the first instance, it has apprehended the real psyche of the human being… misses the actual problem” (pp. 227-228).
“For a radical grounding of logic, is not the whole real world called in question — not to show its actuality, but to bring out its possible and genuine sense and the range of this sense…?” (p. 229).
“The decisive point in this confusion… is the confounding of the ego with the reality of the I as a human psyche” (p. 230).
This last is an argument I have been concerned to make in a Kantian context. However one chooses to pin down the vocabulary (I have been generally using “ego” for the worldly psychological thing, and “I” as actually referring to a nonempirical, transcendental index of certain commitments), the distinction is decisive. Empirical subjectivity in the realm of psychology and transcendental subjectivity in the realm of meaning are extremely different things, even though we live in their interweaving. These days I’m inclined to identify the human expansively with that possible opening onto the transcendental of values — or “Spirit” in a Hegelian sense — rather than contractively with the “merely human” empirical psyche.
Leading programming language theorist Robert Harper refers to so-called constructive or intuitionistic logic as “logic as if people mattered”. There is a fascinating convergence of ideas here. In the early 20th century, Dutch mathematician L. E. J. Brouwer developed a philosophy of mathematics called intuitionism. He emphasized that mathematics is a human activity, and held that every proof step should involve actual evidence discernible to a human. By contrast, mathematical Platonists hold that mathematical objects exist independent of any thought; formalists hold that mathematics is a meaningless game based on following rules; and logicists argue that mathematics is reducible to formal logic.
For Brouwer, a mathematical theorem is true if and only if we have a proof of it that we can exhibit, and each step of that proof can also be exhibited. In the later 19th century, many new results about infinity — and infinities of infinities — had been proved by what came to be called “classical” means, using proof by contradiction and the law of excluded middle. But from the time of Euclid, mathematicians have always regarded reproducible constructions as a better kind of proof. The law of excluded middle is a provable theorem in any finite context. When the law of excluded middle applies, you can conclude that if something is not false it must be true, and vice versa. But it is not possible to construct any infinite object.
The only infinity we actually experience is what Aristotle called “potential” infinity. We can, say, count a star and another and another, and continue as long as we like, but no actually infinite number or magnitude or thing is ever available for inspection. Aristotle famously defended the law of excluded middle, but in practice only applied it to finite cases.
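The finite case can be made concrete. Over a finite domain we can decide any property by exhaustive inspection, exhibiting a witness or confirming there is none, which is why excluded middle is unproblematic there. A toy illustration (in Python, with invented names):

```python
# Over a finite domain, "either some element satisfies P, or none does"
# is effectively decidable: we just inspect every case, in the
# constructive spirit of exhibiting evidence.

def decide_exists(domain, predicate):
    """Return a witness if one exists, else None -- a decision by inspection."""
    for x in domain:
        if predicate(x):
            return x  # constructive evidence: an exhibited witness
    return None       # evidence of the negative: every case was checked

# Is there an even number among 8, 9, 10? Exhibit one.
witness = decide_exists(range(8, 11), lambda n: n % 2 == 0)
print(witness)  # 8
```

No such exhaustive inspection is available for an infinite domain, which is exactly where the constructivist withholds the classical inference from “not false” to “true”.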
In mathematics there are conjectures that are not known to be true or false. Brouwer would say, they are neither true nor false, until they are proved or disproved in a humanly verifiable way.
The fascinating convergence is that Brouwer’s humanly verifiable proofs turn out also to exactly characterize the part of mathematics that is computable, in the sense in which computer scientists use that term. Notwithstanding lingering 20th century prejudices, intuitionistic math actually turns out to be a perfect fit for computer science. I use this in my day job.
I am especially intrigued by what is called intuitionistic type theory, developed by Swedish mathematician-philosopher Per Martin-Löf. This is offered simultaneously as a foundation for mathematics, a higher-order intuitionistic logic, and a programming language. One might say it is concerned with explaining ultimate bases for abstraction and generalization, without any presuppositions. One of its distinctive features is that it uses no axioms, only inference rules. Truth is something emergent, rather than something presupposed. Type theory has deep connections with category theory, another truly marvelous area of abstract mathematics, concerned with how different kinds of things map to one another.
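The flavor of this can be shown in a one-line sketch (here in Lean, whose core is a descendant of Martin-Löf’s theory): a proposition is a type, and a proof is a term of that type, built by inference rules alone, with no axioms invoked.

```lean
-- Propositions as types: the proof of A ∧ B → B ∧ A is just the
-- function that swaps the two components of a pair. No axiom is
-- used; the term is built entirely from introduction and
-- elimination rules for ∧ and →.
def and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```

That the proof literally is a small program is the Curry-Howard correspondence mentioned below in connection with computability.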
What especially fascinates me about this work are its implications for what logic actually is. On the one hand, it puts math before mathematical logic, rather than after it as in the classic early 20th century program of Russell and Whitehead; on the other, it provides opportunities to reconnect with logic in the different and broader, less formal senses of Aristotle and Kant, as still having something to say to us today.
Homotopy type theory (HoTT) is a leading-edge development that combines intuitionistic type theory with homotopy theory, which explores higher-order paths through topological spaces. Here my ignorance is vast, but it seems tantalizingly close to a grand unification of constructive principles with Cantor’s infinities of infinities. My interest is especially in what it says about the notion of identity, basically vindicating Leibniz’ thesis that what is identical is equivalent to what is practically indistinguishable. This is reflected in mathematician Vladimir Voevodsky’s emblematic axiom of univalence, “equivalence is equivalent to equality”, which legitimizes much actual mathematical practice.
So anyway, Robert Harper is working on a variant of this that actually works computationally, and uses some kind of more specific mapping through n-dimensional cubes to make univalence into a provable theorem. At the cost of some mathematical elegance, this avoids the need for the univalence axiom, preserving Martin-Löf’s goal of avoiding any axioms. But again — finally getting to the point of this post — in a 2018 lecture, Harper says his current interest is in a type theory that is in the first instance computational rather than formal, and semantic rather than syntactic. Most people treat intuitionistic type theory as a theory that is both formal and syntactic. Harper recommends that we avoid strictly equating constructible types with formal propositions, arguing that types are more primitive than propositions, and semantics is more primitive than syntax.
Harper disavows any deep philosophy, but I find this idea of starting from a type theory and then treating it as first of all informal and semantic rather than formal and syntactic to be highly provocative. In real life, we experience types as accessibly evidenced semantic distinctions before they become posited syntactic ones. Types are first of all implicit specifications of real behavior, in terms of distinctions and entailments between things that are more primitive than identities of things.
So, I want to say that distinction is something good, not a defect we ought to remedy. It is a fundamental symptom of life. Stoics, Buddhists and others remind us that it is best not to be too attached to particular forms. This is a wise counsel, but not the whole truth. I am tempted to say there is no compassion without some passion. Caring about anything inevitably involves distinction. It is better to care than not to care.
Everything flows, Heraclitus said. But in order to make distinctions, it has to be possible to compare things. Things must have a character, even if they do not quite ever stay still within their frames. Having a character is being this way and not that. Real being is always being some way or other. Its diversity is something to celebrate.
It is not immoral to prefer one thing to another. We can’t be who we are without definite commitments. Perfect apathy would lead to many sins of omission. It is better to have lived fully. We are not apart from the world, but inhabit the oceans of difference, and sometimes must take a side.
As far as I know, the explicit term “nondualism” was first used in certain strands of Mahayana Buddhism. I believe it later was adopted by the Vedanta school of Hindu scholastic philosophy. I was fascinated with these as a young man, and was for a time much absorbed in developing a sort of Alan Watts style interpretation of Plotinus’ emphasis on the One as a similar kind of radical nondualism.
Radical nondualism goes beyond the rejection of sharply dualist views like those of Descartes on mind and world, and the different religious dualisms like those of Augustine, the Zoroastrians, the Gnostics, the Manichaeans, or the Samkhya school of Hinduism. Each of these latter has important differences from the others, but what unites them is the strong assertion of some fundamental duality at the heart of things. Radical nondualism aims to consistently reject not only these but any vestige of duality in the basic account of things.
The point of view I would take now is that many useful or arguably necessary distinctions are often formulated in naive, overly blunt ways. We should strive to overcome our naivete and our excessive bluntness, but that does not in any way mean we should try to overcome distinction per se. There can be no meaning — even of the most spiritual sort — without some sort of distinction between things. “All is One” is at best only a half-truth, even if it is a profoundly spiritual one.
The French philosopher Gilles Deleuze (1925-1995) in Difference and Repetition and other works argued that a pure notion of difference is by itself sufficient for a general account of things. In information theory, information is explained as expressing difference. In Saussurean structural linguistics, we are said to recognize spoken words by recognizing elementary differences between sounds. In both cases, the idea is that we get to meaning by distinguishing and relating.
Deleuze initially cites both of these notions of difference, but goes on to develop arguments grounded largely in Nietzsche and Kierkegaard, whom he uses to argue against Plato and Hegel. His very interesting early work Nietzsche and Philosophy was marred by a rather extreme polemic against Hegel, and in Difference and Repetition he announces a program of “anti-Platonism” that reproduces Nietzsche’s intemperate hostility to Plato. Nietzsche blamed Plato for what I regard as later developments. Neither Plato nor Aristotle made the kind of overly strong assertions about identity that became common later on.
In The Sophist and elsewhere, Plato had his characters speak of Same, Other, and the mixing of the two as equally primordial. Hegel took great pains to elaborate the notion of a “difference that makes a difference”. But Deleuze wants to argue that Plato and Hegel both illegitimately subordinate difference to identity. His alternative is to argue that what is truly fundamental is a primitive notion of difference that does not necessarily “make a difference”, and that comes before any “making a difference”. (I prefer the thesis of Leibniz that indiscernibility of any difference is just what identity consists in.)
This is related to Deleuze’s very questionable use of Duns Scotus’ notion of the univocity of being, both in general and more particularly in his interpretation of Spinoza. For Deleuze, pure difference interprets Scotist univocal being.
I frankly have no idea what led to Deleuze’s valorization of Scotus. Deleuze is quite extreme in his opposition to any kind of representationalism, while Scotus made representability the defining criterion of his newly invented univocal being. It is hard to imagine views that are further apart. I can only speculate that Deleuze too hastily picked out Scotus because he wanted to implicitly oppose Thomist orthodoxy, and Scotus is a leading medieval figure outside the Thomist tradition.
For Deleuze, univocal being is pure difference without any identity. Difference that doesn’t make a difference seems to take over the functional role that identity has in theories that treat it as something underlying that exceeds any discernibility based on criteria. I don’t see why we need either of these.
I think Deleuze’s bête noire Hegel actually did a better job of articulating the priority of difference over identity. Hegel did this not by appealing to a putative monism of difference and nothing else, but by developing correlative notions of “difference that makes a difference”, and a kind of logical consequence or entailment that we attribute to real things as we interpret them, independent of and prior to any elaboration of logic in a formal sense.
In Hegel’s analysis as explicated by Brandom, any difference that makes a difference expresses a kind of “material” incompatibility of meaning that rules out some possible assertions. This is just what “making a difference” means. Meanwhile, all positive assertions can be more specifically analyzed as assertions of some consequence or entailment or other at the level of meaning (see Material Consequence). Every predication is analyzable as an assertion of consequence or entailment between subject and predicate, as Leibniz might remind us. It is always valid to interpret, e.g., “a cat is a mammal” as an inference rule for generating conclusions like if Garfield is a cat, then Garfield is a mammal.
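The Garfield example can be given a small mechanical sketch (in Python, with invented names; nothing here is specific to Brandom’s own formalism): each predication is treated as an inference rule, and conclusions are generated by simple forward chaining.

```python
# Predications as inference rules: a rule ("cat", "mammal") reads
# "a cat is a mammal", and licenses the step from ("Garfield", "cat")
# to ("Garfield", "mammal"). We apply rules until nothing new follows.

def forward_chain(facts, rules):
    """Close a set of (subject, kind) facts under (premise, conclusion) rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, kind in list(facts):
                if kind == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))
                    changed = True
    return facts

rules = [("cat", "mammal"), ("mammal", "animal")]  # "a cat is a mammal", etc.
derived = forward_chain({("Garfield", "cat")}, rules)
print(sorted(derived))
```

Here “a cat is a mammal” is not a description of a relation between two objects, but a license for drawing conclusions, which is the inferentialist point.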
What is missing from Deleuze’s account is anything like entailment, the idea of something following from something else. This notion of “following”, I am convinced, is prior to any notion of identity applicable to real things. Without presupposing any pre-existing identities of things, we can build up an account of the world based on the combination of differences that make a difference, on the one hand, and real-world entailments, on the other. Identity is then a result rather than an assumption. Meanings (and anything like identity) emerge from the interplay of practical real-world entailments and distinctions. It is their interplay that gives them definition in terms of one another.
Deleuze was a sort of ontological anarchist, who wanted being to be free of any pre-existing principles. While I agree that we can’t legitimately just assume such principles, I think this is very far from meaning that principles are irrelevant, or actually harmful. On the contrary, as Kant might remind us, principles are all-important. They aren’t just “given”. We have to do actual work to develop them. But if we have no principles — if nothing truly follows from anything else, or is ruled out by anything else — then we cannot meaningfully say anything at all.
I tremendously admire Leibniz, but have always been very puzzled by his notion of “substance”. Clearly it is different from that of Aristotle, which I still ought to develop more carefully, based on the hints in my various comments on Aristotle’s very distinctive approaches to “dialectic” and “being”. (See also Form, Substance.)
Leibniz compounds a criterion of simplicity — much emphasized in the neoplatonic and scholastic traditions — with his own very original notion of the complete concept of a thing, which is supposed to notionally encompass every possible detail of its description. He also emphasizes that every substance is “active”. Leibniz’ famous monads are identified by him with substances.
A substance is supposed to be simple. He explicitly says this means it has no parts. In part, he seems to have posited substances as spiritual atoms of a sort, with the idea that it is these that fundamentally make up the universe. The true atoms, Leibniz says, are fundamentally spiritual rather than material, though he also had great interest in science, and wanted to vindicate both mathematical and Aristotelian physics. Leibniz’ notion of spiritual atoms seems to combine traditional attributes of the scholastic “intellectual soul” (which, unlike anything in Aristotle, was explicitly said by its advocates to be a simple substance) with something like Berkeley’s thesis that what can truly be said to exist are just minds.
On the other hand, a substance is supposed to be the real correlate of a “complete” concept. The complete concept of a thing for Leibniz comprises absolutely everything that is, was, or will be true of the thing. This is related to his idea that predicates truly asserted of a grammatical subject must be somehow “contained” within the subject. Leibniz also famously claimed that all apparent interaction between substances is only an appearance. The details of apparent interaction are to be explained by the details contained within the complete concept of each thing. This is also related to his notions of pre-established harmony and possible worlds, according to which God implicitly coordinates all the details of all the complete concepts of things in a world, and makes judgments of what is good at the level of the infinite detail of entire worlds. One of Kant’s early writings was a defense of real interaction against Leibniz.
Finally, every monad is said by Leibniz to contain both a complete microcosm of the world as expressed from its distinctive point of view, and an infinite series of monads-within-monads within it. Every monad has or is a different point of view from every other, but they all reflect each other.
At least in most of his writings, Leibniz accordingly wanted to reduce all notions of relation to explanations in terms of substances. In late correspondence with the Jesuit theologian Bartholomew Des Bosses, he sketched an alternate view that accepted the reality of relations. But generally, Leibniz made the logically valid argument that it is far simpler to explain the universe in terms of each substance’s unique relation to God, rather than in terms of infinities of infinities of relations between relations. For Leibniz all those infinities of infinities are still present, but only in the mind of God, and in reflection in the interior of each monad.
Leibniz’ logically simpler account of relations seems like an extravagant theological fancy, but however we may regard that, and however much we may ultimately sympathize with Kant over Leibniz on the reality of interaction and relations, Leibniz had very advanced intuitions of logical-mathematical structure, and he is fundamentally right that from a formal point of view, extensional properties of things can all be interpreted in an “intensional” way. Intension in logic refers to internal content of a concept, and to necessary and sufficient conditions that constitute its formal definition. This is independent of whatever views we may have about minds. (See also Form as a Unique Thing.)
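The formal point about extension and intension can be illustrated very simply (in Python, with invented names): a property can be given extensionally, as a collection of instances, or intensionally, as a defining condition, and the condition recovers the collection over any chosen domain.

```python
# Extension vs. intension: the same property, "even number below 10",
# given once as a listed collection and once as a defining condition.

extensional_evens = {0, 2, 4, 6, 8}          # extension: the instances
def intensional_even(n):                      # intension: the condition
    return n % 2 == 0

# The intension determines the extension over the chosen domain.
recovered = {n for n in range(10) if intensional_even(n)}
print(recovered == extensional_evens)  # True
```

Nothing in this translation from instances to conditions depends on any view about minds, which is the sense in which the intensional reading is formally available.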
So, there is much of interest here, but I don’t see how these ultra-rich notional descriptions can be true of what are also supposed to be logical atoms with no parts. In general, I don’t see how having a rich description could be compatible with being logically atomic. I think the notion of logical atomicity is only arrived at through abstraction, and doesn’t apply to real things.
I think any finite activity requires some sort of embodiment, and consequently that anything like the practically engaged spirits Berkeley talks about must also have some embodiment. On the other hand, the various strands of activity from which our eventual essence is precipitated over time — commitments, thoughts, feelings — are not strictly tied to single individuals, but are capable of being shared or spread between individuals.
Most notably, this often happens with parents and their children, but it also applies whenever someone significantly influences the commitments, thoughts, and feelings of someone else. I feel very strongly that I partially embody the essence and characters of both my late parents — who they were as human beings — and I see the same in my two sisters. Aristotle suggests that this concrete transference of embodied essence from parents to children is a kind of immortality that goes beyond the eternal virtual persistence of our essence itself.
Our commitments, thoughts, and feelings are not mere accidents, but rather comprise the activity that constitutes our essence. I put commitments first, because they are the least ephemeral. In mentioning commitments I mean above all the real, effective, enduring commitments embodied in what we do and how we act.