Logic for People

Leading programming language theorist Robert Harper refers to so-called constructive or intuitionistic logic as “logic as if people mattered”. There is a fascinating convergence of ideas here. In the early 20th century, Dutch mathematician L. E. J. Brouwer developed a philosophy of mathematics called intuitionism. He emphasized that mathematics is a human activity, and held that every proof step should involve actual evidence discernible to a human. By contrast, mathematical Platonists hold that mathematical objects exist independently of any thought; formalists hold that mathematics is a meaningless game based on following rules; and logicists argue that mathematics is reducible to formal logic.

For Brouwer, a mathematical theorem is true if and only if we have a proof of it that we can exhibit, and each step of that proof can also be exhibited. In the later 19th century, many new results about infinity — and infinities of infinities — had been proved by what came to be called “classical” means, using proof by contradiction and the law of excluded middle. But from the time of Euclid, mathematicians have always regarded reproducible constructions as a better kind of proof. The law of excluded middle is a provable theorem in any finite context. When the law of excluded middle applies, you can conclude that if something is not false it must be true, and vice versa. But it is not possible to construct any infinite object.
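The contrast can be made concrete in Lean, a proof assistant based on constructive type theory. This is a minimal sketch with names of my own choosing: double-negation introduction is constructively provable, and excluded middle is provable whenever a proposition is decidable (as every finitely checkable claim is), but the general law is not derivable without extra assumptions.

```lean
-- Constructively, we can always pass from P to ¬¬P...
theorem doubleNegIntro (P : Prop) : P → ¬¬P :=
  fun p np => np p

-- ...but not, in general, back again: ¬¬P → P and P ∨ ¬P
-- are not derivable without extra assumptions.

-- When P is decidable, excluded middle becomes a provable
-- theorem rather than an axiom:
theorem emOfDecidable (P : Prop) [d : Decidable P] : P ∨ ¬P :=
  match d with
  | isTrue h  => Or.inl h
  | isFalse h => Or.inr h
```

The second theorem makes precise the claim above that excluded middle is provable "in any finite context": finiteness is what guarantees decidability.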

The only infinity we actually experience is what Aristotle called “potential” infinity. We can, say, count a star and another and another, and continue as long as we like, but no actually infinite number or magnitude or thing is ever available for inspection. Aristotle famously defended the law of excluded middle, but in practice only applied it to finite cases.

In mathematics there are conjectures that are not known to be true or false. Brouwer would say that they are neither true nor false until they are proved or disproved in a humanly verifiable way.

The fascinating convergence is that Brouwer’s humanly verifiable proofs turn out also to exactly characterize the part of mathematics that is computable, in the sense in which computer scientists use that term. Notwithstanding lingering 20th century prejudices, intuitionistic math actually turns out to be a perfect fit for computer science. I use this in my day job.
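This convergence is known as the Curry–Howard correspondence: a constructive proof literally is a program, and the proposition it proves is that program's type. A hedged sketch in Lean, with names my own:

```lean
-- Under Curry–Howard, "A and B implies A" is the type of the
-- function that projects the first component of a pair:
theorem andLeft {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- A constructive existence proof is a pair of a witness and
-- evidence; the witness can always be exhibited:
example : ∃ n : Nat, n > 2 :=
  ⟨3, by decide⟩
```

A classical existence proof by contradiction, by contrast, need not contain any witness at all, which is exactly why it fails to be a program.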

I am especially intrigued by what is called intuitionistic type theory, developed by Swedish mathematician-philosopher Per Martin-Löf. This is offered simultaneously as a foundation for mathematics, a higher-order intuitionistic logic, and a programming language. One might say it is concerned with explaining ultimate bases for abstraction and generalization, without any presuppositions. One of its distinctive features is that it uses no axioms, only inference rules. Truth is something emergent, rather than something presupposed. Type theory has deep connections with category theory, another truly marvelous area of abstract mathematics, concerned with how different kinds of things map to one another.
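The "no axioms, only inference rules" idea can be glimpsed in how a type is defined in Lean. A small sketch of my own: the natural numbers are given entirely by their introduction rules, functions are given by rules matching those constructors, and truths about them then emerge by computation rather than being postulated.

```lean
-- A type is given by its introduction rules, not by axioms
-- about some pre-existing object:
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Functions are given by rules too, one per constructor:
def add : MyNat → MyNat → MyNat
  | m, .zero   => m
  | m, .succ n => .succ (add m n)

-- "Truth" then emerges: this equation holds by computation alone.
example : add .zero (.succ .zero) = .succ .zero := rfl
```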

What especially fascinates me about this work are its implications for what logic actually is. On the one hand, it puts math before mathematical logic — rather than after it, as in the classic early 20th century program of Russell and Whitehead — and on the other, it provides opportunities to reconnect with logic in the different and broader, less formal senses of Aristotle and Kant, as still having something to say to us today.

Homotopy type theory (HoTT) is a leading-edge development that combines intuitionistic type theory with homotopy theory, which explores higher-order paths through topological spaces. Here my ignorance is vast, but it seems tantalizingly close to a grand unification of constructive principles with Cantor’s infinities of infinities. My interest is especially in what it says about the notion of identity, basically vindicating Leibniz’ thesis that what is identical is equivalent to what is practically indistinguishable. This is reflected in mathematician Vladimir Voevodsky’s emblematic axiom of univalence, “equivalence is equivalent to equality”, which legitimizes much actual mathematical practice.
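The notion of identity at issue here is the identity type of type theory. In Lean, equality is itself an inductive type whose only introduction rule is reflexivity, and Leibniz's principle that equals are indiscernible then follows as a theorem rather than an axiom. A hedged sketch, names my own:

```lean
-- Leibniz indiscernibility: if a = b, then anything true of a
-- is true of b. Proved by transporting evidence along the
-- equality (the ▸ operator).
theorem indiscernible {α : Type} {a b : α} (P : α → Prop)
    (h : a = b) : P a → P b :=
  fun pa => h ▸ pa
```

Univalence, roughly, runs this in the other direction for types themselves: equivalent types may be treated as equal.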

So anyway, Robert Harper is working on a variant of this that actually works computationally, using a more concrete mapping through n-dimensional cubes to make univalence a provable theorem. At the cost of some mathematical elegance, this avoids the need for the univalence axiom, preserving Martin-Löf’s goal of avoiding dependence on any axioms. But again — finally getting to the point of this post — in a 2018 lecture, Harper says his current interest is in a type theory that is in the first instance computational rather than formal, and semantic rather than syntactic. Most people treat intuitionistic type theory as a theory that is both formal and syntactic. Harper recommends that we avoid strictly equating constructible types with formal propositions, arguing that types are more primitive than propositions, and semantics more primitive than syntax.

Harper disavows any deep philosophy, but I find this idea of starting from a type theory and then treating it as first of all informal and semantic rather than formal and syntactic to be highly provocative. In real life, we experience types as accessibly evidenced semantic distinctions before they become posited syntactic ones. Types are first of all implicit specifications of real behavior, in terms of distinctions and entailments between things that are more primitive than identities of things.


So, I want to say that distinction is something good, not a defect we ought to remedy. It is a fundamental symptom of life. Stoics, Buddhists and others remind us that it is best not to be too attached to particular forms. This is wise counsel, but not the whole truth. I am tempted to say there is no compassion without some passion. Caring about anything inevitably involves distinction. It is better to care than not to care.

Everything flows, Heraclitus said. But in order to make distinctions, it has to be possible to compare things. Things must have a character, even if they do not quite ever stay still within their frames. Having a character is being this way and not that. Real being is always being some way or other. Its diversity is something to celebrate.

It is not immoral to prefer one thing to another. We can’t be who we are without definite commitments. Perfect apathy would lead to many sins of omission. It is better to have lived fully. We are not apart from the world, but inhabit the oceans of difference, and sometimes must take a side.


As far as I know, the explicit term “nondualism” was first used in certain strands of Mahayana Buddhism. I believe it later was adopted by the Vedanta school of Hindu scholastic philosophy. I was fascinated with these as a young man, and was for a time much absorbed in developing a sort of Alan Watts style interpretation of Plotinus’ emphasis on the One as a similar kind of radical nondualism.

Radical nondualism goes beyond the rejection of sharply dualist views like those of Descartes on mind and world, and the different religious dualisms like those of Augustine, the Zoroastrians, the Gnostics, the Manichaeans, or the Samkhya school of Hinduism. Each of these latter has important differences from the others, but what unites them is the strong assertion of some fundamental duality at the heart of things. Radical nondualism aims to consistently reject not only these but any vestige of duality in the basic account of things.

The point of view I would take now is that many useful or arguably necessary distinctions are often formulated in naive, overly blunt ways. We should strive to overcome our naivete and our excessive bluntness, but that does not in any way mean we should try to overcome distinction per se. There can be no meaning — even of the most spiritual sort — without some sort of distinction between things. “All is One” is at best only a half-truth, even if it is a profoundly spiritual one.

Pure Difference?

A common theme here is the conceptual priority of difference over identity. I think that identity is a derived concept, and not a primitive one (see also Aristotelian Identity).

The French philosopher Gilles Deleuze (1925-1995) in Difference and Repetition and other works argued that a pure notion of difference is by itself sufficient for a general account of things. In information theory, information is explained as expressing difference. In Saussurean structural linguistics, we are said to recognize spoken words by recognizing elementary differences between sounds. In both cases, the idea is that we get to meaning by distinguishing and relating.

Deleuze initially cites both of these notions of difference, but goes on to develop arguments grounded largely in Nietzsche and Kierkegaard, whom he uses to argue against Plato and Hegel. His very interesting early work Nietzsche and Philosophy was marred by a rather extreme polemic against Hegel, and in Difference and Repetition he announces a program of “anti-Platonism” that reproduces Nietzsche’s intemperate hostility to Plato. Nietzsche blamed Plato for what I regard as later developments. Neither Plato nor Aristotle made the kind of overly strong assertions about identity that became common later on.

In The Sophist and elsewhere, Plato had his characters speak of Same, Other, and the mixing of the two as equally primordial. Hegel took great pains to elaborate the notion of a “difference that makes a difference”. But Deleuze wants to argue that Plato and Hegel both illegitimately subordinate difference to identity. His alternative is to argue that what is truly fundamental is a primitive notion of difference that does not necessarily “make a difference”, and that comes before any “making a difference”. (I prefer the thesis of Leibniz that indiscernibility of any difference is just what identity consists in.)

This is related to Deleuze’s very questionable use of Duns Scotus’ notion of the univocity of being, both in general and more particularly in his interpretation of Spinoza. For Deleuze, pure difference interprets Scotist univocal being.

I frankly have no idea what led to Deleuze’s valorization of Scotus. Deleuze is quite extreme in his opposition to any kind of representationalism, while Scotus made representability the defining criterion of his newly invented univocal being. It is hard to imagine views that are further apart. I can only speculate that Deleuze too hastily picked out Scotus because he wanted to implicitly oppose Thomist orthodoxy, and Scotus is a leading medieval figure outside the Thomist tradition.

For Deleuze, univocal being is pure difference without any identity. Difference that doesn’t make a difference seems to take over the functional role that identity has in theories that treat it as something underlying that exceeds any discernibility based on criteria. I don’t see why we need either of these.

I think Deleuze’s bête noire Hegel actually did a better job of articulating the priority of difference over identity. Hegel did this not by appealing to a putative monism of difference and nothing else, but by developing correlative notions of “difference that makes a difference”, and a kind of logical consequence or entailment that we attribute to real things as we interpret them, independent of and prior to any elaboration of logic in a formal sense.

In Hegel’s analysis as explicated by Brandom, any difference that makes a difference expresses a kind of “material” incompatibility of meaning that rules out some possible assertions. This is just what “making a difference” means. Meanwhile, all positive assertions can be more specifically analyzed as assertions of some consequence or entailment or other at the level of meaning (see Material Consequence). Every predication is analyzable as an assertion of consequence or entailment between subject and predicate, as Leibniz might remind us. It is always valid to interpret, e.g., “a cat is a mammal” as an inference rule for generating conclusions like if Garfield is a cat, then Garfield is a mammal.
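The cat-and-mammal reading of predication can be rendered directly in Lean, tying this back to the type theory above. In this sketch (all names are my own illustrations), the predication becomes an inference rule, and the conclusion about Garfield is obtained simply by applying it:

```lean
section
variable (Animal : Type) (garfield : Animal)
variable (Cat Mammal : Animal → Prop)

-- "A cat is a mammal", read as an inference rule:
example (rule : ∀ x, Cat x → Mammal x)
    (h : Cat garfield) : Mammal garfield :=
  rule garfield h
end
```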

What is missing from Deleuze’s account is anything like entailment, the idea of something following from something else. This notion of “following”, I am convinced, is prior to any notion of identity applicable to real things. Without presupposing any pre-existing identities of things, we can build up an account of the world based on the combination of differences that make a difference, on the one hand, and real-world entailments, on the other. Identity is then a result rather than an assumption. Meanings (and anything like identity) emerge from the interplay of practical real-world entailments and distinctions. It is their interplay that gives them definition in terms of one another.

Deleuze was a sort of ontological anarchist, who wanted being to be free of any pre-existing principles. While I agree that we can’t legitimately just assume such principles, I think this is very far from meaning that principles are irrelevant, or actually harmful. On the contrary, as Kant might remind us, principles are all-important. They aren’t just “given”. We have to do actual work to develop them. But if we have no principles — if nothing truly follows from anything else, or is ruled out by anything else — then we cannot meaningfully say anything at all.