Formal and Material Interpretation

Human reasoning has two sides, which could be called formal and material. Any reasoning applicable to the real world necessarily involves the “material” side, which is concerned with actual meaning or “content”. It may also involve the “formal” side, which aims to express reasoning in terms of mechanically repeatable operations that are completely agnostic to the actual meanings they operate on. Reasoning in some abstract contexts may rely entirely on the formal side.

Aristotle is usually credited with inventing formal logic, but he paid a lot of attention to the material side as well. In the Latin Middle Ages, both sides were recognized, but the formal side was generally emphasized.

Formal mathematical logic underwent an immense development in the 20th century, somewhat like the earlier success story of mathematical physics. The syntactic devices of mathematical logic seemed so powerful that its rise led to a great neglect of the material, interpretive side of logic. Husserl was one of the few 20th century authors who questioned this from the start. More recently, Brandom has argued that Kant and Hegel were both fundamentally concerned with the material, interpretive side of logic, and that this is what Kant meant by “transcendental” logic (and what Hegel meant by “dialectic”).

Generally when I mention interpretation here, I have the material side in mind, but there is also such a thing as formal interpretation. Formal interpretation or “evaluation” of expressions in terms of other expressions is the most fundamental thing that interpreters and compilers for programming languages do. As with material interpretation, formal interpretation makes meanings explicit by expressing them in terms of more elementary distinctions and entailments, but it uses purely syntactic substitution and rewriting to do so.
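As a small illustrative sketch (the representation and names here are my own, not from any particular language implementation), here is what purely formal interpretation looks like in miniature: an evaluator that reduces expressions by syntactic rewriting alone, with no appeal to what the expressions are “about”:

```python
# A minimal sketch of formal interpretation: nested tuples standing for
# syntax are rewritten step by step into simpler expressions. The
# evaluator knows only the rewrite rules, not any "meaning".

def evaluate(expr):
    """Recursively rewrite ("add"/"mul", left, right) tuples to numbers."""
    if isinstance(expr, (int, float)):
        return expr  # already fully reduced
    op, left, right = expr
    l, r = evaluate(left), evaluate(right)  # reduce subexpressions first
    if op == "add":
        return l + r
    if op == "mul":
        return l * r
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4, encoded as nested syntax
tree = ("mul", ("add", 2, 3), 4)
print(evaluate(tree))  # 20
```

Note that the process is guaranteed to terminate: each rewrite strictly shrinks the expression, which is exactly the engineered decidability discussed below.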

Material interpretation can always potentially go on indefinitely, explaining real-world meanings by relating them to other meanings, and those in terms of others, and so on. In practice, we always cut it short at some point, once we achieve a relatively stable network of dependencies.

Formal interpretation on the other hand is usually engineered to be decidable, so that it actually does reach an end at some point. The fact that it reaches an end is closely related to the fact that precise formal models are always in some sense only approximations of a determination of reality that is actually open-ended. Formal models are a sort of syntactic reification of open-ended material interpretation. We may think we have taken them as far as they can go, but in real life it is always possible that some new case will come up that requires new detail in the model.

We also use a kind of formal interpretation alongside material interpretation in our spontaneous understanding of natural language. Natural language syntax helps us understand natural language meaning. It provides cues for how different clauses are intended to relate to one another. Is what is meant in this clause an exception? A consequence? A presupposition? A fact? A recommendation? Something being criticized? (See also Formal and Informal Language.)

Logic for People

Leading programming language theorist Robert Harper refers to so-called constructive or intuitionistic logic as “logic as if people mattered”. There is a fascinating convergence of ideas here. In the early 20th century, Dutch mathematician L. E. J. Brouwer developed a philosophy of mathematics called intuitionism. He emphasized that mathematics is a human activity, and held that every proof step should involve actual evidence discernible to a human. By contrast, mathematical Platonists hold that mathematical objects exist independent of any thought; formalists hold that mathematics is a meaningless game based on following rules; and logicists argue that mathematics is reducible to formal logic.

For Brouwer, a mathematical theorem is true if and only if we have a proof of it that we can exhibit, and each step of that proof can also be exhibited. In the later 19th century, many new results about infinity — and infinities of infinities — had been proved by what came to be called “classical” means, using proof by contradiction and the law of excluded middle. But from the time of Euclid, mathematicians have always regarded reproducible constructions as a better kind of proof. The law of excluded middle is a provable theorem in any finite context. When the law of excluded middle applies, you can conclude that if something is not false it must be true, and vice versa. But it is not possible to construct any infinite object.

The only infinity we actually experience is what Aristotle called “potential” infinity. We can, say, count a star and another and another, and continue as long as we like, but no actually infinite number or magnitude or thing is ever available for inspection. Aristotle famously defended the law of excluded middle, but in practice only applied it to finite cases.

In mathematics there are conjectures that are not known to be true or false. Brouwer would say they are neither true nor false until they are proved or disproved in a humanly verifiable way.

The fascinating convergence is that Brouwer’s humanly verifiable proofs turn out also to exactly characterize the part of mathematics that is computable, in the sense in which computer scientists use that term. Notwithstanding lingering 20th century prejudices, intuitionistic math actually turns out to be a perfect fit for computer science. I use this in my day job.
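This convergence is usually called the Curry–Howard correspondence: a constructive proof just is a program, and the proposition it proves just is the program’s type. A minimal sketch in Lean notation:

```lean
-- Under the Curry–Howard correspondence, this single term is both a
-- proof of the proposition A → (B → A) and a program: the function
-- that takes a piece of evidence for A and discards its second input.
theorem k (A B : Prop) : A → (B → A) :=
  fun a _ => a
```

A classical proof using excluded middle over an infinite domain has no such reading as a program, which is why the constructive fragment is exactly the computable one.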

I am especially intrigued by what is called intuitionistic type theory, developed by Swedish mathematician-philosopher Per Martin-Löf. This is offered simultaneously as a foundation for mathematics, a higher-order intuitionistic logic, and a programming language. One might say it is concerned with explaining ultimate bases for abstraction and generalization, without any presuppositions. One of its distinctive features is that it uses no axioms, only inference rules. Truth is something emergent, rather than something presupposed. Type theory has deep connections with category theory, another truly marvelous area of abstract mathematics, concerned with how different kinds of things map to one another.
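The “no axioms, only inference rules” point can be seen concretely. In Martin-Löf-style type theories (sketched here in Lean notation), even the natural numbers are introduced by formation and introduction rules, and functions on them are defined by the corresponding elimination rule, with nothing postulated as a brute axiom:

```lean
-- A Martin-Löf-style definition: N is given by its introduction rules
-- (zero and successor), and addition is defined by the elimination
-- rule, i.e. structural recursion. Nothing is assumed as an axiom.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | n, .zero   => n
  | n, .succ m => .succ (add n m)
```

Truth then emerges from what can be constructed by such rules, rather than being stipulated up front.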

What especially fascinates me about this work are its implications for what logic actually is. On the one hand, it puts math before mathematical logic, rather than after it as in the classic early 20th century program of Russell and Whitehead; on the other, it provides opportunities to reconnect with logic in the different, broader, less formal senses of Aristotle and Kant, as still having something to say to us today.

Homotopy type theory (HoTT) is a leading-edge development that combines intuitionistic type theory with homotopy theory, which explores higher-order paths through topological spaces. Here my ignorance is vast, but it seems tantalizingly close to a grand unification of constructive principles with Cantor’s infinities of infinities. My interest is especially in what it says about the notion of identity, basically vindicating Leibniz’ thesis that what is identical is equivalent to what is practically indistinguishable. This is reflected in mathematician Vladimir Voevodsky’s emblematic axiom of univalence, “equivalence is equivalent to equality”, which legitimizes much actual mathematical practice.

So anyway, Robert Harper is working on a variant of this that actually works computationally, and uses more specific mappings through n-dimensional cubes to make univalence into a provable theorem. At the cost of some mathematical elegance, this avoids the need for the univalence axiom, preserving Martin-Löf’s goal of not depending on any axioms. But again — finally getting to the point of this post — in a 2018 lecture, Harper says his current interest is in a type theory that is in the first instance computational rather than formal, and semantic rather than syntactic. Most people treat intuitionistic type theory as a theory that is both formal and syntactic. Harper recommends that we avoid strictly equating constructible types with formal propositions, arguing that types are more primitive than propositions, and semantics more primitive than syntax.

Harper disavows any deep philosophy, but I find this idea of starting from a type theory and then treating it as first of all informal and semantic rather than formal and syntactic to be highly provocative. In real life, we experience types as accessibly evidenced semantic distinctions before they become posited syntactic ones. Types are first of all implicit specifications of real behavior, in terms of distinctions and entailments between things that are more primitive than identities of things.

Syntax, Semantics, Ethics

How we understand one another in our social interactions is of paramount importance for ethics. Pointing, gestures, and similar cues can get us started, but we cannot get very far without considering linguistic meaning, and we cannot get very far with natural language meaning without considering implicit and explicit inferences.

In concrete utterances, syntax still plays a large role in specifying the overall shape of the inferences a speaker is implicitly asking us to endorse with respect to some content in question. Here we are concerned not with formal definition of syntactic features, but with specific, concrete usage that we implicitly, defeasibly take as specifying definite higher-level inferential connections by virtue of the grammar employed.

By understanding the structure of a speaker’s overall argument from syntactic as well as semantic cues, we get all sorts of nuances like intended qualifications and specifications of scope that can be all-important in assessing the reasonableness of what is being said. How well we do this depends at least partly on us as well as on what was said, and also on our familiarity with the speaker’s particular speech patterns, which may vary from what is common or usual. We can also silently compare the speaker’s speech patterns to common usage, and wonder whether what they seemed to say was what they actually meant; or kindly point out to them that what they actually said could be misunderstood. (See also Inferential Semantics; Honesty, Kindness.)