Context

The better we can interpret a context, the better we can understand the significance of things within it. In deliberation, the more grasp we have of the relevant context, the more it becomes possible to reach definite determinations.

An Aristotelian sensitivity to the distinctness and complexity of each situation in no way compromises an ethical ideal of universality such as the one Kant promoted — quite the opposite. It is what enables us to apply that ideal well in each case.

In the world, differences in context also sometimes get used as a pretext for false distinctions that negate ethical universality, or are simply self-serving. But if we truly respect ethical universality, this will be of great help in seeing those cases for what they are.

Context provides a kind of anchor for modality, which plays a very great role in the intelligibility of things. Conversely, modality gives context a greater definiteness. Context is also perhaps the most fundamental concept for historiography.

Several Aristotelian concepts are concerned with context. Potentiality captures most of the aspects related to modality, but contingent fact and circumstance as such are associated with Aristotelian “material causes”, and operating means to ends are treated as “efficient causes”. The interpretation of context complements the classic questions of what and why.

From a Brandomian point of view, practical implications of context will be especially important in normative pragmatics, but context also affects determinations of meaning in inferential semantics.

Why Modality?

Why should we care about something as seemingly esoteric as modality? Without modal concepts like necessary, possible, and should, there could be no knowledge beyond mere acquaintance with particulars, and no ethics at all. Nothing we said could have any force. We could not really even form any general concepts. Nothing would really follow from anything else. The fact that necessary or should is sometimes applied in too strong a way — or that possible is sometimes applied in too abstract a way — does not negate their essential role.

I want to say that modal concepts are properly meaningful only when bound to a context. It is only the lack of proper binding to a context that makes their application too strong or too abstract.

This relation actually goes both ways. Modal assertions not limited by context sound like dogmatism or would-be despotism; but equally, an emphasis on context with no consideration of modality at all would lead to bad relativism or particularism. Modality and context have a kind of complementarity, and need to go together. Either one without the other causes trouble, but the two together ground ethics, knowledge, and wisdom. Their combination is another good example of an Aristotelian mean. (See also Modality and Variation.)

Modality and Variation

David Corfield suggests that modality has to do with ranges of variation. This seems extremely helpful. He connects Brandom’s notion of ranges of counterfactual robustness with mathematical analyses of variation. Corfield approvingly cites Brandom’s argument that in order to successfully apply empirical concepts at all, we must already be able to apply modal concepts like possibility and necessity. This always seemed right to me, but the talk about possible worlds made me worry about what sounded like impossibly strong quantification over infinities of infinities. Corfield also points out that Saul Kripke originally cautioned against careless extension of his possible-worlds talk.

It now seems to me that Brandom’s counterfactual robustness and Corfield’s mathematically analyzable variation together can be taken as an explanation for modal notions of necessity that previously seemed to be simply posited, or pulled out of thin air. Modality suddenly looks like a direct consequence of the structure of ranges of variation. Previously, I associated both structure and Brandom’s modally robust counterfactuals with Aristotelian potentiality, so this fits well.

Corfield also relates this to work done by the important 20th century neo-Kantian, Ernst Cassirer, on invariants behind the various systems of Euclidean and non-Euclidean geometry. He points out that Cassirer thought similar concerns of variation and invariance implicitly arise in ordinary visual perception, and connects this with Brandom’s thesis that modality is already there in our everyday application of empirical concepts.

The British Empiricist David Hume famously criticized common-sense assumptions about causality and necessity, preferring to substitute talk about our psychological tendencies to associate things that we have experienced together. Hume pointed out that from particular facts, no knowledge of causality or necessity can ever be derived. This is true; no knowledge of necessity could arise from acquaintance with particular facts. But if necessity and other modalities are structural, as Corfield suggests, they do not need to be inferred from particular facts, or to be arbitrarily posited.

The kind of necessity associated with structural determination is quite different from unconditional predestination. I want to affirm the first, and deny the second. Structural determination only applies within well-defined contexts, so it is bounded. If we step outside of the context where it applies, it no longer has force. (See also New Approaches to Modality; Free Will and Determinism.)

Leibniz is more familiar to me than Kripke, so when I hear “possible worlds”, I have tended to imagine complete alternate universes à la Leibniz. “Worlds”, however, could be read much more modestly as just referring to Corfield’s ranges of variation.

Form as a Unique Thing

Ever since Plato talked about Forms, philosophers have debated the status of so-called abstract entities. To my mind, referring to them as “entities” is already prejudicial. I like to read Plato himself in a way that minimizes existence claims, and instead focuses on what I think of as claims about importance. Importance as a criterion is practical in a Kantian sense — i.e., ultimately concerned with what we should do. As Aristotle might remind us, what really matters is getting the specific content of our abstractions right for each case, not the generic ontological status of those abstractions.

One of Plato’s main messages, still very relevant today, is that what he called Form is important. A big part of what makes Form important is that it is good to think with, and a key aspect of what makes Plato’s version good to think with is what logically follows from its characterization as something unique in a given case. (Aristotle’s version of form has different, more mixed strengths, including both a place for uniqueness and a place for polyvocality or multiple perspectives, making it simultaneously more supple and more difficult to formalize.) In principle, such uniqueness of things that nonetheless also have generality makes it possible to reason to conditionally necessary outcomes in a constructive way, i.e., without extra assumptions, as a geometer might. Necessity here just means that in the context of some given construction, only one result of a given type is possible. (This is actually already stronger than the sense Aristotle gave to “necessity”. Aristotle pragmatically allowed for defeasible empirical judgments that something “necessarily” follows from something else, whenever there is no known counter-example.)

In the early 20th century, Bertrand Russell developed a very influential theory of definite descriptions, which sparked another century-long debate. Among other things (here embracing an old principle of interpretation common in Latin scholastic logic), he analyzed definite descriptions as always implying existence claims.

British philosopher David Corfield argues for a new approach to formalizing definite descriptions that does not require existence claims or other assumptions, but only a kind of logical uniqueness of the types of the identity criteria of things. His book Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, to which I recently devoted a very preliminary article, has significant new things to say about this sort of issue. Corfield argues inter alia that many and perhaps even all perceived limits of formalization are actually due to limits of the particular formalisms of first-order classical logic and set theory, which dominated in the 20th century. He thinks homotopy type theory (HoTT) has much to offer for a more adequate formal analysis of natural language, as well as in many other areas. Corfield also notes that most linguists already use some variant of lambda calculus (closer to HoTT), rather than first-order logic.

Using first-order logic to formalize natural language requires adding many explicit assumptions — including assumptions that various things “exist”. Corfield notes that ordinary language philosophers have questioned whether it is reasonable to suppose that so many extra assumptions are routinely involved in natural language use, and from there reached pessimistic conclusions about formalization. The vastly more expressive HoTT, on the other hand, allows formal representations to be built without additional assumptions in the representation. All context relevant to an inference can be expressed in terms of types. (This does not mean no assumptions are involved in the use of a representation, but rather only that the formal representation does not contain any explicit assumptions, as by contrast it necessarily would with first-order logic.)

A main reason for the major difference between first-order logic and HoTT with respect to assumptions is that first-order logic applies universal quantifications unconditionally (i.e., for all x, where x is untyped and ranges over no delimited domain), and then has to explicitly add assumptions to recover specificity and context. By contrast, type theories like HoTT apply quantifications only to delimited types, and thus build in specificity and context from the ground up. Using HoTT requires closer attention to criteria for identities of things and kinds of things.
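To make the contrast concrete, here is a minimal toy sketch in Lean-style type theory. It is my own illustration rather than Corfield’s formalization, and the names (U, Man, Mortal, and so on) are hypothetical. In the first-order style, quantification ranges over an undifferentiated domain, and specificity has to be recovered by adding a predicate and a conditional; in the type-theoretic style, the quantifier is already delimited by the type itself.

```lean
-- A toy sketch of the contrast (not Corfield's own formalization).
-- First-order style: quantify over an undifferentiated domain U, then
-- recover specificity by adding a predicate and a conditional.
axiom U : Type
axiom Man : U → Prop
axiom Mortal : U → Prop
axiom allMenAreMortalFO : ∀ x : U, Man x → Mortal x

-- Type-theoretic style: Man is itself a type, so the quantification is
-- already delimited and no extra conditioning assumption is needed.
axiom ManT : Type
axiom MortalT : ManT → Prop
axiom allMenAreMortalTT : ∀ m : ManT, MortalT m
```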

Frege already had the idea that logical predicates are a kind of mathematical function. Mathematical functions are distinguished by invariantly returning a unique value for each given input. The truth functions used in classical logic are also a kind of mathematical function, but provide only minimal distinction into “true” and “false”. From a purely truth-functional point of view, all true propositions are equivalent, because we are only concerned with reference, and their only reference (as distinguished from Fregean sense) is to “true” as distinct from “false”. By contrast, contemporary type theories are grounded in inference rules, which are kinds of primitive function-like things that preserve many more distinctions.
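A small sketch of that last contrast, again in Lean-style notation and again only as my own illustration with hypothetical names: read truth-functionally, a predicate collapses everything into one of two values; read proof-relevantly, the predicate yields a type, and establishing it means constructing something (a witness) rather than merely assigning a truth value.

```lean
-- The same predicate read two ways (illustrative names are mine).
-- Truth-functional reading: everything collapses to true or false.
def isEvenBool (n : Nat) : Bool :=
  n % 2 == 0

-- Proof-relevant reading: the proposition is itself a type, and a proof
-- is constructed from a witness (here, a k with n = 2 * k).
def IsEven (n : Nat) : Prop :=
  ∃ k : Nat, n = 2 * k

-- 10 is even, witnessed by k = 5.
example : IsEven 10 := ⟨5, rfl⟩
```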

In one section, Corfield discusses an HoTT-based inference rule for introduction of the definite article “the” in ordinary language, based on a property of many types called “contractibility” in HoTT. A contractible type is one that can be optionally taken as referring to a formally unique object that can be constructed in HoTT, and whose existence therefore does not need to be assumed. This should also apply at least to Platonic Forms, since for Plato one should always try to pick out the Form of something.

In HoTT, every variable has a type, and every type carries with it definite identity criteria, but the identity criteria for a given type may themselves have a type from anywhere in the HoTT hierarchy of type levels. In a given case, the type of the identity criteria for another type may be above the level of truth-functional propositions, like a set, groupoid, or higher groupoid; or below it, i.e., contractible to a unique object. This sort of contractibility into a single object might be taken as a contemporary formal criterion for a specification to behave like a Platonic Form, which seems to be an especially simple, bottom-level case, even simpler than a truth-valued “mere” proposition.
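For readers who like to see such things written down, here is a hand-rolled sketch of contractibility in Lean-style notation. It is my own simplified rendering, using ordinary equality rather than HoTT’s identity types, and is meant only to convey the shape of the idea: a contractible type has a designated center that every element can be identified with, so it behaves like “the” unique object of its kind.

```lean
-- A simplified, hand-rolled sketch of contractibility (the names are mine).
-- A type is contractible when it has a center of contraction that every
-- element can be identified with.
structure IsContractible (A : Type) where
  center      : A
  contraction : ∀ a : A, a = center

-- The one-element type is the paradigm case: there is exactly one way
-- for something to be of this type.
def unitIsContractible : IsContractible Unit where
  center      := ()
  contraction := fun a => by cases a; rfl
```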

The HoTT hierarchy of type levels is synthetic and top-down rather than analytic and bottom-up, so everything that can be expressed on a lower level is also expressible on a higher level, but not necessarily vice versa. The lower levels represent technically “degenerate” — i.e., less general — cases, to which one cannot “compile down” in some instances. This might also be taken to anachronistically explain why Aristotle and others were ultimately not satisfied with Platonic Forms as a general basis for explanation. Importantly, this bottom, “object identity” level does seem to be adequate to account for the identity criteria of mathematical objects as instances of mathematical structures, but not everything is explainable in terms of object identities, which are even less expressive than mere truth values.

Traditionally, mathematicians have used the definite article “the” to refer to things that have multiple characterizations that are invariantly equivalent, such as “the” structure of something, when the structure can be equivalently characterized in different ways. From a first-order point of view, this has been traditionally apologized for as an “abuse of language” that is not formally justified. HoTT provides formal justification for the implicit mathematical intuition underpinning this generally accepted practice, by providing the capability to construct a unique object that is the contractible type of the equivalent characterizations.

With this in hand, it seems we won’t need to make any claims about the existence of structures, because from this point of view — unlike, e.g., that of set theory — mathematical talk is always already about structures.

This has important consequences for talk about structuralism, at least in the mathematical case, and perhaps by analogy beyond that. Corfield argues that anything that has contractible identity criteria (including all mathematical objects) just is some structure. He quotes major HoTT contributor Steve Awodey as concluding “mathematical objects simply are structures. Could there be a stronger formulation of structuralism?”

Thus no ontology or theory of being in the traditional (historically Scotist and Wolffian) sense is required in order to support talk about structures (or, I would argue, Forms in Plato’s sense). (In computer science, “ontology” has been redefined as an articulation of some world or domain into particular kinds, sorts, or types, where what is important is the particular classification scheme practically employed, rather than theoretical claims of real existence that go beyond experience. At least at a very high level, this actually comes closer than traditional “metaphysical” ontology did to Aristotle’s original practice of higher-order interpretation of experience.)

Corfield does not discuss Brandom at length, but his book’s index has more references to Brandom than to any other named individual, including the leaders in the HoTT field. All references in the text are positive. Corfield strongly identifies with the inferentialist aspect of Brandom’s thought. He expresses optimism about HoTT representation of Brandomian material inferences, and about the richness of Brandom’s work for type-theoretic development.

Corfield is manifestly more formally oriented than Brandom, and his work thus takes a different direction that does not include Brandom’s strong emphasis on normativity, or on the fundamental role of what I would call reasonable value judgments within material inference. From what I take to be an Aristotelian point of view, I greatly value both the inferentialist part of Brandom that Corfield wants to build on, and the normative pragmatic part that he passes by. I think Brandom’s idea about the priority of normative pragmatics is extremely important; but with that proviso, I still find Corfield’s work on the formal side very exciting.

In a footnote, Corfield also directs attention to Paul Redding’s recommendation that analytic readers of Hegel take seriously Hegel’s use of Aristotelian “term logic”. This is not incompatible with a Kantian and Brandomian emphasis on the priority of integral judgments. As I have pointed out before, the individual terms combined or separated in canonical Aristotelian propositions are themselves interpretable as judgments.

“What” by Inferential Semantics

Brandom’s inferential semantics can be seen as providing a general framework for answering “what is…” questions. Semantics is about meaning — especially of concrete things said — and inferential semantics is about understanding meaning as a kind of practical doing involved with reasons. Looked at this way, a meaning reflects an inferential role, or role in real-world reasoning. Such roles always have two sides — conditions for appropriate use, and consequences of using this rather than that. Brandom identifies conceptual content with such inferential roles, and focuses on a contrast between these and simple definition, but I want to emphasize instead that all simple definition should be understood as a kind of summary of what implicitly distinguishes a particular inferential role from others.
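As a toy illustration of that two-sidedness (my own sketch, not anything Brandom himself formalizes, and with entirely hypothetical content), an inferential role can be pictured as a record with one side for circumstances of appropriate application and another for consequences of application:

```haskell
-- A toy model of an inferential role (my own sketch, not Brandom's formalism):
-- one side records the circumstances in which applying a concept is
-- appropriate, the other the consequences one is committed to by applying it.
data InferentialRole = InferentialRole
  { circumstancesOfApplication :: [String]
  , consequencesOfApplication  :: [String]
  } deriving Show

-- A hypothetical illustration for the concept "dog".
dogRole :: InferentialRole
dogRole = InferentialRole
  { circumstancesOfApplication = ["looks and behaves like a domesticated canine"]
  , consequencesOfApplication  = ["is a mammal", "is capable of barking"]
  }

main :: IO ()
main = print dogRole
```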

The kind of meaning of interest here is in principle shareable rather than subjective, private, or psychological. Meaning is social and essentially involved with communication, but it is not a matter of empirical fact. Rather than explaining communication in terms of empirical facts, we should ultimately explain what we call empirical facts in terms of well-founded shareable meaning. The more we are able to explicitly spell out conditions of use and consequences of things that are said, the more substantive content we can share with others.

The “what is…” questions classically asked by Plato and Aristotle have an open-ended character because they are concerned with what something means for a reasoning being in general, which is an open-ended context. To have meaning for a reasoning being is to make a difference in the way the being reasons in life. In this way, Plato and Aristotle also were deeply concerned with the inferential roles of things, and practiced a kind of inferential semantics. This is ultimately inseparable from questions of goodness of reasoning. Here, too, inferential semantics depends on normative pragmatics.

“Why” by Normative Pragmatics

Brandom’s normative pragmatics can be seen as providing a general framework for answering “why” questions. Pragmatics is initially about the practical use of language, and normative pragmatics is about good use, which for Brandom especially means good inferential use. Thus, normative pragmatics ends up being broadly concerned with good informal reasoning in life, i.e., with the quality of our ethical and other judgments.

In my view, this concern with the goodness of reasons and judgments also ends up emphasizing the ethical dimensions of judgment in general. There is really no such thing as “value free” judgment. Even what is called mathematical “intuition” is really an acquired practical skill having to do with judgment of what next step is contextually appropriate.

Classically, “why” asks for reasons, or about the goodness of reasons. Taken far enough, this leads to questions about ends.

Aristotle, too, typically framed inquiries in terms of what is well “said of” something. This is a kind of analysis of language use, with a normative or ethical intent, that ends up being inseparable from questions of what is right and what is true. This general approach is actually a form of what Brandom would call normative pragmatics. Brandom would tell us that semantics — or the investigation of meaning — depends on this sort of inquiry. My ascription of a fundamentally semantic orientation to Aristotle carries a similar implication.

What and Why

I want to say that questions of what and why of the sort asked by Plato and Aristotle are of vital importance for all ethically concerned people. These are questions of interpretation, and of what I have been broadly calling meaning. For the moment, I’m leaving aside obvious questions of what to do, in favor of these broader questions that implicitly inform them.

What something is and why it is the way it is — or should be the way it should be — are deeply intertwined. Aristotle provides many good illustrations of this. Also, at any given moment, our thinking about why depends on many assumptions about what we are concerned with that may call for review. Conversely, our thinking about each what implicitly depends on many more detailed judgments of why.

It is not practical to question everything at once, so we do it serially as the need arises, striving to be deeply honest with ourselves in our assessments of the relative levels of such needs. We seek the appropriate best balance of considerations, as well as a good balance between thoroughness of questioning on the one hand, and practical responsiveness or needed decisiveness on the other. (See also Context.)

The question why is quite open-ended. It asks for reasons or causes — and then potentially for more reasons or causes behind those — sincerely seeking to explain or justify, in the spirit of Hegel’s notion of a faith in reasonableness without presupposed truths. It arises in ethical deliberation, in general dialogue, and in many other practical circumstances, as well as in more broadly philosophical considerations. It always involves a dimension of explicit or implicit judgments of value and importance, and often interrelates with questions of fact or interpretation of fact. We should pursue it in a spirit of mutual recognition and expansive agency. Brandom’s normative pragmatics provides a good outer frame for why questions, and valuable technical tools for addressing them. (See also “Why” by Normative Pragmatics.)

The question what honestly faces the provisional character of our implicit and explicit classifications and identifications of things. As Kant might remind us, the what-it-is that we “immediately” apprehend depends upon complex processes of synthesis. Every what encapsulates many judgments and inferences. That does not mean our apprehensions are necessarily wrong — far from it — but it opens another huge space of questions an ethically concerned person should be aware of as possibly relevant, and should monitor for potential warning flags. As with why, questions of what also interrelate with questions of fact or interpretation of fact. Brandom’s inferential semantics provides a good outer frame and technical apparatus for approaching what questions. (See also “What” by Inferential Semantics.)

Abstract and Concrete

In contrast to later traditional “metaphysics”, Aristotle recommended we start with the concrete, but then aim to dialectically rise to higher understanding, which is still of the concrete. In any inquiry, we should begin with the things closer to us, but as Wittgenstein said in a different context, we should ultimately aim to kick away the ladder upon which we climbed.

What Aristotle would have us eventually kick away is by no means the concrete itself, but only our preliminary understanding of it as a subject of immediate, simple reference. Beginnings are tentative, not certain. We reach more solid, richer understanding through development.

Aristotle’s discussion of “primary” substance in Categories has often been turned into a claim that individuals are ontologically more primary than form. This is to misunderstand what Categories is talking about. Aristotle explicitly says Categories will be about “things said without combination” [emphasis added], i.e., about what is expressed by kinds of apparently atomic sayings that are used in larger sayings.

The initial definition of substance in the strict or “primary” sense — which he will eventually kick away in the Metaphysics — is of a thing (said) “which is neither said of something underlying nor in something underlying”. (Aristotle often deliberately leaves it open whether he is talking about a referencing word or a referenced thing — or says one and implies the other — because in both cases, the primary concern is the inferential meaning of the reference.)

This initial definition is a negative one that suffices to distinguish substance from the other categories. By implication, it refers to something that is said simply of something, in the way that a proper name is. As examples, he gives (namings of) an individual human, or an individual horse.

“Socrates” would be said simply of Socrates, and would thus “be” — or refer to — a primary substance in this sense. The naming of Socrates is an apparently simple reference to what we might call an object. As Brandom has noted, this picks out a distinctive semantic and inferential role that applies only to references to singular things.

Aristotle then says that more universal namings or named things like “human” and “horse” are also “substances” — i.e., can also refer to singular objects — in a secondary sense, as in “that horse”. Then substance in general is further distinguished, by saying it is something A such that when something else B is said of it, both the naming and the “what-it-is” of B are said of the primary or secondary substance A. (See also Form; Things in Themselves; Definition.)

If a horse as such “is” a mammal of a certain description, then that horse must be a mammal of that description. If a mammal as such “is” warm-blooded, then that horse “is” warm-blooded.

These are neither factual nor ontological claims, but consequences of a rule of interpretation telling us what it means to say these kinds of things. Whether or not something is a substance in this sense is surely a key distinction, for it determines the validity or invalidity of a large class of inferences.

Based on the classification of A as an object reference and B as something said of A, we can make valid inferences about A from B.

When something else C is said of the non-substance B, by contrast, we still have a “naming” of B, but the “what-it-is” or substantive meaning of C does not apply to B itself, but only modifies it, because B is not an object reference. Applying the substantive meaning of C to B — i.e., making inferences about B from the meaning of C — would be invalid in this case.

Just because, say, warm-blooded as such “is” a quality, there is no valid inference that mammals “are” qualities, or that that horse “is” a quality. The concern here is with validity of a certain kind of inference and interpretation, not ontology (or epistemology, either).

In the Metaphysics, the initial referential notion of substance as something underlying is explicitly superseded through a far more elaborate development of “what it was to have been” a thing that emphasizes form, and ultimately actuality and potentiality. The appearance of what might be mistaken for a sort of referential foundationalism is removed. (See also Aristotelian Dialectic.)

I also think he wanted to suggest that practically, a kind of preliminary grasp of some actuality has to come first in understanding. Actuality is always concrete and particular, and said to be more primary. But potentiality too plays an irreducible role, in underwriting the relative persistence of something as the “same” something through change, which motivated the earlier talk about something underlying. The persistence of relatively stable identities of things depends on their counterfactual potentiality, which can only be apprehended in an inferential way. (See also Aristotelian Demonstration.)

It does make sense to say that things like actuality and substance inhere more in the individual than in the species, but that is due to the meanings of actuality and substance, not to an ontological status.

Syllogism

Aristotle invented logic as a discipline, and in Prior Analytics developed a detailed theory of so-called syllogisms to codify deductive reasoning, which also marks the beginning of formalization in logic. Although there actually were interesting developments in the European middle ages with the theory of so-called supposition as a kind of semi-formal semantics, Kant famously said Aristotle had said all there was to say about logic, and this went undisputed until the time of Boole and De Morgan in the mid-19th century. Boole himself said he was only extending Aristotle’s theory.

The fundamental principle of syllogistic reasoning is best understood as a kind of function composition. Aristotle himself did not have the concept of a mathematical function, which we owe mainly to Leibniz, but he clearly used a concept of composition of things we can recognize as function-like. In the late 19th century, Frege pointed out that the logical meaning of grammatical predication in ordinary language can be considered as a kind of function application.

Aristotle’s syllogisms were expressed in natural language, but in order to focus attention on their form, he often substituted letters for concrete terms. The fundamental pattern is

(quantifier A) op B
(quantifier B) op C
Therefore, A op C

where each instance of “quantifier” is either “some” or “all”; each instance of “op” is either what Aristotle called “combination” or “separation”, conventionally represented in natural language by “is” or “is not”; and each letter is a type aka “universal” aka higher-order term. (In the middle ages and later, individuals were treated as a kind of singleton type with implicit universal quantification, so it is common to see examples like “Socrates is a human”, but Aristotle’s own concrete examples never included references to individuals.) Not all combinations of substitutions correspond to valid inferences, but Prior Analytics systematically described all the valid ones.
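To make the compositional reading vivid before discussing it further, here is a small sketch in Lean-style notation. The concrete terms are my own hypothetical illustrations, not Aristotle’s examples: if each canonical proposition is read as an inference-licensing arrow between types, the conclusion is obtained simply by composing through the shared middle term.

```lean
-- Sketch: reading "every A is B" as an inference-licensing arrow A → B,
-- the syllogistic pattern above becomes function composition.
-- The concrete types and arrows are hypothetical illustrations only.
axiom Greek  : Type
axiom Human  : Type
axiom Mortal : Type

axiom greeksAreHuman  : Greek → Human   -- "every Greek is a human"
axiom humansAreMortal : Human → Mortal  -- "every human is a mortal"

-- The conclusion composes the two premises through the middle term Human.
def greeksAreMortal : Greek → Mortal :=
  humansAreMortal ∘ greeksAreHuman
```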

In traditional interpretations, Aristotle’s use of conventionalized natural language representations sometimes led to analyses of the “op” emphasizing grammatical relations between subjects and predicates. However, Aristotle did not concern himself with grammar, but with the more substantive meaning of (possibly negated) “said of” relations, which actually codify normative material inferences. His logic is thus a fascinating hybrid, in which each canonical proposition represents a normative judgment of a material-inferential relation between types, and then the representations are formally composed together.

The conclusion B of the first material inference, which is also the premise of the second, was traditionally called the “middle term”, the role of which in reasoning through its licensing of composition lies behind all of Hegel’s talk about mediation. The 20th century saw the development of category theory, which explains all mathematical reasoning and formal logic in terms of the composition of “morphisms” or “arrows” corresponding to primitive function- or inference-like things. Aside from many applications in computer science and physics, category theory has also been used to analyze grammar. The historical relation of Aristotle to the Greek grammarians goes in the same direction — Aristotle influenced the grammarians, not the other way around. (See also Searching for a Middle Term; Aristotelian Demonstration; Demonstrative “Science”?)

New Approaches to Modality

I periodically peek at the groundbreaking work on formal systems that is going on in homotopy type theory (HoTT), and in doing so just stumbled on an intriguing treatment of modal HoTT that seems much more philosophically promising to me than standard 20th century modal logic.

Types can be taken as formalizing major aspects of the Aristotelian notions of substance and form. Type theory — developed by Swedish philosopher Per Martin-Löf, building on earlier 20th century work by the British philosopher Bertrand Russell and the American mathematician Alonzo Church — is the most important thing in the theory of programming languages these days. It is both a higher-order constructive logic and an abstract functional programming language, and was originally developed as a foundation for constructive mathematics. Several variants of type theory have also been used in linguistics to analyze meaning in natural language.

Homotopy type theory combines this with category theory and the categorical logic pioneered by American mathematician William Lawvere, who was also the first to suggest a category-theory interpretation of Hegelian logic. HoTT interprets types as spaces in the sense of homotopy theory, identifications of their elements as paths, higher-order identifications as paths between paths, and so on, in a hierarchy of levels that also subsumes classical logic and set theory. It is a leading alternative “foundation” or framework for mathematics, in the less epistemologically “foundationalist” spirit of previous proposals for categorical foundations. It is also a useful tool for higher mathematics and physics that includes an ultra-expressive logic, and has a fully computational interpretation.

There is a pretty readable new book on modal HoTT by British philosopher David Corfield, which also gives a nice introductory prose account of HoTT in general and type theory in general. (I confess I prefer pages of mostly prose — of which Corfield has a lot — to forests of symbolic notation.) Corfield offers modal HoTT as a better logic for philosophy and natural language analysis than standard 20th century first-order classical logic, because its greater expressiveness allows for much richer distinctions. He mentions Brandom several times, and says he thinks type theory can formally capture many of Brandom’s concerns, as I previously suggested. Based on admittedly elementary acquaintance with standard modal logic, I’ve had a degree of worry about Brandom’s use of modal constructs, and this may also help with that.

The worry has to do with a concept of necessity that occasionally sounds overly strong to my ear, and is related to my issues with necessity in Kant. I don’t like any universal quantification on untyped variables, let alone applied to all possible worlds, which is the signature move of standard modal logic. But it seems that adding types into the picture changes everything.

Before Corfield brought it to my attention, I was only dimly aware of the existence of modal type theory (nicely summarized in nLab). This apparently associates modality with the monads (which have little to do with Leibnizian monads) that I use to encapsulate so-called effects in functional programming for my day job. Apparently William Lawvere already wrote about geometric modalities, in which the modal operator means something like “it is locally the case that”. This turns modality into a way of formalizing talk about context, which seems far more interesting than super-strong generalization. (See also Modality and Variation; Deontic Modality; Redding on Morals and Modality).
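At the risk of oversimplifying, here is a loose Haskell sketch of the flavor of that idea. It is not Lawvere’s sheaf-theoretic construction, just the everyday functional-programming analogue I have in mind, with hypothetical names throughout: a monad like Reader packages a value as holding only relative to a context, so “it is the case that p” gets replaced by something closer to “relative to this situation, it is the case that p”.

```haskell
-- A loose illustration (mine, not Lawvere's or Corfield's construction):
-- Reader r a is a value of type a that only makes sense relative to a
-- context of type r, which is roughly the "locally the case" flavor.
import Control.Monad.Reader

-- Hypothetical context: a record of local circumstances.
data Situation = Situation { temperatureC :: Double, indoors :: Bool }

-- "It is locally the case that it is cold": true or false only relative
-- to a given Situation, never absolutely.
locallyCold :: Reader Situation Bool
locallyCold = do
  s <- ask
  pure (temperatureC s < 10 && not (indoors s))

-- Discharging the "modality" by supplying a concrete context.
main :: IO ()
main = print (runReader locallyCold (Situation { temperatureC = 3, indoors = False }))
```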

It also turns out Corfield is a principal contributor to the nLab page I previously reported finding, on Hegel’s logic as a modal type theory.

Independent of his discussion of modality, Corfield nicely builds on American programming language theorist Robert Harper’s notion of “computational trinitarianism”, which stresses a three-way isomorphism between constructive logic, programming languages, and mathematical category theory. The thesis is that any sound statement in any one of these fields should have a reasonable interpretation in both of the other two.
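As a minimal illustration of the kind of correspondence this points to (my own choice of example, in Lean-style notation), one and the same construction can be read as a logical law, as a small program, and as a categorical projection out of a product.

```lean
-- One construction, three readings (a minimal sketch; the example is mine):
--   Logic:      "if A and B, then A" (conjunction elimination)
--   Programs:   the first projection out of a pair
--   Categories: the projection morphism out of a product
def fstProj {A B : Type} : A × B → A :=
  fun p => p.1

theorem andElimLeft {P Q : Prop} : P ∧ Q → P :=
  fun h => h.1
```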

In working life, my own practical approach to software engineering puts a high value on a kind of reasoning inspired by a view of fancy type theory and category theory as extensions or enrichments of simple Aristotelian logic, which on its formal side was grounded in the composition of pairs of informally generated judgments of material consequence or material incompatibility. I find the history of these matters fascinating, and view category theory and type theory as a kind of vindication of Aristotle’s emphasis on composition (or what could be viewed as chained function application, or transitivity of implicit implication, since canonical Aristotelian propositions actually codify material inferences) as the single most important kind of formal operation in reasoning.