Split Subject, Contradiction

The Žižekians, referencing Lacan, like to talk about a “split subject” that is noncoincident with itself. In broad terms, I think this is useful. What we call subjectivity is divided, and lacking in strong unity. (See also Pure Negativity?; Acts in Brandom and Žižek.) But it seems to me that if we try to speak carefully about this, we should not then go on using singular articles like “the” or “a”.

I tend to think subjectivity is not just fractured or un-whole, but also actually consists of a complex overlay of different things that we tend to blur together. In particular, it seems clear to me that a common-sense, biographical “self” whose relative unity over time is trackable by relation to the “same” physical body — or by Lockean continuity of memory — is not the same as what we might in a given moment view from a distance as an individualized ethos, or up close as a unity of apperception. This is, I believe, the same distinction that Brandom discusses in terms of sentience and sapience.

Ethos and unity of apperception, and their constituent values and conceptions — the very things that most properly say “I”, and play the functional role of an ethical “subject”, or of a subject of knowledge — are profoundly involved with language, social relations, and what Lacan in his earlier work called the Symbolic and the “Other”. These instances of sapience are pure forms whose identity can only be expressed in terms of sameness of form — nonempirical, but inseparable from a larger ethical world — and simultaneously intimate to us, but by no means strictly “ours”. (See also Self, Subject.)

Where I am still a bit torn is that I also feel that emotions — which I’ve been locating on the former, “self” side — are fundamental to subjectivity as a whole, but I have theoretically separated them from the main locus of transcendental ethical and epistemic subjectivity, even though they play an essential role in making it possible. One logical solution would be to say this just means subjectivity as a whole is more than just ethical and epistemic. Another would be to say that there is a separate kind of emotional subjectivity. I’m not entirely satisfied yet, because I think feeling combines these, but the noncoincidence of our factual selves with our ethical and epistemic being seems very important in understanding how we overcome empirical limitations.

The Žižekians will perhaps remind us that they were not talking about a split between self and subject, but about a split within the subject. I think we habitually overstate the degree of unity and identity we attribute to selves, subjects, and things in general, so I’m fine with that, too. They also want to expand this into a general “ontological” point, which I see as a semantic point.

Perhaps the Žižekians are more comfortable talking about “a” or “the” subject in part due to their doctrine of the ubiquity of contradiction. Todd McGowan in Emancipation After Hegel (2019) nicely distinguishes the Žižekian notion from the old confusion between contradiction and conflict or polarity — and from immediate self-contradiction — but still wants to maintain that the standard logical law of noncontradiction ultimately “refutes itself”, and that Hegel thought this as well. This argument combines a laudable awareness of some of the practical issues with identity, with a logically invalid use of the distinction between explicit and implicit self-contradiction.

Hegel meditated profoundly on the difficulties of applying logic to meaningful content and to real life. He strained language to the breaking point trying to express his conclusions.

On the frontiers of mathematical logic today, the so-called law of identity has been replaced by a requirement to specify identity criteria for each formally defined type, and identity in general has been weakened to isomorphism. (See also Form as a Unique Thing.)

Real-world applications of strong identity typically involve loose “extensional” reference to things assumed to be the same, and a lot of forgetting. The linchpin of old “identity thinking” was inattention to difficulties of formalization from ordinary language — basically an illegitimate moving back and forth between formal and informal domains, resulting in lots of homogenizing confusion of things that ought to be distinct. Weaker, “intensional” assertions about identity as specifiable sameness of form make it the exception rather than the rule. What come first conceptually are distinctions within the manifold, not pre-synthesized things already possessed of identity. Where things are not the same to begin with, contradiction — far from being omnipresent — is not even potentially at issue. (See also Self-Evidence?)

Meanwhile, Sellars and Brandom have revived material inference about meant realities in contrast to formal logic, which deals with purely syntactic relations between presumed extensional “things” with presumed identity. Things Kant and Hegel said about Understanding and Reason can be nicely understood in terms of the relation between syntactic inference about symbolic terms standing for formless extensional “things” and substantive, material inference about the actual form of meant realities. Especially in reading Hegel, lacking this distinction now seems positively crippling.

Finally, Aristotle, who originated the law of noncontradiction as a kind of ethical imperative, and stands in the background to all of Hegel’s discussions of logic, was himself rather cautious and tentative about applying identity to real things, and in his logic was also mainly concerned with (composition of) material inferences, which have more to do with the actual form of things.

Hegel never violated Aristotle’s imperative not to say opposite things about the same thing said in the same way. What he did was to constantly point out the gap between reality and traditional semi-formal logic applied to ordinary language — not to encourage us to reject logic, but rather to refine and sublimate it. (See also Aristotelian and Hegelian Dialectic.)

Form as a Unique Thing

Ever since Plato talked about Forms, philosophers have debated the status of so-called abstract entities. To my mind, referring to them as “entities” is already prejudicial. I like to read Plato himself in a way that minimizes existence claims, and instead focuses on what I think of as claims about importance. Importance as a criterion is practical in a Kantian sense — i.e., ultimately concerned with what we should do. As Aristotle might remind us, what really matters is getting the specific content of our abstractions right for each case, not the generic ontological status of those abstractions.

One of Plato’s main messages, still very relevant today, is that what he called Form is important. A big part of what makes Form important is that it is good to think with, and a key aspect of what makes Plato’s version good to think with is what logically follows from its characterization as something unique in a given case. (Aristotle’s version of form has different, more mixed strengths, including both a place for uniqueness and a place for polyvocality or multiple perspectives, making it simultaneously more supple and more difficult to formalize.) In principle, such uniqueness of things that nonetheless also have generality makes it possible to reason to conditionally necessary outcomes in a constructive way, i.e., without extra assumptions, as a geometer might. Necessity here just means that in the context of some given construction, only one result of a given type is possible. (This is actually already stronger than the sense Aristotle gave to “necessity”. Aristotle pragmatically allowed for defeasible empirical judgments that something “necessarily” follows from something else, whenever there is no known counter-example.)

In the early 20th century, Bertrand Russell developed a very influential theory of definite descriptions, which sparked another century-long debate. Among other things (here embracing an old principle of interpretation common in Latin scholastic logic), he analyzed definite descriptions as always implying existence claims.

British philosopher David Corfield argues for a new approach to formalizing definite descriptions that does not require existence claims or other assumptions, but only a kind of logical uniqueness of the types of the identity criteria of things. His book Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, to which I recently devoted a very preliminary article, has significant new things to say about this sort of issue. Corfield argues inter alia that many and perhaps even all perceived limits of formalization are actually due to limits of the particular formalisms of first-order classical logic and set theory, which dominated in the 20th century. He thinks homotopy type theory (HoTT) has much to offer for a more adequate formal analysis of natural language, as well as in many other areas. Corfield also notes that most linguists already use some variant of lambda calculus (closer to HoTT), rather than first-order logic.

Using first-order logic to formalize natural language requires adding many explicit assumptions — including assumptions that various things “exist”. Corfield notes that ordinary language philosophers have questioned whether it is reasonable to suppose that so many extra assumptions are routinely involved in natural language use, and from there reached pessimistic conclusions about formalization. The vastly more expressive HoTT, on the other hand, allows formal representations to be built without additional assumptions in the representation. All context relevant to an inference can be expressed in terms of types. (This does not mean no assumptions are involved in the use of a representation, but rather only that the formal representation does not contain any explicit assumptions, as by contrast it necessarily would with first-order logic.)

A main reason for the major difference between first-order logic and HoTT with respect to assumptions is that first-order logic applies universal quantifications unconditionally (i.e., for all x, where x ranges indiscriminately over everything, with no further characterization), and then has to explicitly add assumptions to recover specificity and context. By contrast, type theories like HoTT apply quantifications only to delimited types, and thus build in specificity and context from the ground up. Using HoTT requires closer attention to criteria for identities of things and kinds of things.
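
As a minimal sketch of the difference (my own illustrative Lean-style example, not taken from Corfield’s book), the first-order rendering quantifies over an undifferentiated domain and recovers the intended restriction with explicit assumptions, while the typed rendering carries the restriction in the type of the bound variable:

```lean
-- Schematic first-order style: one undifferentiated domain, with the
-- intended restriction added back as explicit assumptions.
axiom Domain   : Type
axiom IsHuman  : Domain → Prop
axiom IsMortal : Domain → Prop

def firstOrderStyle : Prop :=
  ∀ x : Domain, IsHuman x → IsMortal x

-- Typed style: quantification is already relative to a delimited type,
-- so the context lives in the type rather than in added premises.
axiom Human  : Type
axiom mortal : Human → Prop

def typedStyle : Prop :=
  ∀ h : Human, mortal h
```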

Frege already had the idea that logical predicates are a kind of mathematical function. Mathematical functions are distinguished by invariantly returning a unique value for each given input. The truth functions used in classical logic are also a kind of mathematical function, but provide only minimal distinction into “true” and “false”. From a purely truth-functional point of view, all true propositions are equivalent, because we are only concerned with reference, and their only reference (as distinguished from Fregean sense) is to “true” as distinct from “false”. By contrast, contemporary type theories are grounded in inference rules, which are kinds of primitive function-like things that preserve many more distinctions.
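
To make the contrast concrete (a hedged sketch of my own, in Lean notation): a Boolean-valued predicate keeps only “true” or “false”, while a proposition-as-type predicate keeps its evidence around as an object that can carry further distinctions:

```lean
-- Truth-functional reading: all true instances are indistinguishable,
-- since the predicate returns only a Boolean.
def isEvenBool (n : Nat) : Bool :=
  n % 2 == 0

-- Type-theoretic reading: a proof is a witness (here, the k with
-- n = 2 * k), so different true instances carry different evidence.
def IsEven (n : Nat) : Prop :=
  ∃ k : Nat, n = 2 * k

example : IsEven 6 := ⟨3, rfl⟩
```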

In one section, Corfield discusses an HoTT-based inference rule for introduction of the definite article “the” in ordinary language, based on a property of many types called “contractibility” in HoTT. A contractible type is one with a designated element to which every element is identifiable, so it can be taken as referring to a formally unique object that can be constructed in HoTT, and whose existence therefore does not need to be assumed. This should also apply at least to Platonic Forms, since for Plato one should always try to pick out the Form of something.

In HoTT, every variable has a type, and every type carries with it definite identity criteria, but the identity criteria for a given type may themselves have a type from anywhere in the HoTT hierarchy of type levels. In a given case, the type of the identity criteria for another type may be above the level of truth-functional propositions, like a set, groupoid, or higher groupoid; or below it, i.e., contractible to a unique object. This sort of contractibility into a single object might be taken as a contemporary formal criterion for a specification to behave like a Platonic Form, which seems to be an especially simple, bottom-level case, even simpler than a truth-valued “mere” proposition.
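
As a rough sketch of the shape of these definitions (only schematic, since Lean’s built-in equality is not the full HoTT identity type, and the names here are mine):

```lean
universe u

-- Contractibility: a center together with an identification of every
-- element with that center, so the type picks out one object.
def IsContr (A : Type u) : Prop :=
  ∃ center : A, ∀ a : A, a = center

-- A "mere proposition", one level up: any two elements are identified,
-- whether or not the type is inhabited.
def IsProp (A : Type u) : Prop :=
  ∀ a b : A, a = b

-- An inhabited mere proposition is contractible, which is how the two
-- bottom levels of the hierarchy relate.
theorem contr_of_inhabited_prop {A : Type u}
    (h : IsProp A) (a : A) : IsContr A :=
  ⟨a, fun b => h b a⟩
```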

The HoTT hierarchy of type levels is synthetic and top-down rather than analytic and bottom-up, so everything that can be expressed on a lower level is also expressible on a higher level, but not necessarily vice versa. The lower levels represent technically “degenerate” — i.e., less general — cases, to which one cannot “compile down” in some instances. This might also be taken to anachronistically explain why Aristotle and others were ultimately not satisfied with Platonic Forms as a general basis for explanation. Importantly, this bottom, “object identity” level does seem to be adequate to account for the identity criteria of mathematical objects as instances of mathematical structures, but not everything is explainable in terms of object identities, which are even less expressive than mere truth values.

Traditionally, mathematicians have used the definite article “the” to refer to things that have multiple characterizations that are invariantly equivalent, such as “the” structure of something, when the structure can be equivalently characterized in different ways. From a first-order point of view, this has been traditionally apologized for as an “abuse of language” that is not formally justified. HoTT provides formal justification for the implicit mathematical intuition underpinning this generally accepted practice, by providing the capability to construct a unique object that is the contractible type of the equivalent characterizations.

With this in hand, it seems we won’t need to make any claims about the existence of structures, because from this point of view — unlike, e.g., that of set theory — mathematical talk is always already about structures.

This has important consequences for talk about structuralism, at least in the mathematical case, and perhaps by analogy beyond that. Corfield argues that anything that has contractible identity criteria (including all mathematical objects) just is some structure. He quotes major HoTT contributor Steve Awodey as concluding “mathematical objects simply are structures. Could there be a stronger formulation of structuralism?”

Thus no ontology or theory of being in the traditional (historically Scotist and Wolffian) sense is required in order to support talk about structures (or, I would argue, Forms in Plato’s sense). (In computer science, “ontology” has been redefined as an articulation of some world or domain into particular kinds, sorts, or types, where what is important is the particular classification scheme practically employed, rather than theoretical claims of real existence that go beyond experience. At least at a very high level, this actually comes closer than traditional “metaphysical” ontology did to Aristotle’s original practice of higher-order interpretation of experience.)

Corfield does not discuss Brandom at length, but his book’s index has more references to Brandom than to any other named individual, including the leaders in the HoTT field. All references in the text are positive. Corfield strongly identifies with the inferentialist aspect of Brandom’s thought. He expresses optimism about HoTT representation of Brandomian material inferences, and about the richness of Brandom’s work for type-theoretic development.

Corfield is manifestly more formally oriented than Brandom, and his work thus takes a different direction that does not include Brandom’s strong emphasis on normativity, or on the fundamental role of what I would call reasonable value judgments within material inference. From what I take to be an Aristotelian point of view, I greatly value both the inferentialist part of Brandom that Corfield wants to build on, and the normative pragmatic part that he passes by. I think Brandom’s idea about the priority of normative pragmatics is extremely important; but with that proviso, I still find Corfield’s work on the formal side very exciting.

In a footnote, Corfield also directs attention to Paul Redding’s recommendation that analytic readers of Hegel take seriously Hegel’s use of Aristotelian “term logic”. This is not incompatible with a Kantian and Brandomian emphasis on the priority of integral judgments. As I have pointed out before, the individual terms combined or separated in canonical Aristotelian propositions are themselves interpretable as judgments.

New Approaches to Modality

I periodically peek at the groundbreaking work on formal systems that is going on in homotopy type theory (HoTT), and in doing so just stumbled on an intriguing treatment of modal HoTT that seems much more philosophically promising to me than standard 20th century modal logic.

Types can be taken as formalizing major aspects of the Aristotelian notions of substance and form. Type theory — developed by Swedish philosopher Per Martin-Löf from early 20th century work by the British philosopher Bertrand Russell and the American mathematician Alonzo Church — is the most important thing in the theory of programming languages these days. It is both a higher-order constructive logic and an abstract functional programming language, and was originally developed as a foundation for constructive mathematics. Several variants of type theory have also been used in linguistics to analyze meaning in natural language.

Homotopy type theory combines this with category theory and the categorical logic pioneered by American mathematician William Lawvere, who also first suggested a category-theory interpretation of Hegelian logic. HoTT interprets types as spaces, identifications between their elements as paths, identifications between paths as higher-order paths, and so on, in a hierarchy of levels that also subsumes classical logic and set theory. It is a leading alternative “foundation” or framework for mathematics, in the less epistemologically “foundationalist” spirit of previous proposals for categorical foundations. It is also a useful tool for higher mathematics and physics that includes an ultra-expressive logic, and has a fully computational interpretation.
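
As a small taste of the path-like behavior of identifications (schematic, since Lean’s kernel is not itself a homotopy-style type theory), an identification p : a = b can be “walked along” to transport structure over it:

```lean
-- Transport: given an identification (path) p : a = b and a family of
-- types P over the base, carry an element of P a over to P b.
def transport {A : Type} (P : A → Type) {a b : A}
    (p : a = b) (x : P a) : P b :=
  p ▸ x

-- Reflexivity plays the role of the constant path at a point.
example {A : Type} (a : A) : a = a := rfl
```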

There is a pretty readable new book on modal HoTT by British philosopher David Corfield, which also gives a nice introductory prose account of type theory and of HoTT in general. (I confess I prefer pages of mostly prose — of which Corfield has a lot — to forests of symbolic notation.) Corfield offers modal HoTT as a better logic for philosophy and natural language analysis than standard 20th century first-order classical logic, because its greater expressiveness allows for much richer distinctions. He mentions Brandom several times, and says he thinks type theory can formally capture many of Brandom’s concerns, as I previously suggested. Based on an admittedly elementary acquaintance with standard modal logic, I’ve had a degree of worry about Brandom’s use of modal constructs, and this may also help with that.

The worry has to do with a concept of necessity that occasionally sounds overly strong to my ear, and is related to my issues with necessity in Kant. I don’t like any universal quantification on untyped variables, let alone applied to all possible worlds, which is the signature move of standard modal logic. But it seems that adding types into the picture changes everything.

Before Corfield brought it to my attention, I was only dimly aware of the existence of modal type theory (nicely summarized in nLab). This apparently associates modality with the monads (little related to Leibnizian ones) that I use to encapsulate so-called effects in functional programming for my day job. Apparently William Lawvere already wrote about geometric modalities, in which the modal operator means something like “it is locally the case that”. This turns modality into a way of formalizing talk about context, which seems far more interesting than super-strong generalization. (See also Modality and Variation; Deontic Modality; Redding on Morals and Modality).
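
For the programming-language side of this (a generic sketch of a reader-style monad written by me, not Corfield’s or Lawvere’s construction), the modal flavor comes out if the wrapper is read as “relative to a local context, the value is …”:

```haskell
-- A bare-bones reader monad: a computation whose meaning is relative
-- to an ambient context.  The monad encapsulates the "effect" of
-- depending on that context.
newtype Local ctx a = Local { runLocal :: ctx -> a }

instance Functor (Local ctx) where
  fmap f (Local g) = Local (f . g)

instance Applicative (Local ctx) where
  pure x = Local (const x)
  Local f <*> Local g = Local (\c -> f c (g c))

instance Monad (Local ctx) where
  Local g >>= k = Local (\c -> runLocal (k (g c)) c)

-- "It is locally the case that the greeting is personalized":
greeting :: Local String String
greeting = do
  name <- Local id          -- read the ambient context
  pure ("hello, " ++ name)

-- runLocal greeting "Alice" == "hello, Alice"
```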

It also turns out Corfield is a principal contributor to the nLab page I previously reported finding, on Hegel’s logic as a modal type theory.

Independent of his discussion of modality, Corfield nicely builds on American programming language theorist Robert Harper’s notion of “computational trinitarianism”, which stresses a three-way isomorphism between constructive logic, programming languages, and mathematical category theory. The thesis is that any sound statement in any one of these fields should have a reasonable interpretation in both of the other two.
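
A tiny instance of the correspondence (my own illustration, not Harper’s): one and the same definition can be read as a proof, as a program, and as a categorical construction.

```haskell
-- Read logically: a proof that ((A and B) implies C) implies
--   (A implies (B implies C)).
-- Read as a program: currying a two-argument function.
-- Read categorically: one direction of the bijection
--   Hom(A x B, C) = Hom(A, C^B) that characterizes exponential
--   objects in a cartesian closed category.
curryProof :: ((a, b) -> c) -> (a -> (b -> c))
curryProof f = \x y -> f (x, y)
```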

In working life, my own practical approach to software engineering puts a high value on a kind of reasoning inspired by a view of fancy type theory and category theory as extensions or enrichments of simple Aristotelian logic, which on its formal side was grounded in the composition of pairs of informally generated judgments of material consequence or material incompatibility. I find the history of these matters fascinating, and view category theory and type theory as a kind of vindication of Aristotle’s emphasis on composition (or what could be viewed as chained function application, or transitivity of implicit implication, since canonical Aristotelian propositions actually codify material inferences) as the single most important kind of formal operation in reasoning.
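
A toy version of that reading (my own illustrative encoding): take each canonical proposition “every A is B” as a function from evidence of being A to evidence of being B; the syllogism Barbara is then literally function composition.

```haskell
-- Toy types standing for the three terms of a syllogism.
newtype Human  = Human  String
newtype Animal = Animal String
newtype Mortal = Mortal String

-- "Every human is an animal" and "every animal is mortal", read as
-- material inferences: functions from evidence to evidence.
humansAreAnimals :: Human -> Animal
humansAreAnimals (Human n) = Animal n

animalsAreMortal :: Animal -> Mortal
animalsAreMortal (Animal n) = Mortal n

-- Barbara: "every human is mortal" is the composition of the premises.
humansAreMortal :: Human -> Mortal
humansAreMortal = animalsAreMortal . humansAreAnimals
```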

Categorical “Evil”

If we are aiming at any kind of true unity of apperception, then in any given logical moment we should aim to reason in ways that are invariant under isomorphism. Over time our practical and theoretical reasoning may and will iteratively change, but synchronically we should aim to ensure that reasoning about equivalent things will be invariant within the scope of each iteration.

In higher mathematics, difficulties arise when one structure is represented by or in another structure that has a different associated notion of equivalence. This requires maintaining a careful distinction of levels. The expected consequence relation for the represented notion may not work well with the representation. Such failures of reasoning to be invariant under isomorphism are informally, half-jokingly referred to by practitioners of higher category theory as “evil”. This is a mathematical idea with a clear normative aspect and a very high relevance to philosophy.

The serious slogan implied by the half-joke is that evil should be avoided. More positively, a principle of equivalence-invariance has been articulated for this purpose. One version states that all grammatically correct properties of objects in a fixed category should be invariant under isomorphism. Another states that isomorphic structures should have the same structural properties. On the additional assumption that the only properties of objects we are concerned with are structural properties, this is said to be equivalent to the first.

There are numerous examples of such “evil”, usually associated with uncareful use of equality (identity) between things of different sorts. A significant foundational one is that material set theories such as ZFC allow arbitrary sets to be putatively compared for equality, without providing any means to effect the comparison. Comparison of completely arbitrary things is of course not computable, so it cannot be implemented in any programming language. It is also said to violate equivalence invariance, which means that material set theories allow evil. The root of this evil is that such theories inappropriately privilege pre-given, arbitrary elements over definable structural properties. (This issue is another reason I think definition needs to be dialectically preserved or uplifted in our more sophisticated reflections, rather than relegated to the dustbin in favor of a sole emphasis on recollective genealogy. A concern to define structures and structural properties of things appears in this context as the determinate negation of the effective privileging of putatively pre-given elements over any and all rational considerations.) ZFC set theory offers a nice illustration of the more general evil of Cartesian-style bottom-up foundationalism.
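
The same point shows up in typed programming languages (my own illustration, on the assumption that a type-class discipline is a fair stand-in for identity criteria per type): equality is available only where a type supplies its own criterion, and there is no global equality test on arbitrary things.

```haskell
-- A type earns equality only by supplying its own identity criterion.
data Color = Red | Green | Blue

instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

-- By contrast, a putative comparison of completely arbitrary values,
--   sameThing :: a -> b -> Bool
-- has no informative implementation, because nothing is known about
-- the structure of a and b; even a -> a -> Bool needs an Eq constraint
-- before it can do anything but return a constant.
```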

The evil-generating supposition that utterly arbitrary things can be compared (and that we don’t need to care that we can’t even say how this would be accomplished) implicitly presupposes that all things whatsoever have a pre-given “Identity” that is independent of their structural properties, yet mysteriously contentful and immediately epistemically available as such. This is a mathematical version of the overly strong but still common notion of Identity that I and many others have been concerned to reject. Such bad notions of Identity are deeply involved with the ills of Mastery diagnosed by Hegel and Brandom.

We should not allow evil in foundations, so many leading mathematicians interested in foundations are now looking for an alternative to the 20th century default of ZFC. Some combination of dependent type theory for syntax with higher category theory for semantics seems most promising as an alternative. The recent development of homotopy type theory (HoTT) is perhaps the most vigorous candidate.

Another way to broadly characterize this mathematical “evil” is that it results from treating representation as prior to inference in the order of explanation, as Brandom might say, which means treating correspondence to something merely assumed as given as taking precedence over coherence of reasoning. This is a variant of what Sellars famously called the Myth of the Given. It is a philosophical evil as well as a mathematical one. Besides their intrinsic importance, these mathematical issues make more explicit some of the logical damage done by the Myth of the Given.

Another broad characterization has to do with mainstream 20th century privileging of classical logic over constructive logic, of first-order logic over higher-order logic, and of model theory over proof theory. Prior to the late 19th century, nearly all mathematics was constructive. Cantor’s development of transfinite mathematics was the main motivation for mathematicians to begin working in a nonconstructive style. Gödel’s proof that first-order logic is complete (every proposition true in all models is also provable) was thought to make it better for foundational use. Logical completeness and even soundness are standardly defined in ways that privilege model theory, which is the formal theory of representation.

It is now known, however, that there are several ways of embedding and representing classical logic — with no loss of fidelity — on a constructive foundation, so the old claim that constructive logic was less powerful has been refuted. Going in the other direction, however, classical logic has no way of recovering the computability that is built into constructive logic once it has been violated, so it is increasingly recognized that a constructive logic provides the more flexible and comprehensive starting point. (Also, transfinite mathematics can reportedly now be given a constructive foundation under HoTT.)
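
One small, standard fact behind the embedding claim (sketched here in Lean): excluded middle is not provable constructively, but its double negation is, which is the germ of the double-negation translations that embed classical logic into constructive logic without loss.

```lean
-- Provable with no classical axioms: the double negation of excluded
-- middle, for an arbitrary proposition p.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```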

Since the mid-20th century there has been an immense development of higher-order concepts in formal domains, including mathematical foundations; the theory of programming languages; and the implementation of theorem-proving software. Higher-order formalisms offer a huge improvement in expressive power. (As a hand-waving analogy, imagine how hard it would be to do physics with only first-order equations.)

Type theory, proof theory, and the theory of programming languages are kinds of formalism that put inference before pre-given representations. Category theory seems to take an even-handed approach.

Although I noted some interest in Brandom on the part of people working in a higher-order constructive context, Brandom himself seems much more interested in things that would be described by paraconsistent logics, such as processes of belief revision or of the evolution of case law or common law, or of normativity writ large. (In the past, he engaged significantly with Michael Dummett’s work, while to my knowledge remaining silent on Dummett’s arguments in favor of the philosophical value of constructive logic.)

Paraconsistency is a property of some consequence relations, such that in the absence of an explicit rule that anything whatsoever follows from a contradiction, not everything can in fact be proven to follow from a given contradiction, so the consequence relation does not “explode” (collapse into triviality).

In view of the vast proliferation of alternative formalisms of all sorts since the mid-20th century, it may very well be inappropriate to presume that we will ever get back to one formalism to rule them all. I do expect that homotopy type theory or something like it will eventually come to dominate work on mathematical foundations and related aspects of computer science (and everything else that falls under Hegelian Understanding, taken as a positive moment in the larger process); but as hugely important as I think these are, I am also sympathetic to Brandom’s Kantian/Hegelian idea that considerations of normativity form an outer frame around everything else, as well as to the Aristotelian view that considerations of normativity tend to resist formalization.

On the formal side, it seems it is not possible to synchronically reconcile HoTT with paraconsistency, which would seem to be a problem. (At the opposite, simple end of the scale, my other favorite logical mechanism — Aristotelian syllogism interpreted as function composition — apparently can be shown to have a paraconsistency property, since it syntactically constrains conclusions to be semantically relevant to the premises.)

Diachronically, though, perhaps we could paraconsistently evolve from one synchronically non-evil, HoTT-expressible view of the world to a dialectically better one, while the synchronic/diachronic distinction could save us from a conflict of requirements between the respective logics.

I think the same logical structure needed to wrap a paraconsistent recollective genealogy around a formal development would also account for iterative development of HoTT-expressible formal specifications, where each iteration would be internally consistent, but assumptions or requirements may change between iterations.

Identity, Isomorphism

Many strands of Western thought — from Augustinian theology to Cartesianism to set theory — have suffered from overly strong notions of what amounts to a privileged, originary, self-evident, contentful Identity of things. (There are also many significant exceptions. With their emphasis on distinctions of form, Plato and Aristotle only needed a weak identity. Spinoza’s emphasis on relations; Leibniz’s identity of indiscernibles; Hume’s dispersive empiricism; and Kant’s critical perspective are all closer to Plato and Aristotle in this regard. Hegel makes identity derivative from a Difference associated with Aristotelian contrariety or Brandomian material incompatibility. Nietzsche, Wittgenstein, and many 20th century continentals explicitly criticized the overly strong concept.)

21st century mathematics has seen tremendously exciting new work on foundations that bears on this question. Homotopy type theory very strongly suggests among other things that the identity needed to develop all of mathematics is no stronger than isomorphism. This provides a formal justification of the common practical attitude of mathematicians that isomorphic structures can be substituted for one another in a proof by an acceptable “abuse of notation”.
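
The principle at issue is the univalence axiom, which says, stated loosely, that the canonical map from identifications between types to equivalences between them is itself an equivalence, so that identity of types carries no more information than equivalence:

```latex
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```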

More generally, type theory and category theory provide an independent basis in contemporary mathematics for reaffirming the priority of form as difference over identity. I am tempted to say that they exemplify a kind of inferentialism in mathematics. (To those who say mathematics holds no lessons for philosophy, I would say that such a blanket generalization disregards the specific character of these developments. nLab, the website for higher category theory, even has a page on Hegel’s logic as a modal type theory that explicitly refers to Brandom’s interpretation of Hegel.)