Categorical “Evil”

If we are aiming at any kind of true unity of apperception, then in any given logical moment we should aim to reason in ways that are invariant under isomorphism. Over time our practical and theoretical reasoning may and will iteratively change, but synchronically we should aim to ensure that reasoning about equivalent things will be invariant within the scope of each iteration.

In higher mathematics, difficulties arise when one structure is represented by or in another structure that has a different associated notion of equivalence. This requires maintaining a careful distinction of levels. The expected consequence relation for the represented notion may not work well with the representation. Such failures of reasoning to be invariant under isomorphism are informally, half-jokingly referred to by practitioners of higher category theory as “evil”. This is a mathematical idea with a clear normative aspect and a very high relevance to philosophy.

The serious slogan implied by the half-joke is that evil should be avoided. More positively, a principle of equivalence-invariance has been articulated for this purpose. One version states that all grammatically correct properties of objects in a fixed category should be invariant under isomorphism. Another states that isomorphic structures should have the same structural properties. On the additional assumption that the only properties of objects we are concerned with are structural properties, this is said to be equivalent to the first.
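Schematically, in my own shorthand rather than any official formulation, the principle says that every structural property P transfers across any isomorphism:

```latex
A \cong B \;\Longrightarrow\; \bigl( P(A) \Leftrightarrow P(B) \bigr)
\qquad \text{for every structural property } P
```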

There are numerous examples of such “evil”, usually associated with careless use of equality (identity) between things of different sorts. A significant foundational one is that material set theories such as ZFC allow arbitrary sets to be putatively compared for equality, without providing any means to effect the comparison. Comparison of completely arbitrary things is of course not computable, so it cannot be implemented in any programming language. It is also said to violate equivalence invariance, which means that material set theories allow evil. The root of this evil is that such theories inappropriately privilege pre-given, arbitrary elements over definable structural properties. (This issue is another reason I think definition needs to be dialectically preserved or uplifted in our more sophisticated reflections, rather than relegated to the dustbin in favor of a sole emphasis on recollective genealogy. A concern to define structures and structural properties of things appears in this context as the determinate negation of the effective privileging of putatively pre-given elements over any and all rational considerations.) ZFC set theory offers a nice illustration of the more general evil of Cartesian-style bottom-up foundationalism.
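As a concrete illustration (a toy sketch of my own, echoing a well-known observation of Benacerraf's rather than anything in the set-theoretic literature verbatim): the von Neumann and Zermelo encodings both represent the natural numbers faithfully in set theory, yet the question “is 1 an element of 3?” receives different answers depending on which encoding we happened to pick. In Haskell terms:

```haskell
-- Hereditarily finite sets: just enough structure to compare two standard
-- set-theoretic encodings of the natural numbers.
newtype HFSet = HFSet [HFSet] deriving (Eq, Show)

-- Set membership, testing structural equality against the listed elements.
member :: HFSet -> HFSet -> Bool
member x (HFSet ys) = x `elem` ys

-- von Neumann encoding: succ n = n ∪ {n}
vonNeumann :: Int -> HFSet
vonNeumann 0 = HFSet []
vonNeumann n = HFSet (prev : xs)
  where
    prev = vonNeumann (n - 1)
    HFSet xs = prev

-- Zermelo encoding: succ n = {n}
zermelo :: Int -> HFSet
zermelo 0 = HFSet []
zermelo n = HFSet [zermelo (n - 1)]

main :: IO ()
main = do
  print (member (vonNeumann 1) (vonNeumann 3))  -- True
  print (member (zermelo 1) (zermelo 3))        -- False
```

Both encodings are isomorphic as natural-number structures, so “1 ∈ 3” is exactly the sort of non-structural, encoding-dependent question that the principle of equivalence-invariance rules out.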

The evil-generating supposition that utterly arbitrary things can be compared (and that we don’t need to care that we can’t even say how this would be accomplished) implicitly presupposes that all things whatsoever have a pre-given “Identity” that is independent of their structural properties, but is nonetheless mysteriously contentful and somehow immediately epistemically available as such. This is a mathematical version of the overly strong but still common notion of Identity that I and many others have been concerned to reject. Such bad notions of Identity are deeply involved with the ills of Mastery diagnosed by Hegel and Brandom.

We should not allow evil in foundations, so many leading mathematicians interested in foundations are now looking for an alternative to the 20th century default of ZFC. Some combination of dependent type theory for syntax with higher category theory for semantics seems most promising as an alternative. The recent development of homotopy type theory (HoTT) is perhaps the most vigorous candidate.

Another way to broadly characterize this mathematical “evil” is that it results from treating representation as prior to inference in the order of explanation, as Brandom might say, which means treating correspondence to something merely assumed as given as taking precedence over coherence of reasoning. This is a variant of what Sellars famously called the Myth of the Given. It is a philosophical evil as well as a mathematical one. Besides their intrinsic importance, these mathematical issues make more explicit some of the logical damage done by the Myth of the Given.

Another broad characterization has to do with the mainstream 20th century privileging of classical logic over constructive logic, of first-order logic over higher-order logic, and of model theory over proof theory. Prior to the late 19th century, nearly all mathematics was constructive. Cantor’s development of transfinite mathematics was the main motivation for mathematicians to begin working in a nonconstructive style. Gödel’s proof that first-order logic is the richest logic for which every proposition true in all models is also provable (his completeness theorem) was thought to make first-order logic better suited for foundational use. Logical completeness and even soundness are standardly defined in ways that privilege model theory, which is the formal theory of representation.

It is now known, however, that there are several ways of embedding and representing classical logic, with no loss of fidelity, on a constructive foundation, so the old claim that constructive logic was less powerful has been refuted. Going in the other direction, classical logic has no way of recovering the computability that is built into constructive logic once that computability has been given up, so it is increasingly recognized that constructive logic provides the more flexible and comprehensive starting point. (Also, transfinite mathematics can reportedly now be given a constructive foundation under HoTT.)
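To give the flavor of such an embedding (a minimal sketch of my own in Haskell, not any particular published translation): under the propositions-as-types reading, excluded middle has no direct constructive proof, but its double negation does, and systematically inserting double negations is roughly how Gödel–Gentzen style translations interpret classical theorems constructively.

```haskell
import Data.Void (Void)

-- Negation read constructively: "implies absurdity".
type Not a = a -> Void

-- Excluded middle itself is not constructively provable,
-- but its double negation is: ¬¬(A ∨ ¬A).
doubleNegatedLEM :: Not (Not (Either a (Not a)))
doubleNegatedLEM k = k (Right (\a -> k (Left a)))
```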

Since the mid-20th century there has been an immense development of higher-order concepts in formal domains, including mathematical foundations; the theory of programming languages; and the implementation of theorem-proving software. Higher-order formalisms offer a huge improvement in expressive power. (As a hand-waving analogy, imagine how hard it would be to do physics with only first-order equations.)
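For a small taste of that expressive gain (another sketch of my own, using Haskell’s rank-n types as a stand-in for higher-order quantification): Leibniz’s identity of indiscernibles says that x equals y when every property of x is also a property of y. Stating this requires quantifying over properties, which first-order logic cannot do directly.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leibniz equality: a equals b if every predicate p holding of a also
-- holds of b. The "forall p" over predicates is the higher-order step.
newtype Leibniz a b = Leibniz { subst :: forall p. p a -> p b }

-- Reflexivity: every property of a is trivially a property of a.
reflLeibniz :: Leibniz a a
reflLeibniz = Leibniz id
```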

Type theory, proof theory, and the theory of programming languages are kinds of formalism that put inference before pre-given representations. Category theory seems to take an even-handed approach.

Although I noted some interest in Brandom on the part of people working in a higher-order constructive context, Brandom himself seems much more interested in things that would be described by paraconsistent logics, such as processes of belief revision or of the evolution of case law or common law, or of normativity writ large. (In the past, he engaged significantly with Michael Dummett’s work, while to my knowledge remaining silent on Dummett’s arguments in favor of the philosophical value of constructive logic.)

Paraconsistency is a property of some consequence relations: in the absence of an explicit assumption that anything follows from a contradiction, not everything can in fact be proven to follow from a given contradiction, so the consequence relation does not “explode” (collapse into triviality).
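For concreteness (a toy sketch of my own; Priest’s three-valued “Logic of Paradox” is just one simple paraconsistent system among many): once a third value for “both true and false” is admitted, a contradiction can hold without dragging every unrelated proposition along with it, so explosion fails.

```haskell
-- Truth values: F (false), B (both), T (true); T and B count as designated.
data Val = F | B | T deriving (Eq, Ord, Show)

notV :: Val -> Val
notV T = F
notV B = B
notV F = T

-- Conjunction as the minimum under the ordering F < B < T.
andV :: Val -> Val -> Val
andV = min

designated :: Val -> Bool
designated v = v /= F

main :: IO ()
main = do
  let p = B  -- a proposition valued "both"
      q = F  -- an arbitrary, unrelated proposition
  print (designated (andV p (notV p)))  -- True: the contradiction holds
  print (designated q)                  -- False: yet q does not follow from it
```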

In view of the vast proliferation of alternative formalisms of all sorts since the mid-20th century, it may very well be inappropriate to presume that we will ever get back to one formalism to rule them all. I do expect that homotopy type theory or something like it will eventually come to dominate work on mathematical foundations and related aspects of computer science (and everything else that falls under Hegelian Understanding, taken as a positive moment in the larger process); but as hugely important as I think these are, I am also sympathetic to Brandom’s Kantian/Hegelian idea that considerations of normativity form an outer frame around everything else, as well as to the Aristotelian view that considerations of normativity tend to resist formalization.

On the formal side, it seems it is not possible to synchronically reconcile HoTT with paraconsistency, which would seem to be a problem. (At the opposite, simple end of the scale, my other favorite logical mechanism — Aristotelian syllogism interpreted as function composition — apparently can be shown to have a paraconsistency property, since it syntactically constrains conclusions to be semantically relevant to the premises.)
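The composition reading is easy to make literal (a minimal Haskell sketch of my own): if “All A are B” is read as a function from A to B, the Barbara syllogism is just function composition, and the types themselves keep the conclusion relevant to the premises.

```haskell
-- Dummy carrier types for the three terms of the syllogism.
newtype Human  = Human String
newtype Mammal = Mammal Human
newtype Animal = Animal Mammal

-- "All humans are mammals" and "all mammals are animals" as functions.
allHumansAreMammals :: Human -> Mammal
allHumansAreMammals = Mammal

allMammalsAreAnimals :: Mammal -> Animal
allMammalsAreAnimals = Animal

-- Barbara: therefore all humans are animals, by composition.
allHumansAreAnimals :: Human -> Animal
allHumansAreAnimals = allMammalsAreAnimals . allHumansAreMammals
```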

Diachronically, though, perhaps we could paraconsistently evolve from one synchronically non-evil, HoTT-expressible view of the world to a dialectically better one, while the synchronic/diachronic distinction could save us from a conflict of requirements between the respective logics.

I think the same logical structure needed to wrap a paraconsistent recollective genealogy around a formal development would also account for iterative development of HoTT-expressible formal specifications, where each iteration would be internally consistent, but assumptions or requirements may change between iterations.
