Aristotle flourished before the great flowering of Greek mathematics that gave us Euclid, Ptolemy, Apollonius, and Aristarchus. In his day, mathematics amounted to just arithmetic and simple geometry. In spite of the famous Pythagorean theorem that the square constructed from the hypotenuse of a right triangle is equal in area to the sum of the squares constructed from the other two sides, the historic reality of the Pythagorean movement had more to do with number mysticism, other superstitions, and curious injunctions like “don’t eat beans” than it did with real mathematics.

I think Aristotle was entirely right to conclude that arithmetic and simple geometry were of little use for explaining change in the natural world. I’ve characterized his physics as grounded in a kind of semantic inquiry that Aristotle pioneered. We are not used to thinking about science this way, as fundamentally involved with a very human inquiry about the meaning of experience in life, rather than predictive calculation. For Aristotle, the gap between natural science and thoughtful reflection about ordinary experience was much smaller than it is for us.

Aristotle invented the notion of cause as a semantic tool for expressing the reasons why changes occur. Aristotle’s notion is far more abstract than the metaphor of impulse or something pushing on something else that guided early modern mechanism. Even though the notion of cause was originally developed in a text included in Aristotle’s Physics, the “semantic” grounding of Aristotelian physics places it closer to logic than to modern physical inquiries.

I think the discussion of the kinds of causes could equally well have been grouped among his “logical” works. In fact, the form in which we have Aristotle’s works today is the result of the efforts of multiple ancient editors, who sometimes stitched together separate manuscripts, so there is room for a legitimate question whether the discussion of causes was originally a separate treatise. We tend to assume that there must be something inherently “physical” about the discussion of causes, but this is ultimately due to a circular argument from the fact that the more detailed version of it came down to us as part of the Physics (there is another, briefer one that came down to us as part of the Metaphysics).

Since Hume, and especially since the later 19th century, many authors have debated the role of causes in science. Bertrand Russell argued in the early 20th century that modern science does not in fact depend on what I have called the modern notion of cause.

More recently, Robert Brandom has argued that the purpose of logic is “to make explicit the inferential relations that articulate the semantic contents of the concepts expressed by the use of ordinary, nonlogical vocabulary”. I see Aristotelian causes in this light.

I want to recommend a return to a notion of causes in general as explanatory reasons rather than things that exert force. This can include all the mathematics used in modern science, as well as a broader range of reasons relevant to life. (See also Aristotelian Causes; Mechanical Metaphors; Causes: Real, Heuristic?; Effective vs “Driving”; Secondary Causes.)

Figurative Synthesis

I wanted to extract a few more key points from Beatrice Longuenesse’s landmark study Kant and the Capacity to Judge. She strongly emphasizes that judgment for Kant refers to a complex activity, not a simple reaching of conclusions. She especially stresses the role of a capacity to judge that precedes any particular judgment and is grounded in a synthesis of imagination. (See Capacity to Judge; Imagination: Aristotle, Kant; Kantian Synthesis.)

At issue here is the very capacity for discursive thought, as well as “the manner in which things are given to us” (p. 225, emphasis in original), which for Kant involves what he called intuition. (See also Beauty and Discursivity.)

Through careful textual analysis, Longuenesse argues that Kant’s claim to derive logical categories from forms of judgment makes far more sense than most previous commentators had recognized. For Kant, she argues, the “forms of judgment” are not just logical abstractions but essential cognitive acts that reflect “universal rules of discursive thought” (p. 5).

She recalls Kant’s insistence that the early modern tradition was wrong to take categorical judgments (simple predications like “A is B”) as the model for judgments in general. For Kant, hypothetical and disjunctive judgments (“if A then B” and “not both A and B”, respectively) are more primitive. These correspond to the judgments of material consequence and material incompatibility that Brandom argues form the basis of real-world reasoning.

Another distinctive Kantian thesis is that space and time are neither objective realities nor discursive concepts that we apply. Rather, they are intuitions and necessary forms of all sensibility. Kantian intuitions are produced by the synthesis of imagination according to definite rules.

“[I]ntuition is a species of cognition (Erkenntnis), that is, a conscious representation related to an object. As such it is distinguished from mere sensation, which is a mere state of the subject, by itself unrelated to any object…. One might say that, in intuition, the object is represented even if it is not recognized (under a concept).” (pp. 219-220, emphasis in original).

Before we apply any concepts or judgments, “Representational receptivity, the capacity to process affections into sensations (conscious representations), must also be able to present these sensations in an intuition of space and an intuition of time. This occurs when the affection from outside is the occasion for the affection from inside — the figurative synthesis. The form of the receptive capacity is thus a merely potential form, a form that is actualized only by the figurative synthesis” (p. 221, emphasis in original).

“[A]ccording to Locke, in this receptivity to its own acts the mind mirrors itself, just as in sensation it mirrors outer objects…. Kant shares with Locke the conception of inner sense as receptivity, but he no longer considers the mind as a mirror, either in relation to itself or in relation to objects…. Just as the thing in itself that affects me from outside is forever unknowable to me, I who affect myself from within by my own representative act am forever unknowable to me” (p. 239, emphasis added).

The point that the mind is not a mirror — either of itself or of the world — is extremely important. The mirror analogy Kant is rejecting is a product of early modern representationalism. We can still have well-founded beliefs about things of which we have no knowledge in a strict sense.

“Kant’s explanation is roughly this: our receptivity is constituted in such a way that objects are intuited as outer objects only in the form of space. But the form of space is itself intuited only insofar as an act, by which the ‘manifold of a given cognition is brought to the objective unity of apperception’, affects inner sense. Thanks to this act the manifold becomes consciously perceived, and this occurs only in the form of time” (p. 240, emphasis in original).

She develops Kant’s idea that mathematics is grounded in this kind of intuition, ultimately derived from the conditions governing imaginative synthesis. In particular, for Kant our apprehensions of unities and any kind of identification of units are consequences of imaginative synthesis.

“Extension and figure belong to the ‘pure intuition’ of space, which is ‘that in which the manifold of appearances can be ordered’, that is, that by limitation of which the extension and figure of a given object are delineated. Therefore, space and time provide the form of appearances only insofar as they are themselves an intuition: a pure intuition, that is, an intuition preceding and conditioning all empirical intuition; and an undivided intuition, that is, an intuition that is presupposed by other intuitions rather than resulting from their combinations” (p. 219, emphasis in original).

“According to Locke, the idea of unity naturally accompanies every object of our senses, and the idea of number arises from repeating the idea of unity and associating a sign with each collection thus generated by addition of units…. But for Kant, the idea (the concept) of a unit is not given with each sensory object. It presupposes an act of constituting a homogeneous multiplicity…. Thus the idea of number is not the idea of a collection of given units to which we associate a sign, but the reflected representation of a rule for synthesis, that is, for the act of constituting a homogeneous multiplicity. When such an act is presented a priori in intuition, a concept of number is constructed.” (p. 260, emphasis in original).

“Mathematics has no principles in the absolute sense required by reason. Axioms are not universal propositions cognized by means of pure concepts. They may be universally and apodeictically true, but their truth is based on the pure intuition of space, not derived from pure concepts according to the principle of contradiction” (p. 287).

Incidentally, Longuenesse thinks it does not follow from Kant’s account that space is necessarily Euclidean, as many commentators have believed and Kant himself suggested.

Whitehead: Process, Events

The British-born mathematician and philosopher Alfred North Whitehead (1861–1947) was profoundly concerned with the inter-relatedness of things. His later “philosophy of organism” inspired a movement of so-called “process theology”.

Whitehead was one of the inventors of universal algebra, which extends algebraic principles to symbolic representations of things that are not numbers. He collaborated with Bertrand Russell on the famous Principia Mathematica (1910, 1912, 1913), which sought to ground all of mathematics in the new mathematical logic, but was less attached than Russell to the goal of reducing math to logic.

He did work in electrodynamics and the theory of relativity, emphasizing a holistic approach and the nonlocal character of electromagnetic phenomena. Counter to the spirit of the time, he developed a philosophy of science that aimed to be faithful to our intuitions of the interconnectedness of nature. He characterized mathematics as the abstract study of patterns of connectedness. In Science and the Modern World (1926), rejecting the world views of Newton and Hume as understood by the logical empiricists, he developed alternatives to then-dominant atomistic causal reductionism and sensationalist empiricism. Eventually, he turned to what he and others called metaphysics.

His Process and Reality (1929) is a highly technical work that is full of interesting insights and remarks. It aims to present a logically coherent system that radicalizes the work of John Locke in particular, but also that of Descartes, Spinoza, and Leibniz. As with many systematic works, however, it doesn’t engage in depth with the work of other philosophers.

Whitehead’s radicalization involves, among other things, a systematic rejection of mind-body dualism; of representationalism; of metaphysical applications of the subject-predicate distinction; and of Locke’s distinction between “primary” (mathematical) and “secondary” (nonmathematical) qualities. Plato and Aristotle both get positive mention. Whitehead thoroughly repudiates the sensationalist direction in which Hume took Locke’s work; aims deliberately to be “pre-Kantian”; and seems to utterly ignore Hegel, though he gives positive mention to the “absolute idealist” F. H. Bradley.

He wants to promote a thoroughgoing causal realism and to avoid any subjectivism, while eventually taking subjective factors into account. He wants to reinterpret “stubborn fact” on a coherentist basis. He is impressed by the work of Bergson, and of the pragmatists William James and John Dewey.

For Whitehead, “experience” encompasses everything, but he gives this an unusual meaning. Experience need not involve consciousness, sensation, or thought. He stresses the realist side of Locke, and wants to apply some of Locke’s analysis of the combination of ideas to realities in general.

He says that the world consists fundamentally of “actual entities” or “actual occasions” or “concrescences”, which he compares to Descartes’ extended substances. However, he interprets Einstein’s theory of relativity as implying that substances mutually contain one another, a bit like the monads in Leibniz.

For Whitehead, every actual entity has a kind of self-determination, which is intended to explain both human freedom and quantum indeterminacy. On the other hand, he also says God is the source of novelty in the universe. Whitehead recognizes what he calls eternal objects, which he compares to Platonic ideas, and identifies with potentiality.

Compared to the Aristotelian notions of actuality and potentiality I have been developing here, his use of actuality and potentiality seems rather thin. Actuality is just factuality viewed in terms of the connections of things, and potentiality, consisting in eternal objects, amounts to a kind of abstract possibility. His notion of causality seems to be a relatively standard modern efficient causality, modified only by his emphasis on connections between things and his idea of the self-determination of actual entities. His philosophy of science aims to be value-free, although he allows a place for values in his metaphysics.

According to Whitehead, perception has two distinct modes — that of presentational immediacy, and that of causal efficacy. Humean sensationalism, as codified by early 20th century theories of “sense data”, tries to reduce everything to presentational immediacy, but it is our intuitions of causal efficacy that connect things together into the medium-sized wholes recognized by common sense. As far as it goes, I can only applaud this move away from presentational immediacy, though I have also tried to read Hume in a less reductionist way. (I also want to go further, beyond intuitions of efficient causality in the modern sense, to questions of the constitution of meaning and value that I think are more general.)

In his later works, he emphasizes a more comprehensive notion of feeling, which he sees as grounded in subjective valuations, glossed as having to do with how we take various eternal objects. Compared to the logical empiricism that dominated at the time, this is intriguing, but I want to take the more radically Aristotelian (and, I would argue, also Kantian) view that values or ends (which are themselves subjects of inquiry, not simply given) also ultimately drive the constitution of things we call objective. I also don’t see “metaphysics” as a separate domain that would support the consideration of values, over and above a “science” that would ostensibly be value-free.

Whitehead considered the scientific reductionism of his day to exemplify what he called the “fallacy of misplaced concreteness”. What I think he wanted to question by this was the idea that scientific abstractions are more real or more true than common-sense apprehensions of concrete things. I would phrase it a bit differently, but the outcome is the same. Abstractions can have great interpretive value, but they are things entirely produced by us that have value because they help us understand concrete things that are more independent of us.

Attempting to take into account the idea from quantum mechanics that reality is not only relational but also granular, he made what is to me the peculiar statement that “the ultimate metaphysical truth is atomism”. Whitehead is certainly not alone in this kind of usage; indeed, the standard modern physical notion of “atoms” allows them to have parts and internal structure. That concept is fine in itself, but “atom” is a terrible name for it, because “atom” literally means “without parts”. The word “atom” ought to denote something analogous to a point in geometry, lacking any internal features or properties whatsoever.

Be that as it may, Whitehead sees an analogy between the granularity of events in quantum mechanics and the “stream of consciousness” analyzed by William James. “Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide these into components, but as immediately given, they come totally or not at all” (Process and Reality, p. 68). To me, this is an expression not of atomism but of a kind of irreducibility of medium-sized things.

Anyway, Whitehead’s “atomic” things are events. Larger events are composed of smaller events, but he wants to say there is such a thing as a minimal event, which still may have internal complexity, and to identify this with his notion of actual occasion or actual entity.

I like the identification of “entities” with occasions. For Whitehead, these are a sort of what I call “medium-sized” chunks of extension in space-time. Whitehead’s minimal events are nonpunctual.

Freed of its scholastic rigidifications, this is close to what the Aristotelian notion of “primary substance” was supposed to be. I think of the latter as a handle for a bundle of adverbial characterizations that has a kind of persistence — or better, resilience — in the face of change. Only as a bundle does it have this kind of resilience.

Although — consistent with the kind of grounding in scientific realism he is still aiming at — Whitehead emphasizes the extensional character of actual occasions, they implicitly incorporate a good deal of intensional (i.e., meaning-oriented, as distinguished from mathematical-physical) character as well. Following Brandom’s reading of Kant on the primacy of practical reason, I think it is better to explain extensional properties in terms of intensional ones, rather than vice versa. But I fully agree with Whitehead that “how an actual entity becomes constitutes what that actual entity is” (p. 23, emphasis in original), and I think Aristotle and Hegel would, too.

According to the Stanford Encyclopedia of Philosophy, Whitehead’s work was attractive to theologians especially because it offered an alternative to the traditional notion of an omnipotent God creating everything from nothing. Whitehead argued that the Christian Gospel emphasizes the “tenderness” of God, rather than dominion and power: “not… the ruling Caesar, or the ruthless moralist, or the unmoved mover. It dwells upon the tender elements in the world, which slowly and in quietness operate by love” (p. 343). “The purpose of God is the attainment of value in the world” (Whitehead, Religion in the Making, p. 100). God for Whitehead is a gentle persuader, not a ruler.

(I would not put unmoved moving in anywhere near the same bucket as ruling omnipotence. Unmoved moving in Aristotle is attraction or inspiration by a pure end, where all the motion occurs in the moved thing. It is not some kind of ruling force that drives things.)

Logic for People

Leading programming language theorist Robert Harper refers to so-called constructive or intuitionistic logic as “logic as if people mattered”. There is a fascinating convergence of ideas here. In the early 20th century, Dutch mathematician L. E. J. Brouwer developed a philosophy of mathematics called intuitionism. He emphasized that mathematics is a human activity, and held that every proof step should involve actual evidence discernible to a human. By contrast, mathematical Platonists hold that mathematical objects exist independently of any thought; formalists hold that mathematics is a meaningless game based on following rules; and logicists argue that mathematics is reducible to formal logic.

For Brouwer, a mathematical theorem is true if and only if we have a proof of it that we can exhibit, and each step of that proof can also be exhibited. In the later 19th century, many new results about infinity — and infinities of infinities — had been proved by what came to be called “classical” means, using proof by contradiction and the law of excluded middle. But from the time of Euclid, mathematicians have always regarded reproducible constructions as a better kind of proof. The law of excluded middle is a provable theorem in any finite context. When the law of excluded middle applies, you can conclude that if something is not false it must be true, and vice versa. But it is not possible to construct any infinite object.
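To make the point about finite contexts concrete, here is a minimal sketch (my own illustration in Python, not anything from Brouwer): an existential claim over a finite domain can be decided by exhaustive search, either exhibiting a witness or verifying every case, which is just the kind of evidence a constructivist asks for. No such terminating procedure exists for an arbitrary infinite domain.

```python
from typing import Callable, Iterable, Optional, Tuple

def exists_witness(domain: Iterable[int],
                   predicate: Callable[[int], bool]) -> Tuple[bool, Optional[int]]:
    """Constructively decide 'some x in domain satisfies predicate'.

    Returns (True, witness) with an exhibitable witness, or (False, None)
    after checking every case. The search terminates only because the
    domain is finite; this is why excluded middle is unproblematic there.
    """
    for x in domain:
        if predicate(x):
            return (True, x)
    return (False, None)

# Either we produce a witness...
found, w = exists_witness(range(100), lambda n: n * n == 49)
# ...or we have verified all cases:
empty, _ = exists_witness(range(100), lambda n: n * n == 50)
```

Over an infinite domain, by contrast, a failed search never finishes, so “not refuted” cannot be converted into “true” by inspection.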

The only infinity we actually experience is what Aristotle called “potential” infinity. We can, say, count a star and another and another, and continue as long as we like, but no actually infinite number or magnitude or thing is ever available for inspection. Aristotle famously defended the law of excluded middle, but in practice only applied it to finite cases.

In mathematics there are conjectures that are not known to be true or false. Brouwer would say they are neither true nor false until they are proved or disproved in a humanly verifiable way.

The fascinating convergence is that Brouwer’s humanly verifiable proofs turn out also to exactly characterize the part of mathematics that is computable, in the sense in which computer scientists use that term. Notwithstanding lingering 20th century prejudices, intuitionistic math actually turns out to be a perfect fit for computer science. I use this in my day job.
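One face of this convergence is the Curry–Howard correspondence, under which constructive proofs just are programs and propositions are types. As a toy illustration of my own (Python type hints standing in for a real proof language), a total function of the following type can be read as a constructive proof that “A and B” entails “B and A”:

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(pair: Tuple[A, B]) -> Tuple[B, A]:
    """Read as a proof: from evidence for 'A and B' (a pair),
    construct evidence for 'B and A' (the swapped pair)."""
    a, b = pair
    return (b, a)

# The 'proof' is an ordinary runnable program:
result = swap((7, "seven"))
```

The point is not the triviality of the example but that the proof carries computational content: it does something when run, which is exactly what classical proofs by contradiction in general do not.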

I am especially intrigued by what is called intuitionistic type theory, developed by Swedish mathematician-philosopher Per Martin-Löf. This is offered simultaneously as a foundation for mathematics, a higher-order intuitionistic logic, and a programming language. One might say it is concerned with explaining ultimate bases for abstraction and generalization, without any presuppositions. One of its distinctive features is that it uses no axioms, only inference rules. Truth is something emergent, rather than something presupposed. Type theory has deep connections with category theory, another truly marvelous area of abstract mathematics, concerned with how different kinds of things map to one another.
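To give a flavor of “no axioms, only inference rules”: in a system of Martin-Löf’s kind, even implication (read as the function type) is not posited by an axiom but governed by rules for introducing and eliminating it. Informally, and with contexts suppressed for readability, a standard presentation looks like this:

```latex
% Introduction and elimination rules for the function type A -> B,
% written informally (typing contexts omitted):
\[
\frac{x : A \;\vdash\; b(x) : B}
     {\lambda x.\, b(x) \;:\; A \to B}
\;(\to\text{-intro})
\qquad
\frac{f : A \to B \qquad a : A}
     {f(a) \;:\; B}
\;(\to\text{-elim})
\]
```

Truth then emerges from what the rules allow us to construct, rather than being stipulated at the outset.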

What especially fascinates me about this work are its implications for what logic actually is. On the one hand, it puts math before mathematical logic — rather than after it, as in the classic early 20th century program of Russell and Whitehead — and on the other, it provides opportunities to reconnect with logic in the different and broader, less formal senses of Aristotle and Kant, as still having something to say to us today.

Homotopy type theory (HoTT) is a leading-edge development that combines intuitionistic type theory with homotopy theory, which explores higher-order paths through topological spaces. Here my ignorance is vast, but it seems tantalizingly close to a grand unification of constructive principles with Cantor’s infinities of infinities. My interest is especially in what it says about the notion of identity, basically vindicating Leibniz’ thesis that what is identical is equivalent to what is practically indistinguishable. This is reflected in mathematician Vladimir Voevodsky’s emblematic axiom of univalence, “equivalence is equivalent to equality”, which legitimizes much actual mathematical practice.

So anyway, Robert Harper is working on a variant of this that actually works computationally, and uses some kind of more specific mapping through n-dimensional cubes to make univalence into a provable theorem. At the cost of some mathematical elegance, this avoids the need for the univalence axiom, preserving Martin-Löf’s goal of avoiding dependence on any axioms. But again — finally getting to the point of this post — in a 2018 lecture, Harper says his current interest is in a type theory that is in the first instance computational rather than formal, and semantic rather than syntactic. Most people treat intuitionistic type theory as a theory that is both formal and syntactic. Harper recommends that we avoid strictly equating constructible types with formal propositions, arguing that types are more primitive than propositions, and semantics is more primitive than syntax.

Harper disavows any deep philosophy, but I find this idea of starting from a type theory and then treating it as first of all informal and semantic rather than formal and syntactic to be highly provocative. In real life, we experience types as accessibly evidenced semantic distinctions before they become posited syntactic ones. Types are first of all implicit specifications of real behavior, in terms of distinctions and entailments between things that are more primitive than identities of things.

Cartesian Metaphysics

For Descartes, according to Gueroult, “metaphysics” is the universal science or the system of science, and also a kind of introduction to more concrete studies. Here we are far from Aristotle and much closer, I think, to Duns Scotus. Without knowledge of God and oneself, Descartes says, it would never be possible to discover the principles of physics. Gueroult says that Descartes insists on an “incomprehensibility” of God that is neither unknowability nor irrationality but the “formal reason of the infinite” (Descartes selon l’ordre des raisons, p. 17). This again has a somewhat Scotist sound to my ear.

The infinitude of God puts God absolutely first, as the first truth that founds all others. Gueroult quotes Descartes saying, “It is a ‘blasphemy’ to say that the truth of something precedes God’s knowledge of it…, because the existence of God is the first and the most eternal of all the truths that can be, and the truth from which all the others proceed” (ibid; my translation).

Descartes says that God “freely creates” eternal truths. I have no idea what creation of eternal truths could even possibly mean, though such a notion seems to be at least implicit in the teaching of Duns Scotus. To be eternal is to have no before and after. Therefore, it seems to me, all eternal things must be co-eternal. This point of view accommodates part of Descartes’ thesis, insofar as if all eternal things are co-eternal, then an eternal truth would not “precede” God’s knowledge of it. In broadly neoplatonic terms, eternal truths could plausibly be regarded as aspects of the “nature” of God. I can also grasp the idea of truths following logically from the “nature” of God, but I suspect Descartes would either follow Scotus in arguing that God’s infinite power is not a “nature”, or follow Aquinas in arguing that God is pure existence and has no other “nature”. I don’t see how anything more specific can directly follow from either infinite power or pure existence.

For Descartes, though, God’s omnipotence “excludes the possibility of error” and “alone founds the objective validity of my intellectual faculty” (ibid). Descartes aims at “a total system of certain knowledge, at the same time metaphysical and scientific, … entirely immanent to mathematical certitude enveloped in the clear and distinct intellect, … in its requirement of absolute rigor. This totality of the system is in no way that of an encyclopedia of material knowledge effectively acquired, but the fundamental unity of the first principles from which follow all possible certain knowledge” (p. 18). Descartes’ doctrine is for him “a single block of certainty” (p. 19) that would be falsified by adding or removing any detail. All this seems way too strong to me.

Gueroult points out that Descartes wants to contrast an “order of reasons” with an “order of material”, as being more principled. However, unlike geometry, the total system of metaphysical reasoning for Descartes has “psychological” as well as logical requirements. Gueroult says it is for this reason that the Meditations best represent Descartes’ paradigm of rigorous analytic demonstration.

Granted that there is a clear “psychological” aspect to the Meditations, at this point I’m unsure what it means to relate that to the claimed rigor of the system. Moreover, adding a “psychological” dimension to what was said before about mathematical reasoning affects the very meaning of the claim of rigor. I think I understand what mathematical rigor is. I do not understand what “psychological” rigor would be in this context, but I suspect it may be wrapped up with what I would call extraordinary presumptions of absolute self-transparency and immediate reflexivity.

Gueroult on Descartes

Having been greatly impressed by Martial Gueroult’s two extant volumes on Spinoza’s Ethics, I wanted to challenge myself to get some sense of the detail of his magisterial Descartes selon l’ordre des raisons (1968). Sometimes called a “structuralist” in the history of philosophy, Gueroult systematically developed the fine grain of argument in Spinoza’s demonstrations, and here he does the same for Descartes’ Meditations.

Beginning with a distinction between understanding and explanation, Gueroult announces his intention to subordinate the former to the latter (p. 9). Here “understanding” is a sort of intuitive or imaginative grasp of the whole, whereas “explanation” develops the details in their interrelation. I am reminded of Paul Ricoeur’s great theme of the value of the “long detour”.

Gueroult says Descartes viewed “isolated thoughts” with a sort of horror. This is already interesting. I have long puzzled over Brandom’s treatment of Descartes as a proto-inferentialist, when Descartes has seemed to me on the contrary like an arch-representationalist who plucked “truths” out of thin air. Both Gueroult and Brandom take Descartes’ “method” very seriously. Brandom’s work previously set me on a path that led me to radically change my views of Kant and Hegel. Perhaps I’ll have to revise or modulate some of my judgments of Descartes as well.

For Gueroult, it is objective structures of argument that distinguish philosophy from poetry, spiritual or mystical elevation, general scientific theory, or mere metaphysical opinions. He says that even while “excommunicating” the history of philosophy, Descartes nonetheless formulated a good principle of reading, rejecting eclectic tendencies to pull out this or that idea from a great author, in favor of a systematic approach. Descartes is quoted saying the “precious fruit” must come from “the entire body of the work” (p. 11). This is an important complement to his one-sided insistence elsewhere on beginning with what is simple. However, Descartes is also quoted insisting that all conflicts of interpretation are due to shallow eclecticism and deficiency of method, and that wherever there is such a conflict, one side must certainly be wrong (pp. 13-14).

This insistence on univocal interpretation is one of my big issues with Descartes. It works well for things like geometry, but much less well for sorting out arguments about power or potentiality, for instance. Pushing univocal interpretation as far as it can go can be a very valuable exercise, but as soon as we leave pure mathematics, it also shows its limits. I think that while mathematical necessity can be understood as something we “ought” to recognize for a multitude of reasons, sound ethical judgment must in principle reach beyond what can be expressed with certainty by formal equations. Much as I admire a good mathematical development, I therefore think ethics is more fundamental for us humans than mathematics, and philosophy is more ethical than mathematical.

According to Gueroult, the seminal idea guiding all of Descartes’ work is that human knowledge has unavoidable limits due to the limits of thought, but within those limits it is capable of perfect certainty (p. 15). For Descartes, we do not know thought by things, but we know things by thought. As a matter of principle, we should doubt everything that does not come from the certainty of thought. We are thus offered a stark division between that which is supposed to be certain beyond question, and that which is vain and useless. I think this results both in a treatment of too many things as certain, and in a premature dismissal of aspects of human reality that are uncertain, but nonetheless have real value.

I agree that mathematical reasoning is capable of (hypothetical) certainty, but I contend that we humans live mainly on middle ground that is neither certainty nor mere vanity.

Infinity, Finitude

Here is another area where I find myself with mixed sympathies.

Plato seems to have regarded infinity — or what he called the Unlimited — as something bad. Aristotle argued that infinity exists only in potentiality and not in actuality, a view I find highly attractive. I think I encounter a world of seemingly infinite structure but only finite actualization.

Some time in the later Hellenistic period, notions of a radical spiritual infinity seem to have appeared in the West for the first time, associated with the rise of monotheism and the various trends now commonly called Gnostic. This kind of intensive rather than extensive infinity sometimes seems to be folded back on itself, evoking infinities of infinities and more. The most sophisticated development of a positive theological infinite in later Western antiquity occurred in the more religious rethinking of Greek philosophy by neoplatonists like Plotinus, Proclus, and Damascius.

Around the turn of the 14th century CE in Latin Europe, Duns Scotus developed an influential theology that made infinity the principal attribute of God, in contrast to the pure Being favored by Aquinas. Giordano Bruno, burned at the stake in 1600, was a bombastic early defender of Copernican astronomy and notorious critic of established religion who espoused a curious hybrid of Lucretian atomistic materialism, neoplatonism, and magic. He proclaimed the physical existence of an infinity of worlds like our Earth.

Mathematical applications of infinity are a later development, mainly associated with Newton and Leibniz. Leibniz in particular enthusiastically endorsed a speculative reversal of Aristotle’s negative verdict on “actual infinity”. Nineteenth-century mathematicians were embarrassed by this, and developed more rigorous reformulations of the calculus based on limits rather than actual infinity. The limit-based formulation is what is generally taught today. Cantor seemingly went in the opposite direction, developing infinities of infinities in pure mathematics. I believe there has been another reformulation of analysis using category theory that claims to equal the rigor of 19th-century analysis while recovering an approach closer to that of Leibniz. This might be taken to refute an argument against infinity based solely on lack of rigor by the standards of contemporary professional mathematicians. One might accept this and still prefer an Aristotelian interpretation of infinity as not applicable to actual things, though it is important to recall that for Aristotle, the actual is not all there is.

The philosophy of Spinoza and even more so Leibniz is permeated with a positive view of the infinite — both mathematical and theological — that was later also taken up, in a more measured way, by Hegel. Hegel distinguished between a “bad” infinite, which seems to have been an “actual” mathematical infinite having the form of an infinite regress, and a “good” infinite that I would gloss as having to do with the interpretation of life and all within it. Nietzsche’s Eternal Return seems to involve an infinite folding back on itself of a world of finite beings. (See also Bounty of Nature; Reason, Nature; Echoes of the Deed; Poetry and Mathematics.)

On the side of the finite, I am tremendously impressed with Aristotle’s affirmative development of what also in a more Kantian style might be termed a multi-faceted “dignity” of finite beings. While infinity may be inspiring or even intoxicating, I think we should be wary of the possibility that immoderate embrace of infinity may lead — even if unwittingly — to a devaluation of finite being, and ultimately of life. I also believe notions of infinite or unconditional power (see Strong Omnipotence; Occasionalism; Arbitrariness, Inflation) are prone to abuse. In any case, ethics is mainly concerned with finite things.

Poetry and Mathematics

Philosophy is neither poetry nor mathematics, but a discursive development.  Poetry may give us visionary symbolism or language-on-language texturings that deautomate perception.  Mathematics offers a paradigm of exactitude, and develops many beautiful structures.  But philosophy is the home of ethics, dialogue, and interpretation.  It is — dare I say it — the home of the human.

Poetry and mathematics each in their own way show us an other-than-human beauty that we as humans can be inspired by.  Ethics on the other hand is the specifically human beauty, the beauty of creatures that can talk and share meaning with one another.


The last post suggests another nuance, having to do with how “total” and “totality” are said in many ways. This is particularly sensitive, because these terms have both genuinely innocent senses and other apparently innocent senses that turn out to implicitly induce evil in the form of a metaphorically “totalitarian” attitude.

Aiming for completeness as a goal is often a good thing.

Errors of over-optimism about how far we have come in achieving such goals form a spectrum: at one end they are relatively benign, but they shade into a less innocent over-reach, and eventually into claims that are obviously arrogant, or even “totalitarian”.

Actual achievements of completeness are always limited in scope. They are also often somewhat fragile.

I’ll mention the following case mainly for its metaphorical value. Mathematical concepts of completeness are always in some sense domain-specific, and precisely defined. In particular, it is possible to design systems of domain-specific classification that are complete with respect to current “knowledge” or some definite body of such “knowledge”, where knowledge is taken not in a strong philosophical sense, but in some practical sense adequate for certain “real world” operations. The key to using this kind of mathematically complete classification in the real world is including a fallback case for anything that does not fit within the current scheme. Then optionally, the scheme itself can be updated. In less formal contexts, similar strategies can be applied.
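The fallback strategy described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the categories and keywords are invented for the example): the scheme is “complete” relative to a current body of working knowledge only because a catch-all case absorbs whatever does not fit, and items landing there can later prompt an update of the scheme itself.

```python
# Hypothetical classification scheme, complete with respect to a current,
# practical body of "knowledge" (the keyword lists below), plus a fallback.

KNOWN_CATEGORIES = {
    "invoice": ("bill", "invoice", "payment due"),
    "receipt": ("receipt", "paid in full"),
}

def classify(text: str) -> str:
    """Return a known category, or the fallback 'unclassified'."""
    lowered = text.lower()
    for category, keywords in KNOWN_CATEGORIES.items():
        if any(kw in lowered for kw in keywords):
            return category
    # Fallback case: every input gets *some* classification, so the scheme
    # stays total even for novel inputs. Accumulated fallback cases can
    # then optionally drive a revision of KNOWN_CATEGORIES itself.
    return "unclassified"
```

The completeness here is precisely of the limited, fragile kind mentioned above: it holds only relative to the current scheme plus its fallback, not absolutely.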

There are also limited-scope, somewhat fragile practical achievements of completeness that are neither mathematical nor particularly ethical.

When it comes to ethics, completeness or totality is only something for which we should strive in certain contexts. About this we should be modest and careful.

Different yet again is the arguably trivial “totality” of preconceived wholes like individuals and societies. This is in a way opposite to the mathematical case, which worked by precise definition; here, any definition is implicitly suspended in favor of an assumed reference.

Another kind of implicit whole is a judgment resulting from deliberation. At some point, response to the world dictates that we cut short our deliberations, which are in principle indefinitely extensible, and make a practical judgment call.

Form as a Unique Thing

Ever since Plato talked about Forms, philosophers have debated the status of so-called abstract entities. To my mind, referring to them as “entities” is already prejudicial. I like to read Plato himself in a way that minimizes existence claims, and instead focuses on what I think of as claims about importance. Importance as a criterion is practical in a Kantian sense — i.e., ultimately concerned with what we should do. As Aristotle might remind us, what really matters is getting the specific content of our abstractions right for each case, not the generic ontological status of those abstractions.

One of Plato’s main messages, still very relevant today, is that what he called Form is important. A big part of what makes Form important is that it is good to think with, and a key aspect of what makes Plato’s version good to think with is what logically follows from its characterization as something unique in a given case. (Aristotle’s version of form has different, more mixed strengths, including both a place for uniqueness and a place for polyvocality or multiple perspectives, making it simultaneously more supple and more difficult to formalize.) In principle, such uniqueness of things that nonetheless also have generality makes it possible to reason to conditionally necessary outcomes in a constructive way, i.e., without extra assumptions, as a geometer might. Necessity here just means that in the context of some given construction, only one result of a given type is possible. (This is actually already stronger than the sense Aristotle gave to “necessity”. Aristotle pragmatically allowed for defeasible empirical judgments that something “necessarily” follows from something else, whenever there is no known counter-example.)

In the early 20th century, Bertrand Russell developed a very influential theory of definite descriptions, which sparked another century-long debate. Among other things (here embracing an old principle of interpretation common in Latin scholastic logic), he analyzed definite descriptions as always implying existence claims.

British philosopher David Corfield argues for a new approach to formalizing definite descriptions that does not require existence claims or other assumptions, but only a kind of logical uniqueness of the types of the identity criteria of things. His book Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, to which I recently devoted a very preliminary article, has significant new things to say about this sort of issue. Corfield argues inter alia that many and perhaps even all perceived limits of formalization are actually due to limits of the particular formalisms of first-order classical logic and set theory, which dominated in the 20th century. He thinks homotopy type theory (HoTT) has much to offer for a more adequate formal analysis of natural language, as well as in many other areas. Corfield also notes that most linguists already use some variant of lambda calculus (closer to HoTT), rather than first-order logic.

Using first-order logic to formalize natural language requires adding many explicit assumptions — including assumptions that various things “exist”. Corfield notes that ordinary language philosophers have questioned whether it is reasonable to suppose that so many extra assumptions are routinely involved in natural language use, and from there reached pessimistic conclusions about formalization. The vastly more expressive HoTT, on the other hand, allows formal representations to be built without additional assumptions in the representation. All context relevant to an inference can be expressed in terms of types. (This does not mean no assumptions are involved in the use of a representation, but rather only that the formal representation does not contain any explicit assumptions, as by contrast it necessarily would with first-order logic.)

A main reason for the major difference between first-order logic and HoTT with respect to assumptions is that first-order logic applies universal quantifications unconditionally (i.e., for all x, with x free or completely undefined), and then has to explicitly add assumptions to recover specificity and context. By contrast, type theories like HoTT apply quantifications only to delimited types, and thus build in specificity and context from the ground up. Using HoTT requires closer attention to criteria for identities of things and kinds of things.
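The contrast can be made concrete in a short sketch. The following is illustrative Lean code (Lean’s type theory is a close relative of HoTT for this purpose), with all the named types and predicates invented for the example: in first-order logic one would write “for all x, if x is an owned dog then x barks”, packing the context into explicit premises over an undifferentiated domain, whereas in a type theory the quantifier is bounded by a delimited type from the start.

```lean
-- Hypothetical types and predicates, assumed only for illustration.
axiom Dog : Type               -- a delimited type of dogs
axiom OwnedDog : Type          -- a narrower type whose inhabitants carry ownership
axiom toDog : OwnedDog → Dog   -- every owned dog is in particular a dog
axiom Barks : Dog → Prop

-- First-order style would quantify over a single bare domain and add
-- premises: ∀ x, Dog x ∧ Owned x → Barks x.
-- Type-theoretic style builds the context into the type being quantified over:
def allOwnedDogsBark : Prop := ∀ d : OwnedDog, Barks (toDog d)
```

Nothing here asserts that any owned dogs exist; the specificity lives in the types rather than in added existential or domain assumptions.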

Frege already had the idea that logical predicates are a kind of mathematical function. Mathematical functions are distinguished by invariantly returning a unique value for each given input. The truth functions used in classical logic are also a kind of mathematical function, but provide only minimal distinction into “true” and “false”. From a purely truth-functional point of view, all true propositions are equivalent, because we are only concerned with reference, and their only reference (as distinguished from Fregean sense) is to “true” as distinct from “false”. By contrast, contemporary type theories are grounded in inference rules, which are kinds of primitive function-like things that preserve many more distinctions.

In one section, Corfield discusses an HoTT-based inference rule for introduction of the definite article “the” in ordinary language, based on a property of many types called “contractibility” in HoTT. A contractible type is one that can be optionally taken as referring to a formally unique object that can be constructed in HoTT, and whose existence therefore does not need to be assumed. This should also apply at least to Platonic Forms, since for Plato one should always try to pick out the Form of something.
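Contractibility itself admits a compact formal statement. The sketch below uses Lean notation as an approximation (HoTT proper states this with a Sigma-type and its own identity type, so that “is contractible” is itself a structured type rather than a mere proposition): a type is contractible when it has a designated inhabitant to which every inhabitant is equal, and that inhabitant is what “the” can then pick out without a separate existence assumption.

```lean
-- Approximate rendering of HoTT's isContr: a center of contraction
-- together with a path from every element to that center.
def IsContr (A : Type) : Prop := ∃ center : A, ∀ a : A, a = center

-- The one-element type is contractible; "the" element of Unit is
-- thereby constructed, not assumed to exist.
example : IsContr Unit := ⟨(), fun _ => rfl⟩
```

On this reading, introducing “the F” is licensed exactly when the type of Fs is contractible in this sense.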

In HoTT, every variable has a type, and every type carries with it definite identity criteria, but the identity criteria for a given type may themselves have a type from anywhere in the HoTT hierarchy of type levels. In a given case, the type of the identity criteria for another type may be above the level of truth-functional propositions, like a set, groupoid, or higher groupoid; or below it, i.e., contractible to a unique object. This sort of contractibility into a single object might be taken as a contemporary formal criterion for a specification to behave like a Platonic Form, which seems to be an especially simple, bottom-level case, even simpler than a truth-valued “mere” proposition.

The HoTT hierarchy of type levels is synthetic and top-down rather than analytic and bottom-up, so everything that can be expressed on a lower level is also expressible on a higher level, but not necessarily vice versa. The lower levels represent technically “degenerate” — i.e., less general — cases, to which one cannot always “compile down”. This might also be taken to anachronistically explain why Aristotle and others were ultimately not satisfied with Platonic Forms as a general basis for explanation. Importantly, this bottom, “object identity” level does seem to be adequate to account for the identity criteria of mathematical objects as instances of mathematical structures, but not everything is explainable in terms of object identities, which are even less expressive than mere truth values.

Traditionally, mathematicians have used the definite article “the” to refer to things that have multiple characterizations that are invariantly equivalent, such as “the” structure of something, when the structure can be equivalently characterized in different ways. From a first-order point of view, this has been traditionally apologized for as an “abuse of language” that is not formally justified. HoTT provides formal justification for the implicit mathematical intuition underpinning this generally accepted practice, by providing the capability to construct a unique object that is the contractible type of the equivalent characterizations.

With this in hand, it seems we won’t need to make any claims about the existence of structures, because from this point of view — unlike, e.g., that of set theory — mathematical talk is always already about structures.

This has important consequences for talk about structuralism, at least in the mathematical case, and perhaps by analogy beyond that. Corfield argues that anything that has contractible identity criteria (including all mathematical objects) just is some structure. He quotes major HoTT contributor Steve Awodey as concluding “mathematical objects simply are structures. Could there be a stronger formulation of structuralism?”

Thus no ontology or theory of being in the traditional (historically Scotist and Wolffian) sense is required in order to support talk about structures (or, I would argue, Forms in Plato’s sense). (In computer science, “ontology” has been redefined as an articulation of some world or domain into particular kinds, sorts, or types, where what is important is the particular classification scheme practically employed, rather than theoretical claims of real existence that go beyond experience. At least at a very high level, this actually comes closer than traditional “metaphysical” ontology did to Aristotle’s original practice of higher-order interpretation of experience.)

Corfield does not discuss Brandom at length, but his book’s index has more references to Brandom than to any other named individual, including the leaders in the HoTT field. All references in the text are positive. Corfield strongly identifies with the inferentialist aspect of Brandom’s thought. He expresses optimism about HoTT representation of Brandomian material inferences, and about the richness of Brandom’s work for type-theoretic development.

Corfield is manifestly more formally oriented than Brandom, and his work thus takes a different direction that does not include Brandom’s strong emphasis on normativity, or on the fundamental role of what I would call reasonable value judgments within material inference. From what I take to be an Aristotelian point of view, I greatly value both the inferentialist part of Brandom that Corfield wants to build on, and the normative pragmatic part that he passes by. I think Brandom’s idea about the priority of normative pragmatics is extremely important; but with that proviso, I still find Corfield’s work on the formal side very exciting.

In a footnote, Corfield also directs attention to Paul Redding’s recommendation that analytic readers of Hegel take seriously Hegel’s use of Aristotelian “term logic”. This is not incompatible with a Kantian and Brandomian emphasis on the priority of integral judgments. As I have pointed out before, the individual terms combined or separated in canonical Aristotelian propositions are themselves interpretable as judgments.