Categorical “Evil”

If we are aiming at any kind of true unity of apperception, then in any given logical moment we should aim to reason in ways that are invariant under isomorphism. Over time our practical and theoretical reasoning may and will iteratively change, but synchronically we should aim to ensure that reasoning about equivalent things will be invariant within the scope of each iteration.

In higher mathematics, difficulties arise when one structure is represented by or in another structure that has a different associated notion of equivalence. This requires maintaining a careful distinction of levels. The expected consequence relation for the represented notion may not work well with the representation. Such failures of reasoning to be invariant under isomorphism are informally, half-jokingly referred to by practitioners of higher category theory as “evil”. This is a mathematical idea with a clear normative aspect and a very high relevance to philosophy.

The serious slogan implied by the half-joke is that evil should be avoided. More positively, a principle of equivalence-invariance has been articulated for this purpose. One version states that all grammatically correct properties of objects in a fixed category should be invariant under isomorphism. Another states that isomorphic structures should have the same structural properties. On the additional assumption that the only properties of objects we are concerned with are structural properties, this is said to be equivalent to the first.
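To make the principle concrete, here is a minimal sketch in Python (the two encodings are standard, but the code itself is only illustrative). The von Neumann and Zermelo encodings both yield isomorphic copies of the natural numbers, yet the set-membership question "is 1 an element of 3?" receives different answers under the two, so element-level membership between arbitrary naturals is not a structural property:

```python
# A minimal sketch of a property that fails to be invariant under
# isomorphism. Von Neumann encodes n as {0, 1, ..., n-1}; Zermelo
# encodes 0 as {} and n+1 as {n}. Both give isomorphic naturals.

def von_neumann(n):
    """von Neumann encoding: n = {0, 1, ..., n-1}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def zermelo(n):
    """Zermelo encoding: 0 = {}, n+1 = {n}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset([s])
    return s

# Membership is a set-level question, not a structural one:
print(von_neumann(1) in von_neumann(3))  # True: 1 ∈ 3 under von Neumann
print(zermelo(1) in zermelo(3))          # False: 1 ∉ 3 under Zermelo
```

The answers depend on the representation rather than on the natural numbers themselves; a genuinely structural property, invariant under isomorphism, could not come out differently.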

There are numerous examples of such “evil”, usually associated with careless use of equality (identity) between things of different sorts. A significant foundational one is that material set theories such as ZFC allow arbitrary sets to be putatively compared for equality, without providing any means to effect the comparison. Comparison of completely arbitrary things is of course not computable, so it cannot be implemented in any programming language. It is also said to violate equivalence invariance, which means that material set theories allow evil. The root of this evil is that such theories inappropriately privilege pre-given, arbitrary elements over definable structural properties. (This issue is another reason I think definition needs to be dialectically preserved or uplifted in our more sophisticated reflections, rather than relegated to the dustbin in favor of a sole emphasis on recollective genealogy. A concern to define structures and structural properties of things appears in this context as the determinate negation of the effective privileging of putatively pre-given elements over any and all rational considerations.) ZFC set theory offers a nice illustration of the more general evil of Cartesian-style bottom-up foundationalism.
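The computability point can be seen in any programming language. A small illustration in Python (the functions are my own toy examples): these two functions are extensionally equal on all integers, but no general algorithm can decide such equality, so the language can only compare them by reference, never by behavior:

```python
# Two functions that agree on every input, but that no general
# algorithm can prove equal. Python's == on functions falls back
# to identity (reference) comparison.

f = lambda x: x + x
g = lambda x: 2 * x

print(f == g)           # False: reference comparison, not extensional
print(f(21) == g(21))   # True: equal at any particular argument
print(all(f(n) == g(n) for n in range(100)))  # True, but only a finite check
```

The best we can do mechanically is check finitely many cases, which is precisely not a comparison of the arbitrary things themselves.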

The evil-generating supposition that utterly arbitrary things can be compared (and that we don’t need to care that we can’t even say how this would be accomplished) implicitly presupposes that all things whatsoever have a pre-given “Identity” that is independent of their structural properties, but nonetheless mysteriously contentful and somehow magically, immediately epistemically available as such. This is a mathematical version of the overly strong but still common notion of Identity that I and many others have been concerned to reject. Such bad notions of Identity are deeply involved with the ills of Mastery diagnosed by Hegel and Brandom.

We should not allow evil in foundations, so many leading mathematicians interested in foundations are now looking for an alternative to the 20th century default of ZFC. Some combination of dependent type theory for syntax with higher category theory for semantics seems most promising as an alternative. The recent development of homotopy type theory (HoTT) is perhaps the most vigorous candidate.

Another way to broadly characterize this mathematical “evil” is that it results from treating representation as prior to inference in the order of explanation, as Brandom might say, which means treating correspondence to something merely assumed as given as taking precedence over coherence of reasoning. This is a variant of what Sellars famously called the Myth of the Given. It is a philosophical evil as well as a mathematical one. Besides their intrinsic importance, these mathematical issues make more explicit some of the logical damage done by the Myth of the Given.

Another broad characterization has to do with mainstream 20th century privileging of classical logic over constructive logic, of first-order logic over higher-order logic, and of model theory over proof theory. Prior to the late 19th century, nearly all mathematics was constructive. Cantor’s development of transfinite mathematics was the main motivation for mathematicians to begin working in a nonconstructive style. Gödel’s proof that first-order logic is the richest logic for which every proposition that is true in all models is also provable was thought to make it better for foundational use. Logical completeness and even soundness are standardly defined in ways that privilege model theory, which is the formal theory of representation.

It is now known, however, that there are several ways of embedding and representing classical logic — with no loss of fidelity — on a constructive foundation, so the old claim that constructive logic was less powerful has been refuted. Going in the other direction, however, classical logic has no way of recovering the computability that is built into constructive logic once that computability has been given up, so it is increasingly recognized that a constructive logic provides the more flexible and comprehensive starting point. (Also, transfinite mathematics can reportedly now be given a constructive foundation under HoTT.)
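One such embedding is the Gödel–Gentzen negative translation, which maps each classical theorem to a constructively provable counterpart. Here is a sketch in Python, on a toy formula representation of my own devising (tuples standing in for a real abstract syntax tree):

```python
# Gödel–Gentzen negative translation of classical propositional logic
# into constructive (intuitionistic) logic, sketched on a toy AST:
#   ("atom", name), ("not", A), ("and", A, B), ("or", A, B), ("imp", A, B)

def neg(a):
    return ("not", a)

def translate(f):
    """Atoms get double-negated; A ∨ B becomes ¬(¬A ∧ ¬B)."""
    tag = f[0]
    if tag == "atom":
        return neg(neg(f))
    if tag == "not":
        return neg(translate(f[1]))
    if tag in ("and", "imp"):
        return (tag, translate(f[1]), translate(f[2]))
    if tag == "or":
        return neg(("and", neg(translate(f[1])), neg(translate(f[2]))))
    raise ValueError(f"unknown connective: {tag}")

# Excluded middle p ∨ ¬p, classically valid, translates to a formula
# that is constructively provable:
lem = ("or", ("atom", "p"), ("not", ("atom", "p")))
print(translate(lem))
```

The translation preserves classical provability exactly, which is the "no loss of fidelity" in the text.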

Since the mid-20th century there has been an immense development of higher-order concepts in formal domains, including mathematical foundations; the theory of programming languages; and the implementation of theorem-proving software. Higher-order formalisms offer a huge improvement in expressive power. (As a hand-waving analogy, imagine how hard it would be to do physics with only first-order equations.)
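As a toy illustration of what higher-order expressive power means in a programming context (the example is mine, not drawn from any source): functions that take and return other functions quantify over functions themselves, which a strictly first-order formalism cannot directly express:

```python
# Higher-order functions treat functions as values: compose takes two
# functions and returns their composite, and power iterates a function,
# quantifying over functions in a way first-order formalisms cannot.

def compose(f, g):
    return lambda x: f(g(x))

def power(f, n):
    """Return f composed with itself n times."""
    result = lambda x: x  # identity function
    for _ in range(n):
        result = compose(f, result)
    return result

double = lambda x: 2 * x
print(power(double, 10)(1))  # 1024: doubling ten times
```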

Type theory, proof theory, and the theory of programming languages are kinds of formalism that put inference before pre-given representations. Category theory seems to take an even-handed approach.

Although I noted some interest in Brandom on the part of people working in a higher-order constructive context, Brandom himself seems much more interested in things that would be described by paraconsistent logics, such as processes of belief revision or of the evolution of case law or common law, or of normativity writ large. (In the past, he engaged significantly with Michael Dummett’s work, while to my knowledge remaining silent on Dummett’s arguments in favor of the philosophical value of constructive logic.)

Paraconsistency is a property of some consequence relations: in the absence of an explicit assumption that anything follows from a contradiction, not everything can in fact be proven to follow from a given contradiction, so the consequence relation does not “explode” (collapse into triviality).
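A toy illustration in Python may help (the "relevance filter" here is a crude home-made device in the spirit of relevance logics, not any standard paraconsistent system). Classical entailment holds vacuously from an unsatisfiable premise set, so {p, ¬p} entails everything; requiring shared vocabulary between premises and conclusion blocks the explosion:

```python
# Classical explosion, and one crude way to block it. Formulas are
# strings (atoms) or tuples: ("not", A), ("and", A, B), ("or", A, B).

from itertools import product

def atoms(f):
    return {f} if isinstance(f, str) else atoms(f[1]) | (atoms(f[2]) if len(f) > 2 else set())

def holds(f, v):
    if isinstance(f, str):
        return v[f]
    tag = f[0]
    if tag == "not":
        return not holds(f[1], v)
    if tag == "and":
        return holds(f[1], v) and holds(f[2], v)
    if tag == "or":
        return holds(f[1], v) or holds(f[2], v)

def entails(premises, conclusion):
    """Classical: conclusion true in every model of the premises."""
    vocab = sorted(set().union(*(atoms(p) for p in premises)) | atoms(conclusion))
    for bits in product([True, False], repeat=len(vocab)):
        v = dict(zip(vocab, bits))
        if all(holds(p, v) for p in premises) and not holds(conclusion, v):
            return False
    return True

def entails_relevant(premises, conclusion):
    """Toy filter: additionally demand shared vocabulary with the premises."""
    shared = set().union(*(atoms(p) for p in premises)) & atoms(conclusion)
    return bool(shared) and entails(premises, conclusion)

contradiction = ["p", ("not", "p")]
print(entails(contradiction, "q"))           # True: classical explosion
print(entails_relevant(contradiction, "q"))  # False: blocked by the filter
```

Since no model satisfies both p and ¬p, the classical definition is vacuously satisfied for any conclusion whatsoever; the variable-sharing requirement refuses conclusions with no connection to the premises.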

In view of the vast proliferation of alternative formalisms of all sorts since the mid-20th century, it may very well be inappropriate to presume that we will ever get back to one formalism to rule them all. I do expect that homotopy type theory or something like it will eventually come to dominate work on mathematical foundations and related aspects of computer science (and everything else that falls under Hegelian Understanding, taken as a positive moment in the larger process); but as hugely important as I think these are, I am also sympathetic to Brandom’s Kantian/Hegelian idea that considerations of normativity form an outer frame around everything else, as well as to the Aristotelian view that considerations of normativity tend to resist formalization.

On the formal side, it seems it is not possible to synchronically reconcile HoTT with paraconsistency, which would seem to be a problem. (At the opposite, simple end of the scale, my other favorite logical mechanism — Aristotelian syllogism interpreted as function composition — apparently can be shown to have a paraconsistency property, since it syntactically constrains conclusions to be semantically relevant to the premises.)
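Here is a small sketch of the reading of syllogism as function composition (the modeling choices and names are mine, purely illustrative). "All A are B" is taken as the inclusion map from A into B; the Barbara figure is then literally composition of the two premise maps, and the typing of composition syntactically forces the conclusion's terms to come from the premises:

```python
# Syllogism as function composition. "All sub are sup" is modeled as
# the inclusion map sub -> sup; Barbara is then just composition, and
# the types constrain the conclusion's terms to those of the premises.

def inclusion(sub, sup):
    """All sub are sup: the inclusion map, defined only if sub ⊆ sup."""
    assert sub <= sup, "premise fails: not all members are included"
    return lambda x: x  # identity on elements, typed sub -> sup

def compose(g, f):
    """Barbara: from f: S -> M and g: M -> P, conclude S -> P."""
    return lambda x: g(f(x))

humans  = {"socrates", "plato"}
mortals = humans | {"bucephalus"}
things  = mortals | {"mount olympus"}

all_humans_mortal  = inclusion(humans, mortals)   # All S are M
all_mortals_things = inclusion(mortals, things)   # All M are P
all_humans_things  = compose(all_mortals_things, all_humans_mortal)

print(all_humans_things("socrates"))  # carried from S through M to P
```

Because the composite can only be formed when the codomain of one premise matches the domain of the other, a conclusion with no connection to the premises cannot even be written down, which is the relevance property mentioned above.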

Diachronically, though, perhaps we could paraconsistently evolve from one synchronically non-evil, HoTT-expressible view of the world to a dialectically better one, while the synchronic/diachronic distinction could save us from a conflict of requirements between the respective logics.

I think the same logical structure needed to wrap a paraconsistent recollective genealogy around a formal development would also account for iterative development of HoTT-expressible formal specifications, where each iteration would be internally consistent, but assumptions or requirements may change between iterations.

Actuality

Aristotelian energeia — traditionally translated as actuality — captures the status of being active or effectively operative in a process. I have somewhat awkwardly rendered it as “at-work-ness”. “Being-at-work” sounds like better English, but might wrongly be taken to refer to a kind of Being in the intransitive sense qualified by a predicate of at-work-ness. (I think Aristotle was in fact very little interested in Being in an intransitive sense. He devotes much more attention to several transitive senses.) There is no “being” at all in the Greek. Energeia is most literally “in-work-ness”, but I and others have preferred to substitute “at” for “in”, as better conveying the intended connotation in English.

Contrary to Plato’s doubts about the possibility of understanding becoming, Aristotle is committed to eliciting its intelligibility. Rather than looking for generative powers behind things as Plato had obscurely suggested might be our best hope, part of Aristotle’s strategy is to draw our attention to what is immanently at work in a process as a kind of methodological starting point. The discernment of what it was to have been such and such a thing begins from the indistinct apprehension of something we merely take to have been effectively operative. (That something would be a mediated immediacy in Hegelian terms.) It is eventually constituted with greater precision and a degree of universality through inferential elaboration of the counterfactual potentiality of what we initially took to have been effectively operative, as well as through the implicit correction over time of errors that become apparent in the course of this elaboration.

Worlds away from the dry stereotype of “essentialism”, Aristotle is if anything more of a process thinker or pragmatist. He directs our attention to the concrete actualization of things, which “essentially” involves the interweaving of effectively operative actuality with both counterfactual potentiality and material contingency. Hegel makes large use of this Aristotelian concept. Brandom associates Hegelian actualization with expression and making explicit.

There is a very interesting distinction suggested by Aristotle and developed by later writers between a “first” and “second” actuality. Whereas the first actuality of an organic body is not too far from the later Stoic conatus as an internal source of primitive desiring activity, second actuality applies to things associated with evolved practice like habit, character, and intellect.

Aristotle also speaks about the “First” cause as pure at-work-ness, with no admixture of potentiality. I take this to mean that the “First” cause — just as the higher-order goal at which everything indirectly aims — is effectively operative in things, but unlike other effectively operative things, it has no counterfactual aspect (because it has no factual aspect, because it exactly is a pure aim rather than something having an aim). It functions as an ideal of normativity that we can retroactively see to have been at work, as a sort of virtual, uplifting attractor of purely natural desire, and also more speculatively as a posited virtual attractor for the directionality in material tendencies. (See also Aristotelian Actualization; Moved, Unmoved.)

Identity, Isomorphism

Many strands of Western thought — from Augustinian theology to Cartesianism to set theory — have suffered from overly strong notions of what amounts to a privileged, originary, self-evident, contentful Identity of things. (There are also many significant exceptions. With their emphasis on distinctions of form, Plato and Aristotle only needed a weak identity. Spinoza’s emphasis on relations; Leibniz’s identity of indiscernibles; Hume’s dispersive empiricism; and Kant’s critical perspective are all closer to Plato and Aristotle in this regard. Hegel makes identity derivative from a Difference associated with Aristotelian contrariety or Brandomian material incompatibility. Nietzsche, Wittgenstein, and many 20th century continentals explicitly criticized the overly strong concept.)

21st century mathematics has seen tremendously exciting new work on foundations that bears on this question. Homotopy type theory very strongly suggests among other things that the identity needed to develop all of mathematics is no stronger than isomorphism. This provides a formal justification of the common practical attitude of mathematicians that isomorphic structures can be substituted for one another in a proof by an acceptable “abuse of notation”.
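Schematically, this is the univalence axiom, which can be stated (informally, and glossing over universe subtleties) as:

```latex
% Univalence, stated schematically: for types A, B in a universe U,
% the canonical map from identifications to equivalences is itself an
% equivalence, so identity is no stronger than equivalence.
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```

Under this axiom, transporting a proof along an equivalence becomes a theorem rather than an abuse of notation.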

More generally, type theory and category theory provide an independent basis in contemporary mathematics for reaffirming the priority of form as difference over identity. I am tempted to say that they exemplify a kind of inferentialism in mathematics. (To those who say mathematics holds no lessons for philosophy, I would say that generalization disregards the specific character of these developments. nLab, the website for higher category theory, even has a page on Hegel’s logic as a modal type theory that explicitly refers to Brandom’s interpretation of Hegel!)

Matter, Potentiality

I’ve suggested nonstandard readings of both Aristotelian matter and Aristotelian potentiality. While traditionally there is thought to be a loose analogy such that matter is to form as potentiality is to actuality, the two concepts as I am reading them are sharply distinct. Matter captures the accumulation of contingent fact. Potentiality captures counterfactually robust inference. Matter particularizes, while potentiality universalizes.

Potentiality seems to me to be a kind of form. This is a bit tricky, because an important classical sense of Aristotelian matter that I have not been emphasizing is associated with a disposition to respond in certain ways when acted upon. This, however, sounds like counterfactual potentiality to me.

Objectivity

“Objectivity” is said primarily of some shapes of subjectivity that have a high degree of universality. It could not mean simple passive assimilation of an object just as it was supposed to be. The path to universality lies through a robustness or resilience of inferences across counterfactual cases. Universality and objectivity are closely tied to considerations of all kinds of appropriateness in particular cases.

Universality is inherently a journey through many things, not a destination. The objectivity of objects is derivative from such an open, free process of interaction with material contingency, governed by an end of unity of apperception and mutual recognition. (See also Truth, Beauty.)

Instances of consideration of objectivity in particular contexts appear throughout the Ethics, Reason, Semantics, Historiography, and Philosophy of Math sections here, among others.

Historically, there has been a near reversal of the meaning of the term “objective”.

Definition

The deeper Hegelian truth of a conceptual content can only be approached diachronically, via a historical recollective expressive genealogy. But in passing, in the course of his world-historically groundbreaking interpretation, Brandom says that Hegel rejects the very possibility of conveying a conceptual content by defining it, without saying what definition is or elaborating on what this denial means for the status of definition (Spirit of Trust, p. 7). I find this to be ambiguous, and potentially a little misleading. At least within any given synchronic context and to some extent even more broadly, I believe definition in the sense of an Aristotelian “what it is” still has a positive role to play. It would not be reasonable to suppose that Brandom really means to ban the philosophical use of definitions; otherwise, we would have an extreme nominalism incompatible with his stated goals, which include what he calls conceptual realism. (See also Abstract and Concrete.)

The ambiguity in the passage has to do with how strong a sense we give to “conveying”. We should not expect a run-of-the-mill definitional representation to literally convey conceptual (inferential) content in its explicit form. But such a representation absolutely does address or concern conceptual content, and therefore can still “convey” that content in the weaker sense of referring to it or reliably picking it out. (We could also atypically construct definitions in terms of explicit material incompatibilities and consequences. These would presumably in a stronger sense convey the conceptual content isomorphic to them. We could even atypically construct definitions in terms of the current best expressive genealogy, so I don’t really see these as counterposed.)

I do not think Hegel would go so far as to deny the high pragmatic value of definition in synchronic contexts. This is part of the necessary moment(s) of determinacy (and Understanding) in the larger process of the development of Spirit. He just wants to make the larger point that diachronically, any realized ground-level definition is ultimately just a stopping point along the way. That does not mean we should not attempt to sum up the best understanding we have achieved at each moment. I think we are deontically obligated to do just that. Every ground-level definition is contextualized by its historical situation and therefore subject to change, but at every moment we should still strive to speak and act in accordance with the best definitions we can achieve. Representational clarity is imperfect and always dependent on other considerations in the background, but it is still a moment to be preserved.

We should distinguish the conceptual-content-related doing associated with developing a definition from the representation produced. Further, I find it difficult to separate a concern for definition from a methodological concern for problems of definition, as evinced by Plato and Aristotle for instance. From this perspective, definition has more to do with a line of questioning than a putative answer. The question of the “what” or conceptual content of things is actually far more substantial and interesting than those of mere fact or abstract existence. Even if it aims at a representation, definition as a practical task is all about inquiry into that whatness of things. The norm to which synchronic representation of whatness is responsible comes down to the best achievable view of the relevant difference and mediation, or material incompatibility and material consequence (as Brandom would put it) in the circumstances of that logical moment. This I think is actually independent of the diachronic moves of expressive genealogy.

Hegel’s “Substance that is also Subject” is explicitly presented as an extension of Aristotle’s (expressive meta) concept of ousia, and I think Aristotle anticipates even more than Hegel recognizes. (Expressive genealogy is distinctively Hegelian, but Substance certainly not, and Hegel himself notes in the History of Philosophy lectures that the concerns he groups under “Subject” were significantly addressed by Socrates, Plato, and Aristotle.)

If Brandom is right that Hegel intended to exclude such expressive metaconcepts from the general prognosis that all (ground-level) concepts eventually elicit their own negation, then it is at least logically possible that Aristotle’s metaconcept had already achieved the requisite stability to be incorporated by Hegel without negating the subordinate aspect of ousia that for Aristotle corresponds to a definition.

Without prejudice to claims about what Hegel added, I would argue that Hegel did in this way intend to incorporate all the multiple nuances of Aristotelian ousia, including the definitional one. With due respect for Brandom’s distinction between determination as Hegelian process and determinateness as Kantian/Fregean property (and the importance of the process as a superior point of view), I also think we need to forgivingly recollect all best attempts at determinateness. (See also Classification.)

I wonder what Brandom would say about the role of definitions in the articulation of mathematical conceptual content. The doing of mathematics seems to join the doing of history as problematic for simple subsumption under a genealogical approach as Brandom has described it. Mathematics needs definitions, and history needs to evaluate data without Whiggish filtering. (But Brandom does not exactly disallow either, and I can’t imagine that he would want to. The meaning of mathematical theorems can certainly be expressed in terms of material incompatibility and consequence, and the concepts used in non-Whiggish historiography could themselves be Whiggishly genealogically grounded.)

We should think about the functional inferential role of stipulative definitions, as well as the definitions of empirical concepts that I expect Brandom has foremost in mind. We could say that in both cases, the meaning sought by definition — as distinct from the definiens — is actually constituted through material incompatibility and material consequence. But a stipulative definition is a making rather than a taking. It in a sense starts a whole course of reasoning, whereas empirical concepts implicitly summarize results of reasoning.

Also, mathematical definition is mostly concerned with structures and structural properties. I believe a case could be made that in general, such structures and structural properties are expressive metaconcepts in much the same sense that logical concepts are.

I don’t think it’s historically right that expressive metaconcepts are a “discovery or invention” of German Idealism (p.5). Aristotle already had quite a few expressive metaconcepts, as at least partially exhibited in this blog. I believe Hegel himself recognized this.

Potentiality

Potentiality (dynamis) is yet another great Aristotelian expressive metaconcept. Plato had the intriguing idea of explaining things and states of affairs in terms of power (also dynamis), but left power as an unexplained explainer, and required it to be postulated as pre-existent. Aristotle thoroughly reconceptualized the term to eliminate these weaknesses. Every Aristotelian potentiality begins from actuality or at-work-ness.

Instead of referring to postulated powers behind things or abstract logical possibility, Aristotelian potentiality is a way of talking about the aspects of a conceptual content captured by what Brandom would call modally robust counterfactual inference. Such robustness of inference across counterfactual cases is implicitly central to the most elementary meaning of Aristotelian substance or “what it was to have been” a thing (ousia), as what grounds the weak unity that allows us to talk about the same “thing” persisting through time even though something about it changed.

The semantic importance of counterfactual inference in determining the sense of what things are is a thesis shared by Aristotle, Hegel, and Brandom. It is explicit in Brandom and Brandom’s Hegel, and implicit in Aristotle. We cannot even really form a view of any thing as a thing of a certain kind unless we at least implicitly consider its potentiality.

Aristotle was clear that potentiality is an irreducible ingredient in things, and potentiality clearly captures counterfactuals. Brandom has made the role of counterfactuals in the development of universality more explicit. Facts alone give us at best a very brittle structure of assertions with no real conceptual articulation or interpretation, so perspectives that try to ground things on facts alone are doomed to ultimate failure. (In this light, Nietzsche‘s elimination of potentiality also turns out to have been a very serious error.) Overly strong, question-begging notions of the Identity of things have helped obscure the vital role of counterfactual inference in stabilizing our experience of the world. (See also Modality and Variation.)

Tentatively mapping this to Brandom’s Fregean terminology, I think Aristotle would intend the relation of potentiality to actuality to be one of reciprocal sense dependence paired with asymmetrical reference dependence. That is to say, at a level of determination of meaning, potentiality and actuality are interdependent and equally important, but in the order of logical truth about representations, actuality or the concrete is the starting point in terms of which potentiality is evaluated. Potentialities are potentialities of some actuality. (See also The Importance of Potentiality; Potentiality, Actuality; Structure, Potentiality; Matter, Potentiality.)

Philosophy

No mere expression of opinion counts as philosophy, as Plato was wont to remind us. One minimal necessary (but not sufficient) identifying mark of philosophy seems to me to be the recognition of at least some questions as genuine questions — that is to say, questions for which we explicitly acknowledge that we do not have immediate answers.

I also have a candidate for a necessary and sufficient condition. It now seems to me that all the questions that have traditionally been regarded as philosophical can be interpreted as at least indirectly having normative import, regardless of whether all those discussing them thought in that way. So we could say philosophy as a practice is the recognition of questions with normative import as genuine questions (and this is the way to the good life for a rational animal).

By this definition, we should expect to find no philosopher in any time exemplifying the attitude that all normative questions are already settled. I believe this is also true for all those who have in fact been commonly called philosophers in any serious sense (but see Antiphilosophy). If modernity is defined typologically as any step away from the attitude that all normative questions are already settled, then all philosophy would be “modern” in this somewhat unusual sense. (I’m not a big fan of the pre-Socratics, but I do think they fit this description. Most serious theology through the centuries has been enough influenced by philosophy to recognize that there are genuine normative questions, and in that measure I count it too as philosophy.) (See also History of Philosophy.)

I’m almost tempted to suggest substituting “philosophy” for “modernity” in the discussion of the history of normativity. But there may be unphilosophical modernity. The Sophists strike me as “modern” under this criterion and I don’t consider them philosophers, since I am taking Plato and Aristotle at their word that the Sophists claimed either that they had all the answers or that there were no real answers. (I have not examined recent literature on the Sophists, some of which I believe argues for a different assessment.)

I suspect the first glimmerings of typological modernity (as distinct from philosophy) go back at least as far as the first cities, and possibly as far as the relatively long-distance trade in the late Upper Paleolithic that began to put people raised in traditional attitudes face-to-face with others reared with different traditional attitudes. However, Aristotle and Hegel would remind us that fully fledged forms are more relevant than origins for most purposes. (See also Interpretation; Ethical Reason.)

Plotinus

As a very young man, I was deeply invested in a holistic, minimally unworldly reading of Plotinus. At the time, I was impressed by his view of Intellect (nous) as a sort of synoptic rational intuition or vision. I liked his (actually Aristotelian) view that the good of any being is its natural act, which leaves it to us to determine what that actually is. I read the One as the All viewed sub specie aeternitatis (“under the form of eternity”, in Spinoza’s later phrase). I was fascinated by so-called “emanation” or “procession”, which obscurely suggested a sort of rational unfolding into detail from a more purely holistic starting point.

Plotinus was a 3rd century CE Alexandrian Greek who founded the so-called “neoplatonic” school that came to dominate philosophy and theology in late antiquity. He combined Platonic, Aristotelian, and various religious influences. His work The Enneads was a major inspiration to Augustine, the greatest early Catholic thinker, and part of it was later translated into Arabic and Latin under the misleading title Theology of Aristotle. Plotinus associated the Good of Plato’s Republic with the One of Plato’s Parmenides.

Too briefly, one might say that for Plotinus and the neoplatonists generally, the One unfolds into the One-Many of Intellect, which unfolds into the Many-One of Soul, which unfolds into the Many of nature, and then it all re-folds back into itself, forming a big eternally repeating M.C. Escher loop. To say it in a more Aristotelian way, in that loop, what would be an Aristotelian unmoved mover and “first” cause that is really an end — along with everything it attracts — gets folded back into itself, making it literally also the beginning and the complete cause of everything, unlike anything in Aristotle. (As a youth who enjoyed mixing things up, I liked to imagine that the big Escher loop was also Nietzsche’s eternal return.)

Soul for Plotinus has no inherent dependency on the body — all the dependency at least ought to run in the other direction. Soul “There” seems to have connotations of simple immediate enjoyment of the intelligible realm, but “Here” is agitated and disturbed. He suggested a model of meditative discipline in which higher principles should detach themselves from immersive involvement in the layer beneath, but function as unmoved movers for it, leaving the lower layer to function autonomously except for the unmoved-mover influence of the higher layer.

He made an interesting suggestion that each Platonic form in a way includes all the others.

Neoplatonism is finally getting better treatment from scholars these days. 19th and 20th century summary accounts often reflected little acquaintance with texts, and were full of hostile stereotypes. Even the name is now considered misleading. The Stanford Encyclopedia of Philosophy article on the web is a decent starting point, though it anachronistically talks about “Consciousness”. (In fact, that English term was coined by Cambridge Platonist Ralph Cudworth in the 17th century for use in his translations of Plotinus. But in my opinion, the word has far too many modern connotations to be a good choice for historical scholarship. While such anachronism is expected in Hegelian/Brandomian recollective genealogy, that is because such genealogy serves different purposes from historical scholarship.)

The most impressive large-scale study I’ve seen in English is Kevin Corrigan’s Plotinus’ Theory of Matter-Evil and the Question of Substance: Plato, Aristotle, and Alexander of Aphrodisias, which addresses a broader scope than the title suggests, while tackling Plotinus’ most apparently objectionable thesis head-on. (While Plotinus idiosyncratically identified Alexander’s abstract prime matter with evil due to its complete lack of form, he strongly defended the goodness of the manifestation of the physical world that includes ordinary matter against the gnostics.) Corrigan’s book is especially interesting because it highlights an abundance of implicit dialogue with Aristotle and Alexander — unnoticed by previous scholars — in Plotinus’ texts that contributes substantially to the Plotinian synthesis.

In French, there is an excellent treatment of the differences between Aristotle and Plotinus from an Aristotelian point of view: Gwenaëlle Aubry’s Dieu sans la puissance: dunamis et energeia chez Aristote et Plotin. (Neither Plotinus nor Aristotle sees any temporal origin of the world or beginning of time. The key difference is that Aristotle’s “First” cause is also not supposed to be any kind of eternal origin either. It is purely that which everything ultimately aims at, a “final cause”. For Plotinus, by contrast, the One is simultaneously that which everything aims at and the eternal origin of everything.) Aubry takes as a starting point Aristotle’s notion that the “First” cause is just pure actuality, with no admixture of the power Plato talks about, let alone the Stoic-inflected omnipotence averred by Plotinus (or the even stronger unconditional counterfactual omnipotence claimed by Philo of Alexandria and later theological voluntarists). Aubry has also written extensively on subjectivity in Plotinus.

Nowadays my sympathies are entirely on the Aristotelian side, but Plotinus is still an important figure worthy of serious attention — in his own right; as a reader of Aristotle; and as an important influence on later neoplatonically inflected Aristotelianisms as well as later Platonisms. (See also Plotinus on Intellectual Beauty; Beauty and Discursivity; Subjectivity in Plotinus; Power of the One?; Neoplatonic Critique of Identity?).

Alienation, Modernity

The positively connotated (and actually not anti-naturalist) “alienation” of Spirit from nature noted earlier did turn out to be an exception. Hegel’s more usual, negatively connotated talk about alienation is explained by Brandom as picking out any asymmetry between authority claimed and responsibility acknowledged. On this reading, traditional Sittlichkeit that takes responsibility for too much would be just as alienated as the modernity that takes responsibility for too little.

The model of a positively connotated alienation is still interesting, though, and may possibly shed further light on the vexed question of how modernity is to be picked out and assessed. Perhaps the thought is not only that any move in any direction away from the unquestioned governance of tradition is ultimately progressive, even if only through its eventual consequences, but also that a given degree of asymmetry on the modern side is therefore less bad than an equivalent asymmetry on the traditional side, because the modern one starts a dynamic that (normatively, not causally) leads to something better, while the traditional one just preserves the status quo.

Karl Mannheim in his 1925 essay on the sociology of knowledge adopted a vaguely Hegelian notion of modernity as the progressive self-relativization of thought. (He was at pains to argue that this did not lead to the “relativism” decried by some of his contemporaries.) I was fascinated by this in my youth. Here is a modernity with a Hegelian pedigree that bears no trace of Cartesianism. Mannheim’s version is more practical-epistemological than normative, and merely programmatic rather than really developed, whereas Brandom has a very thorough account of recognition-based normativity in many different circumstances. But it does seem to correlate with the move away from tradition that Brandom talks about. It focuses more on the notion of progress itself, and less on a particular achieved status.