Weak Nature Alone

Adrian Johnston’s latest, A Weak Nature Alone (volume 2 of Prolegomena to Any Future Materialism), aims among other things at forging an alliance with John McDowell’s empiricist Hegelianism, and gives positive mention to McDowell’s use of the Aristotelian concept of second nature. Johnston is the leading American exponent of Slavoj Žižek’s Lacanian Hegelian provocations, and a neuroscience enthusiast. He wants to promote a weak naturalism that would nonetheless be directly grounded in empirical neuroscience. He claims neuroscience already by itself directly undoes “bald” naturalist philosophy from within natural-scientific practice. That sounds like a logical confusion between very different discursive domains, but I am quite interested in a second-nature reading of Hegel.

Broadly speaking, the idea of a weak naturalism sounds good to me. I distinguish between what I think of as relaxed naturalisms and realisms of an Aristotelian sort that explicitly make a place for second nature and assume no Givenness, and what I might privately call “obsessive-compulsive” naturalisms and realisms that build in overly strong claims of univocal causality and epistemological foundations.

Johnston likes McDowell’s rejection of the coherentism of Donald Davidson. McDowell’s basic idea is that coherence can only be a subjective “frictionless spinning in a void”, and that it thus rules out a realism he wants to hold onto. When I read Mind and World, I enjoyed McDowell’s use of Hegel and Aristotle, but thought the argument against Davidson the weakest part of the book. If you circularly assume that coherentism must be incompatible with realism, as McDowell tacitly does, then his conclusion follows; otherwise, it doesn’t.

Nothing actually justifies the characterization of coherence as frictionless spinning. This would apply to something like Kantian thought, if it were deprived of all intuition, which for Kant is never the case. Kant sharply distinguishes intuition from thought or any other epistemic function, but nonetheless insists that real experience is always a hylomorphic intertwining of thought and intuition. Brandom brilliantly explains Kantian intuition’s fundamental role in the progressive recognition of and recovery from error, which — along with the recursively unfolding reciprocity of mutual recognition — is essential to the constitution of objectivity.

I want to tendentiously say that as far back as Plato’s account of Socrates’ talk about his daimon, intuition among good philosophers has played a merely negative and hence nonepistemic role. (By “merely” negative, I mean it involves negation in the indeterminate or “infinite” sense, which in contrast to Hegelian inferential determinate negation could never be sufficient to ground knowledge.) On the other hand, that merely negative role of intuition has extreme practical importance.

The progressive improvement of (the coherence of) a unity of apperception that is essential to the distinction of reality from appearance is largely driven by noncognitive mere intuition of error. Intuitions of error or incongruity explicitly bring something like McDowell’s “friction” into the mix.

Charles Peirce reputedly referred to the hand of the sheriff on one’s shoulder as a sign of reality. Like an intuition of error, this is not any kind of positive knowledge, just an occasion for an awareness of limitation. It is just the world pushing back at us.

According to Johnston, McDowell stresses “the non-coherentist, non-inferentialist realism entailed by the objective side of Hegel’s absolute idealism” (p.274). Johnston wants to put results of empirical neuroscience here, as some kind of actual knowledge. But there could be no knowledge apart from some larger coherence, and we are clearly talking past one another. Neuroscience is indeed rich with philosophical implications, but only a practice of philosophy can develop these. (See also Radical Empiricism?)

Johnston wants to revive the Hegelian philosophy of nature. Very broadly speaking, I read the latter as a sort of Aristotelian semantic approach to nature that was also actually well-informed by early 19th century science. I could agree with Johnston that the philosophy of nature should probably get more attention, but still find it among the least appealing of Hegelian texts, and of less continuing relevance than, say, Aristotle’s Physics.

Johnston also likes Friedrich Engels’ Dialectics of Nature. In this case, I actually get more takeaway from Engels than from Hegel. Engels was not a real philosopher, but he was well-read and thoughtful, and a brilliant essayist and popularizer. His lively and tentative sketches were ossified into dogma by others. He did tend to objectify dialectic as happening in the world rather than in language, where I think Plato, Aristotle, and Hegel all located it.

But “dialectic” for Engels mainly entails just a primacy of process; a primacy of relations over things; and a recognition that apparent polar opposites are contextual, fluid, and reciprocal. However distant from the more precise use of dialectic in Aristotle and Hegel, these extremely general principles seem unobjectionable. (The old Maoist “One divides into Two” line, explicitly defended by Badiou and implicitly supported by Žižek and Johnston, not only completely reverses Engels on the last point, but also reverses Hegel’s strong programmatic concern to replace “infinite” negation with determinate negation.)

Engels did infelicitously speak of dialectical “laws” governing events, but his actual examples were harmless qualitative descriptions of very general phenomena. Much of 19th century science outside of physics and chemistry was similarly loose in its application of exact-sounding terms. In Anti-Dühring, however, Engels argued explicitly that Marx never intended to derive any event from a dialectical “law”, but only to apply such “laws” in retrospective interpretation. The “dialectics of nature” is another exercise in Aristotelian semantics. (See also Aristotelian Matter; Efficient Cause.)

It sounds like Johnston wants ontologized dialectical laws of nature, and will want to say they are confirmed by neuroscience results. Johnston also highlights incompatibilities between Brandom and McDowell that are somewhat hidden by their mutual politeness. This in itself is clarifying. I now realize McDowell is further away than I thought, in spite of his nice Aristotelian references. (See also Johnston’s Pippin.)

Substance Also Subject

Hegel’s many references to Aristotle should help to clarify the Hegelian claim that “Substance is also Subject”. In particular, Aristotle’s own thesis of the identity of thought with the thing thought is relevant, as is his dialectical development of the different senses of ousia (“substance”) in the Metaphysics.

A thought for Aristotle is identical with its content. It just is a discursively articulable meaning, not a psychological event. What we care about in thought is shareable reasoning. Moreover, this shareable reasoning has a fundamentally ethical character.

Thought in this sense is essentially self-standing, and unlike a mental act, it does not depend for the determination of its meaning on a “thinker” (who optionally instantiates it, and if so is responsible for the occurrence of a related event). This gives a nice double meaning to the autonomy of reason. (What such thoughts do depend on is other such thoughts with which they are inferentially connected.)

The primary locus of Aristotelian intellect is directly in shareable thoughts of this sort and their interconnection, rather than in a sentience that “has” them. Hegel adopts all of this.

Concepts in a unity of apperception are forms to be approached discursively, not mental representations or intentional acts. They are more like custom rules for material inference. The redoubling implied in apperception, like that of the Aristotelian “said of” relation, hints at the recursive structure of inferential articulation. The Hegelian Absolute, or “the” Concept, just nominalizes such an inferential coherence of concepts.

Thus, “Substance is also Subject” has nothing to do with attributing some kind of sentience to objects, or to the world. Rather, it is the claim that Substance properly understood (in the Aristotelian conceptual sense of “what it was to have been” a thing, rather than in the naive sense of a real-world object, or of a substrate of a real-world object, that Aristotle starts with but then discards) is already the right sort of thing to be able to play the functional role of a transcendental subject. A “Subject” for Hegel just is a concept or commitment, or a constellation of concepts and commitments. (See also Subject and Substance, Again; Substance and Subject.)

Consistent with this general approach, I consider the direct locus of the subject-function to be in things like Brandomian commitments and Kantian syntheses. The subject-function is also indirectly attributable to “self-conscious individuals” by metonymy or inheritance, and to empirical persons by a further metonymy or inheritance. (See also Subject; Substance; Aristotelian Dialectic; Brandom and Kant; Rational/Talking Animal; Second Nature.)

Mutual Recognition

Hegelian mutual recognition puts ethical considerations of reciprocity with others to the fore. In part, it is a more sophisticated version of the idea behind the golden rule. It also suggests that anyone’s authority and responsibility for anything should always be evenly balanced. It is also a social, historical theory of the genesis of meaning, value, and identity. Hegel’s notion was partly anticipated by Fichte.

Brandom reads mutual recognition as central to Hegel’s ethics or practical philosophy, and Hegel’s practical philosophy as central to his philosophy as a whole. Prior to the publication of A Spirit of Trust (2019), what I take to be Brandom’s own deep ethical engagement was often not recognized. I hope the situation will soon improve.

Consistent with Brandom’s general approach, the ethics of A Spirit of Trust appears in a highly mediated form. Much of the work of ethics for Brandom comes down to the implementation and practice of normative pragmatics and inferential semantics, which he has been expounding at least since Making It Explicit (1994). So, I think he has been laying the groundwork for a long time.

One recent commentator (Lewis 2018) suggested that ethics proper was just missing from Brandom’s earlier accounts. His citations for this were to Robert Pippin and Terry Pinkard, whose readings of Hegel are often compared to Brandom’s. I cannot find the text of Pinkard’s 2007 article, but Pippin in the course of his searching but still very sympathetic review “Brandom’s Hegel” (2005) had suggested there was at that time an important gap in Brandom’s reading, related to Hegel’s lifelong concern with a critical treatment of positivity, i.e., received views and institutionalized claims.

Pippin cited an ambiguous argument from Making It Explicit that seemed to support the social legitimacy of a commitment to enlist in the Navy by a drunken sailor who was tricked into a contract by accepting a shilling for more beer. Brandom has since clarified in several places that he did not mean to himself endorse this argument, based as it is on a partial perspective (see, e.g., Hegel’s Ethical Innovation). In Spirit of Trust terms, Brandom’s point in such a context would be to emphasize that the freedom associated with agency does not entail mastery, and in particular that we do not have mastery over the content of our own commitments. The issue for Pippin in 2005 was that Brandom appeared to put sole responsibility and authority for determining the content of commitments on the audience. Pippin found with respect to positivity “not so much a problem as a gap, a lacuna that Brandom obviously feels comfortable leaving unfilled” in Making It Explicit. I suspect Brandom’s lack of discomfort was directly tied to a deferral of such considerations to his 40-year magnum opus project, A Spirit of Trust.

For years, something like Pippin’s positivity issue was a main topic of discussion between my late father and me. For both of us, it was the big hurdle to overcome in fully recognizing Brandom as the world-historic giant we both thought he would probably turn out to be. I thought the positivity issue already began to be addressed in the early web draft of A Spirit of Trust, and I suspect it was a significant focus while Brandom was working on the final text.

In any event, I think it is clear that in the published Spirit of Trust, the determination of the content of commitments is envisioned not as stopping with an immediate audience, but as involving an indefinitely recursive expansion of mutually determining I-Thou relationships. On my reading, normative statuses that are both fully determinate and unconditionally deontically binding would only emerge from the projection of this expansion into infinity. But in practical contexts, we never deal with actual infinity, only with indefinite recursive expansions that have been cut off at some relatively early point. (See also Hegelian Genealogy.)

We always work with defeasible approximations — finite truncations of a recursive expansion through many relationships of reciprocal determination. This means in particular that judgments of deontic bindingness are defeasible approximations.

Further, the kind of approximation at issue here is not a statistical one, but a more Aristotelian sort of “probability”. It therefore cannot be assumed to monotonically improve as the expansion progresses, so it is not guaranteed that further expansion will not suddenly require a significant revision of previous commitments or concepts, as Brandom explicitly points out (see Error).

This means that the legitimacy of the queen’s shilling and any other received truth is actually open to dispute and therefore open to any rational argument, including those the sobered-up sailor might make. In Brandom’s favorite example, new case law — though of course subject to higher-level canons of determinate negation in its own future interpretation and evaluation — may significantly revise existing case law in unforeseeable ways.

I believe this gives us all the space we need for social criticism. We need have no fear that Brandom’s version of the mutual recognition principle will bind us to positivity. Nothing is out of bounds for the autonomy of reason. We only have to be honest about the conceptual content we encounter in the detail of the recursive expansion. I believe this is the answer to the lingering concerns I expressed in Robust Recognition and Genealogy. Even if Brandom himself were to turn out not to go quite this far, I think at worst this is a friendly amendment that does not disrupt the framework. (See also Edifying Semantics; Reasonableness.)

The recursive expansion of mutual recognition pushes it toward the kind of universality on which Kant based the categorical imperative. Practical outcomes from the two approaches ought to be similar. Hegel’s version is useful because it is grounded in social relationships rather than a pure metaphysics of morals, but still escapes empirical, “positive” constraints by indefinitely expanding the network toward the concrete universality of a universal community of rational beings. (See also Mutual Recognition Revisited; Pippin on Mutual Recognition; Hegel’s Ethical Innovation.)

Johnston’s Pippin

Adrian Johnston’s A New German Idealism just arrived, and I’m taking a quick look. It is mainly concerned with Slavoj Žižek’s work. But for now, I’m just concerned with chapter 2 — where Johnston launches a broadside against “deflationary” readings of Hegel, particularly the one he attributes to Robert Pippin — and the preface.

Johnston can be forgiven for not addressing Pippin’s 2018 work on the Logic, but I do not understand why he ignores my favorite book by Pippin, Hegel’s Practical Philosophy (2008).

There, Pippin dwells extensively on Hegel’s Aristotelian side. Much of interest could be said on what it means to be Aristotelian in a post-Kantian context. Many received views will be challenged by such an examination. (For a beginning, see Aristotle and Kant.) As I have said, I read Hegel as both Kantian and Aristotelian (as well as original).

In any case, Johnston seems to think Pippin in Hegel’s Idealism (1989) was intent on reducing Hegel to Kant. That book was indeed concerned to show a strong Kantian element in Hegel. But I did not think of it as reductive. If anything, I read Pippin’s book as a salutary response to those who want to reduce Hegel to a pre-Kantian, and to read Hegel as rolling back from Kant rather than moving forward from Kant. Because he assumes a bad old subjectivist reading of Kant, Johnston seems to think Pippin’s reading of Hegel necessarily rules out the possibility of seeing a realist side to Hegel.

The whole challenge of Hegel is to understand how it is possible in his terms to be both Critical and realist, without engaging in logical nonsense. (But see Realism, Idealism.) This sort of thing typically requires significant semantic labor, but the achievement of such semantic elaboration is the whole point. Here I worry about where Johnston intends to go with his defense of “undialectical” distinctions in the preface. It is one thing to recognize that Hegel does not intend to just do away with Understanding and its distinctions, and quite another to treat those distinctions as final. (See also Univocity.)

Johnston’s lengthy discussion of the positive value of Understanding in the preface does not address how it relates to dialectical transitions. He mainly wants to defend Žižek’s tactic of presenting forced binary choices at particular moments. In particular cases and circumstances this conceivably can be good pedagogy, but it is the details that matter, and Johnston offers no advice on how we are to distinguish a pedagogically good forced choice from a bad one.

(I suspect Žižek’s tactic may be related to his friend Badiou’s defense of the Maoist “One divides into Two” line, which always seemed like blustering nonsense to me. There have been some very rational strands within Marxism; I do not comprehend why someone as intelligent as Badiou would prefer to apologize for the coarsest and most anti-intellectual ones. To a lesser extent, Althusser did as well. See also Democracy and Social Justice.)

(Worlds away from this, Brandom has a wonderfully clear account of the nonfinality of Understanding’s particular conclusions, illustrated precisely by its very important positive role in the recognition and resolution of error, in which the operations of the Understanding on its own terms give rise to dialectical transitions at the level of Reason, understood in terms of the revision of commitments and possibly of concepts.)

Johnston also seems to assume there is something necessarily reductive about a non-ontological (or not primarily ontological) reading of Hegel. Again, I don’t see why.

I think Aristotle’s metaphysics was basically a semantic investigation, just like his physics. It is the historic forcing of this inquiry back from the wide universe of meaning onto narrow registers of being and existence that I see as reductive.

Based on the work of Olivier Boulnois on the role of the medieval theologian Duns Scotus in the reinterpretation of metaphysics as ontology, I have come to think that, in general, the modern emphasis on ontology reflects what I take to be a historically medieval, Scotist mystification of things that Aristotle approached in clearer terms, which we should recognize today as mainly semantic. (For what it’s worth, the homonymous use of “ontology” in computer science is also mainly semantic.)

Metaphysics or “first philosophy” or “wisdom” was supposed to help us with higher-order understanding, not to be a place where strange existence claims are made.

Archaeology of Knowledge

In the old days, my favorite text of Foucault was the beginning of the Archaeology of Knowledge (online here), revised from his “Réponse au Cercle d’épistémologie”, published summer 1968 (o pregnant time!) in Cahiers pour l’Analyse, the original of which is separately translated in Essential Works vol. 1. There is a nice summary of the original and its historical context here.

At this time, Foucault and Althusser were both working toward what has been called a rationalist philosophy of the Concept related to the work of Jean Cavaillès and Georges Canguilhem, in contrast to then popular existential/phenomenological philosophies of the Subject. (See Knox Peden, Spinoza Contra Phenomenology: French Rationalism from Cavaillès to Deleuze.)

The Epistemological Circle that Foucault was responding to was a group of Althusser’s students interested in the philosophy and history of science, as well as structural Marxism and Lacanian psychoanalysis, who had asked Foucault a series of methodological questions. Althusser was something like the dean of France’s most prestigious university. He had actually written his dissertation (which I have still not seen) on the Concept in Hegel. By this time he was in high anti-Hegelian mode, as was Foucault.

Foucault himself acknowledged considerable debt to his Hegelian mentor Jean Hyppolite, who translated the Phenomenology into French. Hyppolite read Hegel as focused more on discourse than on subjectivity. His 1952 Logic and Existence, referred to by Foucault as “one of the great books of our time”, argued strongly for the importance of language in Hegel. (It was also very favorably reviewed by the young Deleuze.) Foucault had written a thesis on “The Constitution of a Historical Transcendental in Hegel’s Phenomenology of Spirit” under Hyppolite in 1949.

There is more good historical background in James Muldoon, “Foucault’s Forgotten Hegelianism”. While I don’t endorse, e.g., Muldoon’s remarks on Hegel and free will, his suggestion that an identification with certain specifics of Hyppolite’s reading of Hegel — particularly the attribution of a strong “totalizing” impulse — contributed significantly to the anti-Hegelian turn of Foucault and others is quite interesting.

Though I don’t recall this from his translated works, Hyppolite apparently both saw a strong element of totalization in Hegel and strongly rejected it, while continuing to identify as a Hegelian. (Previously, in the absence of more specific evidence, I had surmised it was mainly a reaction against Alexandre Kojève’s reading that drove the French anti-Hegelian turn. Muldoon also says Hyppolite’s reading was initially welcomed as a contrast to Jean Wahl’s more phenomenologically oriented 1929 book on the unhappy consciousness, which apparently also contributed to French perceptions of Hegel as subject-centered.)

In any case, the Hegel whom Foucault, Althusser, Deleuze and others famously rejected in the 1960s was identified as the proponent of a totalizing historical teleology of the Subject. Each of the three components of this was independently strongly rejected — the subject-centeredness, the historical teleology, and the totalization. I still agree today that these are all serious errors that should be rejected.

However, Hegel read in a broadly Brandomian way is utterly untouched by this criticism. There is no historical teleology at all in what Brandom calls Hegelian genealogy (so a fortiori not a totalizing one), and there is no subject-centeredness in the analysis of conceptual content. Subjectivity is never invoked as an unexplained explainer. Brandom’s exposition of the Hegelian critique of Mastery offers us a Hegel utterly opposed to the kind of totalization attributed to him by Foucault, Althusser, and Deleuze.

Foucault presented a long list of forms of discontinuity that should be attended to in the history of ideas. Each of these could be analyzed in Brandomian/Hegelian terms as a determinate negation.

I agree with Foucault that it is very important not to take the simple continuity of a tradition for granted. In principle, such things need to be shown. However, I still think defeasible assertions about “traditions” and other such unities that should be questioned can play a useful role in historical discussion. (See also Ricoeur on Foucault; Structuralism; Structure, Potentiality; Difference; Identity, Isomorphism; Univocity; Historiography; Genealogy.)

History of Philosophy

Philosophy is best conceived as a dialogue with the best insights of our fellow rational animals over the centuries. It is something far more valuable than just views or opinions — a sustained rational development aimed at progressive improvement in distinguishing the better from the worse.

Hegel wrote that the history of philosophy is inseparable from philosophy itself, and I find that to be very true. He was actually the first major philosopher to write explicitly about the history of philosophy. Medieval scholasticism had treated the history of philosophy as a valuable repository of possible opinions and arguments, but was little concerned with issues of historical interpretation. Early modernity largely ignored the history of philosophy and wanted to start over, every man for himself. Anti-scholastic prejudice ran so high that apart from Leibniz, no major modern philosopher until Hegel treated Aristotle as anything more than a straw man. But since the 19th century and especially since the later 20th century, innumerable rich and sophisticated contributions to the historically informed interpretation of individual philosophers have been made, along with many excellent analyses of periods and trends.

I find it useful to alternate between consideration of a small number of essential reference points among the greatest of the great, and a much broader scope including many “minor” figures. (See History of Philosophy and Historiography sections.)

Hegelian Genealogy

[The title above was conceived as an initial answer to the question posed below about the main ways Hegel extends Aristotle, but the article then wanders away from Hegelian genealogy in further pursuit of that question.]

Hegel was at the same time deeply Aristotelian, deeply Kantian, and highly original. Across numerous posts, I have been pointing out Hegel’s connections with Aristotle. This implicitly poses the question, how should we summarize the aspects of Hegel’s contributions that go beyond Aristotle?

What Brandom has called Hegel’s genealogy captures most of this at a high level. A Hegelian genealogy is a recollective making more explicit of our current best self-understanding in terms of a backward projection of part of that current understanding onto what we take to be its historic roots, in order to then trace a sequence of its development into our full current understanding. I would note that this sort of understanding involves the kind of interweaving of history and creative fiction that has been discussed at length by Paul Ricoeur.

Hegel is at one with Aristotle in recognizing that the end goal of a process is emergent rather than pre-established from the beginning, as someone like Leibniz or Plotinus might suggest. He does not mean to literally assert, e.g., that Socrates already explicitly thought in terms of German Idealist concepts like Subjectivity and Freedom. In part, he is deliberately using anachronistic terms as a sort of pedagogy for a contemporary audience. More significantly, he is making a historical claim based on current understanding that the roots of German Idealism go all the way back to Socrates.

On the other hand, while Aristotle and Hegel are both very concerned with development and take a retrospective perspective on it, Aristotle does not explicitly address the development of large cultural formations or development over long periods as Hegel does. Aristotle takes large formations in a mostly synchronic way.

On a small scale, while Aristotle makes heavy use of both material incompatibility and material consequence, he does not tightly combine these as Hegel does.

Aristotle recognizes that a concern for error and its rectification is integral to the pursuit of truth, but does not apply this to whole social formations or historical periods the way Hegel does. He does not have Hegel’s positive vision of the necessity of error for learning, and of a path to greater rationality that can only be achieved through the successive resolution of errors.

Aristotle treats mutual recognition as an important part of the description of the key ethical goal of friendship or love. Hegel further develops the idea of mutual recognition, makes it more primary, broadens its applicability, and also uses it to explain how normativity is socially and historically constituted.

Hegel also takes over Kant’s idea that normativity forms an outer frame around all other concerns. (See also Aristotle and Kant; Brandom and Kant; History of Philosophy; Edifying Semantics; Mutual Recognition.)

Definition

The deeper Hegelian truth of a conceptual content can only be approached diachronically, via a historical recollective expressive genealogy. But in passing in the course of his world-historically groundbreaking interpretation, Brandom says Hegel rejects the very possibility of conveying a conceptual content by defining it, without saying what definition is or elaborating on what this denial means for the status of definition (Spirit of Trust, p.7). I find this to be ambiguous, and potentially a little misleading. At least within any given synchronic context and to some extent even more broadly, I believe definition in the sense of an Aristotelian “what it is” still has a positive role to play. It would not be reasonable to suppose that Brandom really means to ban the philosophical use of definitions; otherwise, we would have an extreme nominalism incompatible with his stated goals, which include what he calls conceptual realism. (See also Abstract and Concrete.)

The ambiguity in the passage has to do with how strong a sense we give to “conveying”. We should not expect a run-of-the-mill definitional representation to literally convey conceptual (inferential) content in its explicit form. But such a representation absolutely does address or concern conceptual content, and therefore can still “convey” that content in the weaker sense of referring to it or reliably picking it out. (We could also atypically construct definitions in terms of explicit material incompatibilities and consequences. These would presumably in a stronger sense convey the conceptual content isomorphic to them. We could even atypically construct definitions in terms of the current best expressive genealogy, so I don’t really see these as counterposed.)

I do not think Hegel would go so far as to deny the high pragmatic value of definition in synchronic contexts. This is part of the necessary moment(s) of determinacy (and Understanding) in the larger process of the development of Spirit. He just wants to make the larger point that diachronically, any realized ground-level definition is ultimately just a stopping point along the way. That does not mean we should not attempt to sum up the best understanding we have achieved at each moment. I think we are deontically obligated to do just that. Every ground-level definition is contextualized by its historical situation and therefore subject to change, but at every moment we should still strive to speak and act in accordance with the best definitions we can achieve. Representational clarity is imperfect and always dependent on other considerations in the background, but it is still a moment to be preserved.

We should distinguish the conceptual-content-related doing associated with developing a definition from the representation produced. Further, I find it difficult to separate a concern for definition from a methodological concern for problems of definition, as evinced by Plato and Aristotle for instance. From this perspective, definition has more to do with a line of questioning than a putative answer. The question of the “what” or conceptual content of things is actually far more substantial and interesting than those of mere fact or abstract existence. Even if it aims at a representation, definition as a practical task is all about inquiry into that whatness of things. The norm to which synchronic representation of whatness is responsible comes down to the best achievable view of the relevant difference and mediation, or material incompatibility and material consequence (as Brandom would put it) in the circumstances of that logical moment. This I think is actually independent of the diachronic moves of expressive genealogy.

Hegel’s “Substance that is also Subject” is explicitly presented as an extension of Aristotle’s (expressive meta) concept of ousia, and I think Aristotle anticipates even more than Hegel recognizes. (Expressive genealogy is distinctively Hegelian, but Substance certainly is not, and Hegel himself notes in the History of Philosophy lectures that the concerns he groups under “Subject” were significantly addressed by Socrates, Plato, and Aristotle.)

If Brandom is right that Hegel intended to exclude such expressive metaconcepts from the general prognosis that all (ground-level) concepts eventually elicit their own negation, then it is at least logically possible that Aristotle’s metaconcept had already achieved the requisite stability to be incorporated by Hegel without negating the subordinate aspect of ousia that for Aristotle corresponds to a definition.

Without prejudice to claims about what Hegel added, I would argue that Hegel did in this way intend to incorporate all the multiple nuances of Aristotelian ousia, including the definitional one. With due respect for Brandom’s distinction between determination as Hegelian process and determinateness as Kantian/Fregean property (and the importance of the process as a superior point of view), I also think we need to forgivingly recollect all best attempts at determinateness. (See also Classification.)

I wonder what Brandom would say about the role of definitions in the articulation of mathematical conceptual content. The doing of mathematics seems to join the doing of history as problematic for simple subsumption under a genealogical approach as Brandom has described it. Mathematics needs definitions, and history needs to evaluate data without Whiggish filtering. (But Brandom does not exactly disallow either, and I can’t imagine that he would want to. The meaning of mathematical theorems can certainly be expressed in terms of material incompatibility and consequence, and the concepts used in non-Whiggish historiography could themselves be Whiggishly genealogically grounded.)

We should think about the functional inferential role of stipulative definitions, as well as the definitions of empirical concepts that I expect Brandom has foremost in mind. We could say that in both cases, the meaning sought by definition — as distinct from the definiens — is actually constituted through material incompatibility and material consequence. But a stipulative definition is a making rather than a taking. It in a sense starts a whole course of reasoning, whereas empirical concepts implicitly summarize results of reasoning.

Also, mathematical definition is mostly concerned with structures and structural properties. I believe a case could be made that in general, such structures and structural properties are expressive metaconcepts in much the same sense that logical concepts are.

I don’t think it’s historically right that expressive metaconcepts are a “discovery or invention” of German Idealism (p.5). Aristotle already had quite a few expressive metaconcepts, as at least partially exhibited in this blog. I believe Hegel himself recognized this.

Alienation, Modernity

The positively connoted (and actually not anti-naturalist) “alienation” of Spirit from nature noted earlier did turn out to be an exception. Hegel’s more usual, negatively connoted talk about alienation is explained by Brandom as picking out any asymmetry between authority claimed and responsibility acknowledged. On this reading, traditional Sittlichkeit that takes responsibility for too much would be just as alienated as the modernity that takes responsibility for too little.

The model of a positively connoted alienation is still interesting, though, and may possibly shed further light on the vexed question of how modernity is to be picked out and assessed. Perhaps the thought is not only that any move away from the unquestioned governance of tradition is ultimately progressive, even if only through its eventual consequences. Perhaps it is also that a given degree of asymmetry on the modern side is less bad than an equivalent asymmetry on the traditional side, because the modern one starts a dynamic that (normatively, not causally) leads to something better, while the traditional one merely preserves the status quo.

Karl Mannheim in his 1925 essay on the sociology of knowledge adopted a vaguely Hegelian notion of modernity as the progressive self-relativization of thought. (He was at pains to argue that this did not lead to the “relativism” decried by some of his contemporaries.) I was fascinated by this in my youth. Here is a modernity with a Hegelian pedigree that bears no trace of Cartesianism. Mannheim’s version is more practical-epistemological than normative, and merely programmatic rather than really developed, whereas Brandom has a very thorough account of recognition-based normativity in many different circumstances. But it does seem to correlate with the move away from tradition that Brandom talks about. It focuses more on the notion of progress itself, and less on a particular achieved status.