Heidegger

Martin Heidegger (1889–1976) was a tremendously original, highly influential, and troublesome philosopher. What makes his work troublesome is not only conceptual difficulty and a deliberate practice of translating the familiar into the unfamiliar, but also his never clearly repudiated attempt to influence the Nazi movement in Germany. He seems to have been a cultural and linguistic chauvinist who rejected pseudo-biological racism, but nonetheless placed hopes in an “inner truth and greatness” of National Socialism as an alternative to American and Soviet materialism. This identification puts a dark cloud over the interpretation of his writing, which was, however, generally very far removed from politics. The question is how far it is possible to detach his work from a stance that seems worse than one of mere bad judgment.

An influential but controversial reader of Aristotle, Plato, Kant, Hegel, and Nietzsche, Heidegger combined a sympathetic but critical take on Husserl’s phenomenology with an interest in the hermeneutics of Wilhelm Dilthey. Widely read as an “existentialist”, he sharply repudiated Sartre’s appropriation of his work. In his later works, he approached philosophy as a kind of poetic meditation.

His most famous thesis was that Western thought largely lost its way from Plato onward, neglecting the question of the meaning of Being in favor of preoccupation with things. While he made good points about the preconceptions involved in our ordinary encounters with things, I think he too sharply rejected “ontic” engagement with empirical, factual concerns in favor of a purified ontology. He also promoted a valorization of what I would call the pre-philosophical thought of the pre-Socratics Heraclitus and Parmenides. I think Plato and especially Aristotle represented a gigantic leap forward from this.

Some of Heidegger’s very early work was on the medieval theologian Duns Scotus, who seems to have originated the standard notion of ontology later promoted by Wolff and others. In sharp contrast to the tradition stemming from Scotus, Heidegger argued that Being is not the most generic concept, and wanted to emphasize a “Being of beings” in contrast to their factual, empirical presentation. He did not follow the path of Aquinas in identifying pure Being with God, either, and Aquinas probably would have rejected his talk of the Being of beings.

I think his most important contribution was an emphasis on what he called “being-in-the-world” as a way of overcoming the dichotomy of subject and object. His associated critique of Cartesian subjectivity has been highly influential. In later works, he also recommended putting difference before identity, and relations before things. Although the way he expounded these notions was quite original, I prefer to emphasize their roots in Aristotle, Kant, and Hegel. (See also Being, Existence; Being, Consciousness; Beings; Phenomenological Reduction?; Memory, History, Forgetfulness — Conclusion.)

Meaning, Consciousness

I generally translate talk about consciousness into talk about meaning and related commitments. It doesn’t seem to me that anything is lost in the conversion; all the content is still there.

The notion of consciousness as a sort of generalized transparent medium of immediate presence that is somehow also tied to our sense of self and agency may seem intuitive, but it is actually the product of a long cultural development. It seems to belong to what Lacan called the Imaginary. Plato and Aristotle addressed the full range of human experience without any dependency on something like this. (See also Intentionality.)

Contradiction

Contradiction is a kind of logical judgment of error in things said. It applies when things said are either syntactically or semantically incompatible with one another. To be incompatible is to be incapable of “properly” coexisting in a single context or unity of apperception. Aristotle strongly emphasized this normative aspect of the principle of noncontradiction.

In the syntactic case, the concern is with purely formal rules for the well-formedness of expressions. A syntactic contradiction would be something like “A, and also not-A”, where either A and not-A have both been explicitly said, or both are implied by things that have been said. In this case, we need know nothing at all about the meaning of “A”. We are only concerned with generic rules for the application of logical operators like “and” and “not”.
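As a minimal illustration (my own sketch, not drawn from any particular logic text), a purely syntactic test for contradiction never consults the meaning of “A”; it only matches the shapes of expressions:

```python
# Sketch: a purely syntactic contradiction test. Claims are opaque strings;
# negation is marked by a "not-" prefix. We never interpret what "A" means.

def is_syntactic_contradiction(claims):
    """Return True if some claim X and its negation not-X both appear."""
    asserted = set(claims)
    return any(("not-" + c) in asserted
               for c in asserted if not c.startswith("not-"))

print(is_syntactic_contradiction({"A", "B", "not-A"}))  # True
print(is_syntactic_contradiction({"A", "not-B"}))       # False
```

The check works the same whatever “A” happens to stand for, which is exactly the point about generic rules for logical operators.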

In the semantic case, contradiction involves the specific meanings of concrete expressions, applied together to some one meant reality. Unlike the syntactic case, background knowledge is essential to judging whether or not meanings can compatibly coexist. We may also think we know the whole story when we don’t. New facts or understandings may change our generalizations and schemas of classification. (See also Interpretation; Error.)

Nothing follows from the principle of noncontradiction alone. Given some inputs, we can judge whether or not they are contradictory — by rigorous analysis in the syntactic case, and up to some level of practical confidence in the semantic case.

Hegel sometimes used the word “contradiction” in an idiosyncratic, highly metonymical or metaphorical way, straining language to the breaking point as part of a larger effort to draw out the complexities and subtleties involved in applying logic to concrete meanings and the real world, when no vocabulary existed for many of the subtleties involved. (See also Three Logical Moments.)

Some people, mainly Marxists, have talked about real-world conflict and social injustice as “contradictions” objectively existing in the world. Conflict and injustice are very real, but it is a misunderstanding of Hegelian dialectic and an inappropriate mixing of levels to associate them directly with contradiction. (See also Contradiction vs Polarity.)

Especially since the mid-20th century, many authors have pointed out common errors and issues associated with too-easy assumptions about identity. (See also Aristotelian Identity.) The Žižekian school has developed a sophisticated variant of the old talk about objective contradictions, by explaining it largely in terms of the issues with identity. If this were just a new metonymical or metaphorical usage in the style of Hegel, we could simply note that “contradiction” is being said in a nonstandard way, and move on. But unfortunately, the Žižekians have gone further, and also claimed that the logical principle of noncontradiction ultimately fails to hold, even though this logical (or illogical) claim is not necessary to address the social concerns that according to them need to be addressed, or to explain the things that according to them need to be explained. (See Split Subject, Contradiction.) We have to be very careful in moving back and forth between very different levels of analysis like this.

Just as on an interpersonal level we can reduce conflict by omitting those too-easy assumptions about identity, omitting those assumptions with respect to things said — and thus making more distinctions — also greatly reduces the potential for logical contradiction.

It is a category mistake to talk about contradiction driving events. Actual change does not result in contradiction either. Different things are true at different times, and the explanation for that is not “contradiction” but change.

Why is this important? The simple answer is that denial of the principle of noncontradiction allows someone to argue absolutely anything, including nonsensical and false things, and to sophistically respond to any refutation by simply introducing more inconsistency. This rejection of responsibility effectively ends the possibility of dialogue.

There ought to be no conflict between social criticism and the possibility of dialogue. Social criticism should be based on shareable, rational analysis. It may be unreasonable to suppose that all social issues can be resolved through dialogue (see Stubborn Refusal), but I do think all those concerned with doing something about those issues ought to be able to resolve their differences through dialogue.

I think Brandom has made an epic contribution in this area by finding a new way to simultaneously affirm — as Aristotle implicitly anticipated long ago — both the world’s recalcitrance to mastery and identity and its fundamentally rational, intelligible character. (See also Self-Evidence?)

Split Subject, Contradiction

The Žižekians, referencing Lacan, like to talk about a “split subject” that is noncoincident with itself. In broad terms, I think this is useful. What we call subjectivity is divided, and lacking in strong unity. (See also Pure Negativity?; Acts in Brandom and Žižek.) But it seems to me that if we try to speak carefully about this, we should not then go on using singular articles like “the” or “a”.

I tend to think subjectivity is not just fractured or un-whole, but also actually consists of a complex overlay of different things that we tend to blur together. In particular, it seems clear to me that a common-sense, biographical “self” whose relative unity over time is trackable by relation to the “same” physical body — or by Lockean continuity of memory — is not the same as what we might in a given moment view from a distance as an individualized ethos, or up close as a unity of apperception. This is, I believe, the same distinction that Brandom discusses in terms of sentience and sapience.

Ethos and unity of apperception, and their constituent values and conceptions — the very things that most properly say “I”, and play the functional role of an ethical “subject”, or of a subject of knowledge — are profoundly involved with language, social relations, and what Lacan in his earlier work called the Symbolic and the “Other”. These instances of sapience are pure forms whose identity can only be expressed in terms of sameness of form — nonempirical, but inseparable from a larger ethical world — and simultaneously intimate to us, but by no means strictly “ours”. (See also Self, Subject.)

Where I am still a bit torn is that I also feel that emotions — which I’ve been locating on the former, “self” side — are fundamental to subjectivity as a whole, but I have theoretically separated them from the main locus of transcendental ethical and epistemic subjectivity, even though they play an essential role in making it possible. One logical solution would be to say this just means subjectivity as a whole is more than just ethical and epistemic. Another would be to say that there is a separate kind of emotional subjectivity. I’m not entirely satisfied yet, because I think feeling combines these, but the noncoincidence of our factual selves with our ethical and epistemic being seems very important in understanding how we overcome empirical limitations.

The Žižekians will perhaps remind us that they were not talking about a split between self and subject, but about a split within the subject. I think we habitually overstate the degree of unity and identity we attribute to selves, subjects, and things in general, so I’m fine with that, too. They also want to expand this into a general “ontological” point, which I see as a semantic point.

Perhaps the Žižekians are more comfortable talking about “a” or “the” subject in part due to their doctrine of the ubiquity of contradiction. Todd McGowan in Emancipation After Hegel (2019) nicely distinguishes the Žižekian notion from the old confusion between contradiction and conflict or polarity — and from immediate self-contradiction — but still wants to maintain that the standard logical law of noncontradiction ultimately “refutes itself”, and that Hegel thought this as well. This argument combines a laudable awareness of some of the practical issues with identity, with a logically invalid use of the distinction between explicit and implicit self-contradiction.

Hegel meditated profoundly on the difficulties of applying logic to meaningful content and to real life. He strained language to the breaking point trying to express his conclusions.

On the frontiers of mathematical logic today, the so-called law of identity has been replaced by a requirement to specify identity criteria for each formally defined type, and identity in general has been weakened to isomorphism. (See also Form as a Unique Thing.)

Real-world applications of strong identity typically involve loose “extensional” reference to things assumed to be the same, and a lot of forgetting. The linchpin of old “identity thinking” was inattention to difficulties of formalization from ordinary language — basically an illegitimate moving back and forth between formal and informal domains, resulting in lots of homogenizing confusion of things that ought to be distinct. Weaker, “intensional” assertions about identity as specifiable sameness of form make it the exception rather than the rule. What come first conceptually are distinctions within the manifold, not pre-synthesized things already possessed of identity. Where things are not the same to begin with, contradiction — far from being omnipresent — is not even potentially at issue. (See also Self-Evidence?)
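The contrast between default, unexamined identity and identity as an explicitly specified criterion can be sketched in a few lines of Python (my illustration; the example type is hypothetical): each type declares what counts as “the same”, here structural sameness of form rather than sameness of the underlying object.

```python
# Sketch: identity as an explicitly specified criterion per type, rather than
# default object identity. A frozen dataclass defines equality field-by-field,
# i.e., as sameness of form.

from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    left: str
    right: str

a = Pair("form", "matter")
b = Pair("form", "matter")
print(a == b)   # True: the same form
print(a is b)   # False: two distinct objects
```

Without the declared criterion, only the weaker `is` relation of bare object identity would be available; the stronger claim of sameness has to be specified, which makes it the exception rather than the rule.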

Meanwhile, Sellars and Brandom have revived material inference about meant realities in contrast to formal logic, which deals with purely syntactic relations between presumed extensional “things” with presumed identity. Things Kant and Hegel said about Understanding and Reason can be nicely understood in terms of the relation between syntactic inference about symbolic terms standing for formless extensional “things” and substantive, material inference about the actual form of meant realities. Especially in the reading of Hegel, not having the resource of this distinction available now seems positively crippling.

Finally, Aristotle, who originated the law of noncontradiction as a kind of ethical imperative, and stands in the background to all of Hegel’s discussions of logic, was himself rather cautious and tentative about applying identity to real things, and in his logic was also mainly concerned with (composition of) material inferences, which have more to do with the actual form of things.

Hegel never violated Aristotle’s imperative not to say opposite things about the same thing said in the same way. What he did was to constantly point out the gap between reality and traditional semi-formal logic applied to ordinary language — not to encourage us to reject logic, but rather to refine and sublimate it. (See also Aristotelian and Hegelian Dialectic.)

Brandomian Choice

Aristotle had a reasonable, noninflationary concept of real choice. Choice is up to us, but it is far from arbitrary. Unfortunately, later treatments have largely oscillated between extremes of voluntarism and determinism, making choice either arbitrary or only an unreal appearance.

One of Brandom’s great contributions to ethics is a new account of choice that is reasonable and noninflationary like Aristotle’s. Aristotle developed a notion of real but nonarbitrary choice by defining it as the result of an open deliberation subject to normative standards of inquiry. Brandom reaches a complementary conclusion following a different path. The core of it is a combination of two theses. First, there is a view he associates with the Enlightenment that makes values binding on us only when we have implicitly or explicitly endorsed them. This secures the practical reality of choice, without any ontological assumptions. Second, there is Brandom’s own view that the meaning of the values we endorse is not up to us, but depends on articulation in the space of reasons. As with Aristotle’s notion of deliberation, this establishes the nonarbitrary nature of choice. (See also Intentionality; Self, Subject; Fragility of the Good; Freedom Without Sovereignty.)

Mutual Recognition Revisited

Mutual recognition has two distinct senses.

The first is an ethical ideal with roots in Aristotle’s discussion of friendship and love, as generalized by Fichte, and especially Hegel. Brandom and others consider it central to the understanding of what Hegel was really trying to do.

The second is a nonreductive meta-ethical theory of how normativity or the “ought” in general comes to be. Such a theory was broadly suggested by Hegel, and has been recently developed in great detail by Brandom in A Spirit of Trust. It addresses the emergence of normativity, but bootstraps itself from within the domain of a clarified understanding of normativity itself. Other accounts of the emergence of normativity have generally explained it in terms of something else, effectively reducing the “ought” to some kind of facts.

While I don’t see how anyone could reasonably object to the ethical ideal, its meta-ethical elaboration into a “normative all the way down”, self-bootstrapping theory of the constitution of normativity is an extensive, highly original, many-faceted theoretical account building on the first that no one could be expected to fully grasp on merely hearing it mentioned. I think its combination of detail and coherence is an amazing and unprecedented accomplishment, confirming Brandom’s place among the handful of greatest philosophers, but it takes real work to assimilate. (See Hegel’s Ethical Innovation; Brandom on Postmodernity; Mutual Recognition; Pippin on Mutual Recognition; Recognition; Kantian Respect; Trust as a Principle.)

Moved, Unmoved

Aristotle distinguished “moved movers” from “unmoved movers”. The most famous examples of unmoved movers come from his accounts of astronomical phenomena. I’ve previously noted that in a lesser-known text, he also reached the perhaps surprising conclusion that there is an unmoved mover involved in the movements of an animal’s leg joint. This additional case suggests a vast generalization of the concept of an unmoved mover.

In both the biological and the astronomical case, an unmoved mover is associated with the geometrical form of an axis of rotation. Putting to one side considerations of the special perfection of circular motions, I’d like to focus on the characterization of a mathematical description of a motion as an “unmoved mover”. In this same sense, modern mathematical-physical laws arguably qualify as Aristotelian unmoved movers.

On a yet broader level, I would propose that Aristotelian form and ends are kinds of things that can function as unmoved movers, and means of realization can also contextually do so, whereas material conditions function exclusively as moved movers. (Something can be effectively operative in a process without itself being moved or changed, or it may also itself be moved or changed. In functional programming, for instance, it is actually possible to completely define all computational work to be done using static constructs, pushing any use of non-static constructs down to a purely instrumental compiled-execution level.)
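The parenthetical point about functional programming can be illustrated with a small sketch of my own: a pure function plays the role of an unmoved mover, in that it governs a computation without itself being changed by any application of it.

```python
# Sketch of the analogy: a pure function "moves" (governs) a computation
# without itself being moved or changed by any application of it. Mutation,
# if any, is confined to the execution machinery underneath.

from functools import reduce

def step(total, x):
    # Pure: computes a new value, mutates nothing.
    return total + x

result = reduce(step, [1, 2, 3, 4], 0)
print(result)  # 10
```

However many times `step` is applied, it remains exactly what its static definition says it is; only the instrumental execution level involves anything like change.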

In a more “metaphysical” way, Plotinus anticipated such a generalization, e.g., in his essay on “The Impassivity of the Unembodied”. Going in a very different overall direction from Aristotle, he effectively made unmoved-moving into a kind of paradigm for all causality. On a poetic level, perhaps the ultimate guide to thinking in terms of unmoved moving is the work of Lao Tzu.

The Kantian transcendental acts like a generalized unmoved mover, but the historical-linguistic-social character of Hegelian Geist makes it a moved mover on the side of form and ends.

Empathy

Kant preferred to treat respect for others as a kind of duty. He seems to have had severe doubts about empathy or sympathy as a kind of feeling, on the ground that all such feeling involves our empirical inclinations, rather than pure moral concern.

Feeling is a mixed form that involves both emotional and rational elements. Although he did recognize the important ethical role of something like character formation — which would seem to necessarily involve a significant emotional component — Kant’s treatment of emotion often seems closer to the Stoic position that all “passion” must be something bad, than it does to the Aristotelian alternative that we should seek a healthy interweaving of reason and emotion.

I want to take a more optimistic, Aristotelian view of the place of emotion in a life of reason. Kant makes a valid point that inclination in general may lead us to deceive ourselves, but I think he went too far in distrusting anything toward which we feel inclined. We may be inclined to do what could independently be assessed as the right thing, and in such cases I think the inclination ought to be welcomed. (See also Kant’s Groundwork; Aristotle and Kant; Ethos, Hexis; Practical Judgment.)

Judgments

I usually think of judgment as a process of interpretation or a related kind of wisdom, but at least since early modern reformulations of Aristotelian logic, “a” judgment has also traditionally meant a logical proposition, or an assertion of a proposition.

An older, but still post-Aristotelian notion is that what the early moderns called a judgment “A is B” should be understood (on the model of its surface grammar) as a potentially arbitrary predication of B of A. Such a potentially arbitrary predication by itself does not contain enough information for us to assess whether it is good or bad. The predication model was associated with a non-Aristotelian notion of truth as simple correspondence to supposed fact.

L. M. De Rijk, arguably the 20th century’s leading scholar on medieval Latin logic, developed a very detailed textual argument that the understanding of logical “judgments” in such grammatical terms is actually an unhistorical misreading of Aristotle. In the first volume of his Aristotle: Semantics and Ontology, De Rijk concluded that Aristotle’s own logical or semantic use of “is” or “is not” should be understood not in the traditionally accepted way as a “copula” or binary operator of predication, but rather as a unary operator of assertion on a compound expression — i.e., on the pair (A, B), as opposed to its two elements A and B.

I also want to emphasize that Aristotle himself did not admit simple, potentially arbitrary predications as “judgments”. The special form of Aristotelian propositions makes them express not arbitrary atomic claims as is the case with propositions in the standard modern sense, but two specific ways of compounding subclaims. Aristotle’s two truth-value-forming operations of combination and separation (expressed by “is” and “is not”) limit the scope of what qualifies as a proper Aristotelian “judgment” to cases that are effectively equivalent to what Brandom would call judgments of material consequence or material incompatibility (see Aristotelian Propositions). What the moderns would call Aristotelian “judgments” thus end up more specifically reflecting judgments of what Brandom would call goodness of material inference.
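De Rijk’s unary-operator reading can be caricatured in a few lines (my illustration; the tuple encoding is an arbitrary choice): assertion and denial apply to the pair (A, B) as a whole, rather than “is” linking A and B as a binary copula.

```python
# Sketch (my own encoding): on De Rijk's reading, "is" and "is not" are unary
# operators of assertion applied to the pair (A, B) as a whole, not a binary
# copula linking A to B.

def combines(pair):      # "A is B": truth-value-forming combination
    return ("IS", pair)

def separates(pair):     # "A is not B": truth-value-forming separation
    return ("IS-NOT", pair)

claim = combines(("Socrates", "mortal"))
print(claim)  # ('IS', ('Socrates', 'mortal'))
```

The operator takes the already-compounded pair as its single argument, which is what distinguishes this reading from the predication model.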

Proper Aristotelian “judgments” thus turn out to express not just arbitrary predications constructed without regard to meaning, but particular kinds of compound claims that can in principle be rationally evaluated for material well-formedness as compound thoughts, based on the actual content of the claims being compounded. (Non-compound claims are just claims, and do not have enough content to be subject to such intrinsic rational evaluation, but as soon as there is some compounding, internal criteria for well-formedness come into play.)

So, fortuitously, modern use of the term “judgment” for these ends up having more substance than it would for arbitrary predications. For Aristotle, truth and falsity only apply to what are actually compound thoughts, because truth and falsity express assessments of material well-formedness, and only compound thoughts can be assessed for such well-formedness. The case for the fundamental role of concerns of normativity rather than simple surface-level predication in Aristotelian truth-valued propositions is further supported by the ways Aristotle uses “said of” relations.

Independent of this sort of better reading of Aristotle, Brandom in the first of his 2007 Woodbridge lectures points out that Kant also strongly rejected the traditional analysis of judgment in terms of predication. Brandom goes on to argue that for Kant, “what makes an act or episode a judging in the first place is just its being subject to the normative demand that it be integrated” [emphasis in original] into a unity of apperception. This holistic, integrative view of Kantian judgment seems to me to be strongly supported by Kant’s discussion of unities of apperception in the second edition of the Critique of Pure Reason, as well as by the broad thrust of the Critique of Judgment.

Thus, a Kantian judgment also has more substance than the standard logical notion, but while an Aristotelian “judgment” gets its substantive, rational character from intra-propositional structure, a Kantian judgment gets it from inter-propositional structure.

Logic as Semantics

I think of logic in general as mainly concerned with the perspicuous rendering of distinctions for use in reasoning, rather than with the arbitration of truth based on some other presumed truth as a starting point.

An emphasis on this expressive or semantic role was, I think, what led Aristotle to insist that what modern people call logic should be viewed as a tool (organon) and not a “science”.

The great scholar of Latin medieval logic L. M. De Rijk, in his major study Aristotle: Semantics and Ontology (2002), recommended replacing references to Aristotle’s own “logic” with references to semantics, or investigation of meaning.

Hegel contended that traditional metaphysics should be replaced by a kind of “logic” that addresses meaningful content.

Brandom has given us an unprecedentedly thorough and clear account of the conditions that make meaningful content possible in the first place.

On the formal side, type theory and category theory provide a new, unified view of logic, mathematics, and formal languages that fits very well with this “meaning before truth” perspective.