Weak Nature Alone

Adrian Johnston’s latest, A Weak Nature Alone (volume 2 of Prolegomena to Any Future Materialism) aims among other things at forging an alliance with John McDowell’s empiricist Hegelianism, and gives positive mention to McDowell’s use of the Aristotelian concept of second nature. Johnston is the leading American exponent of Slavoj Žižek’s Lacanian Hegelian provocations, and a neuroscience enthusiast. He wants to promote a weak naturalism that would nonetheless be directly grounded in empirical neuroscience. He claims neuroscience already by itself directly undoes “bald” naturalist philosophy from within natural-scientific practice. That sounds like a logical confusion between very different discursive domains, but I am quite interested in a second-nature reading of Hegel.

Broadly speaking, the idea of a weak naturalism sounds good to me. I distinguish between what I think of as relaxed naturalisms and realisms of an Aristotelian sort that explicitly make a place for second nature and assume no Givenness, and what I might privately call “obsessive-compulsive” naturalisms and realisms that build in overly strong claims of univocal causality and epistemological foundations.

Johnston likes McDowell’s rejection of the coherentism of Donald Davidson. McDowell’s basic idea is that coherence can only be a subjective “frictionless spinning in a void”, and that it thus rules out a realism he wants to hold onto. When I read Mind and World, I enjoyed McDowell’s use of Hegel and Aristotle, but thought the argument against Davidson the weakest part of the book. If you circularly assume that coherentism must be incompatible with realism, as McDowell tacitly does, then his conclusion follows; otherwise, it doesn’t.

Nothing actually justifies the characterization of coherence as frictionless spinning. This would apply to something like Kantian thought, if it were deprived of all intuition, which for Kant is never the case. Kant sharply distinguishes intuition from thought or any other epistemic function, but nonetheless insists that real experience is always a hylomorphic intertwining of thought and intuition. Brandom brilliantly explains Kantian intuition’s fundamental role in the progressive recognition of and recovery from error, which — along with the recursively unfolding reciprocity of mutual recognition — is essential to the constitution of objectivity.

I want to tendentiously say that as far back as Plato’s account of Socrates’ talk about his daimon, intuition among good philosophers has played a merely negative and hence nonepistemic role. (By “merely” negative, I mean it involves negation in the indeterminate or “infinite” sense, which in contrast to Hegelian inferential determinate negation could never be sufficient to ground knowledge.) On the other hand, that merely negative role of intuition has extreme practical importance.

The progressive improvement of (the coherence of) a unity of apperception that is essential to the distinction of reality from appearance is largely driven by noncognitive mere intuition of error. Intuitions of error or incongruity explicitly bring something like McDowell’s “friction” into the mix.

Charles Peirce reputedly referred to the hand of the sheriff on one’s shoulder as a sign of reality. Like an intuition of error, this is not any kind of positive knowledge, just an occasion for an awareness of limitation. It is just the world pushing back at us.

According to Johnston, McDowell stresses “the non-coherentist, non-inferentialist realism entailed by the objective side of Hegel’s absolute idealism” (p.274). Johnston wants to put results of empirical neuroscience here, as some kind of actual knowledge. But there could be no knowledge apart from some larger coherence, and we are clearly talking past one another. Neuroscience is indeed rich with philosophical implications, but only a practice of philosophy can develop these. (See also Radical Empiricism?)

Johnston wants to revive the Hegelian philosophy of nature. Very broadly speaking, I read the latter as a sort of Aristotelian semantic approach to nature that was also actually well-informed by early 19th century science. I could agree with Johnston that the philosophy of nature should probably get more attention, but still find it among the least appealing of Hegelian texts, and of less continuing relevance than, say, Aristotle’s Physics.

Johnston also likes Friedrich Engels’ Dialectics of Nature. In this case, I actually get more takeaway from Engels than from Hegel. Engels was not a real philosopher, but he was well-read and thoughtful, and a brilliant essayist and popularizer. His lively and tentative sketches were ossified into dogma by others. He did tend to objectify dialectic as happening in the world rather than in language, where I think Plato, Aristotle, and Hegel all located it.

But “dialectic” for Engels mainly entails just a primacy of process; a primacy of relations over things; and a recognition that apparent polar opposites are contextual, fluid, and reciprocal. However distant from the more precise use of dialectic in Aristotle and Hegel, these extremely general principles seem unobjectionable. (The old Maoist “One divides into Two” line, explicitly defended by Badiou and implicitly supported by Žižek and Johnston, not only completely reverses Engels on the last point, but also reverses Hegel’s strong programmatic concern to replace “infinite” negation with determinate negation.)

Engels did infelicitously speak of dialectical “laws” governing events, but his actual examples were harmless qualitative descriptions of very general phenomena. Much of 19th century science outside of physics and chemistry was similarly loose in its application of exact-sounding terms. In Anti-Dühring, however, Engels argued explicitly that Marx never intended to derive any event from a dialectical “law”, but only to apply such “laws” in retrospective interpretation. The “dialectics of nature” is another exercise in Aristotelian semantics. (See also Aristotelian Matter; Efficient Cause.)

It sounds like Johnston wants ontologized dialectical laws of nature, and will want to say they are confirmed by neuroscience results. Johnston also highlights incompatibilities between Brandom and McDowell that are somewhat hidden by their mutual politeness. This in itself is clarifying. I now realize McDowell is further away than I thought, in spite of his nice Aristotelian references. (See also Johnston’s Pippin.)

Coherence

Aiming at coherence is a moral necessity. Serious people are serious about avoiding material inconsistency, as Aristotle noted in the Metaphysics, and Brandom has more recently thematized. (Unity of apperception is a moral imperative, not a fact, and certainly not something that could be simply possessed.)

Reality or objectivity is measured by the counterfactual robustness of our generalizations; our ability to recognize incongruities; and our commitment to resolving them. This is one way of formulating what is sometimes referred to as a coherence theory of truth, or “coherentism”. Reality is not something you could point at, but a normative criterion, admitting of degree. (See also Objectivity of Objects; Foundations?)

The thing that complements coherence is not correspondence, but rather non-correspondence. Putative correspondence provides no additional assurance of veracity, but non-correspondence tells us something is wrong with our conceptions, which is valuable information. From an intuition of incongruity arises a task to improve our understanding. (See also Error; Obstacles to Synthesis.)

Justification

Epistemological foundationalism always sounded like a patent logical absurdity to me, an attempt to escape the natural indefinite regress of serious inquiry by sheer cheating — a bald pretense that some content could be self-certifying and therefore exempt from the need for justification. I have a hard time being polite about this; such a claim feels to me like a deep moral wrong.

The kind of justification we should care about is not some guarantee of absolute epistemic certainty, but a simple explanation why a claim is reasonable, accompanied by a willingness to engage in dialogue. All claims without exception should be subject to that. As Sellars said, “in characterizing an episode or state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.” (Empiricism and the Philosophy of Mind, p.76.) Aristotle would agree. (See also Verificationism?; Empiricism; Free Will and Determinism.)

Verificationism?

If something is true, it ought to make a difference, and in some sense that difference ought to be verifiable. Generically, this is the space inhabited by logical positivism, but the logical positivists had rather specific, foundationalist notions of verifiability that I would not wish to follow. (Moritz Schlick, the founder of the Vienna Circle, spoke passionately of verification against “the Given”, which was supposed to be a bedrock of pure, uninterpreted empirical fact that would anchor the whole enterprise of science. He also literally talked about foundational “pointing”. But he had a good critique of epistemic claims for intuition and images; emphasized conceptual development, form, and structure; made interesting use of relations; and reportedly spoke of laws of nature as inference rules.)

Logic by itself will not reform the world. However, the analysis of illogic is generally salutary.

The kind of verification that seems most applicable to the sorts of meta-ethical theses I am mainly interested in would be pragmatic. I imagine general pragmatic verifiability as just extensive openness to rational examination, with a responsibility for due diligence. Obviously, this is a loose criterion, but as Aristotle would remind us, we should not seek more precision than is appropriate to the subject matter.

In principle, material-inferential things can be verified “as far as you like” by a sort of recursive expansion of material consequences and material incompatibilities.
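As a loose illustration only (the toy relations and function names below are my own invention, not Brandom’s formalism), the recursive expansion of material consequences and the check against material incompatibilities can be sketched as a closure computation over a consequence relation:

```python
# Toy sketch: claims are strings, material consequence is a hypothetical
# hand-coded relation, and "verification" expands consequences recursively,
# then checks the result against recorded material incompatibilities.

CONSEQUENCES = {
    "this is scarlet": {"this is red"},
    "this is red": {"this is colored"},
    "this is colored": {"this is extended"},
}

INCOMPATIBLES = {
    ("this is red", "this is green"),
}

def expand(claims):
    """Recursively add everything the given claims materially entail."""
    result = set(claims)
    frontier = set(claims)
    while frontier:
        new = set()
        for claim in frontier:
            new |= CONSEQUENCES.get(claim, set()) - result
        result |= new
        frontier = new
    return result

def coherent(claims):
    """A set of claims holds up 'as far as we have looked' when its
    expansion contains no recorded material incompatibility."""
    expanded = expand(claims)
    return not any(a in expanded and b in expanded
                   for (a, b) in INCOMPATIBLES)
```

Here `expand({"this is scarlet"})` pulls in “this is red”, “this is colored”, and “this is extended”, while adding “this is green” to the set makes `coherent` fail, since the expansion now contains an incompatible pair. The “as far as you like” character shows up in the fact that the check is only ever as good as the consequence and incompatibility relations explored so far.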

Purely formal-inferential things can be rigorously verified by mathematical construction or something resembling it. In constructive logic, proof comes before truth, so verifiability is built in.
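The “proof before truth” point can be made concrete in a proof assistant. In Lean, for instance, asserting a proposition just is constructing a proof term for it (the examples below are my own, chosen only for simplicity):

```lean
-- To assert A ∧ B → B constructively is to build a term of that type.
example (A B : Prop) (h : A ∧ B) : B := h.right

-- An existence claim is verified by exhibiting an explicit witness.
example : ∃ n : Nat, n + n = 4 := ⟨2, rfl⟩
```

In such a setting there is no gap between a claim being true and its verification being in hand: the proof object is the verification.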

Empiricism

Already in the 1950s, analytic philosophers began to seriously question empiricism. Quine’s “Two Dogmas of Empiricism” (1951), Wittgenstein’s Philosophical Investigations (1953), and Sellars’ “Empiricism and the Philosophy of Mind” (1956) all contributed to this.

Brandom explicates Sellars’ pivotal critique of the empiricist “Myth of the Given” as belief in a kind of awareness that counts as a kind of knowledge but does not involve any concepts. (If knowledge is distinguished by the ability to explain, as Aristotle suggested, then any claim to knowledge without concepts is incoherent out of the starting gate.) Building on Sellars’ work, Brandom’s Making It Explicit (1994) finally offered a full-fledged inferentialist alternative. He has rounded this out with a magisterial new reading of Hegel.

The terms “empiricism” and “rationalism” originally referred to schools of Greek medicine, not philosophy. The original empirical school denied the relevance of theory altogether, arguing that medical practice should be based exclusively on observation and experience.

Locke famously began his Essay Concerning Human Understanding (1689) with an argument that there are no innate ideas. I take him to have successfully established this. Unfortunately, he goes on to argue that what are in effect already contentful “ideas” become immediately present to us in sensible intuition. This founding move of British empiricism seems naive compared to what I take Aristotle to have meant. At any rate, I take it to have been decisively refuted by Kant in the Critique of Pure Reason (1781; 2nd ed. 1787). Experience in Kant is highly mediated. “Intuitions without concepts are blind.” (See also Ricoeur on Locke on Personal Identity; Psyche, Subjectivity.)

In the early 20th century, however, there was a great flourishing of phenomenalism, or the view that all knowledge is strictly reducible to sensation understood as immediate awareness. Kant himself was often read as an inconsistent phenomenalist who should be corrected in the direction of consistent phenomenalism. Logical empiricism was a diverse movement with many interesting developments, but sense data theories were widely accepted. Broadly speaking, sense data were supposed to be mind-dependent things of which we are directly aware in perception, and that have the properties they appear to have in perception. They were a recognizable descendant of Cartesian incorrigible appearances and Lockean sensible intuition. (Brandom points out that sense data theory is only one of many varieties of the Myth of the Given; it seems to me that Husserlian phenomenology and its derivatives form another family of examples.)

Quine, Wittgenstein, and Sellars each pointed out serious issues with this sort of empiricism or phenomenalism. Brandom’s colleague John McDowell in Mind and World (1994) defended a very different sort of empiricism that seems to be a kind of conceptually articulated realism. In fact, there is nothing about the practice of empirical science that demands a thin, phenomenalist theory of knowledge. A thicker, more genuinely Kantian notion of experience as always-already conceptual and thus inseparable from thought actually works better anyway.

Thought and intuition are as hylomorphically inseparable in real instances of Kantian experience as form and matter are in Aristotle. A positive role for Kantian intuition as providing neither knowledge nor understanding, but crucial instances for the recognition of error leading to the improvement of understanding, is preserved in Brandom’s A Spirit of Trust. (See also Radical Empiricism?; Primacy of Perception?; Aristotle, Empiricist?)

One, Many

The unity associated with logical coherence and the flexibility and richness associated with the right measure of pluralism both seem to be worthy goals. As usual, we aim for a kind of structural mean, or the best of both worlds.

The two are not fundamentally opposed. Something like unity of apperception involves no suppression of appropriate distinctions. Similarly, the pluralism we want involves no suppression of practically achievable stability or coherence. So in principle, reconciliation ought to be possible.

They even ought to be combinable like product and sum types in type theory, which are like structures nested inside an n-ary logical AND or OR operation. A single consistent view is representable as a product type. Pluralism at a given logical level of interest is representable as a sum type.
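The product/sum analogy can be made literal in code. In the sketch below (the particular classes and names are my own illustrative choices, not anything from the text), a single consistent view commits to all of its components at once, like a logical AND, while pluralism at a given level holds one of several alternatives, like a logical OR:

```python
from dataclasses import dataclass
from typing import Union

# Product type: one view that commits to BOTH components at once
# (a structure nested inside an n-ary logical AND).
@dataclass
class UnifiedView:
    physical_description: str
    normative_description: str

# Sum type: at this logical level we entertain ONE of several
# alternative framings (a structure nested inside an n-ary logical OR).
@dataclass
class MechanisticFraming:
    law: str

@dataclass
class TeleologicalFraming:
    end: str

Framing = Union[MechanisticFraming, TeleologicalFraming]

def describe(f: Framing) -> str:
    # Case analysis is how a sum type is consumed: each alternative
    # is handled on its own terms.
    if isinstance(f, MechanisticFraming):
        return f"explained by law: {f.law}"
    return f"explained by end: {f.end}"
```

Nesting the two then mirrors the combination suggested above: a product of commitments can sit inside a sum of alternatives, and vice versa, to whatever depth the subject matter warrants.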

Following Plato’s metaphor in the Phaedrus, we want to cut at the joints, as it were — to recognize unity where there should be unity, and difference where there should be difference. Of course, those “joints” are not just simply given to us; we have to find them.

Pluralism

One of the underappreciated aspects of Aristotle’s thought is his pluralism. A thing will typically have multiple causes. Important words are “said in many ways”. We should be careful not to make claims that are too strong.

There has been a tendency to read Aristotle as a systematizer — which he is, but only up to a point — that has interfered with recognition of the principled and not just incidental nature of Aristotelian pluralism. Aristotle’s pluralism is part of a deep and admirable commitment to what in a modern context would be called antireductionism. This is just part of his extraordinary, methodologically sophisticated intellectual honesty, which is stronger than his desire to systematize.

Historically, the most influential school to succeed Aristotle was the Stoics, who did aim at extremely strong systematicity, and claimed to have achieved it. Philosophy after that, including what was called Aristotelian philosophy, largely proceeded on the Stoic model. Strong systematic claims became de rigueur. (See also The Epistemic Modesty of Plato and Aristotle; Univocity; Mean; Aristotelian Dialectic; Free Will and Determinism.)

Modern Science

My main concern here is with a sort of meta-ethical discourse, and my critical remarks on topics like modern univocal causality should be taken in that context. Though I have deep appreciation for the cultural accomplishments of antiquity and even the middle ages, I am not any kind of Luddite. I am interested in science; admire higher mathematics; work with high technology; and use univocal causality in an instrumental way on a daily basis.

Efficient Cause

Each of Aristotle’s four “causes” or kinds of reasons why a thing is the way it is picks out a distinct kind of conceptual content. Actually, none of them — including the efficient cause — should be thought of in terms of anything like a mechanical impulse or the exertion of a force. Nor is an efficient cause primarily a thing that exerts a force. Rather, an Aristotelian “efficient” cause exercises what in modern terms most closely resembles a sort of structural causality, associated with the form and materiality of the means by which a thing is realized as the sort of thing it is. It acts in an instrumental way that is more “logical” than physical.

In an example of the production of a statue, the efficient cause is not the sculptor, or the sculptor’s will, or the blows of the sculptor’s hammer and chisel. It is the art (objectively characterizable technique) by which the statue is produced. Many people have certainly made contrary assertions about this, but there is, e.g., a good discussion in the Stanford Encyclopedia of Philosophy that supports the above interpretation. (In this simple example, the end is the finished work of the statue. The form and matter are the form and matter of the finished work.)

While the efficient cause is perhaps a little closer to a “cause” in the usual modern sense, it is still far from the same, even though it has a much closer connection to the less common notion of structural causality. Aristotle himself put either form or end first, but influential late scholastics such as Suarez elevated the efficient cause above the other three, perhaps on the ground that God was considered to be pre-eminently an efficient cause (whereas for Aristotle, the “First” cause is primarily an end). I seem to recall some reference to late scholastics treating creation ex nihilo as an example of efficient causality. In any event, Suarez is regarded as treating all four Aristotelian causes on the model of the efficient cause. This helped pave the way for early modern mechanism’s reduction of all causality to a single, univocal form.

Aristotle’s semantically oriented science aims not so much at prediction of what we would call physical events as at a retrospective understanding of why things have turned out the way they have, in a humanly relevant, pragmatic way. Aristotelian “causes” are pluralistic and nonunivocal. They are just reasons why something came out the way it did.

Ends

The nature of ends is addressed in book 1 of Aristotle’s Nicomachean Ethics. “Every art and every inquiry, and likewise every action and choice, seems to aim at some good, and hence it has been beautifully said that the good is that at which all things aim.” (Sachs translation, p.1.) The Kantian primacy of practical reason and the primacy of normativity in Brandom express a similar insight.

“Good”, however, is meant in as many ways as “being” is, so there is no common good that is one and universal, no good-in-itself.

In the course of this discussion, Aristotle repeatedly emphasizes that one should not seek more precision in a given subject than is appropriate to it. One also should not try to derive conclusions that are more exact than what they are derived from. In areas like ethics and politics especially, one should be content to point out the truth roughly and in outline, and to say what is true for the most part.

Aristotle would object to the notion of “value free” science. Even his physics is a pragmatic, broadly semantic inquiry. His notion of cause (aitia) is much broader and more pluralistic than the modern one. An end is a kind of cause in Aristotle’s sense, but not in the modern sense. Aristotelian ends are orthogonal to modern causality. Not until Kant and Hegel did the modern world begin to recover a similar sophistication. (See also Univocity; Free Will and Determinism.)

There is nothing subjective about an Aristotelian end. Aristotelian teleology does not involve any mental intentions of spiritual beings (see God and the Soul). An end is just what Brandom would call the conceptual content of something sought or achieved. It is a pure form. (An aim on the other hand is an end that is taken up subjectively.)

An end may be sought on its own account, or for the sake of something else. The realization of an end may involve the realization of subordinate ends, which may involve the realization of further subordinate ends, and so on. Ends for the sake of which other ends are realized and ends sought on their own account are considered to be of greater value. An end may be a way of being at work, or a work produced. An end sought on its own account is typically a way of being at work. Aristotle suggests that the most comprehensive and therefore most valuable end for humans is politics as an activity, which is concerned with the good of all. In general, a good or end is better the more complete and self-sufficient it is.

In accordance with the emphasis on completeness, the end of an individual is to live a whole life that is good, which can only be judged retrospectively. The work of a human being is “a being-at-work of the soul in accordance with reason, or not without reason… and actions that go along with reason… [done] well and beautifully” (p.11). (See also Reasonableness; Reasons; Commitment; Happiness.)

People are good at making distinctions about the things they are acquainted with. “This is why one who is going to listen adequately to things that are beautiful and just, and generally about things that pertain to political matters, needs to have been beautifully brought up by means of habits.” (p.4.)

I read Aristotle as suggesting that immanent ends of natural beings are ultimately the most influential of the “causes” or reasons why things turn out as they do. Yet they are a kind of soft “cause” that only attracts. All of Aristotle’s causes are soft in one way or another. Each of the four interacts with the others in quasi-reciprocal fashion, and none of them results in the sort of hard determination classically attributed to early modern mechanical impulse. (See also Efficient Cause; Form; Aristotelian Matter.)

Nothing in this is incompatible with also incorporating modern mathematics into the account, but Aristotle’s main concern is with a pragmatic semantics of experience.

It is relatively easy for us to imagine how nonrational, sentient beings that still have desire are moved by internal ends. Nonsentient things do not literally have desire, and we have been taught not to think as if they did. It is only a metaphor to say, e.g., that heavy objects “want” to fall, but there is no inherent category mistake or personification in talking about an apparent material tendency as exhibiting a kind of apparent end, below the level of sentient desire.

In quasi-Brandomian terms, Aristotelian ends are an expressive metaconcept useful in the interpretation of experience, not a hypothesis about something beyond experience. (See also Natural Ends; Kant’s Recovery of Ends.)

The same could be said of Aristotle’s view that the “first” principle of all things is actually related to all things as an ultimate end that attracts them.