Sociology of Knowledge?

In my youth, I was very interested in Karl Mannheim’s attempt to develop a sociology of knowledge. Mannheim belongs to the tradition of classical German sociology, which was always much more philosophical than its American counterpart. As a young man in Hungary, he was close to Georg Lukács. Later, he taught at Frankfurt and interacted with members of the early Frankfurt school.

In his doctoral dissertation, Mannheim had argued that epistemology cannot be self-grounding, and suggested that what he at the time called “ontology” should come first. In “The Problem of a Sociology of Knowledge” (1925), he argued that the principal characteristic of modernity was a progressive “self-relativization” of knowledge, and attempted to generalize Marx’s concept of ideology into a theory of something like culture.

His most famous work, Ideology and Utopia (1929), was concerned with the fragility of democracy. His naive hopes that a “free-floating intelligentsia” would lead the way to social peace were severely criticized by Max Horkheimer and Theodor Adorno. While rejecting economic determinism, Mannheim saw general social-scientific value in the Marxist thesis that “being determines consciousness”. Like the Marxists, Mannheim in speaking of “being” had in mind mainly concrete social-historical circumstance. He spoke of thought as inseparable from such being, and sought to distinguish his own “dynamic relationism” from relativism. Later, as a refugee from the Nazis, he among other things proposed a broader “sociology of mind”, with some reference to Hegel.

(Mannheim did not much rely on the term “consciousness”, mentioned above. For a long time now, I have shied away from programmatic use of that term. It does vaguely refer to something, but that something can be more clearly discussed in other ways. Phenomenologists, existentialists, and Marxists tend to indiscriminately broaden the term “consciousness” to include all phases of the Hegelian phenomenology, but in Hegel, Consciousness refers in particular to the most primitive and inadequate phase, which posits a naive, unproblematic distinction between mind and world. In Brandomian terms, such indiscriminate references to “consciousness” imply a reduction of sapience to mere sentience. In common parlance, “consciousness” suggests a naive notion of a transparent mental substance or medium, or a container of mental objects. I’ve many times registered my objection to programmatic “being” talk, as well. See also Being, Existence.)

In spite of preferring to avoid reliance on terms like “being” and “consciousness”, I do still see an important real asymmetry that is loosely picked out by a phrase like “being determines consciousness”. Reality and thought are asymmetrically mutually determining (see Subject, Object). The real (never simply possessed by us, but rather that which pushes back) always has an edge over thought, and at any given moment exceeds it, provoking further development. That (in conjunction with mutual recognition) is how a non-naive realism can be recovered, and relativism avoided.

Abstraction

Abstraction in Aristotle is sometimes made out to be mysterious. I think it is just straightforward subtraction of features of a thing that have been previously recognized as “accidental” for the pertinent context of evaluation. Abstraction is neither a way of magically laying bare the true inner essence of a thing, as envisioned by some medieval realists, nor the mental creation of a universal ex nihilo, as envisioned by some nominalists. It also does not have any necessary dependency on induction.

What counts as accidental may vary with the context of evaluation. While distinctions of essence and accident are fairly stable within a given context, they are ultimately relative and contextual. The pertinent context includes not only contingent facts about what is being evaluated, but also the purpose of the evaluation.
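For what it is worth, the “subtraction” picture can be modeled quite literally. Here is a minimal sketch in Haskell (entirely my own toy illustration, not anything in Aristotle), treating a thing as a bundle of named features and a context of evaluation as a marking of certain features as accidental:

```haskell
import qualified Data.Map as Map

type Feature = String

-- A thing, toy-modeled as a bundle of named features.
type Thing = Map.Map Feature String

-- A context of evaluation marks certain features as accidental.
type Context = Feature -> Bool  -- True means accidental in this context

-- Abstraction as subtraction: drop the features already recognized
-- as accidental for the pertinent context of evaluation.
abstractFrom :: Context -> Thing -> Thing
abstractFrom accidental = Map.filterWithKey (\f _ -> not (accidental f))

-- Example: in a geometric context, material and color are accidental.
geometric :: Context
geometric f = f `elem` ["material", "color"]

bronzeSphere :: Thing
bronzeSphere = Map.fromList
  [("shape", "sphere"), ("material", "bronze"), ("color", "brown")]

-- abstractFrom geometric bronzeSphere
--   == Map.fromList [("shape", "sphere")]
```

The point of the toy is only that nothing mysterious happens: given a prior, contextual recognition of what counts as accidental, abstraction is a simple dropping of those features.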

In other, non-Aristotelian contexts, Badiou has recently made it somewhat fashionable to speak literally about “subtraction” instead of “abstraction”. Though I have many issues with his thought, this is actually a useful clarification.

Ontology

Ontology as a supposed science of being acquired its basic shape in the middle ages, as a sort of reification of Aristotelian semantics. Duns Scotus was very proud of his ontological “improvement” of Aristotle. Aristotle himself preferred to shift clumsy, sterile discussions of sheer being onto more subtle and fruitful registers of form and meaning at the earliest opportunity.

Kant pointed out that existence is not a property, and Hegel pointed out the equivalence of Being to Nothing. When Hegel talks about “logic” as the form of future metaphysics, this means a return to the original meaning of “metaphysics” as Aristotelian dialectical semantics, not an ontologization of dialectic. Broadly Aristotelian dialectical semantics give us all the “ontology” we will ever need.

For the historical back story of how Scotus invented ontology as we know it today, if you read French, see Olivier Boulnois, Être et représentation: Une généalogie de la métaphysique moderne à l’époque de Duns Scot (XIIIe–XIVe siècle) (Being and Representation: A Genealogy of Modern Metaphysics in the Era of Duns Scotus, 13th–14th Centuries). As suggested by the title, this work also has extremely important things to say about the premodern history of strongly representationalist views. The famous univocal “being” invented by Scotus was defined in terms of representability. (See also Being, Existence; Aristotelian Dialectic; Objectivity of Objects; Form; Repraesentatio.)

Weak Nature Alone

Adrian Johnston’s latest, A Weak Nature Alone (volume 2 of Prolegomena to Any Future Materialism) aims among other things at forging an alliance with John McDowell’s empiricist Hegelianism, and gives positive mention to McDowell’s use of the Aristotelian concept of second nature. Johnston is the leading American exponent of Slavoj Žižek’s Lacanian Hegelian provocations, and a neuroscience enthusiast. He wants to promote a weak naturalism that would nonetheless be directly grounded in empirical neuroscience. He claims neuroscience already by itself directly undoes “bald” naturalist philosophy from within natural-scientific practice. That sounds like a logical confusion between very different discursive domains, but I am quite interested in a second-nature reading of Hegel.

Broadly speaking, the idea of a weak naturalism sounds good to me. I distinguish between what I think of as relaxed naturalisms and realisms of an Aristotelian sort that explicitly make a place for second nature and assume no Givenness, and what I might privately call “obsessive-compulsive” naturalisms and realisms that build in overly strong claims of univocal causality and epistemological foundations.

Johnston likes McDowell’s rejection of the coherentism of Donald Davidson. McDowell’s basic idea is that coherence can only be a subjective “frictionless spinning in a void”, and that it thus rules out a realism he wants to hold onto. When I read Mind and World, I enjoyed McDowell’s use of Hegel and Aristotle, but thought the argument against Davidson the weakest part of the book. If you circularly assume that coherentism must be incompatible with realism, as McDowell tacitly does, then his conclusion follows; otherwise, it doesn’t.

Nothing actually justifies the characterization of coherence as frictionless spinning. This would apply to something like Kantian thought, if it were deprived of all intuition, which for Kant is never the case. Kant sharply distinguishes intuition from thought or any other epistemic function, but nonetheless insists that real experience is always a hylomorphic intertwining of thought and intuition. Brandom brilliantly explains Kantian intuition’s fundamental role in the progressive recognition of and recovery from error, which — along with the recursively unfolding reciprocity of mutual recognition — is essential to the constitution of objectivity.

I want to tendentiously say that as far back as Plato’s account of Socrates’ talk about his daimon, intuition among good philosophers has played a merely negative and hence nonepistemic role. (By “merely” negative, I mean it involves negation in the indeterminate or “infinite” sense, which in contrast to Hegelian inferential determinate negation could never be sufficient to ground knowledge.) On the other hand, that merely negative role of intuition has extreme practical importance.

The progressive improvement of (the coherence of) a unity of apperception that is essential to the distinction of reality from appearance is largely driven by noncognitive mere intuition of error. Intuitions of error or incongruity explicitly bring something like McDowell’s “friction” into the mix.

Charles Peirce reputedly referred to the hand of the sheriff on one’s shoulder as a sign of reality. Like an intuition of error, this is not any kind of positive knowledge, just an occasion for an awareness of limitation. It is just the world pushing back at us.

According to Johnston, McDowell stresses “the non-coherentist, non-inferentialist realism entailed by the objective side of Hegel’s absolute idealism” (p.274). Johnston wants to put results of empirical neuroscience here, as some kind of actual knowledge. But there could be no knowledge apart from some larger coherence, so here Johnston and I are clearly talking past one another. Neuroscience is indeed rich with philosophical implications, but only a practice of philosophy can develop these. (See also Radical Empiricism?)

Johnston wants to revive the Hegelian philosophy of nature. Very broadly speaking, I read the latter as a sort of Aristotelian semantic approach to nature that was also actually well-informed by early 19th century science. I could agree with Johnston that the philosophy of nature should probably get more attention, but still find it among the least appealing of Hegelian texts, and of less continuing relevance than, say, Aristotle’s Physics.

Johnston also likes Friedrich Engels’ Dialectics of Nature. In this case, I actually get more takeaway from Engels than from Hegel. Engels was not a real philosopher, but he was well-read and thoughtful, and a brilliant essayist and popularizer. His lively and tentative sketches were ossified into dogma by others. He did tend to objectify dialectic as happening in the world rather than in language, where I think Plato, Aristotle, and Hegel all located it.

But “dialectic” for Engels mainly entails just a primacy of process; a primacy of relations over things; and a recognition that apparent polar opposites are contextual, fluid, and reciprocal. However distant from the more precise use of dialectic in Aristotle and Hegel, these extremely general principles seem unobjectionable. (The old Maoist “One divides into Two” line, explicitly defended by Badiou and implicitly supported by Žižek and Johnston, not only completely reverses Engels on the last point, but also reverses Hegel’s strong programmatic concern to replace “infinite” negation with determinate negation.)

Engels did infelicitously speak of dialectical “laws” governing events, but his actual examples were harmless qualitative descriptions of very general phenomena. Much of 19th century science outside of physics and chemistry was similarly loose in its application of exact-sounding terms. In Anti-Dühring, however, Engels argued explicitly that Marx never intended to derive any event from a dialectical “law”, but only to apply such “laws” in retrospective interpretation. The “dialectics of nature” is another exercise in Aristotelian semantics. (See also Aristotelian Matter; Efficient Cause.)

It sounds like Johnston wants ontologized dialectical laws of nature, and will want to say they are confirmed by neuroscience results. Johnston also highlights incompatibilities between Brandom and McDowell that are somewhat hidden by their mutual politeness. This in itself is clarifying. I now realize McDowell is further away than I thought, in spite of his nice Aristotelian references. (See also Johnston’s Pippin.)

Coherence

Aiming at coherence is a moral necessity. Serious people are serious about avoiding material inconsistency, as Aristotle noted in the Metaphysics and as Brandom has more recently thematized. (Unity of apperception is a moral imperative, not a fact, and certainly not something that could be simply possessed.)

Reality or objectivity is measured by the counterfactual robustness of our generalizations; our ability to recognize incongruities; and our commitment to resolving them. This is one way of formulating what is sometimes referred to as a coherence theory of truth, or “coherentism”. Reality is not something you could point at, but a normative criterion, admitting of degree. (See also Objectivity of Objects; Foundations?)

The thing that complements coherence is not correspondence, but rather non-correspondence. Putative correspondence provides no additional assurance of veracity, but non-correspondence tells us something is wrong with our conceptions, which is valuable information. From an intuition of incongruity arises a task to improve our understanding. (See also Error; Obstacles to Synthesis.)

Justification

Epistemological foundationalism always sounded like a patent logical absurdity to me, an attempt to escape the natural indefinite regress of serious inquiry by sheer cheating — a bald pretense that some content could be self-certifying and therefore exempt from the need for justification. I have a hard time being polite about this; such a claim feels to me like a deep moral wrong.

The kind of justification we should care about is not some guarantee of absolute epistemic certainty, but a simple explanation why a claim is reasonable, accompanied by a willingness to engage in dialogue. All claims without exception should be subject to that. As Sellars said, “in characterizing an episode or state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.” (Empiricism and the Philosophy of Mind, p.76.) Aristotle would agree. (See also Verificationism?; Empiricism; Free Will and Determinism.)

Verificationism?

If something is true, it ought to make a difference, and in some sense that difference ought to be verifiable. Generically, this is the space inhabited by logical positivism, but the logical positivists had rather specific, foundationalist notions of verifiability that I would not wish to follow. (Moritz Schlick, the founder of the Vienna Circle, spoke passionately of verification against “the Given”, which was supposed to be a bedrock of pure, uninterpreted empirical fact that would anchor the whole enterprise of science. He also literally talked about foundational “pointing”. But he had a good critique of epistemic claims for intuition and images; emphasized conceptual development, form, and structure; made interesting use of relations; and reportedly spoke of laws of nature as inference rules.)

Logic by itself will not reform the world. However, the analysis of illogic is generally salutary.

The kind of verification that seems most applicable to the sorts of meta-ethical theses I am mainly interested in would be pragmatic. I imagine general pragmatic verifiability as just extensive openness to rational examination, with a responsibility for due diligence. Obviously, this is a loose criterion, but as Aristotle would remind us, we should not seek more precision than is appropriate to the subject matter.

In principle, material-inferential things can be verified “as far as you like” by a sort of recursive expansion of material consequences and material incompatibilities.
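To make the shape of that recursion concrete, here is a minimal sketch (the relations `consequences` and `incompatible` are my own toy stand-ins, not Brandom’s apparatus): expand a set of claims to a fixed point under material consequence, then scan the result for material incompatibilities:

```haskell
import qualified Data.Set as Set

type Claim = String

-- Toy material-consequence relation (an illustrative stand-in only).
consequences :: Claim -> [Claim]
consequences "this is scarlet" = ["this is red"]
consequences "this is red"     = ["this is colored"]
consequences _                 = []

-- Expand a set of claims to a fixed point under material consequence.
expand :: Set.Set Claim -> Set.Set Claim
expand claims
  | next == claims = claims
  | otherwise      = expand next
  where
    next = Set.union claims
             (Set.fromList (concatMap consequences (Set.toList claims)))

-- Toy material-incompatibility relation (again, illustrative only).
incompatible :: Claim -> Claim -> Bool
incompatible a b = (a, b) `elem` pairs || (b, a) `elem` pairs
  where pairs = [("this is red", "this is green")]

-- Check the expanded set for material incompatibilities.
consistent :: Set.Set Claim -> Bool
consistent claims =
  null [ (a, b) | a <- Set.toList claims, b <- Set.toList claims
                , incompatible a b ]
```

In any real case the relevant relations are open-ended and revisable, so the expansion is never actually complete; that is exactly the force of “as far as you like”.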

Purely formal-inferential things can be rigorously verified by mathematical construction or something resembling it. In constructive logic, proof comes before truth, so verifiability is built in.
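The Curry-Howard correspondence makes the “proof comes before truth” point vivid: a proposition is read as a type, and a proof is a constructed term of that type, so to assert truth just is to exhibit a construction. A couple of standard examples, sketched here in Haskell:

```haskell
-- Curry-Howard reading: a proposition is a type; a proof is a
-- constructed term of that type. Conjunction is a pair type;
-- implication is a function type.

-- Proof of (A and B) -> A: exhibit the first projection.
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- Proof of A -> (B -> A): exhibit the constant function.
constProof :: a -> b -> a
constProof x _ = x

-- There is no general term of type a -> b: the corresponding
-- "proposition" admits no construction, so it is not asserted.
```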

Empiricism

Already in the 1950s, analytic philosophers began to seriously question empiricism. Quine’s “Two Dogmas of Empiricism” (1951), Wittgenstein’s Philosophical Investigations (1953), and Sellars’ “Empiricism and the Philosophy of Mind” (1956) all contributed to this.

Brandom explicates Sellars’ pivotal critique of the empiricist “Myth of the Given” as belief in a kind of awareness that counts as a kind of knowledge but does not involve any concepts. (If knowledge is distinguished by the ability to explain, as Aristotle suggested, then any claim to knowledge without concepts is incoherent out of the starting gate.) Building on Sellars’ work, Brandom’s Making It Explicit (1994) finally offered a full-fledged inferentialist alternative. He has rounded this out with a magisterial new reading of Hegel.

The terms “empiricism” and “rationalism” originally referred to schools of Greek medicine, not philosophy. The original empirical school denied the relevance of theory altogether, arguing that medical practice should be based exclusively on observation and experience.

Locke famously began his Essay Concerning Human Understanding (1689) with an argument that there are no innate ideas. I take him to have successfully established this. Unfortunately, he goes on to argue that what are in effect already contentful “ideas” become immediately present to us in sensible intuition. This founding move of British empiricism seems naive compared to what I take Aristotle to have meant. At any rate, I take it to have been decisively refuted by Kant in the Critique of Pure Reason (1781; 2nd ed. 1787). Experience in Kant is highly mediated. “Intuitions without concepts are blind.” (See also Ricoeur on Locke on Personal Identity; Psyche, Subjectivity.)

In the early 20th century, however, there was a great flourishing of phenomenalism, or the view that all knowledge is strictly reducible to sensation understood as immediate awareness. Kant himself was often read as an inconsistent phenomenalist who should be corrected in the direction of consistent phenomenalism. Logical empiricism was a diverse movement with many interesting developments, but sense data theories were widely accepted. Broadly speaking, sense data were supposed to be mind-dependent things of which we are directly aware in perception, and that have the properties they appear to have in perception. They were a recognizable descendant of Cartesian incorrigible appearances and Lockean sensible intuition. (Brandom points out that sense data theory is only one of many varieties of the Myth of the Given; it seems to me that Husserlian phenomenology and its derivatives form another family of examples.)

Quine, Wittgenstein, and Sellars each pointed out serious issues with this sort of empiricism or phenomenalism. Brandom’s colleague John McDowell in Mind and World (1994) defended a very different sort of empiricism that seems to be a kind of conceptually articulated realism. In fact, there is nothing about the practice of empirical science that demands a thin, phenomenalist theory of knowledge. A thicker, more genuinely Kantian notion of experience as always-already conceptual and thus inseparable from thought actually works better anyway.

Thought and intuition are as hylomorphically inseparable in real instances of Kantian experience as form and matter are in Aristotle. A positive role for Kantian intuition as providing neither knowledge nor understanding, but crucial instances for the recognition of error leading to the improvement of understanding, is preserved in Brandom’s A Spirit of Trust. (See also Radical Empiricism?; Primacy of Perception?; Aristotle, Empiricist?)

One, Many

The unity associated with logical coherence and the flexibility and richness associated with the right measure of pluralism both seem to be worthy goals. As usual, we aim for a kind of structural mean, or the best of both worlds.

The two are not fundamentally opposed. Something like unity of apperception involves no suppression of appropriate distinctions. Similarly, the pluralism we want involves no suppression of practically achievable stability or coherence. So in principle, reconciliation ought to be possible.

They even ought to be combinable, like product and sum types in type theory, which behave like structures nested inside an n-ary logical AND or OR operation. A single consistent view is representable as a product type. Pluralism at a given logical level of interest is representable as a sum type.
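In Haskell terms (a minimal sketch; the type names are my own illustration): a product type holds all of its components at once, like an AND, while a sum type holds exactly one of its alternatives, like an OR:

```haskell
type Claim = String

-- Product type: a conjunction. One value carries this claim AND that claim.
data ConsistentView = ConsistentView
  { thisClaim :: Claim
  , thatClaim :: Claim
  }

-- Sum type: a disjunction. A value is one alternative OR the other.
data Plurality
  = FirstView  ConsistentView
  | SecondView ConsistentView

-- The two nest freely: a plurality of internally unified views
-- is a sum of products.
```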

Following Plato’s metaphor in the Phaedrus, we want to cut at the joints, as it were — to recognize unity where there should be unity, and difference where there should be difference. Of course, those “joints” are not just simply given to us; we have to find them.

Pluralism

One of the underappreciated aspects of Aristotle’s thought is his pluralism. A thing will typically have multiple causes. Important words are “said in many ways”. We should be careful not to make claims that are too strong.

A tendency to read Aristotle as a systematizer — which he is, but only up to a point — has interfered with recognition of the principled and not just incidental nature of Aristotelian pluralism. Aristotle’s pluralism is part of a deep and admirable commitment to what in a modern context would be called antireductionism. This is just part of his extraordinary, methodologically sophisticated intellectual honesty, which is stronger than his desire to systematize.

Historically, the next dominant school after Aristotle was the Stoics, who did aim at extremely strong systematicity, and claimed to have achieved it. Philosophy after that, including what was called Aristotelian philosophy, largely proceeded on the Stoic model. Strong systematic claims became de rigueur. (See also The Epistemic Modesty of Plato and Aristotle; Univocity; Mean; Aristotelian Dialectic; Free Will and Determinism.)