Pragmatism and the Enlightenment

Brandom adds some more background in support of Rorty’s claim that American pragmatism represents a kind of second Enlightenment.

“The motor of the first Enlightenment was the rise of the new natural science — in particular, the mathematized physics of Galileo, Descartes, and Newton…. Because their thought was principally oriented by this project, all of the canonical philosophers from Descartes through Kant can sensibly be seen as at base philosophers of science” (Pragmatism and Idealism, pp. 18-19).

“The physical science they were inspired by and interpreters of put forward mathematical theories in the form of impersonal, immutable principles formulating universal, eternal, necessary laws. Enlightenment empiricism sought to ground all our knowledge in self-contained, self-intimating sensory episodes whose brute occurrence is the most basic kind of knowing. Just how the natural light of reason could extract secure and certain knowledge of things as law-governed from those deliverances of fallible perception was a perennial puzzle” (p. 19).

To put it bluntly, the empiricist theory of knowledge lacks the resources to explain the results of modern mathematized science. The emperor has no clothes.

“Even had Hume succeeded in his aspiration to become ‘the Newton of the mind’ by perfecting Locke’s theoretical efforts to understand the psychological processes of understanding in terms of the mechanisms of association and abstraction, the issue of how the subject of that science was to be found among the furniture of the universe described by the real Newton would have survived untouched, as an apparently intractable embarrassment” (ibid.).

“The founding genius of American pragmatism, Charles Sanders Peirce, was, like the original Enlightenment philosophes, above all, a philosopher of science…. He was impressed by the broadly selectional forms of explanation that he presciently saw as common to Darwinian evolutionary biology, at the level of species, and the latest psychological theories of learning, at the level of individual organisms. And he was impressed by the new forms of statistical explanation that were both essential to the new physical science of thermodynamics and becoming increasingly central to the new social sciences of the late nineteenth century” (pp. 19-20).

“Accounts that appeal to natural selection in biology, or to supervised selection in learning, or to statistical likelihood (whether in physics or sociology or economics), show how observed order can arise, contingently, but explicably, out of an irregular background of variation…. Peirce saw this as nothing less than a new form of intelligibility. Understanding whose paradigm is Darwin’s evolutionary theory is a concrete, situated narrative of local, contingent, mutable, practical, reciprocal accommodations of particular creatures and habitats. Peirce speculatively generalized this model to a vision in which even the most fundamental laws of physics are understood as contingently emerging by selectional processes from primordial indeterminateness. No less than the behavior of biological organisms, those laws are to be understood as adaptational habits, each of which is in a statistical sense relatively stable and robust in the environment provided by the rest” (pp. 20-21).

My late father would have appreciated this tribute to the importance of Peirce, in the face of Dewey and Rorty’s neglect. While writing his dissertation on Peirce in the 1950s, he was denied access to various manuscripts by the executors of the Peirce archive at Harvard. He speculated that the executors, who were very concerned to make Peirce “fit in” with the narrow orthodoxy that dominated American academic philosophy at the time, were suppressing evidence of Peirce’s broader interests. Years later, it turned out he was right.

Many writers in the late 19th and early 20th centuries treat a new appreciation for process and the emergence of new forms as characteristic of modernity. Of course, they were preceded in this by Hegel. (And if we read Aristotle on his own terms, rather than in ways beholden to later religious traditions, then behind Hegel stands Aristotle as a philosopher of process and emergence.)

“On the pragmatist understanding, … knower and known are alike explicable by appeal to the same general mechanisms that bring order out of chaos, settled habit from random variation: the statistical selective structure shared by processes of evolution and of learning. That selectional structure ties together all the members of a great continuum of being stretching from the processes by which physical regularities emerge, through those by which the organic evolves locally and temporarily stable forms, through the learning processes by which the animate acquire locally and temporarily adaptive habits, to the intelligence of the untutored common sense of ordinary language users, and ultimately to the methodology of the scientific theorist — which is just the explicit, systematic refinement of the implicit, unsystematic but nonetheless intelligent procedures characteristic of everyday practical life…. This unified vision stands at the center of the classical American pragmatists’ second Enlightenment” (pp. 24-25).

The selectional structure Brandom speaks of here is not necessarily normative. Its main inspiration is Darwinian natural selection, understood in terms of utility and practical success. But it already goes beyond a narrowly mechanical view of causality.
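To make the form of this kind of explanation vivid, here is a minimal toy simulation in Python. It is purely my own illustration, not anything from Peirce or Brandom: the population of numerical “habits”, the fitness rule, and all the parameters are invented for the example. What it exhibits is just the abstract selectional structure: random variation plus differential retention yields a statistically stable regularity that no individual item was designed to have.

```python
import random

random.seed(0)  # reproducible illustration

POP_SIZE = 200
GENERATIONS = 50
ENVIRONMENT = 0.7  # an arbitrary "stable point" provided by the environment

def fitness(habit):
    # Habits closer to the environment's stable point are "relatively
    # stable and robust"; distance from it lowers fitness.
    return 1.0 - abs(habit - ENVIRONMENT)

# Start from pure variation: a population of random "habits" in [0, 1].
population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: only the better-adapted half of the population persists.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Reproduction with variation: offspring are noisy copies of survivors.
    offspring = [h + random.gauss(0, 0.05) for h in survivors]
    population = survivors + offspring

mean = sum(population) / len(population)
spread = max(population) - min(population)
print(f"mean habit {mean:.3f} (environment {ENVIRONMENT}), spread {spread:.3f}")
# Order out of chaos: the population ends up clustered near the stable
# point, a regularity that emerges contingently from random variation.
```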

“This happy concord and consilience between the distinctively pragmatist versions of naturalism in ontology and empiricism in epistemology stands in stark contrast, not only to the prior traditional British empiricism of the Enlightenment, but also to the subsequent twentieth-century logical empiricism of the Vienna Circle. The reductive physicalist version of naturalism and the reductive phenomenalist version of empiricism they inclined to endorse were exceptionally difficult to reconcile with each other. Hume had already shown how difficult it is to provide suitable empiricist credentials for the way in which mathematical laws supporting subjunctive reasoning — the crowning glory of Newtonian physics — outran observable regularities, not only epistemically, but semantically. Adding the powerful methods of modern logic to articulate the phenomenal deliverances of sense did not alter this fundamental mismatch. A threatening and recalcitrant tension accordingly concerned how to proceed when respect for the deliverances of natural science as the measure of what there is and how it is in nature collides with empiricist strictures on when we are entitled to claim to know what there is and how it is” (p. 25).

Without hyperbole, Brandom points out the conflict between mechanist and phenomenalist strategies for explanation.

He exalts the original Enlightenment in the following terms.

“The Enlightenment marks the ending of humanity’s self-imposed tutelage, the achievement of our majority and maturity, for the first time taking adult responsibility for our own character and destiny. It is our emancipation from submission to the alien, nonhuman-because-superhuman authority of Old Nobodaddy in matters of practical conduct. Henceforth we should deem it incompatible with our human dignity to understand ourselves as subject to any laws other than those we have in one way or another laid down for ourselves. No longer should our ideas about what is right and good be understood as having to be dictated to us by a superhuman authority” (p. 27).

“Old Nobodaddy” is a reference to the poetry of William Blake.

(I like to tell a similar story about the birth of ethical reason with Socrates, Plato, and Aristotle. For me, it is Plato and Aristotle (humanity’s greatest teachers, in Hegel’s words) who are the original sources of this “adulthood” of humanity that Brandom so eloquently commends. They certainly did not take what is right and good to be dictated to us by a superhuman authority. Most of the leading lights of the Enlightenment were more timid by comparison. But Brandom also does not acknowledge the ways in which Hegel uses Aristotle to solve Kantian problems, pointed out so well by Robert Pippin. Dewey, Rorty, and Brandom all show little interest in pre-modern philosophy. Even the great have weaknesses.)

“The first Enlightenment, as Rorty construed it, concerned our emancipation from nonhuman authority in practical matters: issues of what we ought to do and how things ought to be. The envisaged second Enlightenment is to apply this basic lesson to our emancipation from nonhuman authority in theoretical, cognitive matters” (p. 28, emphasis in original).

The “nonhuman authority” in this latter case is what Rorty calls Reality with a capital R, which is supposed to be what it is completely independent of human discourse and judgment, and which is nonetheless claimed to be somehow known as such by some humans. This was already an implicit target of Kant’s critique of dogmatism. (And once again, Aristotle discusses being principally in terms of the normative saying of “is”, and everywhere inquires about the natures of real things in ways that cannot be separated from a consideration of discourse, language, and judgment. Our nature is to be animals that are in some degree capable of discourse, which is the origin of second nature.) But Rorty and Brandom are quite right in the sense that the kinds of things that Kant collectively called dogmatism have by no means disappeared from the scene today, even though they have long been called out by name.

Berkeley on Perception

George Berkeley (1685-1753) is most famous for his provocative claim that material objects don’t really exist. Positively, he claimed that “to be is to be perceived”. Berkeley took as a starting point the view of Descartes and Locke that perceptions are “ideas” in the mind, but took issue with the further assumption of Descartes and Locke that ideas nonetheless also “represent” things that exist independently of the mind. It seems to me that the implicit concept of mind in this kind of usage assumes way too much, but for now I won’t dwell on that.

Berkeley has been the subject of superficial ridicule as a poster child for extreme subjectivism, but that is a caricature. Famously, he is supposed to have maintained, e.g., that a tree falling in the woods and heard by no one makes no sound. As 20th-century analytic philosophers have noted, however, even if his positions are ultimately untenable, the quality of his arguments is actually quite high. Apart from the abstract “metaphysical” question of the real existence of external objects, he also generally wanted to vindicate common sense.

Far from denying the existence of any objective reality, what he really wanted to do was articulate an alternate account of objectivity, based on something other than the independent existence of discrete objects. He had two different kinds of response to the falling tree. One invokes counterfactual conditions; all that is of practical relevance to us are the conditions under which a perception would occur. The other invokes God as a universal witness.

From within the tradition of British empiricism, Berkeley partially anticipates the non-representationalist accounts of objectivity developed by Kant and Hegel, using the resources of a kind of Christian Platonism. Unlike Kant and Hegel, he flatly asserts that the only things that really exist are what he calls spirits, which combine Christian-Platonic attributes with those of minds in a broadly Cartesian-Lockean sense.

A bit like the monads of Leibniz but without the infinite nesting and mutual inclusion Leibniz posited, Berkeley’s spirits are inherently active, and inherently endowed with perception. Spirits have experience that is expressed in purely immanent and immediate — but entirely passive and inert — contentful ideas.

Berkeley wrote an important early work on the theory of vision, An Essay Towards a New Theory of Vision (1709), arguing that what we really see are immediate phenomena of light and color, rather than inferred “things”. This was an important source for phenomenalism in early 20th-century philosophy of science. Like the later phenomenalists, he tried to explain all cognitive error as bad inference from good immediate perception. From this point of view, “ideas” cannot be wrong, because they are purely immediate and purely inert; the possibility of error depends on the actions of finite spirits.

The common tradition of Cartesianism and British empiricism insists that there is a layer of immediate apprehension that is immune to error, and wants to ground knowledge and science by more authentically getting back to that immediate layer. I think Kant and Hegel convincingly showed that everything we experience as immediate actually has a prehistory, so that immediacy itself is only an appearance, and all immediacy that we experience is really what Hegel called mediated immediacy. Mediated immediacy has the same general kind of explanation as what is called “habit” in translations of Aristotle. We “just know” how to ride a bicycle once we have already learned. We don’t have to think about it; we just spontaneously do it. Similarly, I think “immediate” perception involves a complex unconscious application of categories that is affected by large bodies of previous experience.

Thus I want to say that there is no layer of human experience that is immune to error. On the other hand, through reflection and well-rounded judgment, we genuinely but fallibly participate in objectivity. Objectivity is not something that is simply “out there”; it is a real but always finite and relative achievement.

Empiricism

Already in the 1950s, analytic philosophers began to seriously question empiricism. Quine’s “Two Dogmas of Empiricism” (1951), Wittgenstein’s Philosophical Investigations (1953), and Sellars’ “Empiricism and the Philosophy of Mind” (1956) all contributed to this.

Brandom explicates Sellars’ pivotal critique of the empiricist “Myth of the Given” as belief in a kind of awareness that counts as a kind of knowledge but does not involve any concepts. (If knowledge is distinguished by the ability to explain, as Aristotle suggested, then any claim to knowledge without concepts is incoherent out of the starting gate.) Building on Sellars’ work, Brandom’s Making It Explicit (1994) finally offered a full-fledged inferentialist alternative. He has rounded this out with a magisterial new reading of Hegel.

The terms “empiricism” and “rationalism” originally referred to schools of Greek medicine, not philosophy. The original empirical school denied the relevance of theory altogether, arguing that medical practice should be based exclusively on observation and experience.

Locke famously began his Essay Concerning Human Understanding (1689) with an argument that there are no innate ideas. I take him to have successfully established this. Unfortunately, he goes on to argue that what are in effect already contentful “ideas” become immediately present to us in sensible intuition. This founding move of British empiricism seems naive compared to what I take Aristotle to have meant. At any rate, I take it to have been decisively refuted by Kant in the Critique of Pure Reason (1781; 2nd ed. 1787). Experience in Kant is highly mediated. “Intuitions without concepts are blind.” (See also Ricoeur on Locke on Personal Identity; Psyche, Subjectivity.)

In the early 20th century, however, there was a great flourishing of phenomenalism, or the view that all knowledge is strictly reducible to sensation understood as immediate awareness. Kant himself was often read as an inconsistent phenomenalist who should be corrected in the direction of consistent phenomenalism. Logical empiricism was a diverse movement with many interesting developments, but sense data theories were widely accepted. Broadly speaking, sense data were supposed to be mind-dependent things of which we are directly aware in perception, and that have the properties they appear to have in perception. They were a recognizable descendant of Cartesian incorrigible appearances and Lockean sensible intuition. (Brandom points out that sense data theory is only one of many varieties of the Myth of the Given; it seems to me that Husserlian phenomenology and its derivatives form another family of examples.)

Quine, Wittgenstein, and Sellars each pointed out serious issues with this sort of empiricism or phenomenalism. Brandom’s colleague John McDowell in Mind and World (1994) defended a very different sort of empiricism that seems to be a kind of conceptually articulated realism. In fact, there is nothing about the practice of empirical science that demands a thin, phenomenalist theory of knowledge. A thicker, more genuinely Kantian notion of experience as always-already conceptual and thus inseparable from thought actually works better anyway.

Thought and intuition are as hylomorphically inseparable in real instances of Kantian experience as form and matter are in Aristotle. A positive role for Kantian intuition as providing neither knowledge nor understanding, but crucial instances for the recognition of error leading to the improvement of understanding, is preserved in Brandom’s A Spirit of Trust. (See also Radical Empiricism?; Primacy of Perception?; Aristotle, Empiricist?)