Berkeley on Perception

George Berkeley (1685-1753) is most famous for his provocative claim that material objects don’t really exist. Positively, he claimed that “to be is to be perceived”. Berkeley took as a starting point the view of Descartes and Locke that perceptions are “ideas” in the mind, but took issue with their further assumption that ideas nonetheless also “represent” things that exist independently of the mind. It seems to me that the implicit concept of mind in this kind of usage assumes way too much, but for now I won’t dwell on that.

Berkeley has been the subject of superficial ridicule as a poster child for extreme subjectivism, but that is a caricature. Famously, he is supposed to have maintained, e.g., that a tree falling in the woods and heard by no one makes no sound. As 20th century analytic philosophers have noted, however, even if his positions are ultimately untenable, the quality of his arguments is actually quite high. Apart from the abstract “metaphysical” question of the actual existence of external objects, he also generally wanted to vindicate common sense.

Far from denying the existence of any objective reality, what he really wanted to do was articulate an alternate account of objectivity, based on something other than the independent existence of discrete objects. He had two different kinds of responses on the falling tree. One invokes counterfactual conditions: all that is of practical relevance to us is the set of conditions under which a perception would occur. The other invokes God as a universal witness.

From within the tradition of British empiricism, Berkeley partially anticipates the non-representationalist accounts of objectivity developed by Kant and Hegel, using the resources of a kind of Christian Platonism. Unlike Kant and Hegel, he flatly asserts that what really exists are what he calls spirits, which combine Christian-Platonic attributes with those of minds in a broadly Cartesian-Lockean sense.

A bit like the monads of Leibniz but without the infinite nesting and mutual inclusion Leibniz posited, Berkeley’s spirits are inherently active, and inherently endowed with perception. Spirits have experience that is expressed in purely immanent and immediate — but entirely passive and inert — contentful ideas.

Berkeley wrote an important early work on the theory of vision, An Essay Towards a New Theory of Vision (1709), arguing that what we really see are immediate phenomena of light and color, rather than inferred “things”. This was an important source for phenomenalism in early 20th century philosophy of science. Like the later phenomenalists, he tried to explain all cognitive error as bad inference from good immediate perception. From this point of view, “ideas” cannot be wrong, because they are purely immediate and purely inert; the possibility of error depends on the actions of finite spirits.

The common tradition of Cartesianism and British empiricism insists that there is a layer of immediate apprehension that is immune to error, and wants to ground knowledge and science by more authentically getting back to that immediate layer. I think Kant and Hegel convincingly showed that everything we experience as immediate actually has a prehistory, so that immediacy itself is only an appearance, and all immediacy that we experience is really what Hegel called mediated immediacy. Mediated immediacy has the same general kind of explanation as what is called “habit” in translations of Aristotle. We “just know” how to ride a bicycle once we have learned. We don’t have to think about it; we just spontaneously do it. Similarly, I think “immediate” perception involves a complex unconscious application of categories that is affected by large bodies of previous experience.

Thus I want to say that there is no layer of human experience that is immune to error. On the other hand, through reflection and well-rounded judgment, we genuinely but fallibly participate in objectivity. Objectivity is not something that is simply “out there”; it is a real but always finite and relative achievement.

Empiricism

Already in the 1950s, analytic philosophers began to seriously question empiricism. Quine’s “Two Dogmas of Empiricism” (1951), Wittgenstein’s Philosophical Investigations (1953), and Sellars’ “Empiricism and the Philosophy of Mind” (1956) all contributed to this.

Brandom explicates Sellars’ pivotal critique of the empiricist “Myth of the Given” as belief in a kind of awareness that counts as a kind of knowledge but does not involve any concepts. (If knowledge is distinguished by the ability to explain, as Aristotle suggested, then any claim to knowledge without concepts is incoherent out of the starting gate.) Building on Sellars’ work, Brandom’s Making It Explicit (1994) finally offered a full-fledged inferentialist alternative. He has rounded this out with a magisterial new reading of Hegel.

The terms “empiricism” and “rationalism” originally referred to schools of Greek medicine, not philosophy. The original empirical school denied the relevance of theory altogether, arguing that medical practice should be based exclusively on observation and experience.

Locke famously began his Essay Concerning Human Understanding (1689) with an argument that there are no innate ideas. I take him to have successfully established this. Unfortunately, he goes on to argue that what are in effect already contentful “ideas” become immediately present to us in sensible intuition. This founding move of British empiricism seems naive compared to what I take Aristotle to have meant. At any rate, I take it to have been decisively refuted by Kant in the Critique of Pure Reason (1781; 2nd ed. 1787). Experience in Kant is highly mediated. “Intuitions without concepts are blind.” (See also Ricoeur on Locke on Personal Identity; Psyche, Subjectivity.)

In the early 20th century, however, there was a great flourishing of phenomenalism, or the view that all knowledge is strictly reducible to sensation understood as immediate awareness. Kant himself was often read as an inconsistent phenomenalist who should be corrected in the direction of consistent phenomenalism. Logical empiricism was a diverse movement with many interesting developments, but sense data theories were widely accepted. Broadly speaking, sense data were supposed to be mind-dependent things of which we are directly aware in perception, and that have the properties they appear to have in perception. They were a recognizable descendant of Cartesian incorrigible appearances and Lockean sensible intuition. (Brandom points out that sense data theory is only one of many varieties of the Myth of the Given; it seems to me that Husserlian phenomenology and its derivatives form another family of examples.)

Quine, Wittgenstein, and Sellars each pointed out serious issues with this sort of empiricism or phenomenalism. Brandom’s colleague John McDowell in Mind and World (1994) defended a very different sort of empiricism that seems to be a kind of conceptually articulated realism. In fact, there is nothing about the practice of empirical science that demands a thin, phenomenalist theory of knowledge. A thicker, more genuinely Kantian notion of experience as always-already conceptual and thus inseparable from thought actually works better anyway.

Thought and intuition are as hylomorphically inseparable in real instances of Kantian experience as form and matter are in Aristotle. A positive role for Kantian intuition as providing neither knowledge nor understanding, but crucial instances for the recognition of error leading to the improvement of understanding, is preserved in Brandom’s A Spirit of Trust. (See also Radical Empiricism?; Primacy of Perception?; Aristotle, Empiricist?)