McDowell on the Space of Reasons

John McDowell’s paper “Sellars and the Space of Reasons” (2018) provides a useful discussion of this concept. Unlike Brandom, who aims to complete Sellars’ break with empiricism, McDowell ultimately wants to defend “a non-traditional empiricism, uncontaminated by the Myth of the Given” (p. 1).

McDowell begins by quoting Sellars: “in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says” (ibid.; emphasis added).

For Sellars, to speak of states of knowing is to talk about “epistemic facts”. A bit later, McDowell notes that Sellars’ epistemic facts also include judgments and uses of concepts that might not be considered knowledge. Not only beliefs but also desires end up as a kind of epistemic fact. McDowell uses this to argue that Sellars’ space of reasons amounts to a version of the concept of knowledge as justified true belief. I want to resist this last claim.

McDowell points out that knowledge for Sellars has a normative character. Sellars also regards the foundationalist claim that epistemic facts can be explained entirely in terms of non-epistemic facts (physiology of perception and so on) as of a piece with the naturalistic fallacy in ethics.

McDowell cites Donald Davidson’s contrast between space-of-reasons intelligibility and the kind of regularity-based intelligibility that applies to a discipline like physics, but does not want to assume there is a single model for all non-space-of-reasons intelligibility.

He notes that Sellars contrasts placing something in the space of reasons with empirical description, but wants to weaken that distinction, allowing epistemic facts both to be grounded in experience and to be themselves subject to empirical description. “Epistemic facts are facts too” (p. 5). I prefer going in the other direction and saying that empirical descriptions are judgments too.

The space of reasons is occupied only by speakers. Sellars is quoted saying, “all awareness of sorts, resemblances, facts, etc., in short all awareness of abstract entities — indeed, all awareness even of particulars — is a linguistic affair” (p. 7, emphasis in original). “And when Sellars connects being appropriately positioned in the space of reasons with being able to justify what one says, that is not just a matter of singling out a particularly striking instance of having a justified belief, as if that idea could apply equally well to beings that cannot give linguistic expression to what they know” (ibid.).

“‘Inner’ episodes with conceptual content are to be understood on the model of overt performances in which people, for instance, say that things are thus and so” (p. 8). “What Sellars proposes is that the concept of, for instance, perceptual awareness that things are thus and so should be understood on the model of the concept of, for instance, saying that things are thus and so” (p. 10). All good so far.

To be in the space of reasons, “the subject would need to be able to step back from the fact that it is inclined in a certain direction by the circumstance. It would need to be able to raise the question whether it should be so inclined” (pp. 10-11, emphasis in original). But McDowell says — and I agree — that this is without prejudice as to whether there is still a kind of kinship between taking reasons as reasons, on the one hand, and the purposeful behaviors of animals, on the other.

McDowell acknowledges that the idea that epistemic facts can only be justified by other epistemic facts is easy to apply to inferential knowledge, but rather harder to apply to the “observational knowledge” that he claims should also be included in the space of reasons. For McDowell, observational knowledge is subject to a kind of justification by other facts.

McDowell and Brandom both recognize something called “observational knowledge”, but Brandom thinks that it necessarily involves appeal to claimed non-epistemic facts, whereas McDowell wants to broaden the concept of epistemic facts enough to say that observational knowledge can be justified by appealing only to epistemic facts. I would prefer to say that observational judgments are subject to a kind of tentative justification by other judgments.

McDowell says that acquiring knowledge noninferentially is also an exercise of conceptual capacities. This clearly implies a noninferential conception of the conceptual, and seems to me to presuppose a representationalist conception rather than an inferentialist one. This has huge consequences.

He says that the space of reasons must include noninferential relations of justification, which work by appeal to additional facts rather than by inference. But where did those facts come from? In light of Kant, I would say that we rational animals never have direct access to facts that just are what they are. Rather, if we are being careful, we should recognize that we can only consider claims and judgments of fact, which may be relatively well-founded or not. But appeal to claims of fact for justification is just passing the buck. Claims of any sort always require justification of their own.

As an example, McDowell discusses claims to know that something is green in color. As non-inferential justification in this context, he says one might say that “This is a good light for telling the colours of things by looking” (p. 18). That is fine as a criterion for relatively well-founded belief, but that is all it is.

A bit later, he adds, “I can tell a green thing when I see one, at least in a good light, viewed head-on, and so forth. A serviceable gloss on that remark is to say that if I claim, in suitable circumstances, that something is green, then it is” (p. 19).

This is to explicitly endorse self-certification of one’s authority. It is therefore ultimately to allow the claim “it’s true because I said so”. I think it was a rejection on principle of this kind of self-certification that led Plato to sharply distinguish knowledge from belief.

As Aristotle pointed out in discussing the relation between what he called “demonstration” and “dialectic”, we can apply the same kinds of inference both to things we take as true and to things we are examining hypothetically. We can make only hypothetical inferences (if A, then B) from mere claims or judgments of A; we can legitimately make categorical inferences (A, therefore B) only from full-fledged knowledge of A, which, to be such, must at minimum not beg the question or pass the buck of justification.
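To make the contrast explicit, here is a minimal sketch in standard sequent notation (the notation is my gloss, not Aristotle’s or Kant’s). In the hypothetical mode, A figures only as an assumption that is discharged into a conditional; in the categorical mode, A must be asserted outright before B can be detached:

$$\frac{A \vdash B}{\vdash A \rightarrow B}\ \text{(conditional proof)} \qquad\qquad \frac{\vdash A \qquad \vdash A \rightarrow B}{\vdash B}\ \text{(modus ponens)}$$

Only full-fledged knowledge of A would entitle us to assert the premise ⊢ A on the right; a mere claim or judgment of A confines us to the rule on the left.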

The great majority of our real-world reasoning is ultimately hypothetical rather than categorical, even though we routinely act as if it were categorical. One of Kant’s great contributions was to point out that, contrary to scholastic and early modern tradition, hypothetical judgment is a much better model of judgment in general than categorical judgment is. The general form of judgment is conditional, not absolute.

I think it’s fine to include beliefs, opinions, and judgments in the space of reasons as McDowell wants to do, provided we recognize their ultimately hypothetical and tentative character. But once we recognize the hypothetical and tentative character of beliefs, I think it follows that all relations within the space of reasons can be construed as inferential.

I don’t think contemporary science has much to do with so-called observational knowledge of the “it is green” variety, either. Rather, it has to do partly with applications of mathematics, and partly with well-controlled experiments, in which the detailed conditions of the controls are far more decisive than the observational component. The prejudice that simple categorical judgments like “it is green” have anything to do with science is a holdover from old foundationalist theories of sense data.

I would also contend that all putative non-space-of-reasons intelligibility ultimately depends on space-of-reasons intelligibility. (See also What We Saw.)

Cause of Itself

Spinoza famously begins his Ethics with a definition of “cause of itself” (causa sui). This will become the hallmark of his “Substance”, of which he says there can be only one, and which he identifies with his own heterodox conception of God. Cause of itself would be that whose essence involves existence.

In Hegel or Spinoza (French ed. 1979), Pierre Macherey writes that “First of all we can show, as Guéroult does, that the concept of causa sui does not really have an initial foundational value for Spinoza: it does not represent a kind of first truth, a principle in the Cartesian sense, from which the entire system can be developed, as if from the starting point of a germ of truth” (p. 16).

“Here we can begin to be astonished: does Hegel ignore that this aporia of beginning — which sets his Logic in motion, this impossibility of grounding the infinite process of knowledge in a first truth which in itself as principle or foundation — is also an essential lesson of Spinozism, the principal objection that he himself opposes to the philosophy of Descartes? In such a sense that it is only… ‘so to speak’, the geometric exposition of the Ethics ‘begins’ with definitions, which for that matter do not have an effective sense, except at the moment when they function in demonstrations or they really produce the effects of truth: Spinozist thinking precisely does not have this rigidity of a construction relying on a base and pushing its analytic to an end point, which would find itself thus limited between a beginning and an end” (p. 17).

For Hegel according to Macherey, “The causa sui is based on a substantial principle that ‘lacks the principle of the personality’. It thus constitutes a substance that cannot become subject, which fails in this active reflection of self, which would permit it to undertake its own liberation in its own process…. This is an arrested and dead spirit” (p. 18).

This is supposed to be the individuality- and freedom-denying “Oriental” attitude that Hegel, with a broad brush, unfortunately really does attribute to Judaism, Islam, Hinduism, Buddhism, the Roman Empire, Catholicism, the pre-Socratic philosopher Parmenides, and Spinoza, among others. This unfortunate, over-the-top anti-anti-subjectivity theme of Hegel’s kept me from really appreciating his work for a long time.

On the other hand, the details of his argument about freedom and subjectivity as affirmative values actually make sense, even to the point of winning over an old sympathizer of French anti-Hegelianism like myself.

Reference, Representation

The simplest notion of reference is a kind of literal or metaphorical pointing at things. This serves as a kind of indispensable shorthand in ordinary life, but the simplicity of metaphorical pointing is illusory. It tends to tacitly presuppose that we already know what it is that is being pointed at.

More complex kinds of reference involve the idea of representation. This is another notion that is indispensable in ordinary life.

Plato and Aristotle used notions of representation informally, but gave them no privileged status or special role with respect to knowledge. They were much more inclined to view knowledge, truth, and wisdom in terms of what is reasonable. Plato tended to view representation negatively as an inferior copy of something. (See Platonic Truth; Aristotelian Dialectic; Aristotelian Semantics.)

It was the Stoics who first gave representation a key role in the theory of knowledge. The Stoics coupled a physical account of the transmission of images — bridging optics and physiology — with very strong claims of realism, certain knowledge both sensory and rational, and completeness of their system of knowledge. In my view, the Stoic theory of representation is the classic version of the “correspondence” theory of truth. The correspondence theory treats truth as a simple “correspondence” to some reality that is supposed to be known beyond question. (Such a view is sometimes misattributed to Plato and Aristotle, but was actually quite alien to their way of thinking.)

In the Latin middle ages, Aquinas developed a notion of “perfect” representation, and Duns Scotus claimed that the most general criterion of being was representability. In the 17th century, Descartes and Locke built foundationalist theories of certain knowledge in which explicitly mental representations played the central role. Descartes also explicitly treated representation in terms of mathematical isomorphism, representing geometry with algebra.

Taking putatively realistic representational reference for granted is a prime example of what Kant called dogmatism. Kant suggested that rather than claiming certainty, we should take responsibility for our claims. From the time of Kant and Hegel, a multitude of philosophers have sharply criticized claims for certain foundations of representational truth.

In the 20th century, the sophisticated relational mathematics of model theory gave representation renewed prestige. Model-theoretic semantics, which explains meaning in terms of representation understood as relational reference, continues to dominate work in semantics today, though other approaches are also used, especially in the theory of programming languages. Model-theoretic semantics is said to be an extensional rather than intensional theory of meaning. (An extensional, enumerative emphasis tends to accompany an emphasis on representation. Plato, Aristotle, Kant, and Hegel on the other hand approached meaning in a mainly intensional way, in terms of concepts and reasons.)
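As a toy illustration of the extensional point (a sketch of my own, with invented names, not any particular semanticist’s formalism): in a model-theoretic setting, the meaning of a predicate just is its extension, a set of individuals, so two predicates true of exactly the same things are semantically indistinguishable, however different the concepts behind them.

```python
# Toy extensional (model-theoretic) semantics: a model is a domain of
# individuals plus an interpretation mapping each predicate to its extension.
domain = {"alice", "bob", "carol"}

interpretation = {
    "HasHeart":  {"alice", "bob", "carol"},
    "HasKidney": {"alice", "bob", "carol"},  # same extension as HasHeart
    "Green":     {"carol"},
}

def denotes(predicate, individual):
    """A predication is true iff the individual is in the predicate's extension."""
    return individual in interpretation[predicate]

def forall(predicate):
    """'Everything is P' is true iff the extension covers the whole domain."""
    return domain <= interpretation[predicate]

# Extensionally, HasHeart and HasKidney are the *same* meaning, even though
# the concepts differ intensionally.
print(interpretation["HasHeart"] == interpretation["HasKidney"])  # True
print(denotes("Green", "alice"))   # False
print(forall("HasHeart"))          # True
```

An intensional approach, by contrast, distinguishes the two predicates by the concepts and inferential roles involved, not by the sets of things they happen to apply to.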

Philosophical criticism of representationalist theories of knowledge also continued in the 20th century. Husserl’s phenomenological method involved suspending assumptions about reference. Wittgenstein criticized the notion of meaning as a picture. All the existentialists, structuralists, and their heirs rejected Cartesian/Lockean representationalism.

Near the end of the 20th century, Robert Brandom showed that it is possible to account very comprehensively for the various dimensions of reference and representation in terms of intensionally grounded, discursive material inference and normative doing. He later wrapped this account in an interpretation of Hegel’s ethical and genealogical theory of mutual recognition. This is not just yet another critique of representationalism but a meticulously developed constructive alternative: it explains how effects of reference and representation are constituted through engagement in normative discursive practices, and thus how reference and representation have the kind of grip on us that they do while actually being results of complex normative synthesis rather than simple primitives. (See also Normative Force.)

Kant and Foundationalism

According to Kant, all human experience minimally involves the use of empirical concepts. We don’t have access to anything like the raw sense data posited by many early 20th century logical empiricists, and it would not be of much use if we did. In Kantian terms, this would be a form of intuition without concepts, which he famously characterized as necessarily blind, and unable to function on its own.

Foundationalism is the notion that there is certain knowledge that does not depend on any inference. This implies that it somehow comes to us ready-made. But for Kant, all use of empirical concepts involves a kind of synthesis that could not work without low-level inference, so this is impossible.

The idea that any knowledge could come to us ready-made involves what Kant called dogmatism. According to Kant, this should have no place in philosophy. Actual knowledge necessarily is a product of actual work, though some of that work is normally implicit or preconscious. (See also Kantian Discipline; Interpretation; Inferentialism vs Mentalism.)

It also seems to me that foundationalism is incompatible with the Kantian autonomy of reason.

Foundations?

Foundationalism is the mistaken notion that some certain knowledge comes to us ready-made, and does not depend on anything else. One common sort involves what Wilfrid Sellars called the Myth of the Given.

Certainty comes from proof. A mathematical construction is certain. Nothing in philosophy or ordinary life is like that. There are many things we have no good reason to doubt, but without proof, that still does not make them certain.

In life, high confidence is all we need. Extreme skepticism is refuted by experience. It is not possible to live a life without practical confidence in many things.

Truth, however, is a result, not a starting point. It must be earned. There are no self-certifying truths, and truth cannot be an unexplained explainer.

In philosophy, we have dialectical criticism or analysis that can be applied from any starting point, then iteratively improved, and a certain nonpropositional faith in reason to get us going. All we need is the ability to question, an awareness of what we do not know, and a little faith. We can always move forward. It is the ability to move forward that is key. (See also Interpretation; Brandom on Truth; The Autonomy of Reason.)

Justification

Epistemological foundationalism always sounded like a patent logical absurdity to me, an attempt to escape the natural indefinite regress of serious inquiry by sheer cheating — a bald pretense that some content could be self-certifying and therefore exempt from the need for justification. I have a hard time being polite about this; such a claim feels to me like a deep moral wrong.

The kind of justification we should care about is not some guarantee of absolute epistemic certainty, but a simple explanation of why a claim is reasonable, accompanied by a willingness to engage in dialogue. All claims without exception should be subject to that. As Sellars said, “in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says” (Empiricism and the Philosophy of Mind, p. 76). Aristotle would agree. (See also Verificationism?; Empiricism; Free Will and Determinism.)