Ockham on Reference

William of Ockham (1285-1347) is the most famous so-called “nominalist” in Latin medieval philosophy. He sought to explain our practical and theoretical uses of universals entirely in terms of our relations to existing singular things.

Without losing sight of Plato’s emphasis on the value of pure thought, Aristotle had adopted a broader perspective, starting from the generality of human life. In this context, and in contrast to Plato, he had emphasized the genuine importance, positive role, and irreducibility of singular beings or things that we encounter in life. “For us” singular beings and things come first, even if they do not come first in the order of the cosmos.

Singular beings and things are more concretely “real” than any generalizations about them. But Aristotle simultaneously upheld the “Platonic” view that knowledge in the strong sense can apply only to generalizations of necessary consequences between things, and not to our experiences of singulars. There can be no necessity in our experience of something purely singular. What I would call the extraordinarily productive tension between Aristotle’s fundamental views of reality (putting singulars first) and of knowledge (putting universals first) created an appearance of paradox that later commentators sought to resolve, often by favoring one side at the expense of the other.

Ockham wanted to explain universals entirely in terms of singulars. In the Cambridge Companion to Ockham, Claude Panaccio summarizes that “Ockham’s project is to explicate all semantical and epistemological features — truth values, for instance — in terms of relations between sign-tokens and singular objects in the world” (p. 58).

Ockham built on the work of many less well-known figures. The Latin world had seen lively inquiries about logic and semantics since the 12th century, when Arabic learning first began to be disseminated across Europe. Within this tradition, there is more than one approach to meaning.

The technical notion of “signification” was a development inspired largely by Augustine’s theory of “signs”. Unlike more recent usages (e.g., in Saussurean linguistics), this kind of signification involves a simple relation of correspondence between a thing taken as a “sign” and some other thing.

Ockham and many of his predecessors held that there is such a thing as natural signification, independent of any language. In this sense, smoke is taken to be a “sign” of a fire. This relation of smoke signifying fire is called “natural”, because in our experience smoke only exists where there is fire, and this has to do with how the world is, rather than with us. This is very different from the conventional imposition of the word “fire” to refer to a fire.

At the same time, this notion of signification also seems to have an irreducible “psychological” component. It has something to do with how the world is, but in a more direct sense, it has to do with something like what the British empiricists later called the association of ideas. Our “natural” association of smoke with fire is not arbitrary. As the empiricists would say, it is grounded in experience. As the Latin scholastics would say, the soul “naturally” tends to associate smoke with fire, and this is as much a truth about the soul — or about the soul existing in the world — as it is a truth about the world.

For Ockham, natural signification applies to concepts, which constitute the core of a sort of “mental language” that is in many ways analogous to spoken or written language, but is more original and does not depend on convention. Concepts on this understanding are subject to all the same kinds of syntactical relationships as individual words in speech.

In this tradition, the meaning of concepts is analyzed by analogy with the role of individual words in speech. This presupposes a view that linguistic meaning overall is founded on the meanings of individual words. The individual concepts of “mental language” that apply to individual real-world things are analogously supposed to have pre-given, natural meanings. Logic and semantics are then a sort of mental hygiene with respect to their proper use.

Ockham offers a rich analysis of connotative terms that modify the concepts corresponding to things.

Again building on the work of many authors in the Latin tradition, he develops the theory of logical “supposition”, which contemporary scholars associate with semantic discussions of reference to real-world objects. This has nothing to do with supposition in the sense of hypothesis; rather, it relates etymologically to a notion of something “standing under” something else.

Notably, Ockham and this whole tradition insist that while individual words independently have signification, only in the context of propositions or assertions expressed by whole sentences do words have the kind of reference associated with supposition. I suspect this is ultimately grounded in Aristotle’s thesis that truth and falsity apply only to whole propositions or assertions; “supposition” is meant to explain not just meaning, but also truth and falsity. This tradition develops a much more explicit theory of reference than Aristotle did, and the kind of reference it develops is tied to contexts of assertion, or true assertion.

The idea that reference to real-world things should be approached at the level of propositions rather than individual words or concepts has much to recommend it. But for Ockham and the tradition he continued, supposition is still fundamentally governed by signification, and signification begins with individual words or concepts. Individual words or concepts are thought to have pre-given meanings, and Ockham attempts to give this a theoretical grounding with his notion of “mental language”.

As Ockham suggests, there is a way in which notions of syntactic relations apply to pure concepts. But I take this to be an abstraction from actual usage in spoken or written language, and I don’t believe in any pre-given meanings.

Ockham’s general strong privileging of individual things over universals has a deep relation to his voluntarist and fideist theology, which owes much to his fellow Franciscan Duns Scotus. In logic, Scotus is considered a defender of “realism” about universals as opposed to nominalism, but in his theology he developed a strong notion of individuation, tied to a very radical notion of divine omnipotence that refused to subordinate it in any way, even to divine goodness (see Aquinas and Scotus on Power; Being and Representation). Essentially, from this point of view, every single thing that happens is a miracle coming directly from God, and all observed regularity in the world pertains only to a sort of divine “habit” that could be contravened at any moment.

Aquinas aimed at a sort of diplomatic compromise between this extreme theistic view that makes everything solely dependent on God, and Aristotle’s unequivocal assertion of the reality of “secondary” causes. Scotus and Ockham applied high levels of logical sophistication in defense of the extreme view.

Ockham also denied the reality of mathematical objects. Together with his extreme view on divine power, this makes very unlikely the view promoted by some scholars that Ockham in particular represented the strand of medieval thought that most helped promote the emergence of modern science. Ockham’s undeniable logical acumen was dedicated to downplaying rather than elaborating the practical importance of order in nature.

It does seem, though, that views like Ockham’s contributed to the shaping of British empiricist philosophy. Here is another chapter in the complex history of notions of reference and representation. Ockham’s very strong notion of reference as directly grounded in singular real-world objects — combined with that of the natural signification or pre-given meaning of concepts in “mental language” — helped lay the ground for what modern empiricism would treat as common sense.

For most of the 20th century, the mainstream of analytic philosophy seemed to be inseparable from a strongly empiricist direction. But Wittgenstein, Quine, Sellars, Brandom, and others have initiated a new questioning of the assumptions of empiricism from within contemporary analytic philosophy. Analytic philosophy is no longer nearly so opposed to the history of philosophy or to continental philosophy as it was once assumed to be. It is in this context that we can begin to look at a sort of Foucaultian or de Libera-esque “archaeology” of empiricism, in which Ockham certainly deserves an important place.

Normative “Force”

Frege’s notion of the “force” of an assertion plays a large role in the discussions of analytic philosophers about speech acts. In his usage, it has nothing to do with coercion or Newtonian physics. Rather, it concerns what I might call the “substance” of what is said, and what Brandom calls conceptual content, which for Brandom would be made explicit first of all through being interpreted as a kind of doing. The question of force seems to be, what are we doing in asserting this rather than that? This also brings in the larger real-world context of that doing.

Although Brandom subordinates reference to Fregean sense or intensional meaning, he also complements and interweaves his account of material-inferential sense with an account of real-world normative-pragmatic force, and suggests that this is the ultimate driver of meaning. How things come to have or lose normative-pragmatic force — i.e., how the appearance of such force is legitimized or de-legitimized — is, he very persuasively argues, best explained by the Hegelian theory of mutual recognition.

At a programmatic level, a deep and wide historical and critical genealogy of the specific forms emerging from mutual recognition is the more particular shape that something like Ricoeur’s “long detour” of mediating interpretation takes for Brandom. Brandom’s monumental work pulling all the pieces of his general account together has left him little time to dwell on details of interpretation for particular cases, but I see it as an open invitation. My own “historiography” and “history of philosophy” notes tentatively sketch some key details in the broad panorama of the history of values. (See also Normativity; Autonomy, Normativity; Space of Reasons; Ethics.)

One important result of Brandom’s comprehensive development is that cases where reality figuratively “pushes back” against us are subsumed under the figure of normative force. (See also Rethinking Responsibility; Expansive Agency; Brandomian Forgiveness.)

Reference, Representation

The simplest notion of reference is a kind of literal or metaphorical pointing at things. This serves as a kind of indispensable shorthand in ordinary life, but the simplicity of metaphorical pointing is illusory. It tends to tacitly presuppose that we already know what it is that is being pointed at.

More complex kinds of reference involve the idea of representation. This is another notion that is indispensable in ordinary life.

Plato and Aristotle used notions of representation informally, but gave them no privileged status or special role with respect to knowledge. They were much more inclined to view knowledge, truth, and wisdom in terms of what is reasonable. Plato tended to view representation negatively as an inferior copy of something. (See Platonic Truth; Aristotelian Dialectic; Aristotelian Semantics.)

It was the Stoics who first gave representation a key role in the theory of knowledge. The Stoics coupled a physical account of the transmission of images — bridging optics and physiology — with very strong claims of realism, certain knowledge both sensory and rational, and completeness of their system of knowledge. In my view, the Stoic theory of representation is the classic version of the “correspondence” theory of truth. The correspondence theory treats truth as a simple “correspondence” to some reality that is supposed to be known beyond question. (Such a view is sometimes misattributed to Plato and Aristotle, but was actually quite alien to their way of thinking.)

In the Latin middle ages, Aquinas developed a notion of “perfect” representation, and Duns Scotus claimed that the most general criterion of being was representability. In the 17th century, Descartes and Locke built foundationalist theories of certain knowledge in which explicitly mental representations played the central role. Descartes also explicitly treated representation in terms of mathematical isomorphism, representing geometry with algebra.

Taking putatively realistic representational reference for granted is a prime example of what Kant called dogmatism. Kant suggested that rather than claiming certainty, we should take responsibility for our claims. From the time of Kant and Hegel, a multitude of philosophers have sharply criticized claims for certain foundations of representational truth.

In the 20th century, the sophisticated relational mathematics of model theory gave representation renewed prestige. Model-theoretic semantics, which explains meaning in terms of representation understood as relational reference, continues to dominate work in semantics today, though other approaches are also used, especially in the theory of programming languages. Model-theoretic semantics is said to be an extensional rather than intensional theory of meaning. (An extensional, enumerative emphasis tends to accompany an emphasis on representation. Plato, Aristotle, Kant, and Hegel on the other hand approached meaning in a mainly intensional way, in terms of concepts and reasons.)
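To make the extensional/intensional contrast concrete, here is a minimal toy sketch of evaluation in the model-theoretic style. The domain, predicates, and names are all invented for illustration; a real treatment would also handle quantifiers, connectives, and variable assignments. The point is only that on this approach, the “meaning” of a predicate is nothing but its extension — a set of objects in the model — and the truth of an atomic sentence reduces to set membership.

```python
# Toy extensional (model-theoretic) evaluation. All names and predicates
# here are hypothetical examples, not drawn from any particular textbook.

domain = {"a", "b", "c"}                       # the objects of the model
extensions = {                                 # meaning of a predicate = its extension
    "Horse": {"a", "b"},
    "WarmBlooded": {"a", "b", "c"},
}
reference = {"Bucephalus": "a", "Felix": "c"}  # meaning of a name = its referent

def true_in_model(predicate, name):
    """An atomic sentence Predicate(name) is true in the model iff the
    referent of the name belongs to the extension of the predicate."""
    return reference[name] in extensions[predicate]
```

On this account, two predicates with the same extension are semantically indistinguishable — which is precisely the feature that intensional approaches in the spirit of Plato, Aristotle, Kant, and Hegel would object to.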

Philosophical criticism of representationalist theories of knowledge also continued in the 20th century. Husserl’s phenomenological method involved suspending assumptions about reference. Wittgenstein criticized the notion of meaning as a picture. All the existentialists, structuralists, and their heirs rejected Cartesian/Lockean representationalism.

Near the end of the 20th century, Robert Brandom showed that it is possible to account very comprehensively for the various dimensions of reference and representation in terms of intensionally grounded, discursive material inference and normative doing, later wrapping this in an interpretation of Hegel’s ethical and genealogical theory of mutual recognition. This is not just yet another critique of representationalism, but an actual constructive account of an alternative, meticulously developed, that can explain how effects of reference and representation are constituted through engagement in normative discursive practices — how reference and representation have the kind of grip on us that they do, while actually being results of complex normative synthesis rather than simple primitives. (See also Normative Force.)

Phenomenological Reduction?

This is a follow-up to my earlier article on Husserlian and existential phenomenology in light of the past year’s reading of Paul Ricoeur. In The Conflict of Interpretations (French ed. 1969), Ricoeur develops his own view of hermeneutics as a “long detour” essential to understanding.

Ricoeur wrote that “It is in spite of itself that [Husserlian] phenomenology discovers, in place of an idealist subject locked within its system of meanings, a living being which from all time has, as the horizon of all its intentions, a world, the world. In this way, we find delimited a field of meanings anterior to the constitution of a mathematized nature, such as we have represented it since Galileo, a field of meanings anterior to objectivity for a knowing subject. Before objectivity, there is the horizon of the world; before the subject of the theory of knowledge, there is operative life” (p. 9). “Of course, Husserl would not have accepted the idea of meaning as irreducibly nonunivocal” (p. 15).

“In truth, we do not know beforehand, but only afterward, although our desire to understand ourselves has alone guided this appropriation. Why is this so? Why is the self that guides the interpretation able to recover itself only as a result of the interpretation? …the celebrated Cartesian cogito, which grasps itself directly in the experience of doubt, is a truth as vain as it is invincible…. Reflection is blind intuition if it is not mediated by what Dilthey called the expressions in which life objectifies itself. Or, to use the language of Jean Nabert, reflection is nothing other than the appropriation of our act of existing by means of a critique applied to the works and the acts which are the signs of this act of existing…. [R]eflection must be doubly indirect: first, because existence is evinced only in the documents of life, but also because consciousness is first of all false consciousness, and it is always necessary to rise by a corrective critique from misunderstanding to understanding” (pp. 17-18). This is a nice expression of what I take to be one of the greatest lessons of Aristotle and Hegel (see First Principles Come Last; Aristotelian Actualization; What We Really Want.)

For Ricoeur, Husserlian phenomenological reduction ceases to be a “fantastic operation” identified with a “direct passage”, “at once and in one step”. Rather, “we will take the long detour of signs” (p. 257).

Husserl’s “reductions” reduced away reference to putatively existing objects in favor of a sole focus on what would be the Fregean sense in meaning. Ricoeur wants to reintroduce reference, and in this way to distinguish a semantics that includes consideration of reference from a semiology addressing pure sense articulated by pure difference. Reference for Ricoeur is not a primitive unexplained explainer, but something that needs to be explained, and a big part of the explanation goes through accounts of sense. Ricoeur also wants to connect reference back to the earlier mentioned “self that guides the interpretation”, which again functions as an end rather than being posited as actual from the outset.

Similarly to his critique of phenomenological reduction “at once and in one step”, he criticizes Heidegger’s “short route” that in one step simply replaces a neo-Kantian or Husserlian “epistemology of interpretation” with an “ontology of understanding”. Ricoeur is a lot more deferential to Heidegger than I would be at this point, but for Ricoeur such an ontology is again only a guiding aim, and not a claimed achievement like it was for Heidegger. I think this makes Ricoeur’s “ontological” interest reconcilable with my own “anti-ontological” turn of recent years, because my objections have to do with claimed achievements. I broadly associate Ricoeur’s modest ontology-as-aim with my own acceptance of a kind of inquiry about beings that avoids strong ontological claims. Even Heidegger emphasized Being as a question.

Ricoeur of course rejects foundationalist epistemology (see also Kant and Foundationalism), but sees both an epistemology of interpretation and an ontology of understanding as aims guiding the long detour. He effectively contrasts the long path of investigation of meaning with the short path of appeals to consciousness (see also Meaning, Consciousness).

I actually like the idea he attributes to Husserl of reducing being to meaning or the sense(s) of being. If meaning is fundamentally nonunivocal as Ricoeur says rather than univocal as Husserl wanted, this would not be idealist in a bad sense.

Brandom’s simpler suggestion that reference is something real but that it should ultimately be explained in terms of sense seems to me a further improvement over Ricoeur’s apparent notion of reference as a kind of supplement to sense that nonetheless also needs to be explained in terms of sense, but without being reduced to it. I see the inherently overflowing, non-self-contained nature of the real as compared to idealized being/meaning as making such a supplement superfluous. (See also Reference, Representation; Meant Realities.)

Totality

The last post suggests another nuance, having to do with how “total” and “totality” are said in many ways. This is particularly sensitive, because these terms have both genuinely innocent senses and other apparently innocent senses that turn out to implicitly induce evil in the form of a metaphorically “totalitarian” attitude.

Aiming for completeness as a goal is often a good thing.

There is a spectrum of relatively benign errors of over-optimism with respect to where we are in achieving such goals, which at one end begins to shade into a less innocent over-reach, and eventually into claims that are obviously arrogant, or even “totalitarian”.

Actual achievements of completeness are always limited in scope. They are also often somewhat fragile.

I’ll mention the following case mainly for its metaphorical value. Mathematical concepts of completeness are always in some sense domain-specific, and precisely defined. In particular, it is possible to design systems of domain-specific classification that are complete with respect to current “knowledge” or some definite body of such “knowledge”, where knowledge is taken not in a strong philosophical sense, but in some practical sense adequate for certain “real world” operations. The key to using this kind of mathematically complete classification in the real world is including a fallback case for anything that does not fit within the current scheme. Then optionally, the scheme itself can be updated. In less formal contexts, similar strategies can be applied.
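As an illustrative sketch of the fallback strategy just described (the rules and category names are entirely hypothetical), a domain-specific classification can be made trivially complete over any input by routing whatever does not fit the current scheme into an explicit fallback bucket, which can then drive later revision of the scheme itself:

```python
# Sketch of a classification made "complete" by a fallback case.
# The categories and matching rules are invented; only the structure matters.

RULES = {
    "invoice": lambda doc: "amount_due" in doc,
    "receipt": lambda doc: "amount_paid" in doc,
}

FALLBACK = "unclassified"  # guarantees every input receives some category

def classify(doc):
    for category, matches in RULES.items():
        if matches(doc):
            return category
    return FALLBACK  # does not fit the current scheme; a candidate for revision

def review_queue(docs):
    """Collect the fallback cases, so the scheme itself can be updated later."""
    return [doc for doc in docs if classify(doc) == FALLBACK]
```

The completeness here is real but modest: it holds only relative to the current scheme plus its fallback, and the contents of the fallback bucket are a standing reminder that the scheme may need to change.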

There are also limited-scope, somewhat fragile practical achievements of completeness that are neither mathematical nor particularly ethical.

When it comes to ethics, completeness or totality is only something for which we should strive in certain contexts. About this we should be modest and careful.

Different yet again is the arguably trivial “totality” of preconceived wholes like individuals and societies. This is in a way opposite to the mathematical case, which worked by precise definition; here, any definition is implicitly suspended in favor of an assumed reference.

Another kind of implicit whole is a judgment resulting from deliberation. At some point, response to the world dictates that we cut short our in principle indefinitely extensible deliberations, and make a practical judgment call.

Rule of Metaphor

In The Rule of Metaphor, which contains essays from the early 1970s, Ricoeur aimed among other things to refute Frege’s apparent claim that poetry has no reference or denotation, but only a sense or connotation. According to Ricoeur, poetic language achieves a kind of “second-level reference” by suspending first-level reference. I tend to think all reference presupposes higher-order constructs, so I am sympathetic. This is also an argument for the general importance of metaphor.

In an appendix, he describes how the rise of structuralism in the 1960s — of which he remained critical — led him from a kind of existential phenomenology to a much closer engagement with language. His earlier emphasis on symbols gave way to a more general approach to hermeneutics, and he began to also engage with analytic philosophy.

At the beginning of the book, he notes how the study of rhetoric became much narrower after Aristotle, losing its connection with dialectic and philosophy. Later, he goes on to argue that meanings of sentences take precedence over meanings of words, and meanings of whole discourses take precedence over meanings of sentences.

Apparently, some structuralist writers on rhetoric argued for the contrary, bottom-up approach starting with meanings of words. My own past interest in so-called structuralism never led in this direction; I was initially more concerned with the priority of relations over “things”, and later with the explanatory power of Foucaultian “discursive regularities”. I do think a compositional, bottom-up approach has great value in formal contexts, and that formal analysis is not irrelevant to ordinary language, but I think ordinary language meaning is best approached mainly in terms of material inference, which has a holistic character.

Pragmatics of Utterance

The second chapter of Ricoeur’s Oneself as Another concerns speech acts in context. Here we begin to consider reflexive acts of self-designation. Like Brandom, Ricoeur emphasizes that saying is a form of doing. “I” is not an identifying reference, but more resembles terms like “here” and “now”. Linguistically, it is a performative associated with affirmation, promising, and similar actions.

Reflexivity applies to the utterance, rather than to the subject of the utterance. It “is not intrinsically bound up with a self in the strong sense of self-consciousness” (p. 47). Meanwhile, it is not utterances but speakers that refer to things, making reference an action rather than a property.

At a later stage, the performative “I” does after all get assimilated to an identifying reference tied to a body, but Ricoeur thinks it will be necessary to step outside the analysis of language and consider our status as incarnated beings to understand this.

Abstract and Concrete

In contrast to later traditional “metaphysics”, Aristotle recommended we start with the concrete, but then aim to dialectically rise to higher understanding, which is still of the concrete. In any inquiry, we should begin with the things closer to us, but as Wittgenstein said in a different context, we should ultimately aim to kick away the ladder upon which we climbed.

What Aristotle would have us eventually kick away is by no means the concrete itself, but only our preliminary understanding of it as a subject of immediate, simple reference. Beginnings are tentative, not certain. We reach more solid, richer understanding through development.

Aristotle’s discussion of “primary” substance in Categories has often been turned into a claim that individuals are ontologically more primary than form. This is to misunderstand what Categories is talking about. Aristotle explicitly says Categories will be about “things said without combination” [emphasis added], i.e., about what is expressed by kinds of apparently atomic sayings that are used in larger sayings.

The initial definition of substance in the strict or “primary” sense — which he will eventually kick away in the Metaphysics — is of a thing (said) “which is neither said of something underlying nor in something underlying”. (Aristotle often deliberately leaves it open whether he is talking about a referencing word or a referenced thing — or says one and implies the other — because in both cases, the primary concern is the inferential meaning of the reference.)

This initial definition is a negative one that suffices to distinguish substance from the other categories. By implication, it refers to something that is said simply of something, in the way that a proper name is. As examples, he gives (namings of) an individual human, or an individual horse.

“Socrates” would be said simply of Socrates, and would thus “be” — or refer to — a primary substance in this sense. The naming of Socrates is an apparently simple reference to what we might call an object. As Brandom has noted, this picks out a distinctive semantic and inferential role that applies only to references to singular things.

Aristotle then says that more universal namings or named things like “human” and “horse” are also “substances” — i.e., can also refer to singular objects — in a secondary sense, as in “that horse”. Then substance in general is further distinguished, by saying it is something A such that when something else B is said of it, both the naming and the “what-it-is” of B are said of the primary or secondary substance A. (See also Form; Things in Themselves; Definition.)

If a horse as such “is” a mammal of a certain description, then that horse must be a mammal of that description. If a mammal as such “is” warm-blooded, then that horse “is” warm-blooded.

These are neither factual nor ontological claims, but consequences of a rule of interpretation telling us what it means to say these kinds of things. Whether or not something is a substance in this sense is surely a key distinction, for it determines the validity or invalidity of a large class of inferences.

Based on the classification of A as an object reference and B as something said of A, we can make valid inferences about A from B.

When something else C is said of the non-substance B, by contrast, we still have a “naming” of B, but the “what-it-is” or substantive meaning of C does not apply to B itself, but only modifies it, because B is not an object reference. Applying the substantive meaning of C to B — i.e., making inferences about B from the meaning of C — would be invalid in this case.

Just because, say, warm-blooded as such “is” a quality, there is no valid inference that mammals “are” qualities, or that that horse “is” a quality. The concern here is with validity of a certain kind of inference and interpretation, not ontology (or epistemology, either).
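The inferential pattern sketched in the last few paragraphs can be caricatured in code. This is my own illustrative model, not anything in Aristotle’s text: “said of” is treated as a relation, and the “what-it-is” of B transfers to A only when the chain of sayings runs through substances (object references). Saying something of a non-substance merely modifies it, so no such transfer is licensed.

```python
# Illustrative sketch only: modeling validity of "said of" inferences.
# The examples follow the horse/mammal/warm-blooded discussion above.

said_of = {
    "Socrates": ["human"],        # "human" is said of the substance Socrates
    "that horse": ["mammal"],
    "human": ["mammal"],
    "mammal": ["warm-blooded"],
    "warm-blooded": ["quality"],  # said of a NON-substance: only modifies it
}

# Primary substances (individuals) and secondary substances (kinds).
substances = {"Socrates", "that horse", "human", "mammal"}

def validly_said_of(term, subject):
    """True if `term` may validly be inferred of `subject`: the what-it-is
    of what is said of a substance is also said of that substance, but the
    chain may only pass through substances along the way."""
    if subject not in substances:
        return False  # saying C of a non-substance B does not transfer to B
    frontier, seen = [subject], set()
    while frontier:
        s = frontier.pop()
        for t in said_of.get(s, []):
            if t == term:
                return True  # the final predicate itself need not be a substance
            if t in substances and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False
```

So “that horse is warm-blooded” comes out valid (the chain runs through the substance “mammal”), while “mammal is a quality” does not, because “quality” is reached only through the non-substance “warm-blooded” — exactly the invalid inference the text warns against.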

In the Metaphysics, the initial referential notion of substance as something underlying is explicitly superseded through a far more elaborate development of “what it was to have been” a thing that emphasizes form, and ultimately actuality and potentiality. The appearance of what might be mistaken for a sort of referential foundationalism is removed. (See also Aristotelian Dialectic.)

I also think he wanted to suggest that practically, a kind of preliminary grasp of some actuality has to come first in understanding. Actuality is always concrete and particular, and said to be more primary. But potentiality too plays an irreducible role, in underwriting the relative persistence of something as the “same” something through change, which motivated the earlier talk about something underlying. The persistence of relatively stable identities of things depends on their counterfactual potentiality, which can only be apprehended in an inferential way. (See also Aristotelian Demonstration.)

It does make sense to say that things like actuality and substance inhere more in the individual than in the species, but that is due to the meanings of actuality and substance, not to an ontological status.