Beyond Obedience: Brandom’s “Lost Chapter”

Early modern legal and political theory has a significant historical relationship to theological voluntarism, and that relationship is important to understand. It also seems relevant to my recent work on Scotus (or vice versa).

This issue was first brought to my attention by a chapter of Brandom's A Spirit of Trust that was omitted from the final published version, which mainly traces antecedents of Hegelian mutual recognition in early modern social contract theory. I will be devoting a few posts to it.

It seems indisputable that social contract theory has a genealogical relationship to theological voluntarism. But it is no secret that I prefer to ground mutual recognition in Aristotle’s ethical concept of friendship. Among other issues, social contract theories are tainted by an at best only partial emancipation from their voluntarist heritage. It is my contention that Kant and Hegel finally work free of this widespread voluntarist taint that Plato and Aristotle never shared, and this is one of the reasons why they are so valuable.

I was initially quite horrified to see what looked like a kind of historical valorization of voluntarism by one of my heroes. But although it does contain a few valorizing phrases, as I read it now, Brandom’s discussion really has more to do with the existence of counter-trends within trends than with a real endorsement. In any case, this additional complication deserves to be documented. For now I will skip over the first section, which offers a nice recap of his high-level view of Kant and Hegel, on which I have commented several times already.

“The traditional metaphysics of normativity that Hegel sees all subsequent forms of understanding as developing from the rejection of is the subordination-obedience model” (Pre-Hegelian Stages in the History of the Metaphysics of Normativity, p. 6).

Elsewhere, Brandom has referred to this as the authority-obedience model, but the meaning is the same. This bad model puts all authority on one (commanding) side, and all responsibility on the other (obeying) side. Brandom has championed the idea that authority and responsibility should instead be apportioned symmetrically. Aristotle would approve of this.

Obedience has no role in rational ethics. A rationally ethical person will normally obey the law, giving the benefit of the doubt to measures designed to promote safety and social peace. But her motivation for doing so is a general consent to the reasonableness of enacting such measures.

Aristotle’s highest moral ideal is the reciprocity of friendship. He further suggests that we extend the model of friendship to those who dwell in our city. In the same spirit, it could be extended further, and that is just what Hegel eventually did. The only reference to obedience in Aristotle’s Nicomachean Ethics is the incidental mention of a sick person disobeying her doctors, in one of his examples.

However, in the development of the Latin tradition, obedience came to be designated as a virtue in its own right. In the early modern period, all virtue was sometimes reduced to obedience. This was reinforced by the concept of "positive" law, which is supposed to be obeyed merely because it is law, independently of whether the law is just or rational. This makes goodness a derivative property that follows from the meeting of obligations, rather than being based on independent criteria. An obligation of obedience to authority displaces proper human ends. Meeting such obligations becomes an end in itself.

“The distinguishing feature of this model is that the paradigmatic normative status, obligation, is taken to be instituted by the command of a superior. As an explicit metaphysics of normativity, the origin of theories of this sort is in theology, in a picture of God as the ultimate legislator, whose commands institute laws that his creatures are obliged to obey. The voluntarist wing of Catholic natural law theory represented by Duns Scotus and William of Ockham gave rise to Protestant natural law theorists who to one extent or another secularized and naturalized the approach. (I’ll say something further along about the significance for Hegel of the contrary intellectualist wing of the natural law tradition — paradigmatically Aquinas, but also Averroes — and of Suarez’s characteristic attempt at a synthesis of the two.) Grotius, Cumberland, Hobbes, Pufendorf, Thomasius, and Locke all understood the normatively binding force of laws, their capacity to oblige obedience, as rooted in the antecedent existence of a superior-subordinate relationship between the authoritative promulgator of the law and those responsible for obeying it” (pp. 6-7).

This emphasis on obedience to authority is a big part of what I mean by a “voluntarist taint”. For some, like Hobbes, this is not just a taint, but something wholeheartedly embraced.

“Hobbes attributes God’s natural right to command obedience to his ‘irresistible power’ to punish disobedience. His ‘state of nature’ is identified precisely with the lack of natural social relations of ‘sovereignty and subordination,’ among humans, in which no-one owes obedience to anyone else because power to punish, from which the right to command obedience derives, has not yet been concentrated in a sovereign. Locke, too, thinks that ‘the inferior, finite, and dependent is under an obligation to obey the supreme and infinite.’ But he understands God’s authority to oblige and compel human obedience as consisting not only in his power to do so, but as rooted in another matter of objective fact: his status as our creator. A creator, he thinks, has a natural right to lay down laws creating obligations of obedience for his creations” (pp. 7-8).

Hobbes bluntly affirms political voluntarism and a Thrasymachean "might is right" doctrine as justification for absolute monarchy. Locke is more refined, but adding a creationist justification to a voluntarist justification is not particularly helpful.

“Cumberland offers a characteristically mixed account. He analyzes law into two components, the precept (the content enjoined or proscribed) and the sanctions provided for noncompliance. Possession of the power to punish disobedience is a non-normative matter. But God’s paradigmatic possession of normative authority as a superior to legislate for subordinates depends crucially on his benevolence towards those subordinates. It is his wishing them well (and knowing what is best for them) that is the basis of his normative status as superior in the sense of having the right to legislate. On the one hand, one can think of God’s (or a king’s) benevolence as a matter of objective fact. He either has the attitude of wishing the good for his subordinates, or he does not. On the other hand, the attitude of benevolence is itself a normative attitude: being motivated to act for their welfare, aiming at what is good for them” (p. 9).

The moment authority becomes even partially answerable to something like benevolence or a standard of reasonableness or justice, we no longer have pure authoritarianism or voluntarism. It is debatable whether we still have voluntarism at all if it is qualified in any way, since the distinctive mark of voluntarism is to explicitly allow or “justify” arbitrariness, which means anything at all. But whatever we call them, the existence of mixed forms needs to be recognized.

[quote from Richard Cumberland, A Treatise of the Laws of Nature (1672):] “the Obligation of a Law properly so called, which proceeds from the Will of a Superior,” (p. 9n).

Here we have the voluntarist calling card.

[Cumberland:] “the intrinsick Force of all those Arguments, with which the Legislator (God) uses to enforce Universal Benevolence, is, in my opinion, all that is meant by the Obligation of Laws: The Rewards annext to Universal Benevolence by the right Reason of Men, chiefly oblige, because they promise, beside the Favour of Man, the Friendship of the Chief of Rational Beings, GOD, the Supreme Governour of the World. The Punishments they inflict by the same Reason, are both Parts of the present, and most certain presages of the future, Divine Vengeance” (ibid).

Reward and punishment are sub-ethical motivations. But benevolence is a genuine ethical criterion.

[Cumberland:] “That the End of the Legislator, and also of him who fulfils the Law of Nature, is far greater and more excellent, than the avoiding that Punishment, or the obtaining that Reward, whence the Law receives its Sanction, and which is what immediately affects every Subject; though the Obligation of every Subject to yield Obedience be indeed, immediately, discover’d by those Rewards and Punishments. For the End, that is, the Effect directly intended by both, is the Publick Good, the Honour of the Governor, and the Welfare of all his Subjects” (ibid).

The public good and welfare are again genuine ethical criteria.

Brandom finds greater clarity in Samuel Pufendorf (1632-1694). The next section, to which I will devote a separate post, will go into more detail on Pufendorf as a precursor to Kantian ethics. We get just a taste of it here.

“Pufendorf, too, rejects Hobbes’s claim that the superior/subordinate status relationship that is the source of the normative force of obligations consists solely in the differential power of the one who is owed and the one who owes obedience” (p. 9).

Might does not confer right.

[quote from Samuel Pufendorf, Of the Law of Nature and Nations (1672):] “Neither strength nor any other natural pre-eminence is alone sufficient to derive an obligation on me from another’s will, but that it is farther requisite that I should have received some extraordinary good from him, or should have voluntarily agreed to submit myself to his direction” (pp. 9-10).

“God, for instance, gave us an ‘extraordinary good’, performed a ‘special service’ by creating us, so this thought might be seen to be behind Locke’s invocation of the right of the creator. Or, as Cumberland has it, God showed us his benevolence towards us by not only creating us, but creating us in his image in the specific sense of making us like him at base universally benevolent. Here we see two rising themes challenging the grounding of obligation in prior objective relative statuses of superior/subordinate, calling forth command on the part of the superior and obedience on the part of the subordinate as the consequent appropriate practical acts or normative attitudes” (p. 10).

Here Brandom’s analysis is extremely valuable.

“One is the idea that the status of superior, having the right to command, to oblige those commanded to obey, has not only normative consequences, but also normative conditions. This is the idea that being a superior is a normative status that one must deserve (for instance, through the fact of service or an attitude of benevolence). This goes beyond the simple idea that authority is more than mere power. For that distinction can be made entirely on the side of the consequences of application of the concept superior. It is the claim that the circumstances of application of that concept are themselves normative in character. One has to have done well by the subordinates through performing a service, or at least had an attitude of wishing them well, that is, benevolence towards them. The second idea is the idea that the status of being a superior, in the sense of having a right or authority to impose obligations and command obedience (as opposed to the mere power to punish noncompliance) might be dependent on the attitudes of the subordinates: on their having agreed or consented to, or otherwise acknowledged that authority” (ibid).

If there is such a thing as a right to command others and not just a power to do so, that right is necessarily conditional and not absolute. This is related to the Enlightenment notion of government by consent.

“Both these ideas can be seen at play throughout early modern thinking about normativity. And they both stand in substantial tension with the traditional metaphysical picture of normative statuses of obligation as rooted in the prior existence of objective ontological relations of superiority and subordination, as epitomized by the neoplatonic scala naturae. The idea that beyond one’s power to enforce obedience, status as a superior with the normative authority to impose obligations is something one might or might not be entitled to — that the normative issues of one’s right to command or whether one deserves to do so are not settled just by how things non-normatively are — threatens to undermine the idea that all normative statuses can be understood to be instituted by the commands of superiors to subordinates. As Leibniz argues in his “Opinion on the Principles of Pufendorf” of 1706, if it is acknowledged that besides power there must be reasons justifying commands for them to be legitimately imbued with the authority of a superior, understanding what entitles the superior to command as a normative status instituted by the command of a superior would create a circle ‘than which none was ever more manifest’ ” (p. 11).

Once the issue of entitlement to command is raised, it cannot be answered by simply appealing to another command.

“The subordination-obedience metaphysical model of normativity that explains the normative status of obligation on the part of the subordinate cannot be extended to explain the normative status of being entitled to the authority to command. If the concept of the status of superiority not only has normative consequences of application in the form of authority to impose obligations on subordinates, but also normative circumstances of application in the sense that the one who commands must be justified in doing so, must deserve, be worthy, or have a right to that authority, then some other form of normative status must be acknowledged that is not itself to be understood on the model of institution by the command of a superior. Leibniz, like Cumberland, looked to the attitude of benevolence. The thought that the relative statuses of superiority and subordination are themselves already fully normative statuses is part of what is behind the famous opposition between law and love (for example in the natural law tradition and in the Cambridge Platonists, respectively) as what is taken to be the most basic conception in early modern moral theory” (pp. 11-12, emphasis in original).

“The second idea is even more momentous. For it is the idea that the normatively significant status of having the authority to impose obligations (which according to the first idea also counts as a normative status in the sense that exhibiting it has normative conditions of desert, worth, or entitlement) is, or at least can be, attitude-dependent. Pufendorf’s invocation of ‘consent’ (or elsewhere ‘acknowledgement’) by the subordinate as a condition of the superior’s right to command marks a decisive change from traditional views. The idea that the normative statuses instituted by natural law might be dependent on normative attitudes is a distinctively modern one. Indeed, the core of Hegel’s understanding of the transition from traditional to modern selves, norms, and societies, as laid out in the Spirit chapter, should be understood to consist in a shift in the relative priority of normative statuses and normative attitudes…. The basic thought is that it is of the essence of traditional structures of normativity that normative statuses are conceived of as objective, in the sense that neither their content nor their binding force depends on anyone’s normative attitudes. Those normative statuses set the standard for assessments of the propriety of attitudes. The law is what it is, independently of what anyone thinks about it, and one is obliged to acknowledge one’s responsibility to its authority. The paradigmatic form of this traditional structure is what I have called the “subordination-obedience” model of normativity. In its classic form, being a subordinate or a superior is an objective normative status, and normative subjects are supposed to (are subject to a distinctive kind of criticism, including punishment, if they do not) acknowledge them by adopting practical attitudes of obedience and command” (pp. 12-13).

I would say this a little differently. What is important to the argument is that from a Kantian or Hegelian point of view, normative statuses are never simply given. They are always the result of an evaluation, though the quality of the evaluation may be better or worse. In other words, normative statuses are always the result of an interpretation.

“By contrast, it is distinctive of modernity to take normative statuses of authority and responsibility, entitlement and commitment, to be instituted by normative attitudes of acknowledging or attributing those statuses: taking or treating someone in practice as authoritative or responsible, entitled or committed. While Hegel insists that this modern model expresses a genuine and important truth about the metaphysics of normativity, in the end he sees both the traditional and the modern models of normativity as one-sided: the first as hyper-objective and the second as hyper-subjective. Just as traditional accounts failed to acknowledge the authority of attitudes over statuses, the responsibility of statuses to attitudes that the moderns had discovered, even the most sophisticated version of the modern understanding, Kant’s autonomy account, though it does also acknowledge the authority of statuses over attitudes, the responsibility of attitudes to statuses, which the tradition had appreciated, fails adequately to integrate the traditional and modern lines of thought. Hegel’s own social recognitive metaphysics of normativity is to give each its due” (p. 13).

Kant already aims at a kind of synthesis of these two perspectives. Hegel, according to Brandom, judges that Kant fails to achieve it, because Kant treats moral judgment only from the point of view of the individual.

“The vocabulary I am using to express these ideas is mine rather than Hegel’s. He does not use the terms ‘authority’ and ‘responsibility’. These are the terms I am adopting to talk about what he discusses under the headings of ‘independence’ and ‘dependence’, neither of which, he insists, can properly be understood independently of its relation to the other, both of which must be understood as themselves interdependent ‘moments’ in a more complex structure. Though he uses these central logical-metaphysical terms in many ways, I want to claim that the normative uses paraphrasable in terms of authority and responsibility are fundamental — their ‘home language game’. Nor does Hegel use the terms ‘status’ and ‘attitude’. These are the terms I am adopting to talk about what he discusses under the headings of what things are in themselves (Ansichsein) and what they are for themselves or others (Fürsichsein). The discussion in the previous chapter of understanding self-conscious selves as beings such that what they are in themselves is an essential element of what they are for themselves introduces the idea of a kind of normative status, being a self-conscious individual normative subject, that depends on (is responsible to) normative attitudes (the commitments one acknowledges by identifying with them). Though ‘in-itself’ and ‘for-itself’ (also ‘for-an-other’) are central logical-metaphysical terms Hegel uses in many ways. For instance, in discussion [of] the Perception chapter, we saw them used to distinguish, roughly, intrinsic from relational properties. But I claim that their use to distinguish normative statuses from practical normative attitudes in the social recognitive metaphysics of normativity is fundamental — their ‘home language game’. 
This strategy of understanding ‘independence’ and ‘dependence’ in terms of authority and responsibility and ‘in-itself’ and ‘for-itself’ (‘for-an-other’) in terms of normative statuses and normative attitudes lies at the core of the semantic reading of the Phenomenology I am offering here” (p. 14).

This is a good reminder that when Brandom speaks of attitudes, he means to express what for Hegel is part of a broader notion of what something is for itself, or for another. As Brandom points out, relational properties are another example of what something is "for" (in relation to) another. Hegelian self-consciousness is perhaps the most famous "for" relation. Its relational character is the simplest reason why self-consciousness is not properly speaking a (non-relational) thing, and why it should not be identified with any simple term like ego, which is again a non-relational thing. When we speak of attitudes in an empirical way, they may seem like non-relational, simple properties, perhaps of a psychological sort. On the other hand, the Avicennan intentions that are so important for Scotus and others do have an intrinsically relational character. But in all these cases, the meaning of "relation" (Latin relatio) in question is the Aristotelian category of (asymmetrical) pros ti (toward what). It is in view of this well-established and different older usage that Peirce avoids the term "relation" when speaking about the inherently symmetrical mathematical relations that he calls "relatives".

“Of course ancient and medieval philosophers acknowledged that there were some normative statuses that were instituted by practical normative attitudes. Having the authority or responsibilities exercised by one who holds some elected office, or those conferred by explicit legislation in cases where the aim of the legislation could obviously have been achieved in other ways are central among them. But the most basic norms, those defining the persons or normative subjects of positive laws, were not understood to be of this kind. The whole idea of natural law is intended to contrast with that artificial kind of law. The normative statuses articulated by natural laws are to be construed as necessary, as conceptually and metaphysically antecedent to and independent of the contingent attitudes, practices, and institutions of creatures of the kind whose nature they articulate” (p. 15).

The term “person” names a standing under Roman law. The reference to normative subjects here reflects Brandom’s main philosophical use of “subject”, which is normative and non-psychological, as is also true of his use of “intention” and “intentionality”. (This sharply distinguishes the latter from its Avicennan sense, revived by Brentano in Psychology from an Empirical Standpoint (1874). Brentano says that all psychological phenomena and only psychological phenomena are intentional.)

Next, Brandom devotes three paragraphs to medieval voluntarism and intellectualism. This is obviously a very limited engagement, but his concern is with tracing antecedents backward from Hegel. This is the farthest point he reaches, so it makes sense that it would be the least detailed part of the discussion. (In contemporary Hegel scholarship, it is Robert Pippin who has discussed Hegel’s relation to Aristotle in the greatest depth.)

“In this connection it is illuminating to consider the distinction within the natural law tradition between intellectualists and voluntarists. Intellectualists, paradigmatically among the Catholic theologians, Aquinas, held that the authoritativeness of commands issued by superiors to subordinates (expressions of the attitudes of those superiors) answered to (depended upon) reasons rooted in the same objective natures that determined their relative ‘primacy’ as superiors/subordinates. Even God, with the objective status of superior to all, is understood as constrained in the laws he lays down by the demands of reasons concerning the objective good of creatures with the natures with which he has endowed them. God’s unconstrained omnipotence is acknowledged by attributing to him the ‘absolute’ power to have created beings with different natures than the ones he actually created, but his ‘ordained’ power, given the natures he actually created, is understood as constrained by reasons provided by those determinate natures. He could not have made murder or (tellingly) adultery right. Even God’s normative attitudes, as expressed in his commands, in this sense answer to antecedent objective normative statuses” (pp. 15-16).

“By contrast, theological voluntarists, such as William of Ockham reject the constraint on God’s attitudes by reasons rooted in objective natures, as codified in Aquinas’s distinction between his absolute and his ordained power. What makes something right or obligatory (institutes those normative statuses) is just God’s normative attitudes towards them, his approval or commands. Those attitudes are not constrained by reasons stemming from any antecedent objective normative statuses. It is his will alone (which I am talking about in terms of his normative attitudes) that institutes normative statuses of obligation and permission. God could, if he so chose, have made murder and adultery right — though he did not in fact do so. The theological disagreement between intellectualists and voluntarists about the relationship between normative statuses stemming from objective created and creating natures and normative attitudes (obligation-instituting acts of divine will) is intimately entangled with the ontological-semantic dispute between realists and nominalists about universals. Ockham attributes no reality to kinds or natures over and above the reality of the particulars they group. Assimilating particulars by treating them as exhibiting a common universal or nature is itself an act of will, the expression of a practical attitude. The groupings are arbitrary in the original sense — the product of ‘arbitrium brutum’. Understanding universals, including kinds and natures, as the product of contingent activities of naming (hence ‘nominalism’) makes reasons deriving from those natures themselves attitude-dependent” (p. 16).

Brandom here treats will as a normative attitude. What it makes sense to treat this way is any particular, definite will, not the famous or notorious faculty of unconstrained choice. It is the assertion of the latter that defines voluntarism.

I believe Brandom is a truly great philosopher, but Aquinas and Ockham are mere cartoon figures here. Aquinas is indeed more “traditional” in some ways. But Aquinas recognizes the existence of rational ethics, independent of revelation. That to me is huge. Ockham, like Scotus, both makes radically voluntarist claims and endorses ethical criteria of right reason and good intent. I find the combination very confusing.

Later, Brandom mentions that Luther and Calvin were voluntarists. Nominalism also seems to have been strong in early Protestantism. I have no basis for arguing with any of that. But all this together is far from justifying a presumption that voluntarism per se must therefore be considered historically progressive. There are a great many other alternatives to voluntarism besides Thomism. And Thomism itself is far from monolithic.

(Hegel himself valorizes Protestantism, and Luther in particular, and shares the Enlightenment disdain for scholasticism. But in Hegel's day, as in the Enlightenment, medieval philosophy was virtually terra incognita, especially in Protestant countries. Printed books and pamphlets in vernacular languages had become predominant, while most works of medieval philosophy existed not in print or in a vernacular language, but only as rare Latin manuscripts that hardly anyone studied, or even had access to. It is easy to be disdainful of what we know only from a caricature.)

The third paragraph devoted to this topic sums up the outcome.

“Divine command theorists understand the obligations — normative statuses obliging the adoption of normative attitudes of obedience — of us subordinates-because-inferiors as instituted by divine attitudes (expressed in commands, acts of will), even if the framework of relative normative statuses of superior-subordinate is understood as objective in the sense of attitude-independent. Where intellectualists see all attitudes as answering to attitude-independent statuses, voluntarist natural lawyers do not see the status-instituting attitudes of superiors as themselves constrained to acknowledge prior statuses. The voluntarists can be thought of as holding a variant of the traditional subordination-obedience model. But compared to the still more traditional intellectualists, they substantially inflate the significance of attitudes relative to statuses” (pp. 16-17).

He is right that both voluntarists and “intellectualists” in the middle ages largely adhered to the obedience model. But if all attitudes are attributed to the will, it is pretty much a tautology that voluntarism puts more weight on attitudes. The voluntarist refusal to acknowledge any constraint on the will is precisely what leads to arbitrariness.

The argument of Plato’s Euthyphro is not mentioned here. According to the internet, this objection to divine command theory is well known to contemporary scholarship. The so-called Euthyphro dilemma is widely regarded as the most serious issue that divine command theory has to face.

At the paragraph’s end is the sentence that I found really disturbing.

“In this sense, theological voluntarism in the Catholic natural law tradition represents the first stirrings of the attitude-dependence of normative statuses that would burst into full bloom among the early modern Protestant natural lawyers: the thin leading edge of the wedge of modernity. (Luther and Calvin were voluntarists.)” (p. 17).

Given Brandom's sympathy for the classic American pragmatists' "Whiggish" belief in progress, this "thin leading edge of the wedge of modernity" amounts to a claim that theological voluntarism should be seen as historically progressive. Fortunately, this weak link is not essential to the larger point he is making. In particular, it does not affect the insightful reading of Pufendorf's notion of the consent of the governed that is to follow.

“It is still a huge, distinctively modern, step from understanding the normative statuses of subordinates to be dependent on the normative attitudes of their superiors to seeing the normative status of being a superior (‘primacy’) as dependent on the attitudes of the subordinates. It is, of course, the driving idea of social contract theories of specifically political obligation. I quoted Pufendorf above rejecting Hobbes’s claim that objective matter-of-factual power over others could confer the status of superiority in the sense of the right to command attitudes of obedience, when introducing the notion of consent of the subordinates as an attitude that can institute the relative statuses of superior-subordinate. Pufendorf himself recognizes that a thought like this is also present already in Hobbes, quoting him as saying ‘All right over others is either by nature or by compact.’ Pufendorf radicalizes Hobbes by rejecting the idea that power all by itself can confer right over others, insisting that only the combination of consent and power to punish confers such normative primacy” (pp. 17-18).

This notion of consent, of course, is foundational to modern democratic politics.

“Hegel sees a paradigm of the shift from traditional to modern modes of thought in what became the popular contrast between status-based ‘divine right of kings’ political theories and the attitude-based consent theories epitomized by Thomas Jefferson’s resonant words in the American Declaration of Independence (paraphrasing Locke in his “Second Treatise of Civil Government” of 1690): ‘…governments are instituted among men, deriving their just powers from the consent of the governed.’ According to this line of thought, the distinction between possessing matter-of-factual power and exhibiting the normative status of just power is a matter of the attitudes of the subordinates subject to that authority to oblige obedience” (p. 18).

Being and Representation Revisited

Michel Foucault in Les mots et les choses (literally Words and Things, 1966; English tr. The Order of Things), the book that made him a celebrity in France and shifted the brewing French controversy over so-called “structuralism” and humanism into high gear, argues that there was a major paradigm shift from resemblance to more abstract representation at the beginning of the classical age (17th century). More recently, Robert Brandom has focused more specifically on Descartes’s analytic geometry as based on a global isomorphic representation of geometry in terms of algebra, which replaced the medieval paradigm of resemblance.

Certainly the notion of representation plays a fundamental role in both Descartes and Locke. Foucault made a huge impression on me when I first read him around 1979, and — as witnessed here — Brandom is one of my current leading lights. But Foucault and Brandom are both just wrong about the middle ages being simply dominated by a paradigm based on resemblance.

While I have several times referred to L’Être et représentation [Being and Representation] (1999) by Olivier Boulnois, I have yet to engage more substantially here with this important book, which details the rise of the notion of a “science” of metaphysics as ontology — closely associated with an abstract notion of representation, not reducible to resemblance — in the later middle ages. This offers a vital corrective to the rather ahistorical global generalizations commonly applied to these topics.

On Boulnois’s account, which involves a cast of many, the leading character in these developments will be the theologian John Duns Scotus (1266-1308). Boulnois is a leading scholar and translator of Scotus.

I would note that this substantial work on Scotus also seems to thoroughly invalidate the thinly documented valorization of the Scotist univocity of being by Gilles Deleuze. It is hard to think of a writer more viscerally opposed to the representationalist paradigm than Deleuze. Deleuze’s other valorizations of Spinoza and Leibniz and the ethical notion of affirmation in his early Nietzsche book influenced me in the past. But to my knowledge, Deleuze never even mentions the central role of representation in Scotus and its strong connection with univocity. I felt betrayed when I discovered this.

“To represent means at once to ‘make present’, ‘stand in place of’, ‘resemble’. Precisely, in the Middle Ages the vocabulary of repraesentatio is used frequently, in all of these senses” (Boulnois p. 7, my translation throughout).

The point here is not to deny that resemblance plays a major role in medieval thought. It is rather that — as with several other notions commonly associated with modernity, such as a psychological Subject — the later middle ages already saw substantial and systematic use of a more abstract notion of representation.

“Already in Tertullian, the statue of Hercules ‘represents’ Hercules … it indicates his presence in absence — it takes his place. To represent is in a certain way to make present, and Maximus of Turin uses these two terms as synonyms. The liturgical use of the term follows naturally…. A new turn appears in the Cistercian order with Aelred of Rievaulx, as a meditative exercise that makes Christ present in the imagination” (ibid).

In spite of the visual or visualizing character of these uses and their association with notions of resemblance, there is clearly more going on here than just the application of a criterion of resemblance. We see explicit theoretical development centered on the notion of representation.

“But the text that durably imposed the vocabulary of representation seems to be the Latin translation of the De Anima [On the Soul] of Avicenna: the expression appears at least seventeen times in this work…. It is indeed repraesentatio that bears all the difficulty of the platonizing noetics of Avicenna…. In this way the problematic rejoins the central difficulty of another Platonism, that of Augustine, which has to understand how the soul, always spiritual, can have sensible images without losing its spiritual nature…. [T]he problem of representation obliges us to explore the confluence of Augustinianism and Avicennism” (pp. 8-9).

Representation is a fundamental concept of Avicenna’s elaborate psychology, which combines Platonic, Aristotelian, and medical elements. (Avicenna was the second greatest medical authority in the middle ages, after Galen.)

The historical importance of the adoption of Avicenna by later medieval Augustinians was already pointed out by the great Thomist scholar Etienne Gilson in the early 20th century.

“The Middle Ages explain repraesentatio by its equivalents: stare pro (taking the place of) — signs take the place of things that cause them and to which they refer; supponere pro (supposing for) — in a proposition, the terms take the place of the thing to which they refer; similitudo, species, imago (being a resemblance, an image) — the sensible species, the phantasm, the concept representing the object they resemble; supplere vicem (playing the role of) — abstractive knowing takes the place of the object” (p. 9).

The role of signs in thought and language was already discussed by Roger Bacon in the 13th century. The theory of “supposition” was an important and sophisticated Latin innovation that anticipates modern referential semantics. Theories of “species” were another major non-Aristotelian Latin development, possibly derived from Stoic physiological-epistemological theories of phantasia.

“We need to ask ourselves about the logical, optical, and noetic status of representation, corresponding to the functions of the sign, of the sensible image (species or phantasm), of the concept. Taking the place of, being the image of, resembling, conceiving are diverse regimes that need to be studied in their own right. Then we need to rearticulate these terms one to another, and ask ourselves how the representation of being is constituted successively as a semantics, an eidetics, and a noetics. In a theory where concepts are themselves signs, where they also have sensible species for content, these three dimensions form a coherent system” (pp. 9-10).

His reference to the “representation of being” here anticipates what we will see as the Scotist approach to being in terms of representation. Roger Bacon treats concepts as signs.

The Latin middle ages saw huge development in logic, semantics, the theory of signs, optics, and noetics. Scholastic interest in logic is well known, and we have seen at least a taste of medieval noetics in the disputes about Aristotelian intellect. There was major development in geometrical optics in the Arabic-speaking world, and in the medieval and Renaissance Latin world. Representability is the minimal criterion of univocal being in Scotus.

“But it is also necessary to examine its genesis. What form of concentration allowed signification, knowledge, and thought to be re-expressed using only the concept of representation? And what changed in the notion of representation to make it possible to represent all things in a unique way? Researching the origin of a metaphysics of representation is not to write the history of the concept of representation, but the genealogy of a new structure, the thinking of being by representation” (p. 10).

Here I think he is onto something important. These discussions have a multi-dimensional aspect that is clearly not reducible to a notion of simple resemblance.

We will see that a “metaphysics of representation” and a “thinking of being by representation” are especially characteristic of Scotus, but not only Scotus.

Metaphysics “gets a new formulation in the tradition that goes from Roger Bacon to Duns Scotus, often identified with the English Franciscan school of the 13th-14th century. I hope to show that this is too restrictive, because the problematic plunges its roots further in the Augustinian and Avicennan ground and overflows this school, since we find important elements in Thomas Aquinas or Henry of Ghent” (ibid).

“With these distinctions made, it will be possible to measure the mutations of metaphysics that result. It gets its modern status as a science thanks to the concept of being, which allows Henry of Ghent to know all things in one sole act of thought, and in Duns Scotus replaces the analogy of being by founding its univocity: the unity of metaphysics rests on a noetic unity. We need to investigate that which gives priority to the concept, a unity engendered by intellect, with the power to represent all its meanings in its stable identity” (ibid).

By “modern” here, he means early modern. It is important to note that the disrepute of metaphysics and widespread talk about surpassing it came later. The early moderns generally claimed to have a new and better metaphysics. This will turn out to have Scotist and other scholastic roots.

I did not recall Boulnois’s use of “concept” in this context. This will be something to watch. In the context of univocal being and intellectual intuition of individuals, “concept” has a completely different sense than it does in Hegel or Brandom. By “concept” here, he seems to mean a mental representation that could be simply given as an atomic thing. It is important to note that the term “concept” can also be given a non-mentalist, non-representationalist, and relational rather than atomic interpretation in terms of conditions of use and consequences. This is a fundamental theme in Brandom’s work.

“This study aims at the same time to propose a new interpretation of the history of metaphysics. With the conceptual unity of being, it seeks to understand that which constitutes the invisible foyer and hidden sub-basement of modern philosophy” (ibid).

“This study is centered on the affirmation, more Avicennan than Aristotelian, that the ens can be apprehended in a unique concept, which leads to the univocity of the ens. The object of metaphysics thus becomes the first object of thought in the order of conception (first adequate object: being) and not the first object in the order of perfection (first object by eminence: God)” (p. 12).

Ens is the present participle of the verb esse, “to be”. Implicitly it refers to an individual entity, a particular “being”. This will be related to claims of knowledge by intellectual intuition of individuals as individuals. In this context, claims of univocity go hand-in-hand with claims to univocally know by intellectual intuition. “Concept” here seems to be tied to intellectual intuition, whereas in Kant and Hegel the two are sharply opposed.

“Contemporary studies of ‘modern metaphysics’ from Suárez to Kant (Schulmetaphysik) show that the classic articulation of metaphysics into metaphysica generalis and metaphysica specialis rests on a discreet but decisive acceptance of the univocity of the concept of being, particularly in Suárez. Thus modern metaphysics acquired the status of a science and a univocal constitution thanks to the concept of ens in Scotus, which replaced the analogy of being. The univocity of being comprehended in a unique concept remains the principal turn in the history of metaphysics…. This structure imposes itself from the 14th to the 18th century, passing through the work of Suárez and Wolff” (p. 13).

The bad idea here is that a being has a concept, straightforwardly and univocally. This is antithetical to Aristotle, for whom the very beginning of wisdom is that things are said in many ways. I have occasionally invoked “beings” as objects of Aristotelian phronesis, which is all about grasping particulars in an open way that is not locked in to a univocal “concept”.

Next in this series: Signs, Concepts, Things

Realism, Nominalism, Modality

There is an important intersection between the 14th century debate about realism and nominalism and contemporary questions about the status of modality in logic that ought to be of interest to non-specialists. Both of these topics probably sound obscure to most people. At sound-bite level, the first is about the status of universals, and modality is something we implicitly presuppose any time we try to reach for something “more” than allegedly pure phenomena or mere appearance.

Both sides of the medieval debate often wanted to enlist the support of Aristotle, who took a remarkably even-handed approach to these questions we have yet to clarify. The debate was often invested with a great moral significance, and provoked a number of intemperate claims. But at the same time, both sides were able to use the technical vocabulary of the theory of “supposition” — along with shared familiarity with Aristotle — to discuss semantic issues of concrete meaning and word use in detail, in terms both sides could in large measure agree upon. This led to a very high quality and sophistication in many contributions to the debate on both sides.

On some slight acquaintance, many modern readers can easily sympathize with nominalist critiques of the premature and illegitimate use of universals. We may think of vulgar platonism, excessive abstraction, reification, alienation, and so on. On the other side though, there are premature and illegitimate claims that universals can be explained away entirely. But Hegel’s Frau Bauer could not even recognize her individually named cows, if there were no such thing as legitimately reusable reference, naming, and vocabulary. I think most people should be able to see that there are two sides to the coin here.

If we ask how legitimate repeatabilities in ordinary language are constituted and used, something like modality inevitably comes into play. It now occurs to me that Brandom’s emphasis on the priority of hypotheticals over alleged categoricals in real-world material inference — a point to which I am deeply sympathetic — really calls for something like the notion of modality that he develops.

All the Way Down

One of the things I’ve most appreciated about Brandom is his unwillingness to reduce normativity and value judgments to non-normative factors. Repeatedly in Making It Explicit, he speaks of norms “all the way down”. There is even a subheading for “all the way down” in the index entry for “norms” (p. 732). But in conjunction with this, he repeatedly suggests that the relation between pragmatics and semantics, while symmetrical in many respects, also includes an asymmetry, according to which it is more appropriate to say that normative pragmatics grounds representational semantics than vice versa. This is in distinction both to common views that privilege representation over inference and semantics over pragmatics, and to the purely symmetrical view of semantics and pragmatics that he seems to propound in Reasons for Logic, Logic for Reasons.

The symmetrical view can be seen in the favorable light of other symmetries that Hegel argues for in his campaign against “one-sidedness”. But it also implies that there is no sense in which normative pragmatics ought to be seen as coming before representational semantics.

Brandom’s 1976 dissertation, which is partly framed as the elaboration of a new form of pragmatism, makes links between the pragmatism it advocates and a priority of pragmatics over semantics in philosophy of language. But as mentioned above, this year’s Reasons for Logic, Logic for Reasons, while applying inferentialist explanation to semantics in new ways, and while remaining as much as ever committed to an inferentialist order of explanation in general, nonetheless seems to back off from claiming any priority for pragmatics over semantics.

My worry is that this new symmetry and parity between pragmatics and semantics could end up weakening the commitment to “normativity all the way down”. The new thesis of full symmetry builds on his previous analogy between normativity and modality or subjunctive robustness, which I take to be sound. It may be that normativity all the way down does not really require the relative priority of pragmatics over semantics that Brandom claims in the dissertation and Making It Explicit, but I think more on this needs to be said.

Anaphora and Reason Relations

Applying Brandom’s 2025 concept of reason relations to his 1980 expansion of anaphora, it seems that the new reason relations codify and make explicit the same material inferences that are expressible in terms of anaphoric back-reference between sentences in a non-logical base language. Reason relations are constructed formal objects that are designed to codify an explicit formal representation of the material inferences expressed by anaphora. They provide a conservative extension and explanation of the material inferences expressible in the base language.

Anaphora and Prosentences

This will conclude an examination of Brandom’s early programmatic work “Assertion and Conceptual Roles”. At one point he pithily comments that he is developing an account of saying that does not depend on a prior account of naming. Once again, at a broad level I think that is also something that Aristotle does. Saying viewed this way is more oriented toward valuation than toward representation.

I would suggest that naming is a kind of shorthand for a description or classification that is sufficient to pick something out from other things in the applicable context. What a name cannot be counted on to do is to unambiguously specify an essence or an adequate definition. The very first topic raised in Aristotle’s Categories — which was traditionally placed first in the order of instruction — is “things said in many ways”.

The young Brandom says, “Our strategy now is to use the conditionals we have constructed to develop precise representations of the conceptual contents sentences acquire in virtue of playing a material inferential role in some justificatory system. The most sophisticated use of the notion of a conceptual role has been made by Sellars, who in Science and Metaphysics and elsewhere develops a theory of meaning couched in terms of dot-quoted expressions, where such dot-quotation of an expression results in a term referring to the conceptual (inferential-justificatory) role of that expression” (p. 34).

Every concept worth its salt carries its justification with it. We don’t properly understand an expression if we are unable to justify its use. As Aristotle says, the mark of knowing something is the ability to explain why it is the case. I would maintain that there isn’t any knowing “never you mind how”. The latter is rather the mark of what Plato calls mere opinion.

“According to the present view, it is the defining task of a logic or logical construction that it make possible the explicit codification in a conceptual role of what is implicit in the inferential and justificatory employment of an expression…. [C]onceptual roles in Frege’s and Sellars’ sense can be expressed, using the conditionals of our formal logic not only as the means of expression of roles, but also as providing the model according to which we understand such roles.”

On this view, ordinary if-then reasoning turns out to be a kind of key to understanding meaning. But considerable care is required in working out the details. The conditional that codifies material inferences has different detailed behavior than the common one based on a truth table, and that is a good thing, because the truth table one has significant defects.

“The key to this line of thought is the observation that the only sentences whose roles we understand explicitly are the conditionals. We understand them because we constructed them, stipulating their introduction conditions, and deriving the consequences of such introduction (the validity of detachment)” (ibid).

If-then conditionals allow us to explicitly express the reasons and dependencies that implicitly guide judgment and thought.

“We propose to generalize this clear case, and conceive the mastery of the use of an expression which one must exhibit in order to properly be said to understand it (‘grasp’ its conceptual role) as consisting of two parts, knowing when one is entitled to apply the expression, and knowing what the appropriate consequences of such application are (what justifies using the expression, and what inferences one licenses by so doing). Applying the expression is thus assimilated to performing an inference from the circumstances of appropriate application of the expression to the consequences of its application” (ibid).

But “applying the expression” is just what assertion is. By these lights, every asserting is an inferring.

“On this model, suggested by the later Carnap’s use of partial reduction forms, the conceptual role of any expression is the pair of its circumstances of appropriate application and the consequences of such application, that is, of its (individually) sufficient conditions and of its (jointly) necessary conditions. The application of that expression is to be thought of as an inference from the former to the latter. Assertion thus becomes a limiting case of inference” (p. 35).

It is inference that grounds assertion, not the reverse. Only through inference can anyone understand the significance of an assertion.
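The two-part structure just quoted can be put in miniature code form. The following Python sketch is my own toy construction, not Brandom’s formalism; the expression “red” and its particular circumstances and consequences are invented placeholders. A conceptual role pairs sufficient circumstances of application with necessary consequences, and asserting is modeled as inferring from the one to the other.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConceptualRole:
    # (Individually) sufficient circumstances of appropriate application.
    circumstances: frozenset
    # (Jointly) necessary consequences of application.
    consequences: frozenset

# A hypothetical role for the expression "red": grasping it means knowing
# when one is entitled to apply it, and what follows from applying it.
red = ConceptualRole(
    circumstances=frozenset({"looks red in standard conditions"}),
    consequences=frozenset({"is colored", "is not uniformly green"}),
)

def apply_expression(role, facts):
    """Applying an expression is assimilated to an inference: if some
    sufficient circumstance obtains, the application commits us to the
    consequences. Assertion is thus a limiting case of inference."""
    if role.circumstances & facts:
        return facts | role.consequences
    return facts

committed = apply_expression(red, {"looks red in standard conditions"})
```

Here an assertion of “red” in appropriate circumstances carries the commitment to “is colored” along with it; where no sufficient circumstance obtains, no consequences are licensed.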

“More must be said, however, about the ramifications of taking conditionals to be the models for the conceptual roles of basic sentences, inasmuch as our strategy has been to construct a conditional as stating explicitly (as a license) what is implicit in an inference from its antecedent to its consequent, and then to assimilate the content of basic statements to the model of these constructed conditional statements” (ibid).

“In general, one might think that it was incoherent or circular to define the contents of the categorical sentences of an idiom in terms of the contents of hypothetical sentences of that idiom…. Our construction avoids this worry, since we define conditionals in terms of the contents of basic sentences only in the sense in which those contents are implicit in the informal inferential practices which are the use of the basic sentences.” (pp. 35-36).

Kant already questioned the primitiveness of categorical judgments. My take is that they constitute a form of shorthand for what are really reasonings or interpretations.

“Nor is there anything peculiar about taking a sub-class of sentences as the paradigms to which all others are assimilated in a theory of meaning. Frege, for instance, treats all sentences as implicit identity statements (involving names of the True or the False)…. Thus Frege constructs a theory of meaning based on terms explicated with the logical device of identity, where we base our account on sentences explicated by means of the logical device of conditionals” (p. 36).

Brandom has a complex relation to Frege, championing some of his early work and questioning some of his later work.

“We attempt to give a direct account of saying and what is said which does not appeal to naming and what is named” (ibid).

“This is the essential difference between conceptual role semantics inspired by the sort of concerns articulated by the later Wittgenstein, and referential semantics inspired by Frege” (ibid).

“As Dummett points out, the later Frege broke from previous logicians in treating logic not as the study of inference, but of a special kind of truth…. This view seems to have been motivated by his presentation of logic as an axiomatic system, where some truths are stipulated and other truths are derived from them by a minimum of purely formal inferential principles. The philosophical critique in terms of linguistic practice of the distinction between meaning-constitutive stipulated truths and empirically discovered truths, together with Gentzen’s achievement of parity of formal power between proof-theoretic methods of studying consequence relations and the truth-oriented methods epitomized by matrix interpretations … require us to reassess the relations of explanatory priority between the notions of inference and truth” (p. 36).

Brandom makes a good case for seeing the early Frege as a proto-inferentialist concerned with the formalization of material inference. The later Frege propounded an original and rather strange notion of truth and truth-values as foundational. He held that truth is a (unique) object referred to by all true statements, rather than a property.

“One of Frege’s achievements is his formulation of the principle of semantic explanation, according to which the appropriateness of a form of inference is to be accounted for by showing that it never leads from true premises to conclusions which are not true. The usual way in which to exploit this principle is to begin with an account of truth (typically in representational or referential terms) and partition a space of abstractly possible inferences and forms of inference into those which are appropriate and those which are not appropriate according to the semantic principle, as Frege does in the Begriffsschrift. Our approach in effect reverses this order of explanation, beginning analysis with a set of appropriate inferences and explaining semantic interpretants, including truth-values, in terms of them” (pp. 36-37).

The idea of this “principle of explanation” is that sound reasoning from true premises cannot yield a false conclusion. This is not a fact, but a definition that also has characteristics of a Kantian imperative. It is up to us to make it true.
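The principle lends itself to a mechanical reading. In this small Python sketch (my own illustration, restricted to two propositional variables), an inference form passes Frege’s filter just in case no valuation makes the premise true and the conclusion untrue.

```python
from itertools import product

def truth_preserving(premise, conclusion):
    """Frege's semantic explanatory principle as a filter on inference
    forms: appropriate only if no valuation of the variables makes the
    premise true and the conclusion not true."""
    return all(not (premise(p, q) and not conclusion(p, q))
               for p, q in product((True, False), repeat=2))

# "p and q, therefore p" passes the filter; "p or q, therefore p" does not.
good = truth_preserving(lambda p, q: p and q, lambda p, q: p)
bad = truth_preserving(lambda p, q: p or q, lambda p, q: p)
```

Beginning instead with a privileged set of appropriate inferences and letting them fix the semantic interpretants, as Brandom proposes, reverses this order of explanation.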

He considers possible objections to the idea of treating hypothetical judgments as more originary than categorical judgments. This should not be taken to apply at the level of truths. In a similar vein, he also says that what our words mean does not determine what we believe.

“Just as it is implausible to take what is possible as determining what is actual, so it is implausible to take the totality of conditional truths as determining the totality of unconditional truths. Indeed, the possession by a formal system of this semantic property would be a strong reason to take its conditional as not a reasonable rendering of the English hypothetical construction ‘if … then’. Embarrassingly enough, the standard truth-functional (mis-named ‘material’) conditional which Frege employs has just this property, namely that if the truth-values of all of the conditionals of the language are settled, then the truth-values of all the sentences of the language are settled. This is proven in Appendix II” (p. 37).

This surprising proof really turns things around. I suppose this result is related to the concerns about “logical omniscience” in classical logic. It is not reasonable to suppose that if a human knows A, then she necessarily knows all the consequences of A. But this is independent of the question of whether we really know anything unconditionally (I tend to think not). There is also a question whether we are properly said to “know” abstract tautologies like A = A, without necessarily knowing what A is (I am inclined to use some other word than knowledge for these cases).
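The property quoted above can be checked in miniature. For the truth-functional conditional, the atom p always agrees in truth-value with the conditional ((p → p) → p); so in a language whose only connective is the conditional, settling the values of all conditionals settles the values of all sentences. This brute-force Python check is my own illustration, not the proof of Appendix II.

```python
def cond(a, b):
    """The truth-functional ("material") conditional: false only when the
    antecedent is true and the consequent is not."""
    return (not a) or b

for p in (True, False):
    # ((p -> p) -> p) is itself a conditional, yet it always agrees with p,
    # so fixing the values of all conditionals fixes the value of every atom.
    assert cond(cond(p, p), p) == p
```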

“Our genuine conditional, introduced as codifying a set of non-formal inferences, will not have this undesirable property…. We avoid that result by taking the principle that appropriate inference should never lead from true premises to conclusions which are not true as a necessary, but not sufficient condition for appropriateness of inference. The truth-functional conditional results from taking the principle to provide sufficient conditions as well” (ibid).

Again, this falls within the tradition of alternative, “better” definitions of implication.

“Taking Frege’s semantic explanatory principle as a necessary condition on an account of inferential relations settles that the primary semantic notion will be whatever it is that is preserved by appropriate inferences. Frege calls this ‘truth’, but abstractly there are other properties which could also play this role (e.g., justificatory responsibility) and there are good reasons to expect an adequate semantic theory to account as well for the preservation of ‘relevance’ of some kind by appropriate inferences. This primary semantic notion, however, pertains only to the use of a sentence as a free-standing assertive utterance. A full notion of sentential content must specify as well the role a sentence has as a component in other, compound, sentences, paradigmatically in conditionals. It cannot be determined a priori that these two roles coincide. If with Frege we take the first semantic property to be a truth-value either possessed or not by any sentence, then the assumption that the second or componential notion coincides with the first results in classic two-valued truth-functional logic” (p. 38).

It is noteworthy that even the later Frege’s concern in this context was with “whatever it is that is preserved by appropriate inferences”.

He has previously used the term “designatedness”, a name for “whatever it is that inference preserves”, which plays a role in multi-valued logics broadly analogous to that played by truth in two-valued logics.

“[M]any-valued semantics requires the assignment to each sentence of two different sorts of semantic interpretant: a designatedness value indicating possession or lack by a sentence used as a free-standing utterance of the property which appropriate inference must preserve, and a multivalue codifying the contribution the sentence makes to the designatedness value of compound sentences containing it, according to the principle … Two sentences have the same multivalue if and only if they are intersubstitutable salva designatedness value in every sort of compound sentence” (p. 39).
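The distinction can be made concrete in a small many-valued setting. The following sketch uses Łukasiewicz three-valued logic as an example of my own choosing (the text does not name a particular system): two sentences can share a designatedness value as free-standing utterances while having different multivalues, and the difference surfaces inside compounds.

```python
def imp(a, b):
    """Lukasiewicz three-valued conditional over the values {0, 0.5, 1}."""
    return min(1.0, 1.0 - a + b)

def designated(v):
    """Designatedness value: here only the value 1 counts as designated."""
    return v == 1.0

half, bottom = 0.5, 0.0  # two distinct multivalues

# As free-standing utterances, both are undesignated: the same
# designatedness value.
same_standing = designated(half) == designated(bottom)

# But they are not intersubstitutable salva designatedness: embedded as
# antecedents of a conditional with a false consequent, one compound
# comes out designated and the other does not.
compound_bottom = designated(imp(bottom, 0.0))  # 0 -> 0 has value 1
compound_half = designated(imp(half, 0.0))      # 0.5 -> 0 has value 0.5
```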

He relates the current development to technical work on the algebraic interpretation of logics.

“A matrix is characteristic for a logic if it verifies just the theorems of that logic. Lindenbaum showed that every logic has a characteristic matrix, namely the one gotten by taking the set of multivalues to be classes of inferentially equivalent sentences, and the designated multivalues to be the theorems of the logic in question” (ibid).

“We are now in a position to notice that a repertoire, together with the partial ordering induced on the sentences of a repertoire by the conditionals contained in its formally expanded consequence extension constitute such a Lindenbaum matrix” (ibid).
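Lindenbaum’s construction can be mimicked with a toy finite stock of sentences. Everything here is an invented placeholder of my own (the sentence names, the derivability relation, the single theorem): multivalues are classes of inferentially equivalent sentences, and the designated multivalues are the classes of the theorems.

```python
SENTENCES = {"A", "B", "C", "T"}
# Illustrative derivability relation: A and B are mutually derivable.
DERIVES = {("A", "B"), ("B", "A"), ("A", "C")}
THEOREMS = {"T"}

def equivalent(x, y):
    """Inferential equivalence: identity or mutual derivability."""
    return x == y or ((x, y) in DERIVES and (y, x) in DERIVES)

def multivalue(x):
    """A sentence's multivalue: its class of inferentially equivalent sentences."""
    return frozenset(s for s in SENTENCES if equivalent(x, s))

# The designated multivalues are the classes of the theorems.
designated_multivalues = {multivalue(t) for t in THEOREMS}
```

Here A and B, being inter-derivable, share one multivalue; C and T each sit in their own class, and only the class of the theorem T is designated.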

The conditional as Brandom has defined it provably meets Frege’s criterion of inference preservation. Brandom has extended algebraic logic to include patterns of material inference.

“Theorem 1 above shows that modus ponens preserves designatedness, that is membership in the extended repertoire. Or, to put the same point another way, that result shows that our constructed conditional satisfies Frege’s semantic explanatory principle when membership in a repertoire is taken as the prime semantic notion, and social practice determines an antecedent class of appropriate material inferences. The formally extended repertoire thus is, in a precise sense, the characteristic semantic matrix not for a logic or a set of formal inferences, but for a set of material inferences” (p. 40).
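The theorem can be illustrated with a toy repertoire. The sentences and material inferences below are hypothetical placeholders of my own: a repertoire is closed under the socially endorsed material inferences, the constructed conditional “a -> b” is designated (a member of the extended repertoire) exactly when the inference is endorsed, and modus ponens then cannot lead outside the extended repertoire.

```python
# Hypothetical material inferences a community endorses (illustrative only).
MATERIAL = {("raining", "streets are wet"),
            ("streets are wet", "streets are slippery")}

def close(sentences, rules):
    """Close a set of asserted sentences under the endorsed material inferences."""
    out = set(sentences)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in out and b not in out:
                out.add(b)
                changed = True
    return out

def extended_repertoire(asserted, rules):
    """The formally expanded repertoire: base assertions closed under
    material inference, plus the conditionals that make those inferences
    explicit (the designated conditionals)."""
    return close(asserted, rules) | {f"{a} -> {b}" for a, b in rules}

rep = extended_repertoire({"raining"}, MATERIAL)
# Modus ponens preserves membership: from "raining" and the designated
# conditional "raining -> streets are wet", we stay inside the repertoire.
```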

“There are three specific points which should be made concerning this interpretation. First, what is captured by semantic matrices is taken to be a matter of formal inferences first, and logical truths verified by the matrix only second, although this is not how such matrices are usually thought of. Second, we generalize the notion of a characteristic matrix for a set of formal inferences to apply to material inferences as well. Finally, notice that in addition to the structure of material inference codified in each repertoire-matrix we can in fact identify a logic with regard to the whole idiom, insofar as some complicated conditionals will appear in all repertoires. We have not constructed a characteristic matrix for this logic by ordering the sentences of the language according to repertoire-designated conditionals. In some ways it is accordingly more appropriate to say that each repertoire expresses a single matrix valuation characteristic of a set of material inferences, and that the whole idiom comprising all admissible repertoires is characteristic of the formal or logical inferences involving the conditional we used to make explicit the materially appropriate inferences” (ibid).

“In this way, then, we can exploit Frege’s semantic explanatory principle and the truth-oriented matrix semantics it inspired as theoretical auxiliaries useful in the formal analysis of a socially specified set of appropriate inferences” (ibid).

“Seeing logic in the way I have been recommending, however, as a formal tool for the explicit expression of inferential roles, obviates the need for appealing to prior notions of truth or truth-value. We have interpreted Frege’s truth-values as they figure in his semantic principle first as the designatedness values of multivalued logic, and then moving from concern with the codification of formal inference to concern with the codification of material inference, interpreted as expressing membership in a repertoire. Recalling the social practical origins of these repertoires, it would be appropriate to call the two circumstances of membership and non-membership in a particular repertoire assertibility values with respect to that repertoire. We have given a much more precise sense to this term than semantic theorists who advocate the primacy of assertibility over truth typically manage to do, however” (pp. 40-41).

“We represent the matrix valuation on the language induced by a formally expanded repertoire by associating with each sentence its repertoire-relative conceptual role, consisting of inferential circumstances and consequences of assertion. It is clear that this is an adequate representation in that this set of roles, together with the repertoire generating them, determines the partial order of the language by the conditional which is the Lindenbaum matrix. These conceptual roles are thus taken as multivalues, with repertoire membership identified as designatedness with respect to the semantic principle. The multivalues must, of course, determine compounding behavior according to our motivation…. It is … a criterion of adequacy of this representation that sentences with the same conceptual role, that is, multivalue, should be intersubstitutable in conditionals preserving both designatedness values and multivalues” (p. 41).

So far he has focused on a notion of the conditional that is a primitive “arrow” rather than something defined by a truth table. He briefly considers how to define other connectives in terms of the designatedness that plays a truth-like role in multi-valued logics, but again affirms the special importance of conditionals.

“ ‘Truth-functional’ connectives can now be introduced using designatedness values as the extensions of sentences…. We would like to be able to semantically interpret all forms of sentence compounding by means of functions taking conceptual roles, or sets of them, into conceptual roles, as we can do for conditionals…. Our use of the conditional as both the model of and a tool for the expression of conceptual roles embodies the belief that the contribution a sentence makes to the roles of conditionals it is a component in suffices to determine its role in other compounds” (p. 42).

He quotes Frege saying that the kernel of the problem of judgment splits into that of truth and that of what he calls “a thought”, which refers to some declarative content. Given Frege’s unitary view of “truth”, this thought-content, identified with sayings and conceptual roles, has to be responsible for all differentiation.

“By a thought, Frege makes clear, is intended what is referred to in English by that-p clauses. We have identified these judged contents as conceptual roles. In what follows, we try to exhibit a representative variety of uses of such that-p clauses in terms of conceptual roles” (p. 43).

Finally we come to prosentences.

“Our starting point is the prosentential theory of truth of Grover, Camp, and Belnap. That account can best be sketched as the product of three different lines of thought: i) the redundancy theory of Ramsey and others, which says that the conceptual content of ‘it is true that-p’ is always just the same as that of p…. ii) an account of truth in terms of infinite conjunctions and disjunctions…. [T]he best succinct statement of this view is in Putnam’s Meaning and the Moral Sciences…. ‘If we had a meta-language with infinite conjunctions and infinite disjunctions (countable infinite) we wouldn’t need “true”!…. [F]or example, we could say … “He said ‘P1’ & P1” (ibid).

“iii) Finally, and this is what is distinctive to the view under discussion, it is observed that pronouns serve two sorts of purposes. In their lazy use, … they may simply be replaced by their antecedents (salva conceptual role). In their quantificational use, as in ‘Each positive number is such that if it is even, adding it to 1 yields an odd number’, the semantic role of the pronoun is determined by a set of admissible substituends (in turn determined by the pronomial antecedent)” (p. 44).

“Thus ‘Everything he said is true’ is construed as a quantificational prosentence, which picks up from its anaphoric antecedent a set of admissible substituends (things that he said), and is semantically equivalent to their conjunction” (ibid).

“The authors of the prosentential theory are concerned that ‘is true’ be taken to be a fragment of a prosentence, not a predicate which characterizes sentence-nominalization…. The authors are worried that if the first part of a sentence of the form ‘X is true’ is taken to be a referring sentential nominalization that, first, ‘is true’ will inevitably be taken to be a predicate, and second, the anaphoric prosentential reference of the whole sentence will be passed over in favor of the view that the nominalization does all the referring that gets done, and would vitiate the view” (p. 45).

“In fact this is a situation in which we can have our cake and eat it too. We consider ‘X is true’ as composed of a sentence nominalization X which refers to sentences, and a prosentence-forming operator ‘is true’.” (ibid).
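
The quantificational reading of “Everything he said is true” can be made vivid with a little Python. This is only my own toy illustration of the substituend idea sketched above, not anything from the text; the finite conjunction stands in for Putnam’s infinite one.

```python
# Toy expansion of the quantificational prosentence "Everything he said is true".
# Following the idea summarized above, the prosentence is semantically
# equivalent to the conjunction of its admissible substituends, i.e. the
# sentences its anaphoric antecedent picks up. Names here are illustrative.

def expand_prosentence(substituends):
    """Replace 'Everything he said is true' by the conjunction of the
    admissible substituends determined by the anaphoric antecedent."""
    return " & ".join(substituends)

said = ["snow is white", "grass is green"]   # the things he said
print(expand_prosentence(said))              # snow is white & grass is green
```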

“Our construction of conceptual roles in terms of conditionals of course presents natural criteria of adequacy for translation functions between repertoires contained in a single idiom, or which are members of different idioms” (p. 51).

“We show now how those semantic facts about the idiom can be expressed explicitly as the content of claims made within that idiom. We use the logical vocabulary of conditionals and repertoire attributions we have already constructed to define a further bit of expressive machinery, that-clauses, which will thus have a logical function in making explicit semantic features implicit in the idiom” (p. 53).

“[T]he account of conceptual roles is novel in being entirely non-representational. In the formal idiom we develop, it is not a necessary feature of a saying that-p that the sentence involved represent some state of affairs. Of course sentences used to say things may also be representations, and this fact might be crucial for the understanding of the use of language in empirical inquiry. But our model is broader, and we may hope that it can find application in the explication of other forms of discourse (e.g., literary and political discourse) where the representational paradigm is less apt than it perhaps is for scientific idioms” (p. 55).

“Perhaps the most important feature of our account is the crucial place given to logic, as providing the formal means by which an idiom can come to express explicitly crucial semantic facts which are implicit in the system of justificatory practices which are the use of a language. We argued that the function thus assigned to logic as a formal auxiliary in a theory of meaning is that which Frege originally envisioned and pursued. Our own development looked at the codification of inferential practices in conditionals in some detail, and somewhat less closely at the codification of repertoires in prosentences containing ‘is true’ and in propositional attitudes, and at the codification of roles in ‘that’-clauses. The basic claim here is that logic must not be restricted to the analysis of the meanings sentences acquire in virtue of the formal inferences they are subject to (as is the usual procedure). Logic should not be viewed as an autonomous discipline in this way, but as a tool for the analysis of material inference, and for making explicit the roles played by sentences in systems of material inferential practice. Using logical devices so interpreted, we were able to specify not only what role a performance needs to play in a system of social practices in order to be a saying (asserting, professing, claiming, etc.) that-p, but also to show what it is about that system of practices in virtue of which the content of such a saying can be that someone else has said (asserted, etc.) something. Indeed the only sort of ‘aboutness’ we ever employ is the reference of one bit of discourse to another (anaphoric reference if performance or sentence tokens are at issue, and mediated by conceptual roles otherwise)” (pp. 55-56).

When Aristotle discusses saying something about something, implicitly that second something is also something said. This phrase refers to that phrase. The kind of reference that is most relevant in all this is what I think of as constitutive cross-reference, or as Brandom calls it, back-reference or anaphora. Less adequately, it has been called “self” reference, but if we examine this closely, it does not involve a unitary self or a pure undifferentiated reflexivity, but rather parts referring to other parts.

Conceptual content emerges out of a sea of cross-reference. A constitutive molecular cross-reference of Fregean declarative “thoughts” or “content” or Aristotelian “sayings” precedes sedimentation into molar subjects and objects.

Epilogue to this series: Anaphora and Reason Relations

Conditionals and Conceptual Roles

Saying something is more than the material fact of emitting sounds in conventionalized patterns. We ought to be able to say more about that “more”.

This is part two of a look at an early programmatic document in which Brandom first develops his highly original approach to meaning and logic. Brandom’s “logical expressivism” treats logic as a tool for explaining meaning, rather than a discipline with its own distinctive subject matter. That logic is such a tool and not a science is an Aristotelian view (or, I would say, insight) that has been mostly ignored by subsequent traditions.

The dominant modern tradition treats meaning as representation by pointing or reference. But pointing is rather trivial and uninformative. By contrast, I normally think of meaning in terms of something to be interpreted. But this hermeneutic approach tends to focus attention on concrete details. Brandom ambitiously wants to say meaningful things about meaning in general, and I think he succeeds.

As in the first installment, I will continue to focus on the discursive parts of the text, while skirting around the formal development. (There is more formal logical development in this text than anywhere else in Brandom’s corpus, at least until this year’s publication of the collaborative work Reasons for Logic, Logic for Reasons, which returns to the current text’s aim of implementing his program of logical expressivism.)

Brandom begins with the early work of Frege, who pioneered modern mathematical logic.

“To make out the claim that the systems of social practices we have described implicitly define assertion, we need to supplement that account of assertings with a story about the contents which are thereby asserted. Our starting point is Frege’s discussion in the Begriffsschrift, where the distinction between force and content was first established…. First, Frege identifies conceptual content with inferential role or potential. It is his project to find a notation which will allow us to express these precisely. Second, sentences have conceptual contents in virtue of facts about the appropriateness of material inferences involving them. The task of the logical apparatus of the conceptual notation which Frege goes on to develop is to make it possible to specify explicitly the conceptual contents which are implicit in a set of possible inferences which are presupposed when Frege’s logician comes on the scene. The task of logic is thus set as an expressive one, to codify just those aspects of sentences which affect their inferential potential in some pre-existing system” (“Assertion and Conceptual Roles”, p. 21).

Meaningful “content” is to be identified with the inferential roles of things said, each of which is in turn defined by the pair consisting of its conditions of application and the consequences of that application. The novelty of what is expressed here is tactfully understated by the reference to “facts” about the appropriateness of material inferences. This tends to downplay the “fact” that the inquiry into conditions of application is really a normative inquiry into judgments about appropriateness, more than an inquiry into facts.

What is being said here also needs to be sharply distinguished from the nihilistic claim that there are no facts. There are facts, and they need to be respected. The point is that genuinely respecting facts is opposed to taking them for granted.

“We will derive conceptual contents from the systems of practices of inference, justification, and assertion described above. Following the Fregean philosophy of logic, we do so by introducing formal logical concepts as codifications of material inferential practices. First we show how conditionals can be introduced into a set of practices of using basic sentences, so as to state explicitly the inference license which the assertion of one sentence which becomes the antecedent of the conditional can issue for the assertion of another (the consequent of the conditional). With conditionals constructed so as to capture formally the material inferential potential of basic sentences, we then show how conceptual contents expressed in terms of such conditionals can be associated with basic sentences on the model of the introduction and elimination rules for compound sentence forms like the conditional” (ibid).

Introduction and elimination rules are characteristic of the natural deduction and sequent calculi due to Gentzen. This style of formalization — common in proof theory, type theory, and the theory of programming languages — is distinctive in that it is formulated entirely in terms of specified inference rules, without any axioms or assumed truths.
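
To make the flavor of this rule-based style concrete, here is a drastically simplified Python sketch of the two rules for the conditional. It is my own illustration of rule shapes, not a real proof checker and not Brandom’s system.

```python
# Gentzen-style presentation in miniature: the conditional is characterized
# only by its introduction and elimination rules, with no axioms or assumed
# truths. A toy illustration of rule shapes, not a proof checker.

def impl(p, q):
    """Syntax for the conditional p -> q."""
    return ("->", p, q)

def impl_elim(conditional, minor):
    """Elimination (modus ponens): from p -> q and p, conclude q."""
    op, p, q = conditional
    assert op == "->" and p == minor, "premises do not fit the rule"
    return q

def impl_intro(hypothesis, conclusion):
    """Introduction: discharge the hypothesis p of a derivation of q,
    yielding the conditional p -> q."""
    return impl(hypothesis, conclusion)

print(impl_elim(impl("p", "q"), "p"))   # q
```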

Until Sellars and Brandom, modern logic was considered to be entirely about formal inference. Brandom argues that the early Frege correctly treated it instead as about the formalization of material inference. Brandom also endorses Quine’s logical holism against atomistic bottom-up views like that defended by Russell.

“We cannot in general talk about ‘the consequences’ of a claim (for instance, that the moon is made of green cheese) without somehow specifying a context of other claims against the background of which such consequences can be drawn. (Can we use what we know about the mammalian origins of cheese and take as a consequence that at one time the moon was made of milk, for instance?) Quine, in “Two Dogmas [of Empiricism]”, may be seen as arguing against the possibility of an atomistic theory of meaning (e.g. one which assigns to every sentence its ‘conceptual content’) that such meanings must at least determine the inferential roles of sentences, and that the roles of each sentence in a ‘web of belief’ depends on what other sentences inhabit that same web. In particular, whether anything counts as evidence for or against a certain claim … depends on what other sentences are held concurrently. Given any sentence, … and given any second sentence there will be some webs in which the second counts as evidence for the first, and some where it counts as evidence against the first, where what ‘web of belief’ is considered determines what other sentences are available as auxiliary hypotheses for inferences. Accepting the general Fregean line that meanings as theoretical constructs are postulated to express inferential potentials, Quine reminds us of basic facts about our inferential practices … to impugn the comprehensibility of assignments of conceptual role to individual sentences, unrelativized to some doxastic context. Conceptual roles can only be specified relative to a set of other sentences which are all and only those which can be used as auxiliary hypotheses, that is, as Quine puts it, at the level of whole theories-cum-languages, not at the level of individual sentences” (pp. 22-23).

Much of the ensuing discussion will revolve around conditionals, and what logicians call the implicational fragment of a logic, in which only implication is considered. This is a kind of minimal form for what constitutes a logic — if you specify a notion of implication, you have a logic. But the common modern truth-table definition of implication has been criticized from many quarters. Much work has been done on the precise definition of alternate or “better” notions of implication. This is one of the things Brandom will be doing here.

One of the most important questions about implication is whether it is “primitive” — i.e., something in terms of which other things are defined, which is itself considered to be defined only operationally (indirectly, by its use) — or whether it is to be defined in terms of something else, such as a truth table. For instance, category theory (by which all of mathematics can be interpreted) can be elaborated entirely in terms of primitive “arrows” or morphisms, which generalize both the notion of a mathematical function and that of logical implication. Arrow logics, which generalize modal logic, also start from a primitive notion of arrows. Later in this text, Brandom will develop his own notion of arrows as a primitive, alternate form of implication.
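
For a concrete taste of the arrow-first point of view, here is a minimal Python illustration of my own: everything is done with composition and identities, without asking what the arrows “are”.

```python
# Arrows in miniature: all the structure lives in composition and identities,
# with no appeal to what the arrows "are". Ordinary functions serve as the
# example here; chained implications A -> B, B -> C compose the same way.

def compose(g, f):
    """Compose arrows: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

identity = lambda x: x      # the unit for composition

double = lambda x: 2 * x    # an arrow from numbers to numbers
inc = lambda x: x + 1       # another arrow

print(compose(inc, double)(3))        # 7: first double, then increment
print(compose(identity, double)(3))   # 6: composing with identity changes nothing
```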

In the context of the debate about holism and atomism, it is interesting to consider the scholastic practice of debating for and against individual propositions. At top level, it seems atomistic, in that the propositions are taken up one at a time. But at a detailed level, the arguments turn out to be mostly about the consequences of accepting or rejecting the proposition under discussion. Brandom will argue that propositions are to be understood by the combination of their consequences and their conditions of appropriate use.

He turns to the question of what assertion is. The novelty here is that assertion will be explained in terms of primitive conditionals, rather than treated as primitive.

“The first step in our account of the semantic contents or conceptual roles sentences acquire in virtue of being used according to the practices expressed in some idiom is the introduction of some logical vocabulary. We understand the inference-licensing function of assertion by our model of justificatory systems of social practices. We will introduce the conditional as a compound sentence-form constructed out of the basic sentences on which some idiom is defined. The conceptual content of the conditionals will be stipulated; a sentence of the form p→q is to have as content the inference-license of a statement of the appropriateness of an inference from the assertion of p to the assertion of q. Various formal inferential connections between such conditional sentences will then be elicited. For these formal principles to comprise a logic is for them to make possible the explicit formal codification of the material inferential and justificatory practices of some conceptual idiom. This is the task Frege sets for logic in the Begriffsschrift — although in that work he succeeded only in completely codifying the formal inferences involving his logical constructions, his discussion makes clear that the ultimate criterion of adequacy for his conceptual notation is its capacity to express explicitly and precisely the contextual material inferences which define the conceptual roles of non-logical sentences” (p. 23).

We see here too some of the motivation for focusing on compound sentences — all sentences that include explicit conditionals are compound. But according to his analysis, it will turn out that simple sentences of the form “A is B” implicitly express a sort of minimal form of material inference.

I would suggest that the allegedly unconditional or categorical judgment “A is B” is best understood as a kind of shorthand for a judgment like A(x)→B(x). Aristotle’s concern with sayings leads him to treat the sentences that express propositions in a non-atomic way. He glosses “A is B” as expressing “combination” and “A is not B” as expressing “separation”. I have suggested that “combination” could be read as a relation of material consequence, and “separation” as a relation of material incompatibility. This means that for Aristotle too, a proposition can be considered a kind of minimal material inference. (See Aristotelian Propositions.)

“Once the conditional has been introduced as codifying the consequence relation implicit in material inferential practice, and its formal logical properties have been presented, we will use such conditionals both as models for the conceptual roles of non-logical sentences (which will have analogues of introduction and elimination rules, and will be given content as licensing inferences from their circumstances of appropriate application to the consequences of such application) and as tools for making those roles explicit” (ibid).

Treating conditionals as models for the conceptual roles of simple “non-logical” sentences like “A is B” begins from the intuition that these simple assertions are the potential antecedents or consequents of inferences, and that this role in possible inferences is what gives them specifiable meaning.

“We may think of the relation between basic and extended repertoires in a conceptual idiom as defining a consequence function on admissible sets of sentences. For the extended repertoire … comprises just those sentences which an individual would socially be held responsible for (in the sense that the relevant community members would recognize anaphoric deference of justificatory responsibility for claims of those types to that individual) in virtue of the dispositions that individual displays explicitly to undertake such responsibility for the sentences in his basic repertoire. The extended repertoire consists of those claims the community takes him to be committed to by being prepared to assert the claims in his basic repertoire. These community practices thus induce a consequence function which takes any admissible basic repertoire and assigns to it its consequence extension. The function only represents the consequences of individual sentences relative to some context, since we know what the consequences are of p together with all the other sentences in a basic repertoire containing p, but so far have no handle on which of these various consequences might ‘belong’ to p. Thus we have just the sort of material inferential relations Frege presupposes when he talks of the inferences which can be drawn from a given judgment ‘when combined with certain other ones’…. The idiom also expresses a material consistency relation…. The sets which are not idiomatically admissible repertoires are sets of sentences which one cannot have the right simultaneously to be disposed to assert, according to the practices … of the community from which the idiom is abstracted. The final component of a conceptual idiom as we have defined it is the conversational accessibility relation between repertoires” (pp. 23-24).

The accessibility relation will turn out to correspond to whether a sentence makes sense or is categorial nonsense like “Colorless green ideas sleep furiously”.

“Given such an idiom defined on a set of non-logical sentences, we will add conditional sentences p→q to each of the consequence-extended repertoires in which, intuitively, p is inferentially sufficient for q, in such a way that the newly minted sentences have the standard inferential consequences of conditionals such that this formal swelling of the original repertoires is inferentially conservative, that is does not permit any material inferences which were not already permitted in the original idiom” (p. 24).

He defines an idiom as a triple consisting of a set of sets of sentences or basic repertoires, a function from basic repertoires to their consequence extensions, and a function from repertoires to the other repertoires “accessible” from each.

“Recalling the constitutive role of recognitions by accessible community members in determining consequence relations, we may further define p as juridically (inferentially) stronger than q at some repertoire R just in case p is actually stronger than q at every repertoire S accessible from R. This natural modal version of inferential sufficiency will be our semantic introduction rule for conditional sentences…. The conditional thus has a particular content in the context of a given repertoire, a content determined by the inferential roles played by its antecedent and consequent” (p. 25).
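
The introduction rule just quoted can be sketched in Python. The repertoires, accessibility relation, and consequence function below are hand-made stand-ins of my own, not Brandom’s formal construction: the conditional p→q is introduced at R just in case q follows from p at every repertoire accessible from R.

```python
# A toy version of the modal introduction rule for conditionals: p -> q is
# added at a repertoire R just in case q follows from p at every repertoire
# accessible from R. All data and functions here are illustrative.

def stronger(p, q, repertoire, consequence):
    """p is inferentially sufficient for q at this repertoire."""
    return q in consequence(repertoire | {p})

def introduce_conditional(p, q, R, accessible, consequence):
    """Introduce p -> q at R iff p is stronger than q at every accessible S."""
    return all(stronger(p, q, S, consequence) for S in accessible(R))

def consequence(rep):
    """Toy material consequence: a struck, dry match lights."""
    out = set(rep)
    if "struck" in rep and "dry" in rep:
        out.add("lights")
    return out

R = frozenset({"dry"})
accessible = lambda R: [R, R | {"windless"}]   # accessible repertoires keep "dry"

print(introduce_conditional("struck", "lights", R, accessible, consequence))  # True
```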

“We must show that the important formal properties of idioms are preserved by the introduction of conditionals, and that the conditionals so introduced have appropriate properties. In order to permit sentences with more than one arrow in them, we must swell the basic idiom with conditionals first, and then iterate the process adding conditionals which can have first-order conditionals as antecedents or consequents, and so on, showing that the relevant properties of conceptual idioms are preserved at each stage. Our procedure is this. Starting with a basic idiom …, we define a new idiom … with repertoires defined not just over the original set of non-logical sentences, but also containing first-order conditionals, as well as consequence and accessibility relations between them. The same procedure is repeated, and eventually we collect all the results” (ibid).

“The properties of conceptual idioms which must be preserved at each stage in this construction are these. First is the extension condition, that for any admissible repertoire R, R [is a subset of its consequence extension]. The motive for this condition is that the consequence extension c(R) of R is to represent those claims one is taken to be committed to in virtue of being prepared explicitly to take responsibility for the members of R, and certainly one has committed oneself to the claim one asserts, and licenses the trivial inference which is re-assertion justified by anaphoric deferral to one’s original performance. Second of the properties of conceptual idioms which we make use of is the interpolation condition, which specifies that any basic repertoire R which can be exhibited as the result of adding to some other repertoire S sentences each of which is contained in the consequence extension of S, has as its consequence extension c(R) just the set c(S).” (pp. 25-26).

“The idempotence of the consequence function, that for all [repertoires in the domain], c(c(R)) = c(R), is a consequence of the interpolation property. Of course this is a desirable circumstance, since we want idempotence in the relation which is interpreted as the closure under material inference (as constituted by social attributions of justificatory responsibility) of admissible basic repertoires” (p. 26).
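
These two conditions are easy to check on a toy example. The consequence function below is a one-rule closure of my own, purely for illustration, not Brandom’s construction.

```python
# Toy check of the extension condition (R is a subset of c(R)) and the
# idempotence c(c(R)) = c(R) noted above.

def c(rep):
    """Close a repertoire under the material inference 'raining -> wet streets'."""
    out = set(rep)
    if "raining" in out:
        out.add("wet streets")
    return out

R = {"raining", "cold"}
assert R <= c(R)          # extension: one is committed to what one asserts
assert c(c(R)) == c(R)    # idempotence: closing twice adds nothing new
print(sorted(c(R)))       # ['cold', 'raining', 'wet streets']
```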

“The consequence relation is contextual, in that a change in the total evidence which merely adds to that evidence may entail the denial of some claims which were consequences of the evidential subset. Allowing such a possibility is crucial for codifying material inferential practices, which are almost always defeasible by the introduction of some auxiliary hypothesis or other…. [B]oth ‘If I strike this match, it will light’, and ‘If I strike this match and I am under water, it will not light’, can be true and justified. Denying monotonicity (that if [one repertoire is a subset of another], then [its consequence extension is a subset of the consequence extension of the other]) forces our logic to take account of the relativity of material inference to total evidence at the outset, with relativity to context made an explicit part of the formalism instead of leaving that phenomenon to the embarrassed care of ceteris paribus [other things being equal] clauses because standard conditionals capture only formal inference, which is not context-sensitive” (p. 27).

Real things are in general sensitive to context, whereas formal logical tautologies are not.

Monotonicity is a property of logics such that if a conclusion follows from a set of premises, no addition of another premise will invalidate it. This is good for pure mathematics, but does not hold for material inference or any kind of causal reasoning, where context matters. The match will light if you strike it, but not if you strike it and it is wet, and so on.
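
The match example can be coded directly as a non-monotonic consequence function: enlarging the premise set defeats a conclusion drawn from the smaller set. The defeater logic below is hand-made for illustration.

```python
# A non-monotonic consequence function: adding a premise can defeat a
# conclusion that followed from a smaller premise set.

def consequences(premises):
    out = set(premises)
    if "struck" in premises and "under water" not in premises:
        out.add("lights")
    return out

small = {"struck"}
large = {"struck", "under water"}

assert "lights" in consequences(small)          # the match lights...
assert "lights" not in consequences(large)      # ...unless we add a defeater
print(sorted(consequences(large)))              # ['struck', 'under water']
```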

“We are now in a position to investigate the logic of the arrow which this formal, non-substantive expansion of the basic idiom induces. To do so, we look at the sentences which are idiomatically valid, in that every repertoire in the formally expanded idiom contains these sentences in its consequence extension. First, and as an example, we show that if p is in some consequence-extended repertoire, and p→q is also in that repertoire, then so is q, that is, that modus ponens is supported by the arrow” (p. 29).

What he calls a basic repertoire is defined by some set of simple beliefs, assumptions, or presumed facts, with no specifically logical operations defined on it. Non-substantive expansion leaves these unchanged, but adds logical operations or rules.

At this point he proves that modus ponens (the rule that from p and (p implies q) we may conclude q, which he elsewhere refers to as “detachment” of q) applies to the conditional as he has specified it. Additional theorems are proved in an appendix.
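
The detachment result can be illustrated with a toy closure operation in Python. The extend function below is my own sketch, not the construction proved in the text: it closes a set of sentences under detachment, so that whenever p and p → q are both present, q is too.

```python
# A toy check that detachment preserves membership in an extended repertoire:
# if p and the conditional p -> q are both in the extension, so is q.

def extend(repertoire):
    """Close a repertoire under detachment of conditionals 'p -> q'."""
    out = set(repertoire)
    changed = True
    while changed:
        changed = False
        for s in list(out):
            if " -> " in s:
                p, q = s.split(" -> ", 1)
                if p in out and q not in out:
                    out.add(q)
                    changed = True
    return out

R = {"struck", "struck -> lights", "lights -> hot"}
print(sorted(extend(R)))
# ['hot', 'lights', 'lights -> hot', 'struck', 'struck -> lights']
```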

“[T]he most unusual feature of the resulting logic is its two-class structure, treating conditionals whose antecedents are other conditionals rather differently from the way in which it treats conditionals involving only basic sentences. This feature is a direct consequence of the introduction of first-order conditionals based on material inferential circumstances of the repertoire in question, and higher-order conditionals according to purely formal, materially conservative criteria. Thus it is obvious from inspection of the … steps of our construction of the hierarchy of conditionals that the complement of basic sentences in a consequence extended repertoire is never altered during that construction, and that the novel repertoires introduced always have first-order restrictions which are elements of the original set…. Higher-order conditionals, of course, are what are added to the original idiom, and … those conditionals obey a standard modal logic. The principles governing conditionals with basic sentences as antecedents or consequents, however, are those of the pure implicational fragment of Belnap and Anderson’s system EI of entailment” (ibid).

Belnap and Anderson worked on relevance logic, which restricts valid inference to the case where premises are relevant to the conclusion. The premises of a material inference are always “relevant” in this sense. Formal inference on the other hand doesn’t care what the underlying terms or propositions are. It is entirely governed by the abstractly specified behavior of the formal operators, whereas material inference is entirely governed by the “content” of constituent terms or propositions.

That there would be two distinct kinds of conditionals — first-order ones that formally codify material inferences, and higher-order ones that operate on other conditionals in a purely formal way — seems consonant with other cases in which there is a qualitative difference between first-order things and second-order things, but no qualitative difference between second-order and nth-order for any finite n.

“We may view the conditionals which end up included in the consequence extensions of formally extended repertoires as partially ordering all of the sentences of the (syntactically specified) language. Since according to our introduction rule, a repertoire will contain conditionals whose antecedents and consequents are not contained in that (extended) repertoire, the ordering so induced is not limited to the sentences of the repertoire from which the ordering conditionals are drawn. Although the conditional induces an appropriately transitive and reflexive relation on the sentences of the language, the ordering will not be total (since for some p, q and R [in the domain], it may be that neither pq nor qp is in c(R)), and it will not be complete, in that sentences appearing only in inaccessible repertoires will have only trivial implication relations (e.g. p→p)” (ibid).

“The conditionals which do not have antecedents in c(R) are counterfactual with respect to R. These are of three kinds: i) those taken true by the theory codified in the repertoire, that is, counterfactuals in c(R), ii) those taken not to be true, i.e. conditionals not in c(R) but on which R induces non-trivial entailments, and iii) inaccessible counterfactuals, assigned no significance by the extended repertoire (e.g. ‘If the number seventeen were a dry, well-made match’, an antecedent generating counterfactuals which, with respect to a certain set of beliefs or repertoire simply makes no sense). Entailment relations between counterfactuals of the first two kinds and between each of them and base sentences will be underwritten by the induced partial ordering, all depending on the original material inferential practices involving only base sentences” (pp. 29-30).

There are many counterfactuals that we take to be true. For example, if I had left earlier, I would have arrived earlier. In fact counterfactuals are essential to any truth that has any robustness. Without counterfactuals, what Brandom is calling an idiom could apply only to some exactly specified set of facts or true statements. This would make it very brittle and narrowly applicable. For example, any kind of causal reasoning requires counterfactuals, because causes are expected to operate under a range of circumstances, which by definition cannot all hold at the same time. Counterfactuals play an important role in Brandom’s later work.
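Brandom's three-way taxonomy of counterfactuals can be given a toy model. The following sketch is my own construction, only loosely following the quoted taxonomy; the function and category names are invented, and the repertoire data is a made-up example in the spirit of the match case above.

```python
# A rough sketch (my construction, loosely following Brandom's taxonomy)
# classifying conditionals relative to a repertoire R: "held" counterfactuals
# are in the consequence extension c(R); "rejected" ones are not in c(R) but
# involve sentences on which R induces non-trivial entailments; the rest are
# "inaccessible", with only trivial implication relations like p -> p.

def classify(cond, c_R, ordered_sentences):
    ant, con = cond
    if cond in c_R:
        return "held counterfactual"        # kind (i)
    if ant in ordered_sentences or con in ordered_sentences:
        return "rejected counterfactual"    # kind (ii)
    return "inaccessible"                   # kind (iii)

c_R = {("the match is struck", "the match lights")}
ordered = {"the match is struck", "the match lights", "the match is wet"}

print(classify(("the match is struck", "the match lights"), c_R, ordered))
print(classify(("the match is wet", "the match lights"), c_R, ordered))
print(classify(("seventeen is a dry match", "it lights"), c_R, ordered))
```

The three calls land in the three kinds in order: held, rejected, inaccessible.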

“The repertoire which induces such a partial ordering by its conditionals will then be a distinguished subset of the sentences it orders, one which Theorem 1 assures us is deductively closed under modus ponens. Each repertoire is in short a theory or set of beliefs, embedded in a larger linguistic structure defining the implications of the sentences in that theory. Not only do different repertoires codify different theories, but they assign different significances to syntactically type-identical sentences of those theories, in that p as an element of c(R) may have one set of inferential consequences, and as an element of c(R’) have a different set of consequences. The repertoires ordered by their indigenous implication relations thus deserve to be called ‘webs of belief’ in Quine’s sense, as the smallest units of analysis within which sentences have significance. The idiom, comprising all of these repertorial structures of implicational significance and embedded belief, is not a set of meanings common and antecedent to the repertoires, but is the structure within which each such web of belief is a linguistic perspective made possible by a justificatory system of social practices” (p. 30).

Each repertoire counts as a “theory” or set of beliefs.
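The closure property that Theorem 1 guarantees can be sketched concretely. This is a minimal toy model of my own, not Brandom's construction: conditionals are modeled as antecedent-consequent pairs, and closure under modus ponens just means repeatedly detaching consequents whose antecedents are already present.

```python
# A minimal sketch of deductive closure under modus ponens, the property
# Brandom's Theorem 1 guarantees for repertoires. Conditionals are modeled
# as (antecedent, consequent) pairs; the names here are my own.

def close_under_mp(sentences, conditionals):
    """Repeatedly detach consequents whose antecedents are present."""
    closed = set(sentences)
    changed = True
    while changed:
        changed = False
        for ant, con in conditionals:
            if ant in closed and con not in closed:
                closed.add(con)
                changed = True
    return closed

theory = close_under_mp(
    {"p"},
    {("p", "q"), ("q", "r"), ("s", "t")},  # ("s", "t") never fires: no "s"
)
print(sorted(theory))  # ['p', 'q', 'r']
```

Note that the conditional whose antecedent is never asserted contributes nothing to the theory, which is just the point about counterfactuals above: it still orders sentences without being detachable.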

“The systematic variation of the significance of those sentences from one individual to another expressed in a formally expanded idiom then exactly answers to whatever communication is going on in the original set of practices. The possibility of communication consists in [a] kind of coordination of significances across repertoires codified in a formally expanded idiom” (p. 31).

The success or failure of communication depends on something like a kind of translation from your repertoire to mine.

“We have described the practical origins and effects of elements of extended repertoires which are first-order sentences of the language, in terms of attributions and undertakings of justificatory responsibility and the issuing and recognition of inferential authority. What, in these terms, should we take to be the significance of a conditional pq? The presence of such a conditional in the formally expanded consequence extension of the repertoire exhibited by an individual should signify, first, that that individual recognizes others who are prepared to assert p as licensing the inference to q, and, second, that he recognizes the assertion of p as justifying the assertion of q” (p. 32).

“So if all those recognized by the individual exhibiting R are responsible for the conditional pq and p [is in] c(R), then q [is in] c(R), which means that pq plays the proper role as codifying the recognition of inferential licensing and appropriate justification of q by p” (ibid).

“Finally, we state a more general condition under which the arrow we have defined will be a practically complete expression of a justificatory system” (ibid).

Next in this series: Anaphora and Prosentences

What Meaning Is

Brandom has characterized the focus of his interests as the theory of meaning. Recent additions to his website include a fascinating 1980 typescript “Assertion and Conceptual Roles”. This early piece has a programmatic character. It goes even further than the 1976 dissertation in anticipating the leading ideas of his major works. (I will omit the also interesting mathematical-logical formalization that he experiments with here, but steers away from in Making It Explicit and A Spirit of Trust.)

While Brandom is resolutely modern in his identifications, this sort of investigation was pioneered by Aristotle. Meaning and truth are approached in terms of a kind of normative “saying” that is up to us. But the paradigmatic kind of saying is what Aristotle calls “saying something about something”, so it is not entirely up to us. Finally, the paradigmatic use of language is dialogical, imbued with a Socratic ethic of dialogue and free-spirited inquiry. And what we most fundamentally are is dialogical talking animals.

As Brandom puts it in the first sentence, “The paradigmatic linguistic activity is saying that-p, in the sense of asserting, claiming, or stating that-p for some declarative sentence p” (p. 1).

Today “declarative” is also an important if ill-defined concept in the theory of programming languages, where its use has a close relation to the logical use that is given ethical significance here. In that context, it is often glossed as focusing on the what not the how (or the end and not the means), although that is a simplification.

The deep issue underneath both these disparate cases is something like the meaning of meaning. In what follows, I think Brandom makes some real progress in clarifying what is at stake. It has both ethical and formal dimensions.

“Frege shows in the Begriffschrift that the ways in which sentences can occur as significant constituents of other sentences require us to distinguish the content of such an assertion (what is asserted) and the force of the assertion (the asserting of that content). For when a sentence appears as the antecedent of a conditional, it must have something, let us call it the ‘content’, in common with its occurrence as a free-standing assertion, or there would be no justification for detaching the consequent of the conditional when one is prepared to assert its antecedent. On the other hand, the asserting of the conditional does not include the asserting of the antecedent, since the asserter of the conditional might well take the former to be true and the latter to be false. It is a criterion of adequacy for any account of either of these features of declarative discourse that it be compatible with some correct account of the other” (ibid).

I had not realized that the Fregean distinction of force and content arose in this context of reference relations between parts of compound sentences. It seems likely that this point attributed to Frege was a source for Michael Dummett’s work on compound sentences in which one part refers to another, which Brandom had made significant use of a few years earlier, in the dissertation. Dummett was a leading Frege scholar.

It strikes me also that in a formal context, this inter-reference between components of compound sentences could serve as an inductively definable and thus paradox-free version of “self” reference. In a more discursive, less formal context, it recalls Kantian-Hegelian “reflection” and other interesting weakenings of strict identity like Hegel’s “speculative” identity or Ricoeur’s “narrative” identity. Instead of a formally strict and thus empty global self-reference, it is a matter of specifiable internal cross-reference.

Further below, Brandom will explicitly connect this with the theme of anaphora or internal back-reference that he later develops at length in Making It Explicit as a way in which identities are constituted out of difference. In the current text he will also relate it to the “prosentential” theory of truth. Prosentences like “that is true” are the sentential analogue of pronouns — they refer to sentences that express definite propositions in the same way that pronouns refer to nouns. Brandom is saying that concrete meaning involves both Fregean sense and Fregean reference.

“Exclusive attention to the practice of asserting precludes understanding the conceptual significance which such linguistic performances express and enable, while the complementary exclusion must cut off semantic theory from its only empirical subject matter, talking as something people do” (ibid).

Standard bottom-up compositional approaches to semantics focus exclusively on the “content”, and not on the related doing.

“[I]t might be tempting to think that such a theory offers special resources for a theory of asserting as representing, classifying, or identifying. It is important to realize that the same considerations which disclose the distinction of force and content expose such advantages as spurious” (ibid).

“There is no reason to suppose that the semantic representability of all sentences in terms of, say, set-membership statements or identity statements, reflects or is reflected in the explanatory priority of various kinds of linguistic performances” (p. 2).

“It then turns out that giving a rich enough description of the social practices involved in assertion allows us to exhibit semantic contents as complex formal features of performances and compound dispositions to perform according to those practices. In other words, I want to show that it is possible to turn exactly on its head the standard order of explanation canvassed above” (p. 3).

“To specify a social practice is to specify the response which is the constitutive recognition of the appropriateness of performances with respect to that practice…. But in the case of discursive practices, the constitutive responses will in general themselves be performances which are appropriate (in virtue of the responses the community is disposed to make to them) according to some other social practice. The appropriateness of any particular performance will then depend on the appropriateness of a whole set of other performances with similar dependences. Each social practice will definitionally depend upon a set of others” (p. 4).

This notion of practice is thus inherently normative or value-oriented. Brandom compares his holistic view of practices with Quine’s holistic view of the “web of belief”.

“Definitional chains specifying the extension of one practice in terms of its intension, and that intension in terms of another extension, and so on, may loop back on one another. We will say that any system of social practices which does so … is a holistic system…. Such a system of practices cannot be attributed to a community piecemeal, or in an hierarchic fashion, but only all at once.”

The key point about such a holistic system is that there are mutual dependencies between parts or participants.

“It follows that in systems containing essentially holistic practices, the norms of conduct which are codified in such practices are not reducible to facts about objective performances. The appropriateness or inappropriateness of any particular performance with respect to such a practice cannot ultimately be expressed in terms of communal dispositions to respond with objectively characterizable sanctions and rewards…. The norms themselves are entirely constituted by the practices of socially recognizing performances as according or not according with them” (p. 5).

“Facts about objective performances” have a monological character. In technical contexts this can be of great value. But ethical and general life contexts have an inherently dialogical or mutual character.

“A community ought to be thought of as socially synthesized by mutual recognition of its members, since a plausible sufficient condition of A’s being a member of some community is that the other members of that community take him to be such…. This simple Hegelian model of the synthesis of social entities by mutual recognition of individuals has the advantage that it preserves the basic distinction between the individual’s contribution to his membership in a group and the contribution of the other members” (p. 6, emphasis added).

Here we have the first appearance of the great theme of mutual recognition in Brandom’s work. Brandom has dug deeply into this particular aspect of Hegel, making very substantial contributions of his own. In ethics, mutual recognition has roots in Aristotelian philia (friendship or love) and the so-called golden rule (do and do not do to others as you would have them do and not do to you). Brandom sees that Hegel treats mutual recognition not only as an ethical ideal but also as a fundamental explanatory principle.

“The crucial point is that the reflexive recognition (as social self-recognition) be an achievement requiring the symmetry of being recognized in a particular respect by those whom I recognize in that respect, and presupposing that my recognitions will be transitive…. A community is then any set P which is closed under transitive recognition…. [N]o one member is omniscient or infallible about such membership…, nor is it required that everyone recognize everyone else in the community” (p. 7).

The symmetry of recognizing and being recognized leads to the idea that authority and responsibility ought to be symmetrically balanced. This has tremendous implications.

“Asserting that-p is, among other things, to explicitly authorize certain inferences…. Saying this much does not yet say what the constitutive recognition of this authorizing consists in…. Our account of the authorizing of inferences will draw upon the second major feature of the social role of assertion” (ibid).

The idea of understanding acts of assertion principally in terms of an inferential constitution of meaning is transformative. Others have suggested or implied something like this, but Brandom expresses it with more clarity and thoroughness than anyone.

Reasoning is not a merely technical activity. The constitution of meaning has fundamental ethical significance.

“This second feature is noted by Searle when he says that an assertion (among other things) ‘counts as an undertaking to the effect that p represents an actual state of affairs’. Leaving aside the representationalist expansion of the content ascribed, we can see in the use of the term ‘undertaking’ the recognition of a dimension of responsibility in assertion, coordinate with the previously indicated dimension of authority. In asserting that-p one is committing oneself in some sense to the claim that-p. What sort of responsibility is involved? The leading idea of the present account is that it is justificatory responsibility which one undertakes by an assertion. Justification and assertion will be exhibited as essentially holistic social practices belonging to the same system of practices, internally related to one another. So the recognitive response-type which is the intension of the social practice of assertion must include recognition of the assertor as responsible for justifying his assertoric performance under suitable circumstances…. Authority in this sense consists in the social recognition of a practice as authorizing others” (pp. 9-10).

“What is essential is that the relation between the intensions and the extensions of a family of social practices underwrite a relation of what we may call (extending the usual sense) anaphoric reference between various performances. The term ‘anaphoric’ is used to indicate that this ‘referential’ relation is internal to a system of social practices, where one performance refers to another as one word refers to another in A: ‘Pynchon wrote the book’ B: ‘But has he tried to read it?’, where the pronouns anaphorically refer to the antecedent terms ‘Pynchon’ and ‘the book’. No relation between discursive and non-discursive items is supposed. A prime use of this expressive resource of anaphoric reference to typed utterings is exhibited just below, as a feature of demands for justification” (p. 12).

In Making It Explicit, Brandom uses linguistic anaphora to explain the constitution of objects as objects. Here he gives it an even broader role. Anaphora or back-referencing is the birth of substance, solidity, and modality in meaning. Again the ethical dimension comes to the fore. Assertion as lived concerns neither naked Parmenidean being nor pure objective facts.
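The mechanics of anaphoric back-reference in the Pynchon example can be sketched in code. The following toy resolver is my own illustration, not anything from Brandom's text: each pronoun is linked to the most recent antecedent of a compatible type, just as “he” and “it” pick up “Pynchon” and “the book”.

```python
# A toy anaphora resolver (my own illustration, not from Brandom's text):
# each pronoun refers back to the most recent compatible antecedent, the
# way "he" and "it" pick up "Pynchon" and "the book" in the quoted example.

ANTECEDENT_TYPE = {"Pynchon": "person", "the book": "thing"}
PRONOUN_TYPE = {"he": "person", "it": "thing"}

def resolve(discourse):
    """Map each pronoun token to the latest antecedent of matching type."""
    seen = []      # antecedents in order of appearance
    links = {}
    for token in discourse:
        if token in ANTECEDENT_TYPE:
            seen.append(token)
        elif token in PRONOUN_TYPE:
            for ant in reversed(seen):
                if ANTECEDENT_TYPE[ant] == PRONOUN_TYPE[token]:
                    links[token] = ant
                    break
    return links

discourse = ["Pynchon", "wrote", "the book", "but", "has", "he",
             "tried", "to", "read", "it", "?"]
print(resolve(discourse))  # {'he': 'Pynchon', 'it': 'the book'}
```

The point of the sketch is that the “referential” relation here is entirely internal to the discourse: nothing outside the token sequence is consulted.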

“The key to our attempt to offer sufficient conditions for assertion by specifying a class of systems of social practices is the relation of justification which a set of assertions can have to another assertion…. Both the dimension of authority and the dimension of responsibility will be explicated in terms of the recognition of justification. Each of the different types of assertion which play a role in the systems we will examine, free-standing assertions, assertions which are the results of inferences authorized by other assertions, and assertions which are part of the justification which another asserting made its asserter responsible for, each of these types of assertion incurs a justificatory responsibility itself and authorizes further inferences. The relevant responsibility is to produce (what would be recognized as) an appropriate justification, if one is demanded…. The utterance of a conventional request for justification addressed to a foregoing assertion is to be always appropriate, and not itself in need of justification. The cognitive significance of the linguistic practices we describe stems from this universal appropriateness of demands for further justification (as Sellars takes the ‘rational’ structure of scientific practice to consist in its being a ‘self-correcting enterprise which can put any claim in jeopardy, though not all at once’…. An utterance in the conventional style of assertions (utterances which undertake justificatory responsibilities and issue inference licenses whose contents vary as the content of the assertion vary) will constitutively be recognized as possessing that authority only so long as the conditional responsibility to justify if queried has not been shirked…. No more for this distinction than elsewhere in the social practice story need we appeal to intentions or beliefs of performers” (pp. 12-13).

As I’ve mentioned a number of times, other variants of this ethics of dialogue or dialogical ethics have been developed by Plato, Gadamer, and Habermas.

“For just as inference passes the authority of assertion one way along the anaphoric chain, it also passes the justificatory responsibility incurred the other way along that chain” (p. 14).

“The extended responsibility induced by the presentation of a justification is defeasible by the performance of a counter-justification, comprising further assertions…. The categories of justificatory and counter-justificatory performances are not disjoint” (p. 17).

“Each of these conditions codifies some aspect of our ordinary practices of giving and asking for reasons” (p. 18).

“[A] set of basic and extended repertoires related by an accessibility relation will be called a conceptual idiom…. It is in terms of these still rather particularized structures that we will define assertional contents or conceptual roles” (pp. 18-19).

Next in this series: Conditionals and Conceptual Roles

Convention, Novelty, and Truth in Language

We have been exploring the earliest publicly available work of the great contemporary philosopher Robert Brandom, his doctoral dissertation from 1976. He has been concerned to develop the philosophy of language along pragmatist lines, while working hard to point out that a pragmatist approach need not be construed as globally rejecting talk about objectivity, truth, and reality. The pragmatist approach is appealing as a sort of third way that avoids both subjectivist and objectivist excesses. This is the last chapter before his conclusion.

“[W]e saw how the notion of truth and the truth conditions of sentences could arise in a pragmatic investigation into the social practices which are the use of a language by a population. That is, we saw how an account of social practices (which are whatever the linguistic community takes them to be) can require us to consider the sentences uttered in those practices as making claims which are objectively true or false, regardless of what the community takes them to be” (Brandom, Practice and Object, p. 129).

He has argued earlier that understanding the meaning of compound sentences (in which one clause refers to and modifies another) implicitly does after all presuppose a technical concept of truth that goes beyond the warranted assertibility that Dewey recommends as a less pretentious replacement for truth-talk.

Both in ordinary life and in ordinary ethical discourse, warranted assertibility — justification in taking things to be such-and-such — is able to do the work commonly allotted to claims about truth that is what it is independent of us. But insofar as we engage in the meta-level discourse about discourse that is already implied by the understanding of compound sentences, it becomes necessary to introduce a distinction between how things are for us and how they are in themselves. This kind of situation can also be seen as motivation for Kant’s talk about “things in themselves”.

“[W]e will see how that sort of inquiry requires that a sophisticated grammar be attributed to the language being investigated, and in particular requires notions of syntactic deep-structure, meaning, and denotation or reference. We thus extend the method of the previous chapter to consider sub-sentential linguistic components, and see what it is about the practices associated with them in virtue of which it is appropriate to associate them with objective things or features” (pp. 129-130).

He will defend Chomsky’s notion of deep syntactic structure objectively existing in natural language against Quine’s instrumentalist critique.

Only by abstraction from things said do we come to consider individual words in isolation. Here, at odds with the standard compositional account of meaning in linguistics and analytic philosophy of language, Brandom gives explanatory priority to sentences over words, and to propositions over terms. This will be more explicitly thematized in his later work.

The compound sentences analyzed by Dummett that Brandom refers to as requiring an auxiliary notion of truth beyond epistemic justifiability partake of the character of discourse about discourse, because some parts of them refer to and modify other parts.

He considers what it means to investigate the use of a natural language — what he will later call normative pragmatics. Investigating language use implicitly means investigating proprieties of use, along with their origin and legitimation. We may also collect ordinary empirical facts about the circumstances of concrete “takings” of propriety and legitimacy and their contraries, without prejudice as to whether or not those takings are ultimately to be endorsed by us.

Using the neutral language of “regularities”, he specifies a sort of minimalist, almost behaviorist framework for investigating language use that is designed to be acceptable to empiricists. In later work, he develops a detailed analogy between the deontic moral “necessity” of Kantian duty and a “subjunctively robust” modal necessity of events following events that is inspired by the work of analytic philosopher David Lewis on modality and possible worlds.

“We may divide these regularities of conduct into two basic kinds: Regularities concerning what noises are made, and regularities concerning the occasions on which they are made…. The phonetic descriptions are just supposed to be some rule which tells us what counts as an instance of what utterance-type…. Without attempting to say anything more specific about these regularities, we can express what a speaker, as we say, ‘knows’, when he knows how to use an utterance-type by associating with it a set of assertibility conditions” (p. 130).

“In terms of these notions, we can represent a language by a set of ordered pairs called sentences. The first element of each ordered pair is a phonetic description and the second element is a set of assertibility conditions…. A linguist who has such a representation of the sentences of some alien language ought to be able, subject to various practical constraints, to duplicate the competence of the natives, that is, to converse with them as they converse with each other” (p. 131).

Here he is applying a stipulative re-definition of the ordinary English word “sentence”. “Ordered” pair just means it is always possible, given a member of the pair, to say which member it is. The pair here consists of 1) the sequence of sounds by which a particular sentence is identified, and 2) the conditions under which it is appropriate to use that sentence.

“[A] theory of the use of a language just is some mechanism for generating a list of ordered pairs of phonetic descriptions and assertibility conditions which codifies the social practices which are speaking the language” (p. 132).

Every sentence in every natural language has the two above aspects — a recognizable series of sounds that identifies it, and conditions for its appropriate use.
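Brandom's ordered-pair representation can be made concrete. The following sketch is my own toy model under invented data: the pseudo-phonetic strings and condition labels are made up, and a simple subset test stands in for the much richer notion of assertibility conditions.

```python
# A minimal sketch of Brandom's stipulative representation of a language:
# each "sentence" is an ordered pair of a phonetic description and a set
# of assertibility conditions. The example entries are invented.

language = {
    ("pliz pas D@ sOlt", frozenset({"at table", "salt out of reach"})),
    ("It Iz reInIN", frozenset({"rain observed"})),
}

def assertible(language, phonetic, circumstances):
    """A sentence may be uttered when all of its assertibility conditions
    are satisfied by the present circumstances."""
    for sounds, conditions in language:
        if sounds == phonetic:
            return conditions <= circumstances
    return False

print(assertible(language, "pliz pas D@ sOlt",
                 {"at table", "salt out of reach", "hungry"}))  # True
print(assertible(language, "It Iz reInIN", {"sunny"}))          # False
```

A linguist in possession of such a table could, as the quote says, in principle duplicate the natives' competence, since both elements of each pair are specified.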

“Speaking only about the first element of the ordered pairs which we have taken to specify a language, Quine takes the task of a theory of syntax to be the generation of the infinite set of phonetic descriptions. He then argues that if the aim of a theory of syntax is determined by this target description of speaker competence, then many different axiomatizations will generate the same set of phonetic descriptions, and hence be descriptively adequate. Insofar as a theory of syntax is a part of the project of generating the right set of sentences, then, we may choose between alternative theories only on the basis of convenience of their representation” (pp. 132-133).

This is an example of Quine’s instrumentalism that was mentioned earlier. Syntactic constructs in a natural language like English are identifiable by their mapping to distinct series of sounds. I haven’t spent enough time on Quine directly to say much more at this point, but to identify syntax with the phonetics used to pick out syntactic distinctions seems reductionist. Before criticizing it, he elaborates on Quine’s view.

“Representing the conversational capacities as ordered pairs of phonetic descriptions and assertibility conditions, we will see a good translation as associating with each phonetic description in one language a phonetic description in the other which is paired with the same assertibility conditions…. In this way a translation function would enable one to converse in a foreign language. If the goals of translation are regarded as determined in this way by pairs of phonetic descriptions and assertibility conditions, then convenience of representation and arbitrary choice will enter here as much as on the syntactic side” (p. 133).
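The criterion of good translation in this passage, namely pairing phonetic descriptions that carry the same assertibility conditions, can be sketched directly. This is my own toy model with invented example data; a real account would also have to handle the arbitrariness Quine presses on.

```python
# A sketch (my own, following the quoted criterion) of a translation
# function: pair a phonetic description in one language with one in the
# other carrying identical assertibility conditions. Example data invented.

def translate(source, target):
    """Map each source phonetic description to a target description
    paired with the same assertibility conditions, where one exists."""
    by_conditions = {conds: sounds for sounds, conds in target}
    return {sounds: by_conditions.get(conds)
            for sounds, conds in source}

english = {("It Iz reInIN", frozenset({"rain observed"}))}
german = {("Es reGnet", frozenset({"rain observed"}))}

print(translate(english, german))  # {'It Iz reInIN': 'Es reGnet'}
```

When several target sentences share the same assertibility conditions, the sketch just picks one, which is exactly where the "arbitrary choice" of the quote enters.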

“Denotational relations are presumably correlations between phonetically distinguishable elements … which appear in the phonetic descriptions of many sentences, and some element which regularly appears in the assertibility conditions of those sentences. A theory of denotation would consist of a relatively small list of such correlations, together with a set of structural rules which would permit the derivation of the full set of ordered pairs which are the sentences of the language, by combination of the various elements…. If one such axiomatization or recipe is possible, many are” (p. 134).

“More generally, given any scheme, we can substitute as the denotation of any phonetically specified expression anything systematically related to it, …and adjust the rest of the scheme to get the same assertibility conditions” (pp. 135-136).

“The point is that we may think of a language as being an abstract object consisting of a set of social practices…. If one now considers the various theoretical notions which have been thought to be crucial to the specification of a language by those who are not primarily concerned with social practices — the syntactic and semantic structure of its sentences, their meaning and the denotation of expressions occurring in them — one finds these notions playing drastically reduced roles” (p. 136).

“It is our purpose in this chapter to show how to circumvent … conventionalism while retaining the pragmatic point of view which renders language as comprised of social practices” (p. 137).

Classic 20th century analytic philosophy has a very thin notion of language use, effectively identifying it with empirically existing conventions. In contrast to this, Brandom sees in Noam Chomsky’s linguistics a crucial recognition of the ubiquity of linguistic novelty. He quotes Hilary Putnam’s critique of conventionalism:

“We see now why conventionalism is not usually recognized as essentialism. It is not usually recognized as essentialism because it is negative essentialism. Essentialism is usually criticized because the essentialist intuits too much. He claims to see that too many properties are part of a concept. The negative essentialist, the conventionalist, intuits not that a great many strong properties are part of a concept, but that only a few could be part of a concept” (ibid).

In contemporary usage, “essentialism” is a bad thing that consists in taking putatively unproblematic essences of things for granted. In contrast, Plato and Aristotle’s preoccupation with questions of what we translate as “essence” reflects a significant problematization.

Brandom now turns to a careful criticism of Quine.

“Quine’s arguments as we have reconstructed them seek to show that, for a particular specification …, the role of a translation function (or of syntactic deep structure, or of denotational scheme) can be played equally well by a number of different notions” (p. 138).

“Such sound conventionalist arguments cannot be refuted. They can be shown not to impugn the usefulness or objectivity of the notions they apply to. To do this one simply has to come up with some other project, with respect to which the various versions of, e.g., translation, do not play equally well the role that notion is invoked to play” (pp. 138-139).

“The question I want to consider is, roughly, where the assertibility conditions and phonetic descriptions come from. In virtue of what does a sentence have the assertibility conditions and phonetic description that it does?” (p. 140).

Questions about conventional use are questions of empirical fact. Brandom’s “in virtue of what” question is on the other hand properly philosophical, in a sense that Plato and Aristotle would recognize.

We come to Brandom’s defense of Chomsky against Quine.

“Chomsky has argued on statistical grounds that most sentences used by adult native speakers have never been heard or used by that speaker before, and indeed that the majority of these have never been uttered by anyone in the history of the language. This is a striking empirical observation of far-reaching theoretical significance. Let us consider the sentences of English which have never yet been used. Not just any phonetic description is the phonetic description of some sentence of this set…. But a native speaker can not only discriminate between the phonetic descriptions which are on this list and conform to them in his own utterances, he has exactly the same acquaintance with the assertibility conditions of such a sentence that he does with the assertibility conditions of some familiar sentence like ‘Please pass the salt’. That is, a native speaker can discriminate between occasions on which it might be appropriately used and those on which it would be inappropriate. Granting, as we must, that there is a community of dispositions concerning these novel sentences which is sufficient to determine a social practice regarding their use, a notion of correct or incorrect utterance, surely this fact is remarkable. Why should the community agree as much about how to use sentences no one has ever heard before as about how to use common ones?” (pp. 140-141).

“For human beings, training in the use of the relatively few sentences we have actually been exposed to determines how we will use (or would use) the vast majority of sentences which we have not been exposed to” (p. 142).

“The question ‘In virtue of what is there a correct usage for a sentence no one has ever used before’ is distinct from, but not independent of the question ‘How do individual members of the linguistic community come to acquire dispositions which conform to the standard of correct usage for novel sentences?’ The questions are distinct because no individual’s dispositions, however acquired, establish a standard of correct usage. The questions are not independent since using a sentence is a social practice…. The question of how such agreement is achieved, its source and circumstance, is clearly related to the question of how individuals come to behave in ultimately agreeable ways…. The explanation of projection by populations must ultimately rest on facts about individual projective capacities…, although that explanation need not resemble the explanation of any such individual capacity” (pp. 143-144).

He clarifies what he means by projection.

“I want to argue that a theory of grammar is properly a part of the attempt to explain and predict the projective capacities of language-using populations. A theory of syntactic structure, of meaning, and of denotation and truth are to provide a framework for accounting for the empirical fact that the practices of a population which are the use of [a] relatively small number of sentences of a natural language determines, for that population, the use of a potentially infinite remainder they have never been exposed to” (p. 144).

“The notion of ‘grammar’ which I am addressing here is that of an interpreted categorial-transformational grammar. Such a grammar is an account of the generation of surface sentences of a language … from an underlying set of deep structures” (p. 144).

This is grammar in a Chomskyan rationalist, antibehaviorist sense.
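
Chomsky’s observation that finite means determine unbounded novel usage can be made vivid with a toy generative grammar. Everything below — the rules, the vocabulary, the depth bound — is my own illustrative invention, a minimal sketch rather than anything in Brandom’s or Chomsky’s texts:

```python
import itertools

# A toy context-free grammar: finitely many rules and words, yet (thanks to
# the recursive NP rule) unboundedly many derivable sentences. All rules and
# vocabulary are invented for illustration.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion: NP embeds VP
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["dog"], ["bird"]],
    "V":  [["saw"], ["chased"], ["slept"]],
}

def expand(symbol, depth):
    """Yield all word sequences derivable from `symbol` in at most `depth` rule applications."""
    if symbol not in GRAMMAR:  # a terminal word
        yield [symbol]
        return
    if depth == 0:
        return
    for rule in GRAMMAR[symbol]:
        # Expand each symbol of the rule, then splice the alternatives together.
        parts = [list(expand(s, depth - 1)) for s in rule]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

# A small "familiar" corpus versus a deeper expansion of the same rules.
shallow = {" ".join(words) for words in expand("S", 3)}
deeper = {" ".join(words) for words in expand("S", 5)}
novel = deeper - shallow  # sentences the rules license but the corpus never contained
print(len(shallow), len(deeper), len(novel))
```

The point of the sketch is just that `novel` is non-empty: the same finite rules that generate the familiar corpus already settle the correct form of sentences that never occurred in it, which is the fact Brandom wants a theory of grammar to explain.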

“The projective capacities which are to be explained are obviously not entailed by the practices and dispositions codified in a set of those phonetic descriptions and assertibility conditions…. An account of projection is thus an explanation of how people, being the sorts of organisms that we are, can engage in the complex social practices we do engage in. It is just this sort of inquiry which we considered … as the sort of inquiry within which the objects involved in a practice become important” (p. 145).

This sheds new light on how individual words and phrases come to mean what they do.

“Consideration of projective facts of this sort can lead us, further, to attribute structural classes of sub-sentential components to some speaker” (ibid).

“We are interested in seeing how, by looking at facts about the acquisition of vocabulary and compounding forms by a subject, we can in principle explain his open-ended competence to use novel utterances, by exhibiting that competence as the product of projective capacities associated with classes of sub-sentential components” (p. 147).

“Projective classes for an individual were pictured as attributed on the basis of two sorts of acquisition, roughly the acquisition of some projective form, and the acquisition of vocabulary” (pp. 147-148).

“Indeed, it is only in terms of such projective dispositions that we can explain the notion of correctness for novel utterances. We can only explain how there should be such an agreement in terms of shared structural classes induced by familiar expressions, which determine the projection to novel utterances” (p. 148).

Linguistic structure is a theoretical object of just the kind whose status is a matter of dispute between the realists and the instrumentalists.

“This picture of linguistic structure as postulated to account for a speaker’s ability to use novel utterances correctly, on the basis of facts about the acquisition of capacities to project sub-sentential expressions, leads immediately to a change in the criteria of adequacy we impose upon translation functions, and accordingly to a change in the notion of the ‘meaning’ of a sentence which is preserved by translation” (p. 150).

From an empiricist point of view, questions about norms are questions of fact about what is usually the case. Empirical norms are “norms” in a non-normative, statistical sense of “normal” that has nothing to do with what should be the case, except accidentally. The projection of grammar to novel cases on the other hand is possible because grammar has a properly normative sense of “right” usage that is independent of whatever we conclude are the facts about statistically “usual” usage.

“[I]f translation is really to transform the capacity to speak one language into the capacity to speak another, it must transform an individual’s capacity to project novel sentences…. In order to learn to speak the new language, to form novel sentences and use them appropriately, an individual must have a translation-scheme which does more than match assertibility conditions. It must generate the matched assertibility conditions of an infinite number of sentences on the basis of a familiarity with the elements out of which they are constructed, as exhibited in fairly small samples” (p. 150).

Speaking is not merely the utterance of sounds, and it is not just an imitation of other speaking. Concrete meanings presuppose learned notions of rightness or goodness of fit that are furthermore always in principle disputable. This also requires a non-behaviorist account of learning.
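
Brandom’s demand here — that a translation scheme generate the use of unboundedly many sentences from familiarity with their elements — can be sketched very crudely. The lexicon and the word-for-word scheme below are my own invented illustration (real translation is of course not word-for-word):

```python
# A toy compositional translation scheme: it pairs lexical items rather than
# whole sentences, and so projects to sentences it was never given as pairs.
# The lexicon is invented for illustration; real translation is not
# word-for-word.
LEXICON = {"the": "le", "cat": "chat", "dog": "chien", "sees": "voit"}

def translate(sentence):
    """Translate word by word from the finite lexicon."""
    return " ".join(LEXICON[word] for word in sentence.split())

# Neither sentence was paired in advance; the finite lexicon determines both
# (and unboundedly many others built from the same elements).
print(translate("the cat sees the dog"))  # le chat voit le chien
print(translate("the dog sees the cat"))  # le chien voit le chat
```

A scheme that merely listed matched sentence pairs would, by contrast, be silent on every novel sentence.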

“Our account of this fact must show how what the subject learned to do before enables him to use this expression in just this way now, even though he has never been exposed to a correct use of it” (p. 151).

“Projection is not just a matter of using novel utterances, but also of using familiar ones under novel circumstances” (ibid).

“We can conclude that competence involved, not just in using … a free-standing utterance, but in projecting it as a genuine component of compound utterances, cannot be expressed merely by assertibility conditions, but requires some additional element” (p. 153).

“We should notice that the argument we have just considered is formally analogous to two arguments we have seen before. In the first place, it is just the same style of argument which we employed … in order to show that truth conditions were required to account for the contribution by component sentences to the assertibility conditions of compound sentences containing them…. All we have done here is to extend the earlier argument to sub-sentential compounding, an extension made possible by the more detailed consideration of why compounding is important. Second, this argument … is analogous to the ‘syntactic’ arguments of Chomsky…. In each case similar surface forms (phonetic descriptions and assertibility conditions respectively) are assigned different deep structures on the basis of their different projective roles…. So it is clear that these expressions would have to be associated with something besides assertibility conditions in our theory of their projection anyway” (pp. 154-155).

“Our explanation of the fact that there are correct phonetic descriptions and assertibility conditions for sentences no one has ever used before will be that the use of those sentences is determined by the grammar, … and that any individual’s learning to use the language is his learning to conform to the regularities of projection codified in that grammar” (p. 156).

“We have found that explaining the actual, empirical generation of the sentences of the language, shown by the sorts of projection of one corpus of utterances onto another which actually occur, requires that structural elements underlying phonetic structure be assigned to parallel structural elements underlying the assertibility conditions…. Just as the structure underlying the phonetic descriptions is plausibly identified as syntactic structure, so the corresponding structure underlying assertibility conditions is plausibly identified with semantic structure” (ibid).

“The same argument which gave us objective truth conditions … may thus be extended, within the context of our more detailed account of the empirical project which produces a grammar, to yield a parallel account of the function and origin of objective denotations” (p. 158).

“The case of the brown rabbit with a white foot shows that the denotations associated with the expressions ‘rabbit’ and ‘undetached rabbit-part’ must determine in some way the boundaries which white patches must exhibit in order to be grounds for reporting white rabbits or white undetached rabbit-parts” (ibid).

“But the boundaries which determine what objects or objective features are denoted by the expressions are not apparent boundaries…. Explaining the different patterns of projection of the elements of these pairs requires an objective difference in boundaries around white patches” (p. 159).

“It is important to realize that our grammar does not just seek to account for individual linguistic competence. It seeks to account for the shared projective practices in virtue of which there is a distinction between correct and incorrect uses of sentences no one has ever used before…. The grammar must account for the correct and incorrect potential uses of even quite complicated sentences which the ordinary man would never use” (ibid).

“[D]enotational schemes are part of an empirical explanation of certain social practices. Such explanations must cohere with the empirical explanations we are prepared to offer for other sorts of human conduct…. It is a prime virtue of the account we have offered of the question to which a grammar would be an answer that it shows us we can pick the objects in terms of which we explain projective practices in the same way we pick the objects in terms of which we explain color vision, indigestion, and quasars” (p. 162).

Here he is appealing to empirical explanation, and to something like the positivist notion of the unity of science. I am inclined to go to the opposite extreme, and to argue that genuine explanation is never merely empirical. There are empirical things, and we do want to explain them. There also is an empirical field of experience, but it too belongs to what is to be explained. In themselves empirical things do not explain anything. I think, though, that coherence does not apply only to explanation. There is also an implicit coherence on the level of what is to be explained. That is the sounder basis of the ideal of the unity of science.

In later work he explicitly criticizes empiricism in the philosophy of science, but he continues to be interested in empirical things, as evinced by many of his examples and by the theme of “semantic descent” in A Spirit of Trust.

Truth and Assertibility

Here we consider the second-to-last chapter of Brandom’s 1976 dissertation, which has proven to be quite an interesting document. On the one hand, he contrasts Dewey’s pragmatist notion of “warranted assertibility” with standard representationalist theories of truth. On the other, he argues that a thorough account of assertibility conditions entails an account of truth conditions, and that a thorough account of truth conditions entails an account of assertibility conditions. This chapter uses some formal logical machinery and a running series of examples, both of which I will downplay.

The very idea of examining the conditions that make something true is already quite sophisticated. One could almost forget its representationalist and foundationalist origins, because here we seem to be dealing with something more like reasons why. Truth conditions border on the territory of subjunctive robustness that Brandom develops in his later work. Truth in this sense is not just a static property that sentences abstractly and in a binary way have or do not have.

“The dominant tradition in contemporary philosophy of language, influenced by Frege, Russell, Wittgenstein of the Tractatus, Tarski, and Carnap, takes truth to be the basic concept in terms of which a theory of meaning, and hence a theory of language, is to be developed. According to this view, the essential feature of language is its capacity to represent the way things are. Understanding this function in detail is a matter of describing the conditions under which particular sentences truly represent the way things are. Formal semantics, the study of the truth conditions of sentences of various sorts of discourse, is the natural expression of this point of view.”

“On the other hand, there is a pragmatic approach to language shared by Dewey and the later Wittgenstein which attributes little or no importance to the notion of truth. According to this view, language, the medium of cognition, is best thought of as a set of social practices. In order to understand how language works, we must attend to the uses to which its sentences are put and the circumstances in which they are used. Dewey claimed that everything useful which could be said about language with the notion of truth could also be said with a more general and methodologically unproblematic notion of justified utterance or ‘warranted assertibility’ ” (p. 101).

The truth to which little or no importance is attributed is truth as representational correspondence. Even representational correspondence still has its uses though, as we will see from his remarks about Russell further below. But first he elaborates on Dewey’s concept.

“We want to associate with each sentence of the language the set of conditions under which it is appropriately uttered, or, as Dewey puts it, ‘warrantedly assertible’. We want, in other words, to associate with each sentence of the language some set, call it the assertibility conditions of the sentence such that our theory of the language gives us a uniform way of generating the regularities of usage a speaker must conform to for a given sentence, given only the ‘assertibility conditions’ assigned to that sentence” (p. 103).

“Now it is clear that no regularity of appropriate utterance which a speaker learns to conform to and which is reconstructed by a hypothetical theory of assertibility conditions for a language can amount to requiring that all utterances be true. To require that each speaker report the presence of a deer when and only when a deer is present would make infallibility a prerequisite for learning the language. The most that can be codified in the conditions of appropriate utterance of such reports is that one report deer when and only when there are what pass in the community as good reasons for believing a deer to be present” (p. 104).

The important thing here from an ethical point of view is not vacuous “certainty” about presumed facts, but the goodness of reasons for believing this or that.

“The suggestion I will develop as to the proper role of truth in explaining language-use is that of Michael Dummett….’Epistemic justifiability’ is a part of what we have called the ‘assertibility conditions’ of an utterance…. What we want to know is indeed how a notion of truth can be ‘born out of’ the less specific mode of commendation which is assertibility. And Dummett’s suggestion is that it is sentential compounding that enforces such a distinction.”

Dummett offers philosophical arguments for the superiority of constructive or “intuitionist” logic over classical logic. Constructive logic does not accept any assertion as primitive. It requires assertions to be justified by concrete evidence, rather than derived from axioms or assumed truths. It thus identifies what is true with what is provable, and at the same time it constrains what qualifies as proof.
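
Dummett’s case for intuitionistic logic can be made concrete with the standard Kripke-model counterexample to the law of excluded middle. The sketch below codes up that textbook construction; the tuple representation of formulas and the world names are my own conventions:

```python
# A minimal Kripke model for intuitionistic logic. Two "information states":
# w0 (now) can grow into w1 (later); the atom p is only verified at w1.
# SUCCESSORS lists, for each world, every world reachable from it (itself included).
SUCCESSORS = {"w0": ["w0", "w1"], "w1": ["w1"]}
VALUATION = {"w0": set(), "w1": {"p"}}  # atoms verified at each world

def forces(world, formula):
    """Kripke forcing: does `world` verify `formula`?"""
    kind = formula[0]
    if kind == "atom":
        return formula[1] in VALUATION[world]
    if kind == "or":
        return forces(world, formula[1]) or forces(world, formula[2])
    if kind == "and":
        return forces(world, formula[1]) and forces(world, formula[2])
    if kind == "implies":  # the implication must hold at every reachable state
        return all(not forces(w, formula[1]) or forces(w, formula[2])
                   for w in SUCCESSORS[world])
    if kind == "not":  # no reachable state verifies the formula
        return all(not forces(w, formula[1]) for w in SUCCESSORS[world])
    raise ValueError(kind)

p = ("atom", "p")
excluded_middle = ("or", p, ("not", p))
print(forces("w0", excluded_middle))  # False: w0 neither verifies p nor rules it out
print(forces("w1", excluded_middle))  # True: p is verified at w1
```

At w0 we have no proof of p, but we also cannot rule out acquiring one, so neither disjunct is assertible; identifying truth with provability thus refuses excluded middle, which is just the constraint on assertion described above.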

The sentential compounding that Dummett emphasizes is a syntactic way of characterizing the idea of logical self-reference. One clause of a compound sentence modifies and refers to another clause or clauses in the same sentence. This is how richer meanings are built up. The suggestion is that truth arises out of this elemental process of refining meanings and increasing their “robustness” by tying them to other meanings.

“The primary sort of compound sentence Dummett has in mind seems to be the conditional” (p. 106).

The if-then form of conditionals is one way of expressing the fundamental notion of logical consequence, or how something follows from something else. Logic is less about distinguishing the true from the false than it is about discerning what follows from what.

“We may take the suggestion, then, to be that truth is ‘born out of’ assertibility as an auxiliary notion introduced to explain the assertibility conditions of some kinds of compound sentences” (p. 107).

“The ideal case would be one in which each compounding operator were assertibility-explicable…. Thus Dummett, giving him his premises, would have shown that English is not uniformly assertibility-explicable…. ” (pp. 110-111). “There are, of course, languages which are assertibility explicable. Intuitionistic mathematics is formulated in such a way that the assertibility conditions of compounds depends only upon the assertibility conditions of the components” (p. 111n).

No natural language is purely constructive. Next we come to Brandom’s point about the interdependence of truth conditions and assertibility conditions.

“In the context of the machinery just developed, one thing which we might take Dummett to be saying is that truth is to be defined functionally, as the auxiliary … which explicates a certain class of compounding devices, among which is the conditional. In order to generate in a uniform way the assertibility conditions of compound sentences we need to look not only at the assertibility conditions of the embedded sentences, but also at the truth conditions of those embedded sentences. Put slightly differently, there is a class of compounding devices which are not uniformly assertibility-explicable, and such that they are truth-inducing, in that whatever does explicate them is a truth-concept…. I will try to show that there is a class of compounding devices which ought to be taken to be Truth Inducing Sentential Contexts…. I will try, in other words, to exhibit truth as an auxiliary notion introduced in order to account for the assertibility conditions of certain kinds of compound sentences” (p. 112).

“For if (speaker) meaning is, plausibly, whatever it is that the speaker must be said to ‘know’ when he can use that sentence properly, then that meaning includes on our account not just the assertibility conditions of the sentence, but also the contribution the sentence makes to the assertibility conditions of compound sentences containing it. Identity of assertibility conditions is thus a necessary but not sufficient condition for identity of meaning. Indeed, in any language containing [truth inducing sentential contexts], truth conditions, as well as assertibility conditions, are part of the meaning of each sentence which can appear embedded in a [truth inducing sentential context]” (p. 113).

“According to our formal analysis, then, … English is not assertibility-explicable. So some auxiliary notion must be introduced to generate the assertibility conditions of compound sentences. Dummett’s suggestion, as we have reformulated it, is that there is a class of compounding devices in English such that the auxiliary notion we need to introduce to explicate them (in our technical sense) is truth. What set of compounding devices ought we to take as [truth inducing sentential contexts] in English, then? Presumably the conditional is one” (p. 114).

Truth viewed in this way can be thought of as a kind of identity property that emerges out of the details of how things follow from other things.

In a note he quotes Quine, Roots of Reference (1974), “Two-valued logic is a theoretical development that is learned, like any other theory, in indirect ways upon which we can only speculate”, and adds, “The present chapter presents just such a detailed speculation” (ibid).

“The present suggestion is that we take truth as the auxiliary notion introduced … to explicate a certain class of compounds…. This is as yet only the form of a definition, for all we know so far of the class of compounds which would need to be specified is that it contains the devices used in our examples. Assuming that we had some independent characterization of the desired class of compounding devices, then, we could define the truth concept of any particular theory of a language to be that notion which in that theory explicates the hypothesized class. Some theories would be better than others in accounting for language-use, for all of the mundane reasons applicable anywhere else in science — ease of coupling with other theories, power, elegance, intuitive acceptability, exhibition of general principles, and so on. A fortiori, then, some truth-concepts would be better than others, for the language in question. We seek a definition of what it is to be a truth-concept (what role a notion must play in a theory of a language to be functioning as the truth-concept of the language according to that theory) which will allow us to be somewhat precise about the point of truth-theories before the entire details of the ‘best’ theory of any language are known. It is a striking fact that, as Dummett led us to see, we have pretty good intuitions concerning the role of truth in explicating the assertibility conditions of compounds even though we know nothing about such crucial details as what sort of thing the elements of sets of assertibility conditions are best taken to be, and even though we can exhibit no single concrete example of a sentence for which we can write down assertibility conditions” (pp. 116-117).

“Representationalists like Russell, arguing for a language-transcendent notion of truth, have claimed against truth-as-assertibility theorists like Dewey that the very notion of truth lies in the contrast it enables and enforces between how things are and how they are thought to be, believed to be, or desired to be by any person or group of people. If you have this distinction, you have a notion of truth; fail to make this distinction and you are simply talking about something else…. [W]e have seized on just that distinction which according to the representationalists generates the notion of truth. For on our account it is precisely the explication of compounds which systematically discriminate between the content of an utterance (how it says things are) and any state of the utterer (belief, desire, or what have you) which may be associated with it which requires the notion of truth as an auxiliary notion” (pp. 121-122).

My late father, who wrote his dissertation on Peirce, attributed to Peirce an aphorism to the effect that “the mark of reality is the sheriff’s hand on your shoulder”. In other words, reality can be distinguished as whatever constrains us in some way. In an earlier chapter, Brandom in passing situates Peirce as dealing with a recognizably Cartesian problem of how we can know an “external” reality that is what it is independent of us. My own distaste for Descartes notwithstanding, this does seem like an important point.

“In languages with sentential compounding devices, the speaker-meaning of a sentence (what the speaker must ‘know’ in order to be able to use the sentence) must be taken to consist not just of the assertibility conditions of that sentence, but also the contribution that a sentence makes to the assertibility conditions of sentences of which it is a component” (p. 122).

“Semantics as such never considers the final step of generating assertibility conditions given the truth conditions of components. For some sorts of compounding device — the conditional, negation, tensing, modal operators, and some others — it happens to be possible to generate the truth conditions of their components in relatively simple ways, as formal semantics has shown us. For other sorts of compounds, notoriously for analogues of ‘Waldo believes that…’ it appears that not only the truth conditions of components are needed, but also the assertibility conditions. If so, then the theory of truth conditions will not be able to insulate itself as a self-contained part” (p. 123).

The point about belief here has to do with the need to distinguish something other than mere appearance. If I say I believe something, it has to be possible to ask whether I am justified or not in believing it, and that is different from simply asking what it was that I said I believed.

“In conclusion I would like to say something about the notion of truth that results from this way of looking at things. According to the usual understanding, the notion of truth is generated initially by the consideration of sentences in their categorical uses. According to this almost universally held view, a sentence like ‘Snow is white’, is either true or not true as a free-standing utterance. The employment of the notion of truth (in the form of truth conditions) in compounds of which the sentence is a part, e.g., conditionals, is a secondary, derivative matter. On the view which I have been urging in this chapter, however, it is the hypothetical use of sentences to which the notion of truth is primarily applicable, and its application to sentences in their categorical use is derivative. For according to our account, a free-standing utterance is truth-criticizable only in virtue of the possibility of taking it as the antecedent of a conditional” (pp. 125-126).

This is a fundamental point that in his later work Brandom attributes to Kant. Simple “categorical” judgments are always derivative. It is hypothetical judgments — that something follows from something else — that are more originary.

“Thus truth is primarily a predicate applicable to sentences used hypothetically, as antecedents of conditionals and similar constructions” (p. 126).

That is to say that rather than being an inexplicable property of categorical assertions, truth has to do primarily with what is or is not a good inference.

“Thus the notion of truth is appropriately applied to free-standing, categorical utterances just insofar as they are involved in a social discourse in which conclusions may be based upon them according to inferential practices codified in conditionals with those sentences as antecedents” (p. 128).

“In order to see how the formal notion of truth invoked by the technical linguistic discipline we have considered is connected to the ordinary use of the truth predicate within the language, … one must consider the relations of the hypothetical use of a sentence as an antecedent of a conditional to the apparently categorical use of that sentence which is implicitly conditionalized by its utterance in the social context of argument, with inferential schemes parallel to conditionals” (ibid).

This is another important point. The fact that the surface grammar of an assertion is simple and categorical does not require that what is meant by it is categorical. When a superficially categorical assertion is cited in support of some other assertion, that pragmatic context makes it effectively a conditional.

Next in this series: Convention, Novelty, and Truth in Language