Normative “Force”

Frege’s notion of the “force” of an assertion plays a large role in analytic philosophers’ discussions of speech acts. In his usage, it has nothing to do with coercion or Newtonian physics. Rather, it concerns what I might call the “substance” of what is said, and what Brandom calls conceptual content, which for Brandom would be made explicit first of all through being interpreted as a kind of doing. The question of force seems to be: what are we doing in asserting this rather than that? This also brings in the larger real-world context of that doing.

Although Brandom subordinates reference to Fregean sense or intensional meaning, he also complements and interweaves his account of material-inferential sense with an account of real-world normative-pragmatic force, and suggests that this is the ultimate driver of meaning. How things come to have or lose normative-pragmatic force — i.e., how the appearance of such force is legitimized or de-legitimized — is, he very persuasively argues, best explained by the Hegelian theory of mutual recognition.

At a programmatic level, a deep and wide historical and critical genealogy of the specific forms emerging from mutual recognition is the more particular shape that something like Ricoeur’s “long detour” of mediating interpretation takes for Brandom. Brandom’s monumental work pulling all the pieces of his general account together has left him little time to dwell on details of interpretation for particular cases, but I see it as an open invitation. My own “historiography” and “history of philosophy” notes tentatively sketch some key details in the broad panorama of the history of values. (See also Normativity; Autonomy, Normativity; Space of Reasons; Ethics.)

One important result of Brandom’s comprehensive development is that cases where reality figuratively “pushes back” against us are subsumed under the figure of normative force. (See also Rethinking Responsibility; Expansive Agency; Brandomian Forgiveness.)

Reference, Representation

The simplest notion of reference is a kind of literal or metaphorical pointing at things. This serves as indispensable shorthand in ordinary life, but the simplicity of metaphorical pointing is illusory. It tends to tacitly presuppose that we already know what it is that is being pointed at.

More complex kinds of reference involve the idea of representation. This is another notion that is indispensable in ordinary life.

Plato and Aristotle used notions of representation informally, but gave them no privileged status or special role with respect to knowledge. They were much more inclined to view knowledge, truth, and wisdom in terms of what is reasonable. Plato tended to view representation negatively as an inferior copy of something. (See Platonic Truth; Aristotelian Dialectic; Aristotelian Semantics.)

It was the Stoics who first gave representation a key role in the theory of knowledge. The Stoics coupled a physical account of the transmission of images — bridging optics and physiology — with very strong claims of realism, certain knowledge both sensory and rational, and completeness of their system of knowledge. In my view, the Stoic theory of representation is the classic version of the “correspondence” theory of truth. The correspondence theory treats truth as a simple “correspondence” to some reality that is supposed to be known beyond question. (Such a view is sometimes misattributed to Plato and Aristotle, but was actually quite alien to their way of thinking.)

In the Latin middle ages, Aquinas developed a notion of “perfect” representation, and Duns Scotus claimed that the most general criterion of being was representability. In the 17th century, Descartes and Locke built foundationalist theories of certain knowledge in which explicitly mental representations played the central role. Descartes also explicitly treated representation in terms of mathematical isomorphism, representing geometry with algebra.

Taking putatively realistic representational reference for granted is a prime example of what Kant called dogmatism. Kant suggested that rather than claiming certainty, we should take responsibility for our claims. From the time of Kant and Hegel, a multitude of philosophers have sharply criticized claims for certain foundations of representational truth.

In the 20th century, the sophisticated relational mathematics of model theory gave representation renewed prestige. Model-theoretic semantics, which explains meaning in terms of representation understood as relational reference, continues to dominate work in semantics today, though other approaches are also used, especially in the theory of programming languages. Model-theoretic semantics is said to be an extensional rather than intensional theory of meaning. (An extensional, enumerative emphasis tends to accompany an emphasis on representation. Plato, Aristotle, Kant, and Hegel on the other hand approached meaning in a mainly intensional way, in terms of concepts and reasons.)

Philosophical criticism of representationalist theories of knowledge also continued in the 20th century. Husserl’s phenomenological method involved suspending assumptions about reference. Wittgenstein criticized the notion of meaning as a picture. All the existentialists, structuralists, and their heirs rejected Cartesian/Lockean representationalism.

Near the end of the 20th century, Robert Brandom showed that it is possible to account very comprehensively for the various dimensions of reference and representation in terms of intensionally grounded, discursive material inference and normative doing, later wrapping this in an interpretation of Hegel’s ethical and genealogical theory of mutual recognition. This is not just yet another critique of representationalism, but a meticulously developed constructive alternative. It can explain how effects of reference and representation are constituted through engagement in normative discursive practices — how reference and representation have the kind of grip on us that they do, while actually being results of complex normative synthesis rather than simple primitives. (See also Normative Force.)

Brandom and Hermeneutics

It’s been a while since I said much about Robert Brandom, though his work — along with my own nonstandard reading of Aristotle — continues to be one of the main inspirations behind everything I write here.

Lately I’ve been devoting a lot of energy to belatedly catching up on the hermeneutics of Paul Ricoeur. To my knowledge, Ricoeur never commented on Brandom during his lifetime, and Brandom has not specifically commented on Ricoeur.

Brandom has, however, in Tales of the Mighty Dead explicitly endorsed some of the broad perspectives of Hans-Georg Gadamer’s hermeneutics, and he has devoted much attention to a “hermeneutics of magnanimity” in Hegel’s Phenomenology. Brandom’s mentor Richard Rorty concluded his famous work Philosophy and the Mirror of Nature by recommending a general turn from foundational epistemology to nonfoundationalist hermeneutics, and I have previously suggested that Brandom’s work as a whole could be viewed as a novel sort of hermeneutics developed within the analytic tradition.

Brandom’s fundamental concept of the priority of material inference over formal inference puts meaning — and therefore the interpretation of meaning — in the driver’s seat for reasoning, so to speak. This allows for the recovery of an older, richer concept of Reason, which ever since Descartes has been mostly displaced by a mathematically based kind of rationality that is more precise and invaluable in technical realms, but also much more rigid, and in fact far more limited in its applicability to general human concerns (see Kinds of Reason).

Even prior to Descartes, Latin medieval logic already moved increasingly toward formalism. Since Frege and Russell, the rigorous mathematization of logic has yielded such impressive technical results that most philosophers seem to have forgotten there is any other way to view logic.

In the 1950s Wilfrid Sellars took the first steps toward initiating a counter-trend, reaching back to the pre-Cartesian tradition to formulate the notion of material inference later taken up by Brandom.

Modern complaints against Reason strongly and wrongly presuppose that it inevitably follows or approximates a formal path. Material inference provides the basis for a fundamentally hermeneutic view not only of Reason but also of logic and logical truth.

I have further stressed the fundamentally ethical or meta-ethical character of material inference, leading to a concept of ethical reason as the most fundamental form of Reason overall, in a view that puts material inference before formal logic. As I put it not too long ago, ethical reason may optionally use the more technical forms of reason as tools. Ethical reason, I want to say, has a genuinely active character, but technical reason does not. Ethical reason is fundamentally oriented toward the concrete, like Aristotle’s practical judgment.

I want to say that there is such a thing as logical or semantic reference — saying something about something is not in vain — but a prior hermeneutic inquiry is necessary to ground and explain reference. Moreover, both Aristotle and Kant recognized something like this. Such a perspective is compatible with science, while putting ethical and meta-ethical inquiry first.

A hermeneutic concept of Reason saves us from a false dilemma between formalism on the one hand and question-begging appeals to intuition, authority, or irrational “decision” on the other. (See also Dialogue.)

Split Subject, Contradiction

The Žižekians, referencing Lacan, like to talk about a “split subject” that is noncoincident with itself. In broad terms, I think this is useful. What we call subjectivity is divided, and lacking in strong unity. (See also Pure Negativity?; Acts in Brandom and Žižek.) But it seems to me that if we try to speak carefully about this, we should not then go on using singular articles like “the” or “a”.

I tend to think subjectivity is not just fractured or un-whole, but also actually consists of a complex overlay of different things that we tend to blur together. In particular, it seems clear to me that a common-sense, biographical “self” whose relative unity over time is trackable by relation to the “same” physical body — or by Lockean continuity of memory — is not the same as what we might in a given moment view from a distance as an individualized ethos, or up close as a unity of apperception. This is, I believe, the same distinction that Brandom discusses in terms of sentience and sapience.

Ethos and unity of apperception, and their constituent values and conceptions — the very things that most properly say “I”, and play the functional role of an ethical “subject”, or of a subject of knowledge — are profoundly involved with language, social relations, and what Lacan in his earlier work called the Symbolic and the “Other”. These instances of sapience are pure forms whose identity can only be expressed in terms of sameness of form — nonempirical, but inseparable from a larger ethical world — and simultaneously intimate to us, but by no means strictly “ours”. (See also Self, Subject.)

Where I am still a bit torn is that I also feel that emotions — which I’ve been locating on the former, “self” side — are fundamental to subjectivity as a whole, but I have theoretically separated them from the main locus of transcendental ethical and epistemic subjectivity, even though they play an essential role in making it possible. One logical solution would be to say this just means subjectivity as a whole is more than just ethical and epistemic. Another would be to say that there is a separate kind of emotional subjectivity. I’m not entirely satisfied yet, because I think feeling combines these, but the noncoincidence of our factual selves with our ethical and epistemic being seems very important in understanding how we overcome empirical limitations.

The Žižekians will perhaps remind us that they were not talking about a split between self and subject, but about a split within the subject. I think we habitually overstate the degree of unity and identity we attribute to selves, subjects, and things in general, so I’m fine with that, too. They also want to expand this into a general “ontological” point, which I see as a semantic point.

Perhaps the Žižekians are more comfortable talking about “a” or “the” subject in part due to their doctrine of the ubiquity of contradiction. Todd McGowan in Emancipation After Hegel (2019) nicely distinguishes the Žižekian notion from the old confusion between contradiction and conflict or polarity — and from immediate self-contradiction — but still wants to maintain that the standard logical law of noncontradiction ultimately “refutes itself”, and that Hegel thought this as well. This argument combines a laudable awareness of some of the practical issues with identity, with a logically invalid use of the distinction between explicit and implicit self-contradiction.

Hegel meditated profoundly on the difficulties of applying logic to meaningful content and to real life. He strained language to the breaking point trying to express his conclusions.

On the frontiers of mathematical logic today, the so-called law of identity has been replaced by a requirement to specify identity criteria for each formally defined type, and identity in general has been weakened to isomorphism. (See also Form as a Unique Thing.)
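
To give a rough flavor of this shift, here is a minimal sketch in Lean 4 (the names are my own illustration, not a standard library): “sameness” of two types is witnessed by a structure of mutually inverse maps, rather than assumed as a primitive identity.

-- A minimal notion of isomorphism: mutually inverse maps,
-- rather than a primitive, unanalyzed "identity".
structure Iso (A B : Type) where
  toFun    : A → B
  invFun   : B → A
  leftInv  : ∀ a : A, invFun (toFun a) = a
  rightInv : ∀ b : B, toFun (invFun b) = b

-- Identity becomes the degenerate case of isomorphism,
-- not the other way around.
def Iso.refl (A : Type) : Iso A A :=
  { toFun := id, invFun := id,
    leftInv := fun _ => rfl, rightInv := fun _ => rfl }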

Real-world applications of strong identity typically involve loose “extensional” reference to things assumed to be the same, and a lot of forgetting. The linchpin of old “identity thinking” was inattention to difficulties of formalization from ordinary language — basically an illegitimate moving back and forth between formal and informal domains, resulting in lots of homogenizing confusion of things that ought to be distinct. Weaker, “intensional” assertions about identity as specifiable sameness of form make it the exception rather than the rule. What come first conceptually are distinctions within the manifold, not pre-synthesized things already possessed of identity. Where things are not the same to begin with, contradiction — far from being omnipresent — is not even potentially at issue. (See also Self-Evidence?)

Meanwhile, Sellars and Brandom have revived material inference about meant realities in contrast to formal logic, which deals with purely syntactic relations between presumed extensional “things” with presumed identity. Things Kant and Hegel said about Understanding and Reason can be nicely understood in terms of the relation between syntactic inference about symbolic terms standing for formless extensional “things” and substantive, material inference about the actual form of meant realities. Especially in the reading of Hegel, not having the resource of this distinction available now seems positively crippling.

Finally, Aristotle, who originated the law of noncontradiction as a kind of ethical imperative, and stands in the background to all of Hegel’s discussions of logic, was himself rather cautious and tentative about applying identity to real things, and in his logic was also mainly concerned with (composition of) material inferences, which have more to do with the actual form of things.

Hegel never violated Aristotle’s imperative not to say opposite things about the same thing said in the same way. What he did was to constantly point out the gap between reality and traditional semi-formal logic applied to ordinary language — not to encourage us to reject logic, but rather to refine and sublimate it. (See also Aristotelian and Hegelian Dialectic.)

Form as a Unique Thing

Ever since Plato talked about Forms, philosophers have debated the status of so-called abstract entities. To my mind, referring to them as “entities” is already prejudicial. I like to read Plato himself in a way that minimizes existence claims, and instead focuses on what I think of as claims about importance. Importance as a criterion is practical in a Kantian sense — i.e., ultimately concerned with what we should do. As Aristotle might remind us, what really matters is getting the specific content of our abstractions right for each case, not the generic ontological status of those abstractions.

One of Plato’s main messages, still very relevant today, is that what he called Form is important. A big part of what makes Form important is that it is good to think with, and a key aspect of what makes Plato’s version good to think with is what logically follows from its characterization as something unique in a given case. (Aristotle’s version of form has different, more mixed strengths, including both a place for uniqueness and a place for polyvocality or multiple perspectives, making it simultaneously more supple and more difficult to formalize.) In principle, such uniqueness of things that nonetheless also have generality makes it possible to reason to conditionally necessary outcomes in a constructive way, i.e., without extra assumptions, as a geometer might. Necessity here just means that in the context of some given construction, only one result of a given type is possible. (This is actually already stronger than the sense Aristotle gave to “necessity”. Aristotle pragmatically allowed for defeasible empirical judgments that something “necessarily” follows from something else, whenever there is no known counter-example.)

In the early 20th century, Bertrand Russell developed a very influential theory of definite descriptions, which sparked another century-long debate. Among other things, he analyzed definite descriptions as always implying existence claims (here embracing an old principle of interpretation common in Latin scholastic logic).

British philosopher David Corfield argues for a new approach to formalizing definite descriptions that does not require existence claims or other assumptions, but only a kind of logical uniqueness of the types of the identity criteria of things. His book Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, to which I recently devoted a very preliminary article, has significant new things to say about this sort of issue. Corfield argues inter alia that many and perhaps even all perceived limits of formalization are actually due to limits of the particular formalisms of first-order classical logic and set theory, which dominated in the 20th century. He thinks homotopy type theory (HoTT) has much to offer for a more adequate formal analysis of natural language, as well as in many other areas. Corfield also notes that most linguists already use some variant of lambda calculus (closer to HoTT), rather than first-order logic.

Using first-order logic to formalize natural language requires adding many explicit assumptions — including assumptions that various things “exist”. Corfield notes that ordinary language philosophers have questioned whether it is reasonable to suppose that so many extra assumptions are routinely involved in natural language use, and from there reached pessimistic conclusions about formalization. The vastly more expressive HoTT, on the other hand, allows formal representations to be built without additional assumptions in the representation. All context relevant to an inference can be expressed in terms of types. (This does not mean no assumptions are involved in the use of a representation, but rather only that the formal representation does not contain any explicit assumptions, as by contrast it necessarily would with first-order logic.)

A main reason for the major difference between first-order logic and HoTT with respect to assumptions is that first-order logic applies universal quantifications unconditionally (i.e., for all x, where x ranges over a single undifferentiated domain), and then has to explicitly add assumptions to recover specificity and context. By contrast, type theories like HoTT apply quantifications only to delimited types, and thus build in specificity and context from the ground up. Using HoTT requires closer attention to criteria for identities of things and kinds of things.
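
A toy contrast in Lean 4 (my own illustration, with invented predicate names): in the first-order style, restrictions must be smuggled in as explicit assumptions over one undifferentiated domain, while in the typed style the quantifier itself is already delimited.

-- First-order style: one undifferentiated domain, with restrictions
-- added back as explicit assumptions.
def foStyle (Domain : Type) (IsNat IsPositive : Domain → Prop) : Prop :=
  ∀ x : Domain, IsNat x → IsPositive x

-- Typed style: the quantifier is already delimited by a type,
-- so no separate membership assumption is needed.
def ttStyle : Prop :=
  ∀ n : Nat, 0 ≤ n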

Frege already had the idea that logical predicates are a kind of mathematical function. Mathematical functions are distinguished by invariantly returning a unique value for each given input. The truth functions used in classical logic are also a kind of mathematical function, but provide only minimal distinction into “true” and “false”. From a purely truth-functional point of view, all true propositions are equivalent, because we are only concerned with reference, and their only reference (as distinguished from Fregean sense) is to “true” as distinct from “false”. By contrast, contemporary type theories are grounded in inference rules, which are kinds of primitive function-like things that preserve many more distinctions.
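
The truth-functional collapse just described can be stated in one line. In Lean 4 (a trivial observation of my own, not anything specific to the authors discussed): any two true propositions are interderivable, so extensionally they cannot be told apart.

-- From a purely truth-functional standpoint, any two true
-- propositions are equivalent: each trivially yields the other.
theorem true_props_equiv (p q : Prop) (hp : p) (hq : q) : p ↔ q :=
  ⟨fun _ => hq, fun _ => hp⟩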

In one section, Corfield discusses an HoTT-based inference rule for introduction of the definite article “the” in ordinary language, based on a property of many types called “contractibility” in HoTT. A contractible type is one that can be optionally taken as referring to a formally unique object that can be constructed in HoTT, and whose existence therefore does not need to be assumed. This should also apply at least to Platonic Forms, since for Plato one should always try to pick out the Form of something.
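
For readers who want the shape of the definition, here is contractibility transcribed into Lean 4 (only a sketch: genuine HoTT identity types carry higher structure that Lean’s propositional equality flattens, and the names are mine).

-- A contractible type: a chosen center, plus a proof that every
-- inhabitant equals it. "The" then refers to the center, with no
-- separate existence assumption.
structure IsContr (A : Type) where
  center : A
  contr  : ∀ a : A, a = center

-- Introduction of "the": a contractible type licenses reference
-- to a unique object.
def theObject {A : Type} (h : IsContr A) : A := h.center

-- Unit is trivially contractible.
example : IsContr Unit := ⟨(), fun _ => rfl⟩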

In HoTT, every variable has a type, and every type carries with it definite identity criteria, but the identity criteria for a given type may themselves have a type from anywhere in the HoTT hierarchy of type levels. In a given case, the type of the identity criteria for another type may be above the level of truth-functional propositions, like a set, groupoid, or higher groupoid; or below it, i.e., contractible to a unique object. This sort of contractibility into a single object might be taken as a contemporary formal criterion for a specification to behave like a Platonic Form, which seems to be an especially simple, bottom-level case, even simpler than a truth-valued “mere” proposition.
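
For orientation, the standard definitions behind these levels are compact (this is textbook HoTT, not specific to Corfield): a type is contractible when it has a center that everything equals, and a type sits at level n+1 when all of its identity types sit at level n.

isContr(A) :≡ Σ (a : A), Π (x : A), x = a
is-(n+1)-type(A) :≡ Π (x y : A), is-n-type(x = y)

Contractible types count as level −2; mere propositions are level −1; sets are level 0; groupoids are level 1; and so on up the hierarchy.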

The HoTT hierarchy of type levels is synthetic and top-down rather than analytic and bottom-up, so everything that can be expressed on a lower level is also expressible on a higher level, but not necessarily vice versa. The lower levels represent technically “degenerate” — i.e., less general — cases, to which one cannot “compile down” in some instances. This might also be taken to anachronistically explain why Aristotle and others were ultimately not satisfied with Platonic Forms as a general basis for explanation. Importantly, this bottom, “object identity” level does seem to be adequate to account for the identity criteria of mathematical objects as instances of mathematical structures, but not everything is explainable in terms of object identities, which are even less expressive than mere truth values.

Traditionally, mathematicians have used the definite article “the” to refer to things that have multiple characterizations that are invariantly equivalent, such as “the” structure of something, when the structure can be equivalently characterized in different ways. From a first-order point of view, this has been traditionally apologized for as an “abuse of language” that is not formally justified. HoTT provides formal justification for the implicit mathematical intuition underpinning this generally accepted practice, by providing the capability to construct a unique object that is the contractible type of the equivalent characterizations.

With this in hand, it seems we won’t need to make any claims about the existence of structures, because from this point of view — unlike, e.g., that of set theory — mathematical talk is always already about structures.

This has important consequences for talk about structuralism, at least in the mathematical case, and perhaps by analogy beyond that. Corfield argues that anything that has contractible identity criteria (including all mathematical objects) just is some structure. He quotes major HoTT contributor Steve Awodey as concluding “mathematical objects simply are structures. Could there be a stronger formulation of structuralism?”

Thus no ontology or theory of being in the traditional (historically Scotist and Wolffian) sense is required in order to support talk about structures (or, I would argue, Forms in Plato’s sense). (In computer science, “ontology” has been redefined as an articulation of some world or domain into particular kinds, sorts, or types, where what is important is the particular classification scheme practically employed, rather than theoretical claims of real existence that go beyond experience. At least at a very high level, this actually comes closer than traditional “metaphysical” ontology did to Aristotle’s original practice of higher-order interpretation of experience.)

Corfield does not discuss Brandom at length, but his book’s index has more references to Brandom than to any other named individual, including the leaders in the HoTT field. All references in the text are positive. Corfield strongly identifies with the inferentialist aspect of Brandom’s thought. He expresses optimism about HoTT representation of Brandomian material inferences, and about the richness of Brandom’s work for type-theoretic development.

Corfield is manifestly more formally oriented than Brandom, and his work thus takes a different direction that does not include Brandom’s strong emphasis on normativity, or on the fundamental role of what I would call reasonable value judgments within material inference. From what I take to be an Aristotelian point of view, I greatly value both the inferentialist part of Brandom that Corfield wants to build on, and the normative pragmatic part that he passes by. I think Brandom’s idea about the priority of normative pragmatics is extremely important; but with that proviso, I still find Corfield’s work on the formal side very exciting.

In a footnote, Corfield also directs attention to Paul Redding’s recommendation that analytic readers of Hegel take seriously Hegel’s use of Aristotelian “term logic”. This is not incompatible with a Kantian and Brandomian emphasis on the priority of integral judgments. As I have pointed out before, the individual terms combined or separated in canonical Aristotelian propositions are themselves interpretable as judgments.

Syllogism

Aristotle invented logic as a discipline, and in Prior Analytics developed a detailed theory of so-called syllogisms to codify deductive reasoning, which also marks the beginning of formalization in logic. Although there actually were interesting developments in the European middle ages with the theory of so-called supposition as a kind of semi-formal semantics, Kant famously said Aristotle had said all there was to say about logic, and this went undisputed until the time of Boole and De Morgan in the mid-19th century. Boole himself said he was only extending Aristotle’s theory.

The fundamental principle of syllogistic reasoning is best understood as a kind of function composition. Aristotle himself did not have the concept of a mathematical function, which we owe mainly to Leibniz, but he clearly used a concept of composition of things we can recognize as function-like. In the late 19th century, Frege pointed out that the logical meaning of grammatical predication in ordinary language can be considered as a kind of function application.
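
Frege’s observation is easy to render in modern typed terms. A trivial Lean 4 illustration (names invented for the example): a predicate is a function from entities to propositions, and predication is just application.

section
-- A predicate is a function into Prop; predication is application.
variable (Entity : Type) (mortal : Entity → Prop) (socrates : Entity)

#check mortal socrates   -- mortal socrates : Prop
end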

Aristotle’s syllogisms were expressed in natural language, but in order to focus attention on their form, he often substituted letters for concrete terms. The fundamental pattern is

(quantifier A) op B
(quantifier B) op C
Therefore, A op C

where each instance of “quantifier” is either “some” or “all”; each instance of “op” is either what Aristotle called “combination” or “separation”, conventionally represented in natural language by “is” or “is not”; and each letter is a type aka “universal” aka higher-order term. (In the middle ages and later, individuals were treated as a kind of singleton type with implicit universal quantification, so it is common to see examples like “Socrates is a human”, but Aristotle’s own concrete examples never included references to individuals.) Not all combinations of substitutions correspond to valid inferences, but Prior Analytics systematically described all the valid ones.

In traditional interpretations, Aristotle’s use of conventionalized natural language representations sometimes led to analyses of the “op” emphasizing grammatical relations between subjects and predicates. However, Aristotle did not concern himself with grammar, but with the more substantive meaning of (possibly negated) “said of” relations, which actually codify normative material inferences. His logic is thus a fascinating hybrid, in which each canonical proposition represents a normative judgment of a material-inferential relation between types, and then the representations are formally composed together.

The conclusion B of the first material inference, which is also the premise of the second, was traditionally called the “middle term”, the role of which in reasoning through its licensing of composition lies behind all of Hegel’s talk about mediation. The 20th century saw the development of category theory, which explains all mathematical reasoning and formal logic in terms of the composition of “morphisms” or “arrows” corresponding to primitive function- or inference-like things. Aside from many applications in computer science and physics, category theory has also been used to analyze grammar. The historical relation of Aristotle to the Greek grammarians goes in the same direction — Aristotle influenced the grammarians, not the other way around. (See also Searching for a Middle Term; Aristotelian Demonstration; Demonstrative “Science”?)
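
On a propositions-as-types reading (my own hedged transcription, certainly not Aristotle’s notation), “all A are B” can be modeled as a function from A to B, and the classic Barbara syllogism then becomes literal composition through the middle term B.

-- Reading "all A are B" as a function A → B, the middle term B is
-- exactly what licenses composing the premises into a conclusion.
def barbara {A B C : Type} (allAB : A → B) (allBC : B → C) : A → C :=
  allBC ∘ allAB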

Interpretation

It seems to me that the main thing human reason does in real life is to interpret the significance of things. When we think of something, many implicit judgments about it are brought into scope. In a way, Kant already suggested this with his accounts of synthesis.

In real-world human reasoning, the actually operative identity of the things we reason about is not the trivial formal identity of their names or symbols, but rather a complex one constituted by the implications of all the judgments implicitly associated with the things in question. (See also Identity, Isomorphism; Aristotelian Identity.)

This is why people sometimes seem to talk past one another. The same words commonly imply different judgments for different people, so it is to be expected that this leads to different reasoning. That is why Plato recommended dialogue, and why Aristotle devoted so much attention to sorting out different ways in which things are “said”. (See also Aristotelian Semantics.)

I think human reason uses complex material inference (reasoning based on intermediate meaning content rather than syntax) to evaluate meanings and situations in an implicit way that usually ends up looking like simple summary judgment at a conscious level, but is actually far more involved. A great deal goes on, very rapidly and below the level of our awareness. Every surface-level judgment or assertion implicitly depends on many interpretations.

Ever since Aristotle took the first steps toward formalization of logic, people have tended to think of real-world human reasoning in terms modeled straightforwardly on formal or semi-formal logical operations, with meanings of terms either abstracted away or taken for granted. (Aristotle himself did not make this mistake, as noted above.) This fails to take into account the vast amount of implicit interpretive work that gets encapsulated into ordinary terms, by means of their classification into what are effectively types, capturing everything that implicitly may be relevantly said about the things in question in the context of our current unity of apperception.

A typed term for a thing works as shorthand for many judgments about the thing. Conversely, classification and consequent effective identity of the thing depend on those judgments.
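
A deliberately cartoonish Lean 4 rendering of this point (entirely my own toy, with invented names): classifying a term under a type is what brings a bundle of licensed judgments into scope, much as a typeclass brings its operations along with it.

-- A type as shorthand for judgments: classifying something as an
-- Animal licenses further judgments without restating them.
class Animal (A : Type) where
  breathes : A → Prop
  isMortal : A → Prop

structure Human where
  name : String

-- The classification itself encapsulates the judgments.
instance : Animal Human where
  breathes _ := True
  isMortal _ := True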

As a result of active deliberation, we often refine our preconscious interpretations of things, and sometimes replace them altogether. Deliberation and dialectic are the testing ground of interpretations.

In general, interpretation is an open-ended task. It seems to me that it also involves something like what Kant called free play. (See also Hermeneutics; Theory and Practice; Philosophy; Ethical Reason; The Autonomy of Reason; Foundations?; Aristotelian Demonstration; Brandom on Truth.)

Dialogue

The ethical importance of dialogue can hardly be overstated. The key to ethical dialogue is mutual acceptance of sincere questioning about reasons. To ask a question is not to make a counter-assertion, and no one should ever take offense at a sincere question.

To qualify as based on good judgment or sound reasoning, a commitment or one’s reasons for holding it must be explainable in a shareable way. Sharing of the kind of meaning-based material inference used in everyday reasoning and judgment (as well as most philosophy) is a social process of open-ended dialogue.

The world’s oldest preserved examples of such rational dialogue (or any kind of rational development) are contained in the works of Plato. Earlier figures just wrote down what they saw as the truth. Plato provided many examples of a method of free inquiry. (Aristotle says the atomist Democritus was another initiator of rational inquiry, but the works of Democritus do not survive.) This is yet another reason why Hegel called Plato and Aristotle the greatest teachers of the human race.

Plato bequeathed to us many idealized examples of reasoning by dialogue. He raised them into an art form, creating a new literary genre in the process. His dialogues vary in the degree to which they approximate free open-ended discussion; most often, one character leads the discussion through question and answer, and sometimes even the question and answer is limited. However, since Plato’s dialogues are like plays portraying self-contained conversations, they are very accessible.

The style of question and answer often practiced by Platonic characters like Socrates — commonly known as Socratic method — provides a model for how anyone can contribute to such a development. The questioner tries to reason only from things to which the answerer agrees, but often has to keep questioning to draw out the needed background.

In a fully free and open dialogue characterized by mutual recognition, any party may make contributions of this sort. As Sellars and Brandom would remind us, to assert anything at all is implicitly to take responsibility for that assertion, which is to invite questioning about our reasons.

Material Inference

Most “real world” reasoning is actually material in the sense developed by Sellars and Brandom. The “materiality” of material inference recalls several things.

Material inference addresses meant realities through the medium of their expression in ordinary language.

The “materiality” of material inference also refers to the fact that it always involves meaning-based judgments about proprieties of inference, not simple mechanical application of formal transformations. These may subsequently be given formal representation, but the starting point is what I want to call a form of ethical judgment, that it is right to make this inference about these things in this situation. This could equally be applied to questions of what really is the case, or of the appropriateness of actions. The “meaning” at issue should itself be understood in terms of other normative material inferences, provisionally held constant while the inference at hand is assessed.

Finally, the “materiality” of material inference also recalls its recursive dependence on previous concrete judgments.

Material inference works with two kinds of relational judgments: material consequence and material incompatibility.
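
As a bare structural sketch in Lean 4 (my own transcription; Brandom develops this in far richer pragmatic terms): a vocabulary of claims equipped with these two relations, together with one coherence condition Brandom emphasizes, namely that incompatibility is inherited backwards along consequence.

-- Material consequence and material incompatibility as primitive
-- relations on claims, with one Brandomian coherence condition.
structure Inferential (Claim : Type) where
  entails      : Claim → Claim → Prop
  incompatible : Claim → Claim → Prop
  -- If p entails q, then whatever is incompatible with q is
  -- already incompatible with p.
  persistence  : ∀ p q r : Claim,
    entails p q → incompatible q r → incompatible p r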

Things Said

Saying is a specialized form of doing. When saying and doing are contrasted, what is asserted is a contrast between kinds of doing that have different implications. Proprieties of both saying and doing are matters for material inference.

Implicit consideration of a material-inferential ethical dimension is what distinguishes canonical Aristotelian saying from the mere emission of words. This ethical dimension of material inference gives more specific content to epistemic conscientiousness.

Saying is also a social act that occurs in a larger social context. This gives it a second ethical dimension, starting from consideration of others and situational appropriateness. (See also Interpretive Charity; Honesty, Kindness; Intellectual Virtue, Love; Mutual Recognition.)