Summing Up So Far

A philosophical approach to ethics brings in many considerations that may initially seem remote from the question of what to do, but can greatly enrich our ability to think about it.

Philosophy is not just any view of the world, but a sustained and free inquiry into the meaning of things. Nor could it be the activity of an isolated individual: it is an intrinsically historical development, the cumulative achievement of a virtually universal community of talking animals across space and time, through various ups and downs. The best way into it is through a kind of dialogue with the great philosophers. Pursuing this in depth turns out to involve many historiographical questions.

Truth does not come to us ready-made. What we take as truth is always the provisional result of a development. The primary activity of reason is the determination of meaning through a kind of open-ended interpretation. It is therefore involved with a kind of hermeneutics.

Ethics involves us as whole beings. Subjectivity is manifold. Its ethically important aspects have to do both with our acquired emotional constitution and with shareable contents and commitments.

Aristotelian Demonstration

Demonstration is literally a showing. For Aristotle, its main purpose is associated with learning and teaching, rather than proof. Its real objective is not Stoic or Cartesian certainty “that” something is true, but the clearest possible understanding of the substantive basis for definite conclusions, grounded in a grasp of reasons.

Aristotle’s main text dealing with demonstration, the Posterior Analytics, is not about epistemology or foundations of knowledge, although it touches on these topics. Rather, it is about the pragmatics of improving our informal semantic understanding by formal means.

For Aristotle, demonstration uses the same logical forms as dialectic. But where dialectic makes no advance assumption about whether the hypotheses or opinions it examines are true, focusing instead on explicating their inferential meaning, demonstration is about showing the reasons and reasoning behind definite conclusions. Dialectic is a kind of conditional, forward-looking interpretation based on consequences, while demonstration is a kind of backward-looking interpretation based on premises. Because demonstration’s practical purpose has to do with exhibiting the basis for definite conclusions, it necessarily seeks sound premises, or at least treats its premises as sound, whereas dialectic is indifferent to the soundness of the premises it analyzes in terms of their consequences.

We are said to know something in Aristotle’s stronger sense when we can clearly explain why it is the case, so demonstration is connected with knowledge. This connection has historically led to much misunderstanding. In the Arabic and Latin commentary traditions, demonstration was interpreted as proof. The Posterior Analytics was redeployed as an epistemological model for “science” based on formal deduction, understood as the paradigm for knowledge, while the role of dialectic and practical judgment in Aristotle was greatly downplayed. (See also Demonstrative “Science”?; Searching for a Middle Term; Plato and Aristotle Were Inferentialists; The Epistemic Modesty of Plato and Aristotle; Belief; Foundations?; Brandom on Truth.)

Interpretation

It seems to me that the main thing human reason does in real life is to interpret the significance of things. When we think of something, many implicit judgments about it are brought into scope. In a way, Kant already suggested this with his accounts of synthesis.

In real-world human reasoning, the actually operative identity of the things we reason about is not the trivial formal identity of their names or symbols, but rather a complex one constituted by the implications of all the judgments implicitly associated with the things in question. (See also Identity, Isomorphism; Aristotelian Identity.)

This is why people sometimes seem to talk past one another. The same words commonly imply different judgments for different people, so it is to be expected that this leads to different reasoning. That is why Plato recommended dialogue, and why Aristotle devoted so much attention to sorting out different ways in which things are “said”. (See also Aristotelian Semantics.)

I think human reason uses complex material inference (reasoning based on intermediate meaning content rather than syntax) to evaluate meanings and situations in an implicit way that usually ends up looking like simple summary judgment at a conscious level, but is actually far more involved. A great deal goes on, very rapidly and below the level of our awareness. Every surface-level judgment or assertion implicitly depends on many interpretations.

Ever since Aristotle took the first steps toward formalization of logic, people have tended to think of real-world human reasoning in terms modeled straightforwardly on formal or semi-formal logical operations, with meanings of terms either abstracted away or taken for granted. (Aristotle himself did not make this mistake, as noted above.) This fails to take into account the vast amount of implicit interpretive work that gets encapsulated into ordinary terms, by means of their classification into what are effectively types, capturing everything that implicitly may be relevantly said about the things in question in the context of our current unity of apperception.

A logical type for a thing works as shorthand for many judgments about the thing. Conversely, classification and consequent effective identity of the thing depend on those judgments.

As a result of active deliberation, we often refine our preconscious interpretations of things, and sometimes replace them altogether. Deliberation and dialectic are the testing ground of interpretations.

In general, interpretation is an open-ended task. It seems to me that it also involves something like what Kant called free play. (See also Hermeneutics; Theory and Practice; Philosophy; Ethical Reason; The Autonomy of Reason; Foundations?; Aristotelian Demonstration; Brandom on Truth.)

Kantian Discipline

The Discipline of Pure Reason chapter in Kant’s Critique of Pure Reason makes a number of important points, using the relation between reason and intuition introduced in the Transcendental Analytic. It ends up effectively advocating a form of discursive reasoning as essential to a Critical approach.

If we take a simple empirical concept like gold, no amount of analysis will tell us anything new about it, but he says we can take the matter of the corresponding perceptual intuition and initiate new perceptions of it that may tell us something new.

If we take a mathematical concept like a triangle, we can use it to rigorously construct an object in pure intuition, so that the object is nothing but our construction, with no other aspect.

However, he says, if we take a “transcendental” concept of a reality, substance, force, etc., it refers neither to an empirical nor to a pure intuition, but rather to a synthesis of empirical intuitions that is not itself an empirical intuition, and cannot be used to generate a pure intuition. This is related to Kant’s rejection of “intellectual” intuition. We are constantly tempted to act as if our preconscious syntheses of such abstractions referred to objects in the way that empirical and mathematical concepts do, each in their own way, but according to Kant’s analysis, they do not, because they are neither perceptual nor rigorously constructive.

All questions of what are in effect higher-order expressive classifications of syntheses of empirical intuitions belong to “rational cognition from concepts, which is called philosophical” (Cambridge edition, p.636, emphasis in original). This is again related to his rejection of the apparent simplicity and actual arbitrariness of intellectual intuition and its analogues like supposedly self-evident truth. It opens into the territory I have been calling semantic, and associating with a work of open-ended interpretation. (See also Discursive; Copernican; Dogmatism and Strife; Things In Themselves.)

I am more optimistic than Kant that something valuable — indeed priceless — can come from this sort of open-ended work of interpretation. Its open-endedness means no achieved result is ever beyond question, but I think we implicitly engage in this sort of “philosophical” interpretation every day of our lives, and have no choice in the matter. I also think serious ethical deliberation necessarily makes use of such interpretation, and again we have no choice in the matter. So, pragmatically speaking, defeasible interpretation is indispensable.

Kant goes on to polemicize against attempts to import a mathematical style of reasoning into philosophy, as Spinoza tried to do. Spinoza’s large-scale experiment with this in the Ethics I find fascinating, but ultimately artificial. It does make the inferential structure of his argument more explicit, and Pierre Macherey used this to great advantage in his five-volume French commentary on the Ethics. But there is a big difference between, on the one hand, a pure mathematical construction — which can be interpreted without remainder by something like the formal structural operational semantics used in the theory of programming languages, and so requires no defeasible interpretation of the sort mentioned above — and, on the other, work involving concepts that can only be fully explicated by that sort of interpretation. Big parts of life — and all philosophy — are of the latter sort. So it seems Kant is ultimately right on this.

Kant points out that definition only has precise meaning in mathematics, and prefers to use a different word in other contexts. I make similar well-intentioned but admittedly opinionated recommendations about vocabulary, but what is most important is the conceptual difference. As long as we are clear about that, we can use the same word in more than one sense. As Aristotle would remind us, multiple senses of words are an inescapable feature of natural language.

Kant says that unlike the case of mathematics, in philosophy we should not put definitions first, except perhaps as a mere experiment. Again, he probably has Spinoza in mind, and again — personal fondness for Spinoza notwithstanding — I have to agree. (Macherey in his reading of Spinoza actually often goes in the reverse direction, interpreting the meaning of each part in terms of what it is used to “prove”, but the order of Spinoza’s own presentation most obviously suggests the kind of thing to which Kant is properly objecting.) More than anything else, meanings are what we seek in philosophical inquiry, so they cannot be just given at the start. We can certainly discuss or dialectically analyze stipulated meanings, but that is strictly secondary and subordinate to a larger interpretive work.

Following conventional practice, Kant allows for axioms in mathematics, but says they have no place in philosophy. He has in mind the older notion of axioms as supposedly self-evident truths. Contemporary mathematics has vastly multiplied alternative systems, and effectively treats axioms like stipulative definitions instead. If we have in mind axioms as self-evident truths, Kant’s point holds. If we have in mind axioms as stipulative definitions, then his point about stipulative definitions in philosophy applies to axioms as well.

A similar pattern holds for demonstration or proof. Mathematics for Kant always has to do with strict constructions, which do not apply in philosophy, where there is always matter for interpretation. (From the later 19th century, mathematicians began increasingly to invent theories that seemed to require nonconstructive assumptions — transfinite numbers, standard set theories, and so on. This is currently in flux again. Contrary to what was thought at an earlier time, it now appears that all valid “classical” mathematics, including transfinite numbers, can be expressed in a higher-order constructive formalism. Arguments are still raging about which style is better, but I am sympathetic to the constructive side.) Philosophical arguments are informally reasoned interpretations, not proofs.

Kant says that speculative thought in general, because it does not abide by these guidelines, unfortunately ends up full of what he does not hesitate to call dishonesty and hypocrisy. (When I occasionally ascribe honesty or dishonesty to a philosopher, it is with similar criteria in mind — especially the presence or absence of frank identification of speculation as such when it occurs. See also Likely Stories.)

The kind of philosophy I am recommending is concerned with explication of meanings, not a supposed generation of truths, so it is not speculative in Kant’s sense. What may not be obvious is just how large and vital the field of this sort of interpretation really is in life. The most common and compact form by which such interpretations are expressed in the small looks syntactically like ordinary assertion, and in ordinary social interaction, mistaking one for the other has little effect on communication. When the focus is not on practical communication but on improving our understanding, we have to step back and look at the larger context, in order to tell what is a speculative assertion and what is an interpretation expressed in the form of assertion. (See also Pure Reason, Metaphysics?; Three Logical Moments.)

(In the present endeavor, the great majority of what look like simple assertions are actually compact expressions of interpretations!)

Classification

Like definition, classification sometimes gets an unjustified bad name. It is not a kind of truth about the world, but rather plays an expressive role, helping to explain the meaning of what is said. Classification makes it possible to substitute simple names for arbitrarily complex conditions or adverbial expressions, allowing the underlying complexity to be abstracted away, or reconstructed as needed.

In Book 1 of Parts of Animals, Aristotle shows great sophistication about this. He explicitly argues against Plato’s recommended procedure of “division” or repeated application of binary distinctions, noting that many significant real-world distinctions are better approached as n-ary or manifold than as binary.

He also explicitly notes how difficult and arbitrary it ends up being to develop real-world classifications in a strictly hierarchical manner, arguing for a more holistic approach, which cannot be reduced to a sequential application of lower-level operations.

Good real-world classifications are arrived at through dialectical trial, error, and iterative self-correction over time. Conversely, behind every ordinary referential use of simple names for things is a complex implicit dialectical/semantic development. (See also Aristotle’s Critique of Dichotomy; Hermeneutic Biology?; Difference; Aristotelian Identity; Substance; Aristotelian Semantics.)

Dialectic, Semantics

Aristotle’s potent combination of dialectic with semantics starting from common experience guides his interpretations of things throughout his work. (Metaphysics applies this general approach especially to higher-order cases.) His core concepts are mainly either tools for this — like form, matter or circumstance, ends, means, actuality, potentiality, hylomorphism, difference, univocity and equivocity, and substance — or they are the results of applying such an approach in particular contexts. (See also Material Inference; Practical Judgment.)

Monism, Pluralism, Dualism?

I’d like to return to the question of keeping space open for the harmonious coexistence of a kind of monism, a kind of pluralism, and a kind of dualism at different levels of interpretation in the development pursued here.

At the level of the whole field of potential attributions of agency and responsibility, I’d like to foster the normative monism or monism of expression that I have attributed to Brandom. This seems to have the resources to translate any given empirical, factual content into the expressive terms of a transcendental normative evaluation. Here, everything that is expressible in any way whatsoever becomes expressible in ethical terms. The meaning of the monism in question has to do mainly with a kind of completeness of coverage in overcoming the subject-object dichotomy, not a lack of differentiation. Also, the complete field will include many overlapping attributions, so we should not expect it to have a univocal interpretation. So, in these ways, this monism is not incompatible with a pluralism after all.

At the level of detailed actual processes of evaluation of what is right and true, “monism” — or, more properly, unity of apperception — is only a guiding end that must be applied to a constantly moving target, so a unity that is momentarily achieved may partially unravel again. (See Error.) Also, there may be more than one sound interpretation of the “same” content under evaluation, and multiple explanations may yield complementary insight. The aimed-at “monism” here has to do mainly with a kind of coherence subject to all these caveats, so it is even more pluralistic.

At the level of an adequate account of the many aspects of subjectivity and experience, I want to be careful to preserve a broadly Kantian distinction between empirical and transcendental elements, while modeling their relation on the broadly Aristotelian relation of “first nature” to second nature. In Kant’s own presentation, the empirical/transcendental distinction has a dualistic appearance, but the first-nature/second-nature distinction I want to map it to involves a kind of emergence of second nature from first nature, rather than a dualism.

Previously, I resorted to programming language metaphors of compilation and “lifting” — and a distinction between operational and expressive equivalence — to help describe the relations between first and second nature in a way that would resolve the tension between my monistic claim and the distinction I want to maintain. (See Bookkeeping; Layers.) I’m still pondering the implications of such a metaphorical application of concepts from a formal domain to things that are after all not formal. While I still find that interesting, I think the above sketch might be sufficient to assuage concerns of overall consistency without it.

Husserlian and Existential Phenomenology

Phenomenology in the tradition stemming from Husserl is a prime example of what Habermas called subject-centered philosophy. Though a much more serious philosopher than Descartes, Husserl explicitly adopted a Cartesian perspective, and on this basis wanted to trace all meaning back to a foundation in intentional acts of a transcendental Ego. Existential phenomenology tried to soften Husserl’s Cartesianism, and favored analysis of more concrete experience over Husserl’s foundational concerns (see Primacy of Perception?; Phenomenology of Will).

I’ve been developing a strong distinction of actual adverbial subjectivity from any posited unitary Subject standing behind it, while also sharply separating empirical “subjectivity” from transcendental Subjectivity. I’d like to recover some of the detailed insights of both Husserlian and existential phenomenology for a broadly semantic perspective that addresses subjectivity in a modular way, and hence has no use for a monolithic Subject, be it transcendental or existential. (As usual, by “semantic” I have in mind the combination of Aristotelian and Brandomian concerns developed here.) Then with respect to the matter of subjectivity, I’d like to achieve an Aristotelian mean between coherence and pluralism. Meaning is neither a single tree nor a collection of atoms, but mostly constituted at the level of intermediate structures that build coherence.

Unlike the Aristotelian/Brandomian approach favored here, the phenomenological tradition avowedly aims at a sort of hermeneutic genealogy of perceptual and other mental representations rather than of reasons. Nonetheless, any serious, in-depth tracing of layers and dependencies of meaning can be reconstructed in terms of reasons, and then combined with other materials directly derived from a genealogy of reasons.

Husserl aimed at a subjective discipline of direct observation of pure forms of appearance. Initially interested in the foundations of mathematics, in early work he developed a critique of psychologism in logic. He went on to recommend a radical “reduction” or suspension of ordinary assumptions, in two interdependent moments — epoché, a putting in brackets of putative existence behind appearances, and in general of what we ordinarily think we know or practically act as if we know; and the phenomenological reduction proper, which would be the recognition of everything that has been put in brackets as what Brandom would call a taking.

Husserl’s close collaborator Eugen Fink characterized the reduction as an extensive and rigorous meditative discipline that would take us back to an original astonishment characteristic of genuine knowing. According to Fink, when carried through rigorously, the reduction eventually shows itself as a “self-meditation” that would lay bare the transcendental Ego as the material ground of all science. What remains after the reduction is an “unhumanized” pure “reducing I”.

This bears some resemblance to the Kantian “I” as bare index of the unity of a unity of apperception, but unlike the Kantian “I”, the Husserlian Ego is not fully abstract. For one thing, it is supposed to be the agent performing the reduction, and it seems to be assumed that it is appropriate to speak of “the” agent in this role.

For another, Husserl stressed that the reduction should provide access to what he called pure essences, understood as pure forms of intentionality grounded in acts of the transcendental Ego. This makes it clear that Husserl’s transcendental Ego is supposed to be contentful, not purely formal like the Kantian “I”. Correlatively, Husserl’s essences, while nonpsychological and free of empirical content and the presuppositions that go with it, are what they are by virtue of their complete and unilateral subordination to a foundational Subject that has supposedly been not merely posited, but discovered via the meditative process of the reduction. By contrast, the determination of content in a Kantian unity of apperception is purely a matter of coherence. (See also Transcendental Field; Error.)

In my youth, though already viewing the Ego as a reification, I was attracted to the idea of a meditative discipline and a focus on improving the knower by shedding presuppositions. While still seeing some value in this, I have come increasingly to think not only that such discipline is insufficient by itself, but that such a focus can easily be taken too far, implicitly reflecting an undesirable ascetic and effectively subjectivist turn away from serious open-ended inquiry about the larger world. A sole focus on improving the knower is too narrow. (Husserl did at one point have a motto “to the things themselves”, and certainly was far too serious to be subjectivist in the crude sense. His Ego would be purely transcendental. However, his “things themselves” seem to turn out to be intentional acts of the Ego.) Nonetheless, I remain fascinated by Husserl’s detailed descriptions of the stream of consciousness with all its passive syntheses, margins of awareness, implicit back sides of things, and so on. (See also Phenomenological Reduction?; Ricoeur on Husserl on Memory; Ricoeur on Husserl’s Ideas II.)

Ontology

Ontology as a supposed science of being acquired its basic shape in the middle ages, as a sort of reification of Aristotelian semantics. Duns Scotus was very proud of his ontological “improvement” of Aristotle. Aristotle himself preferred to shift clumsy, sterile discussions of sheer being onto more subtle and fruitful registers of form and meaning at the earliest opportunity.

Kant pointed out that existence is not a property, and Hegel pointed out the equivalence of Being to Nothing. When Hegel talks about “logic” as the form of future metaphysics, this means a return to the original meaning of “metaphysics” as Aristotelian dialectical semantics, not an ontologization of dialectic. Broadly Aristotelian dialectical semantics gives us all the “ontology” we will ever need.

For the historical back story of how Scotus invented ontology as we know it today, if you read French, see Olivier Boulnois, Être et représentation: Une généalogie de la métaphysique moderne à l’époque de Duns Scot (XIIIe–XIVe siècle). As suggested by the title, this work also has extremely important things to say about the premodern history of strongly representationalist views. The famous univocal “being” invented by Scotus was defined in terms of representability. (See also Being, Existence; Aristotelian Dialectic; Objectivity of Objects; Form; Repraesentatio.)

Propositions, Terms

Brandom puts significant emphasis on Kant’s and Frege’s focus on whole judgments — contrasted with simple first-order terms, corresponding to natural-language words or subsentential phrases — as the appropriate units of logical analysis. The important part of this is that a judgment is the minimal unit that can be given inferential meaning.

All this looks quite different from a higher-order perspective. Mid-20th century logical orthodoxy was severely biased toward first-order logic, due to foundationalist worries about completeness. In a first-order context, logical terms are expected to correspond to subsentential elements that cannot be given inferential meaning by themselves. But in a higher-order context, this is not the case. One of the most important ideas in contemporary computer science is the correspondence between propositions and types. Generalized terms are interpretable as types, and thus also as propositions. This means that (higher-order) terms can represent instances of arbitrarily complex propositions. Higher-order terms can thus be given inferential meaning, just like sentential variables. This is all in a formal context rather than a natural-language one, but so was Frege’s work; and for what it’s worth, some linguists have also been using typed lambda calculus in the analysis of natural language semantics.
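As a minimal sketch of the propositions-as-types idea, the following Haskell fragment reads ordinary types as propositions and terms of those types as evidence for them. The function names are my own, chosen for illustration; they are not drawn from any particular library.

```haskell
-- Propositions-as-types, read through ordinary Haskell types:
-- a type is read as a proposition, a term of that type as evidence for it.

-- Conjunction (A and B) corresponds to the pair type: evidence for the
-- conjunction consists of evidence for each conjunct.
andIntro :: a -> b -> (a, b)
andIntro x y = (x, y)

-- Implication (A implies B) corresponds to the function type; modus
-- ponens is just function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Disjunction (A or B) corresponds to Either; proof by cases is the
-- eliminator that handles each alternative.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left x)  = f x
orElim _ g (Right y) = g y
```

On this reading, writing down any term of one of these types just is exhibiting an inference, which is the sense in which such terms carry inferential meaning.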

Suitably typed terms compose, just like functions or category-theoretic morphisms and functors. I understand the syllogistic principle on which Aristotle based a kind of simultaneously formal and material term inference (see Aristotelian Propositions) to be just a form of composition of things that can be thought of as functions or typed terms. Proof theory, category theory, and many other technical developments explicitly work with composition as a basic form of abstract inference. Aristotle developed the original compositional logic, and it was not Aristotle but mid-20th century logical orthodoxy that insisted on the centrality of the first-order case. Higher-order, compositionally oriented logics can interpret classic syllogistic inference, first-order logic, and much else, while supporting more inferentially oriented semantics on the formal side, with types potentially taking pieces of developed material-inferential content into the formal context. We can also use natural-language words to refer to higher-order terms and their inferential significance, just as we can capture a whole complex argument in an appropriately framed definition. Accordingly, there should be no stigma associated with reasoning about terms, or even just about words.
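To make the compositional point concrete, here is a hedged Haskell rendering of the Barbara syllogism under the propositions-as-types reading. The wrapper types and function names are my own illustrative inventions: each premise becomes a function between evidence types, and the conclusion is literally the composition of the premises.

```haskell
-- Hypothetical wrapper types standing in for Aristotle's terms; a value
-- of each type is read as evidence that the named individual falls
-- under that term.
newtype Human  = Human String  deriving (Eq, Show)
newtype Animal = Animal String deriving (Eq, Show)
newtype Mortal = Mortal String deriving (Eq, Show)

-- "All humans are animals": a function taking evidence of being human
-- to evidence of being an animal.
humansAreAnimals :: Human -> Animal
humansAreAnimals (Human n) = Animal n

-- "All animals are mortal": likewise for the second premise.
animalsAreMortal :: Animal -> Mortal
animalsAreMortal (Animal n) = Mortal n

-- Barbara: the conclusion "all humans are mortal" is just the
-- composition of the two premises.
humansAreMortal :: Human -> Mortal
humansAreMortal = animalsAreMortal . humansAreAnimals
```

Nothing in the composition depends on the first-order apparatus of quantifiers and variables; the inference is carried by the typed terms themselves.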

In computer-assisted theorem-proving, there is an important distinction between results that can be proved directly by something like algebraic substitution for individual variables, and those that require a more global rewriting of the context in terms of some previously proven equivalence(s). At a high enough level of simultaneous abstraction and detail, such rewriting could perhaps constructively model the revision of commitments and concepts from one well-defined context to another.
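As a small Lean 4 sketch of the distinction (toy examples of my own, not drawn from any particular development): the first goal closes by plugging a hypothesis in for an individual variable, while the second uses a previously proven equivalence to rewrite under a surrounding context.

```lean
-- Direct substitution: the hypothesis h lets us plug 2 in for x,
-- after which the goal closes by computation.
example (x : Nat) (h : x = 2) : x + 1 = 3 := by
  rw [h]

-- A previously proven equivalence...
theorem addFlip (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- ...used to rewrite globally, here under an arbitrary context f.
example (f : Nat → Nat) (a b : Nat) : f (a + b) = f (b + a) := by
  rw [addFlip]
```

The second example is the simplest case of rewriting a context in terms of a proven equivalence; the large-scale version of this is what the paragraph above suggests might model revision of commitments between well-defined contexts.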

The potential issue would be that global rewriting still works in a higher-order context that is expected to itself be statically consistent, whereas revision of commitments and concepts, taken simply, implies a change of higher-level context. I think this just means a careful distinction of levels would be needed. After all, any new, revised genealogical recollection of our best thoughts will in principle be representable as a new static higher-order structure, and that structure will include something that can be read as an explanation of the transition. It may itself be subject to future revision, but in the static context that does not matter.

The limitation of such an approach is that it requires all the details of the transition to be set up statically, which can be a lot of work, and it would also be far more brittle than Brandom’s informal material inference. (See also Categorical “Evil”; Definition.)

I am fascinated by the fact that typed terms can begin to capture material as well as purely formal significance. How complete or adequate this is would depend on the implementation.