Cause of Itself

Spinoza famously begins his Ethics with a definition of “cause of itself” (causa sui). This will become the hallmark of his “Substance”, of which he says there can be only one, and which he identifies with his own heterodox conception of God. Cause of itself would be that whose essence involves existence.

In Hegel or Spinoza (French ed. 1979), Pierre Macherey writes that “First of all we can show, as Guéroult does, that the concept of causa sui does not really have an initial foundational value for Spinoza: it does not represent a kind of first truth, a principle in the Cartesian sense, from which the entire system can be developed, as if from the starting point of a germ of truth” (p. 16).

“Here we can begin to be astonished: does Hegel ignore that this aporia of beginning — which sets his Logic in motion, this impossibility of grounding the infinite process of knowledge in a first truth which would stand in itself as principle or foundation — is also an essential lesson of Spinozism, the principal objection that he himself opposes to the philosophy of Descartes? In such a sense that it is only… ‘so to speak’, the geometric exposition of the Ethics ‘begins’ with definitions, which for that matter do not have an effective sense, except at the moment when they function in demonstrations or really produce the effects of truth: Spinozist thinking precisely does not have this rigidity of a construction relying on a base and pushing its analytic to an end point, which would find itself thus limited between a beginning and an end” (p. 17).

For Hegel according to Macherey, “The causa sui is based on a substantial principle that ‘lacks the principle of the personality’. It thus constitutes a substance that cannot become subject, which fails in this active reflection of self, which would permit it to undertake its own liberation in its own process…. This is an arrested and dead spirit” (p. 18).

This is supposed to be the individuality- and freedom-denying “Oriental” attitude that Hegel, with a broad brush, unfortunately really does attribute to Judaism, Islam, Hinduism, Buddhism, the Roman Empire, Catholicism, the pre-Socratic philosopher Parmenides, and Spinoza, among others. This over-the-top anti-anti-subjectivity theme of Hegel’s kept me from really appreciating his work for a long time.

On the other hand, the details of his argument about freedom and subjectivity as affirmative values actually make sense, even to the point of winning over an old sympathizer of French anti-Hegelianism like myself.

Reference, Representation

The simplest notion of reference is a kind of literal or metaphorical pointing at things. This serves as a kind of indispensable shorthand in ordinary life, but the simplicity of metaphorical pointing is illusory. It tends to tacitly presuppose that we already know what it is that is being pointed at.

More complex kinds of reference involve the idea of representation. This is another notion that is indispensable in ordinary life.

Plato and Aristotle used notions of representation informally, but gave them no privileged status or special role with respect to knowledge. They were much more inclined to view knowledge, truth, and wisdom in terms of what is reasonable. Plato tended to view representation negatively as an inferior copy of something. (See Platonic Truth; Aristotelian Dialectic; Aristotelian Semantics.)

It was the Stoics who first gave representation a key role in the theory of knowledge. The Stoics coupled a physical account of the transmission of images — bridging optics and physiology — with very strong claims of realism, certain knowledge both sensory and rational, and completeness of their system of knowledge. In my view, the Stoic theory of representation is the classic version of the “correspondence” theory of truth. The correspondence theory treats truth as a simple “correspondence” to some reality that is supposed to be known beyond question. (Such a view is sometimes misattributed to Plato and Aristotle, but was actually quite alien to their way of thinking.)

In the Latin middle ages, Aquinas developed a notion of “perfect” representation, and Duns Scotus claimed that the most general criterion of being was representability. In the 17th century, Descartes and Locke built foundationalist theories of certain knowledge in which explicitly mental representations played the central role. Descartes also explicitly treated representation in terms of mathematical isomorphism, representing geometry with algebra.

Taking putatively realistic representational reference for granted is a prime example of what Kant called dogmatism. Kant suggested that rather than claiming certainty, we should take responsibility for our claims. From the time of Kant and Hegel, a multitude of philosophers have sharply criticized claims for certain foundations of representational truth.

In the 20th century, the sophisticated relational mathematics of model theory gave representation renewed prestige. Model-theoretic semantics, which explains meaning in terms of representation understood as relational reference, continues to dominate work in semantics today, though other approaches are also used, especially in the theory of programming languages. Model-theoretic semantics is said to be an extensional rather than intensional theory of meaning. (An extensional, enumerative emphasis tends to accompany an emphasis on representation. Plato, Aristotle, Kant, and Hegel, on the other hand, approached meaning in a mainly intensional way, in terms of concepts and reasons.)
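To make the extensional idea a bit more concrete, here is a minimal sketch of my own in Python, not drawn from any of the authors discussed; the names “Human” and “Teacher” are arbitrary placeholders. In a toy model-theoretic setup, the meaning of a predicate is identified with its extension, the set of things it is true of, and the truth of an atomic sentence is simply membership in that set.

```python
# A toy model for a tiny first-order language. This is an illustrative
# sketch only; the predicate names are arbitrary placeholders.

# The domain of individuals.
domain = {"socrates", "plato", "fido"}

# An extensional interpretation: each predicate symbol is mapped directly
# to its extension, i.e. the set of individuals (or tuples) it is true of.
interpretation = {
    "Human": {"socrates", "plato"},       # unary predicate: a set of individuals
    "Teacher": {("socrates", "plato")},   # binary relation: a set of ordered pairs
}

def satisfies(pred, *args):
    """An atomic sentence is true just in case its argument (or the tuple
    of its arguments) belongs to the predicate's extension."""
    extension = interpretation[pred]
    if len(args) == 1:
        return args[0] in extension
    return args in extension

print(satisfies("Human", "socrates"))             # True
print(satisfies("Human", "fido"))                 # False
print(satisfies("Teacher", "socrates", "plato"))  # True
```

On an intensional approach, by contrast, two predicates with exactly the same extension could still differ in meaning, because meaning would be carried by the associated concept and its inferential role rather than by the bare set.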

Philosophical criticism of representationalist theories of knowledge also continued in the 20th century. Husserl’s phenomenological method involved suspending assumptions about reference. Wittgenstein criticized the notion of meaning as a picture. All the existentialists, structuralists, and their heirs rejected Cartesian/Lockean representationalism.

Near the end of the 20th century, Robert Brandom showed that it is possible to account very comprehensively for the various dimensions of reference and representation in terms of intensionally grounded, discursive material inference and normative doing, later wrapping this in an interpretation of Hegel’s ethical and genealogical theory of mutual recognition. This is not just yet another critique of representationalism, but an actual constructive account of an alternative, meticulously developed, that can explain how effects of reference and representation are constituted through engagement in normative discursive practices — how reference and representation have the kind of grip on us that they do, while actually being results of complex normative synthesis rather than simple primitives. (See also Normative Force.)

Kant and Foundationalism

According to Kant, all human experience minimally involves the use of empirical concepts. We don’t have access to anything like the raw sense data posited by many early 20th century logical empiricists, and it would not be of much use if we did. In Kantian terms, this would be a form of intuition without concepts, which he famously characterized as necessarily blind, and unable to function on its own.

Foundationalism is the notion that there is certain knowledge that does not depend on any inference. This implies that it somehow comes to us ready-made. But for Kant, all use of empirical concepts involves a kind of synthesis that could not work without low-level inference, so this is impossible.

The idea that any knowledge could come to us ready-made involves what Kant called dogmatism. According to Kant, this should have no place in philosophy. Actual knowledge necessarily is a product of actual work, though some of that work is normally implicit or preconscious. (See also Kantian Discipline; Interpretation; Inferentialism vs Mentalism.)

It also seems to me that foundationalism is incompatible with the Kantian autonomy of reason.

Foundations?

Foundationalism is the mistaken notion that some certain knowledge comes to us ready-made, and does not depend on anything else. One common sort involves what Wilfrid Sellars called the Myth of the Given.

Certainty comes from proof. A mathematical construction is certain. Nothing in philosophy or ordinary life is like that. There are many things we have no good reason to doubt, but without proof, that still does not make them certain.

In life, high confidence is all we need. Extreme skepticism is refuted by experience. It is not possible to live a life without practical confidence in many things.

Truth, however, is a result, not a starting point. It must be earned. There are no self-certifying truths, and truth cannot be an unexplained explainer.

In philosophy, we have dialectical criticism or analysis that can be applied from any starting point, then iteratively improved, and a certain nonpropositional faith in reason to get us going. All we need is the ability to question, an awareness of what we do not know, and a little faith. We can always move forward. It is the ability to move forward that is key. (See also Interpretation; Brandom on Truth; The Autonomy of Reason.)

Justification

Epistemological foundationalism always sounded like a patent logical absurdity to me, an attempt to escape the natural indefinite regress of serious inquiry by sheer cheating — a bald pretense that some content could be self-certifying and therefore exempt from the need for justification. I have a hard time being polite about this; such a claim feels to me like a deep moral wrong.

The kind of justification we should care about is not some guarantee of absolute epistemic certainty, but a simple explanation why a claim is reasonable, accompanied by a willingness to engage in dialogue. All claims without exception should be subject to that. As Sellars said, “in characterizing an episode or state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.” (Empiricism and the Philosophy of Mind, p. 76.) Aristotle would agree. (See also Verificationism?; Empiricism; Free Will and Determinism.)