This is the 6th part of a series adapted from my 2009 Bachelor’s Thesis in philosophy.
Part 1 introduced the series and its premise: there are two ways to look at the self — a scientific way and a traditional way — and transferring statements from one to the other has weird effects.
Part 2 described the traditional view, using the philosopher C.A. Campbell as a representative.
Part 3 offered a sketch of an alternative view, assembled from background assumptions in the physical sciences.
Part 4 discussed some scientific disciplines with bearing on the self, and how their results are interpreted differently by the traditional paradigm vs. the scientific.
Part 5 gave some examples of people expressing “Campbellian” views online.
Here in part 6 I discuss the reasons why the traditional view persists when prescientific thinking on other topics often doesn’t.
So where does this Campbellianism come from? Even people with no personal commitment to supernaturalism get uneasy when pondering the implications of causation in human behavior. I think it’s the result of a sleight of hand that many philosophers have performed, unbeknownst to themselves, throughout western philosophical history.
Much has been written on the problem of free will. The prospect of the will being “unfree,” as in determined, has often been considered unbearable. Why is that?
The idea of determinism in a broad sense can conflict with personal and political convictions about humans being free to do and be whatever they want, without limitations or restrictions. Evidence of this is Segerstråle’s account of some of the reasons behind the vigorous protests against E.O. Wilson in the 1970s after the publication of Sociobiology, which contained observations and theories about the social behavior of humans as well as that of a number of other species. The following explains how some of the protesters felt:
However, later statements by group members indicated that the group was in fact not supportive of a blank-slate idea [here interpreted as the idea that culture creates the mind – my comment, JN]. The critics of sociobiology were against any kind of determinism, environmental as well as biological. Humans should be seen as free agents, having choices (see, for example, Larry Miller, 1978, speaking for the group).
Here the idea of the self being non-causal feels not only metaphysically but also practically and politically significant. The critics, calling themselves The Sociobiology Study Group, saw their mission as largely political: to counteract the perceived threat against the idea of political progress posed by discussion of humans being limited creatures.
Political freedom and perfectibility are big issues, but they are not the only reason for fear of determinism. What one sees most notably in Campbell is something else. He aims to establish a conception of free will, and not just any conception but “the kind of free will required for moral responsibility”. We can see in the last comment by “Inquiry” in the previous section that this thinking is alive and well in the 21st century. Perhaps it’s even the leading reason why causation seems so worrying in the area of human choices. Further up, among those commenting on the finding of an “infidelity gene”, we find examples of this fear: that acknowledgment of causes for behaviors is tantamount to excusing and accepting those behaviors.
Now, this fear probably lies behind much of the need to posit a metaphysically free will. But I think the assumption rests upon an unfortunate and unnecessary conjunction of two separate ideas. One is the reasonable notion that for you to be responsible for something, the decision must be made by you. Campbell phrases this as “the agent must be the sole author” of their action.
The other is that human thought occurs separately from physical processes, as per Descartes. Combining these two creates a problem, since it entails treating any intrusion of normal causality into the decision-making process as just that: an intrusion. And as character traits are increasingly brought under the umbrella of biology, neurology and psychology, there is less and less for this incorporeal faculty to do. This produces reasoning like the following:
A: Moral responsibility requires that you yourself make the decisions.
B: Your thoughts operate in a non-physical realm, outside natural causation.
B → C: Natural causation in decision-making means you aren’t making those decisions.
A + C → D: Natural causation in decision-making is incompatible with moral responsibility.
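As a purely illustrative aside, the bare logical skeleton of this argument can be written out in Lean; the proposition names are my own shorthand, not anything from the sources discussed:

```lean
-- Propositional shorthand (my labels, purely illustrative):
--   MR   : we are morally responsible
--   Self : you yourself make your decisions
--   NC   : natural causation operates in decision-making
variable (MR Self NC : Prop)

-- Given A (MR → Self) and C (NC → ¬Self), D (NC → ¬MR) follows:
example (hA : MR → Self) (hC : NC → ¬Self) : NC → ¬MR :=
  fun hNC hMR => hC hNC (hA hMR)

-- But C is itself only derived via premise B; without B there is
-- no proof of NC → ¬Self, and the inference to D never gets started.
```

The sketch makes the dependency visible: D really does follow once A and C are granted, which is why the whole dispute turns on whether C can be motivated without B.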
The absurdity occurs when one is unwilling to explicitly defend B but unable to see that this removes the need to defend D, because C no longer holds. This makes it somehow desirable to introduce an element – any element – that operates outside of causation, and then to believe that this accomplishes anything. When the libertarian philosopher reaches this point, which he has been misled into pursuing, one might ask exactly how a non-causal, singular event emerging from nowhere is supposed to have any bearing on our moral responsibility.
A modern philosopher who does exactly this is Robert Kane, who is in the unusual position of trying to save a metaphysical free will (or, as I think it should be called, a non-causal element in human decision-making) from a materialist perspective. Kane’s convoluted metaphysics, which invokes quantum indeterminacy to produce uncaused actions, is necessary to him because he – despite being a materialist – retains the idea that non-causality is required for responsibility. This, if anything, shows that C, and thus D, have taken on a life of their own and remain firmly lodged even in those who reject B.
Dennett speaks about Kane’s efforts:
The best attempt so far is by Robert Kane, in his 1996 book, The Significance of Free Will. Only a libertarian account, Kane claims, can provide the feature we—some of us, at least—yearn for, which he calls Ultimate Responsibility. […] A human mind has to be where the buck stops, Kane says, and only libertarianism can provide this kind of free will, the kind that can give us Ultimate Responsibility.
It is interesting and telling that Kane chooses to call what he’s after “Ultimate Responsibility”. This shows how he, and I suspect many others, sees responsibility. To them it is an objectively real property that people may or may not possess, and we are in the process of examining whether we have it or not. Responsibility is in this case a metaphysical entity with an existence or nonexistence. And if this concept is to be real, then metaphysical free will (or a non-causal element in human decision-making) needs to be real too; no washed-out naturalist conception will do.
Galen Strawson uses the term Ultimate Responsibility as well (in The Impossibility of Moral Responsibility), but he uses it to illustrate what he says doesn’t exist. Strawson uses the qualifier “Ultimate” conscientiously to separate it from the more ordinary, mundane responsibility we do have. But when it comes to the kind of responsibility that could motivate sending someone to eternal torment in hell, he says it simply does not and cannot exist.
What “Is moral responsibility real?” really means is “Are our anger at and punishment of moral transgressions justified?” And how do we determine that? Not by examining the structure of the universe. Which of our feelings are justified is a moral question whose answer must be decided, not an ontological question whose answer must be discovered. Right, wrong, good, evil, responsible, not responsible, justified, unjustified, etc. are concepts we use to regulate the social world, not things to find in the natural world.
This goes against an influential thread in western philosophy, where ethics is seen as an exercise in abstract thought through which we can find “the Right and the Good”, as if they were real things. This is partly Plato’s fault. A dissenting voice here, whose philosophy has stood up to scrutiny remarkably well over several centuries, is David Hume — who fairly early in the history of moral philosophy emphasized the role of emotional reactions.
This separateness between body and mind ties into a broader idea about concepts. Historically the world has been thought to consist of an eclectic collection of diverse things whose existence was independent. Before Newton, the universe was thought to be divided into the superlunar and the sublunar world, which were fundamentally different; before Darwin, a similar divide was thought to exist between humans and other animals. Nowadays the earth and the heavens are understood to be different emergent entities with a common ontological base, and so are humans and other animals (even though this has failed to sink in to the same extent). But the analogous divide between the mental and the physical is still with us.
The opposite of this view of the world as consisting of separate and parallel strands is monism, the view that everything in the world is the multifaceted expression of one set of fundamental rules. The only real remaining version of this is materialism, where different things are different because of their structure and arrangement of matter, not because they are fundamentally separate kinds of things. A human could be turned into a dog by rearranging its matter; there is no need to extract its “humanness” and inject it with “dogness”.
Reductionism is the intellectual application of materialism, with the aim of understanding the world by uncovering exactly how phenomena arise from the interaction of their constituent parts. Sometimes this meets resistance: when we saw Dennett apply a reductionist perspective to love earlier, he was negating the antinaturalist and antireductionist notion that love has an existence parallel to the physical world rather than being a phenomenon arising within it.
Popular views of the self and the will are only one manifestation of this broader tendency. To think this way seems to be a natural human inclination that we are very slowly shedding through centuries of science and rigorous intellectual discipline. Confronting ontologically independent concepts with monism and naturalism forces us to recognize that we conceptualize the world from a human perspective; our conception is therefore a function of our interaction with the rest of the world, rather than of some inherent nature it has as seen from outside reality. Concepts are therefore not “real” in a deep, ontological sense. Where this view differs from radical constructionism is that I think our concepts are far from arbitrary, and that different views map better or worse onto the structure and regularities of the world.
This may well leave many people uncomfortable. It may sound like it leaves everything meaningless. That thought of course reflects a lingering paradigm where things like meaning need to be real in an independent metaphysical way to be real at all. Again we see the feeling that something doesn’t exist if it’s different from what we thought it was.
If one does not find the redefinition of things that matter to us satisfactory, and despite Flanagan’s elucidation still laments the loss of the Earth, then one is asking for more than a respectable philosophy can offer. Glib and grim maybe, but I think we’re in no worse a predicament than the Ptolemaics Flanagan writes about. Our views of the nature of ourselves, of meaning, love and everything like it are, like the nature of the earth, a matter of habit. I and many others with me are living proof that humans can live with a view of everything, including ourselves, as enworldened, without falling into existential despair and nihilism. We’re all too busy tending our gardens.
• • •
The “free will debate” seems to be a peculiarity of the western intellectual tradition. It doesn’t have the same status in the “eastern world”. This is well illustrated by the interest some theorists of the skeptical persuasion have taken in Buddhist teachings. Both Owen Flanagan (mentioned before) and Susan Blackmore (mentioned later in part 7) consider themselves Buddhists; Galen Strawson also shows certain Buddhist influences in his writings. Buddhism teaches (roughly) that the goal of life is to extinguish the illusion of self and the desires on that self’s behalf. Contrast this with the theology of the west, where individual salvation is central, and one might get an inkling of why this has been such an important issue here. The take-home message is that obsession with the self and the will in the western manner is not as necessary as we might think.
Newton put an end to this by claiming that the force that makes things fall here on earth is actually the same force that makes the planets circle about in the heavens.
“Reductionism” is often used as a term of abuse, most often by theorists in the humanities. It’s aimed at those who emphasize lower levels of causation (such as biology or neurology rather than, say, culture). However, naturalism requires exactly this for an explanation to be complete. Sure, you can explain something as emanating from a higher-level abstracted entity (like “culture”), but then that entity needs to be explained in turn (how did culture come about? what does it consist of?). To be complete, an explanation needs at some point to reach all the way down to the ground, physical level (in the way a building cannot hang in the air – it can hang from something and not touch the ground, but then what it hangs from has to be planted on the ground, and so forth). This is simply required by naturalism (materialism), and that leads me to suggest that those who use reductionism as a pejorative term are actually uncomfortable with naturalism and materialism but don’t want to admit it.
2017 comment: Considering that I’ve been subject to much more existential anxiety in the last few years than I ever was when I wrote that, I’m much less glib on that point today.