The non-problem of moral luck

The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real-life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will argue that one can learn more about moral philosophy from watching a single episode of the show than from listening to a technical talk in that same field at the APA's own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of; you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more skeptical I become of the value of much modern moral philosophy, with its indulgence in ever more convoluted hypothetical situations that seem designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turned-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell), and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sitcom-imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of its indulgence in irrelevant logic-chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end there; there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details because they are irrelevant to my argument here, which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making brain network, then fine, he is not to be blamed for his inaction. That’s no skin off of Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping scores?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t be carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now classical paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above-mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame, we have lost sight of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)

Five big philosophical questions: my modest take

An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim; I am simply offering my informed opinion about them, in my dual role as philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not given us, yet, an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon revolves around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them, we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid at checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence, it is, as David Hume famously put it, a bundle of perceptions.

I don’t subscribe either to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), or to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functions of our physiology and metabolism. Death is a natural phenomenon, everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly a reflection of contemporary debates about justice and inequality, more than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with mind, if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

Why neuroscience is largely irrelevant to ethics

A few days ago, over at my other blog, I published an article that I touted on my social media as “the last piece on free will you will ever need to read.” That was a slight exaggeration, but only slight. The specific point of the post was to explain in some detail the ancient Stoic take on human decision making, what I and modern psychologists prefer to call volition rather than free will (given how loaded with metaphysical nonsense the latter term is). I also wanted to see how the Stoic position squares with the findings of modern science. As it turns out, that ancient view is highly compatible with what contemporary cognitive science says about the matter, but this is neither a miraculous coincidence nor indication that somehow the Stoics managed to anticipate scientific discoveries that would be made more than two millennia later. (Which would be just as preposterous as to maintain, as some do, that the pre-Socratic atomists “anticipated” modern physics. They didn’t, as even a superficial reading of the pre-Socratics, and a passing acquaintance with modern physics, should amply demonstrate.)

Rather, the reasons we still find so much of value in Stoic (or Aristotelian, or several other) ancient moral philosophy are twofold: first, some of the ancients were keen observers of human psychology; second, moral discourse has little to do with whatever mechanisms make it possible for human brains to think about morality (so long as some mechanisms that allow us to think do exist, of course). Both notions need to be unpacked a bit, which is what I intend to do in this essay.

What was so special about Aristotle, or Epicurus, or Epictetus? In a sense, not much. They were sharp thinkers who paid attention to the empirical side of what they were thinking about. We tend to forget that many others at the time and since have written about the same topics, and yet they are completely forgotten, or they appear at best as footnotes in philosophy books. (Have you ever heard of Aristippus of Cyrene? Not likely, and he was one of the major figures among the minor Greek philosophers…)

The reasons we read some ancient philosophers are, so to speak, evolutionary. Specifically, the cultural analogues of two basic processes that steer biological evolution: drift and selection. Drift is about statistical sampling: some books survive and others don’t because of luck. There probably never were too many copies — by modern standards — of the works of Chrysippus, one of the most noted Hellenistic philosophers, and unfortunately not a single one has come down to us. Selection makes it so that whatever authors are highly esteemed not just by their contemporaries, but further and further down in history, are the ones whose works and ideas tend to survive. In the case of Chrysippus, we know a good amount about what he thought because so many later commentators copied several of his passages, in order to praise him or criticize him. To put it another way, we still read Plato and Aristotle because of what biologist Jacques Monod once called a combination of chance and necessity.

But we don’t read all of Plato and Aristotle nowadays, unless we are historians of philosophy, or of science. There isn’t much point in consulting Aristotle’s Physics if you are a physicist, because the field has moved very far from the Aristotelian positions, beginning with Galileo and arriving at Einstein and Stephen Hawking. By contrast, philosophers still find a lot of value in the Nicomachean Ethics. Ill-informed people (who shall here go unmentioned) are under the impression that this is because philosophy, unlike physics, doesn’t make progress (usually, these people just happen to be physicists). But that’s sheer ignorance, which ought (morally) to be embarrassing. Philosophy does make progress (see here), but it is a very different kind of endeavor from physics, so any direct comparison is a category mistake.

No, the reason Aristotle, the Stoics, and so forth are relevant today (other than the above-mentioned one that they were la crème de la crème of their period) is that modern science has little of relevance to say about certain branches of philosophy, and in particular ethics. (Yes, I know, certain individuals are making a cottage industry of arguing the opposite. But they too shall go mercifully unmentioned in this post. I’ve dealt with them ad nauseam in the past.)

The reason this is the case has been explained by philosophers Ludwig Wittgenstein and Wilfrid Sellars, and is exemplified by the work of neuroscientist Benjamin Libet. Let me explain.

First, Wittgenstein. In Tractatus 4.111 he famously wrote that “philosophy is not one of the natural sciences,” adding at 4.112 that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations we find:

“[Philosophy’s] investigation is a grammatical one. Such an investigation sheds light on our problem by clearing misunderstandings away. Misunderstandings concerning the use of words, caused, among other things, by certain analogies between the forms of expression in different regions of language.” (90)

While I think that Wittgenstein had too narrow a view of what philosophy does, there is quite a bit of truth in the above. The job of philosophers isn’t to discover new things about the world (we’ve got science for that), but rather to clarify issues by way of critical analysis, and to see how things that appear disparate “hang together,” so to speak. That is, for instance, why metaphysics isn’t being replaced by physics: it is transforming itself into a discipline informed by physics (and biology, and other sciences) whose objective is to make sense of the picture of the world that emerges from the discoveries of individual special sciences, something that no single science does or is concerned with. (See, for instance, Ladyman and Ross’ Every Thing Must Go, a sort of manifesto for a naturalistic metaphysics.)

Wittgenstein becomes even more relevant to the present discussion when we consider his concept of “language games” as presented in the Investigations:

“The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words ‘block,’ ‘pillar,’ ‘slab,’ ‘beam.’ A calls them out; B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.” (2)

Ethics is another language game, or, rather, a multiplicity of language games, since there are a number of ways to conceive, talk about, and actually do, ethics. Within the human community, we talk about “good,” “bad,” “moral,” “immoral,” “ought,” and so forth, and any competent language user understands what others mean by those words. Moreover, just like the words of the builder’s language actually help build things, so the words of ethical language actually help regulate our actions within a given community. The fact that science comes in and, say, tells us that “bricks” are really mostly empty space is interesting from within the science language game, but it is utterly useless, and indeed a distraction, to the builder. Analogously, that a neuroscientist may be able to tell us which parts of the human brain are involved in the production of ethical judgments, and by which cellular means, is interesting within the language game of neuroscience, but it is a useless distraction if we are concerned with improving social justice, or becoming a better person.

Which brings me to what I have termed the most important philosopher you likely never heard of: Wilfrid Sellars. My friend Dan Kaufman and I did an extensive video conversation on Sellars, which I think is worth checking out. One of Sellars’ landmark ideas was the distinction between what he called the manifest and the scientific images of the world. The manifest image is the way most people understand and navigate the world. The Sun “rises,” genocide is morally repellent. That sort of thing. The scientific image, by contrast, is the way science looks at the world: the Sun does not, actually, rise; it is the Earth that rotates on its axis. As for genocide? Ah, therein lies the rub. I’m sure there are scientific explanations for why genocide is such a recurring feature of human history, from the biology and neuroscience of violence to those of inter-group relations. While such scientific understanding of genocide may be useful, it does not give us the complete picture. Why not?

Because, according to Sellars, the manifest, but not the scientific, image deals with things like reasons and values. This is not a call to reject science. On the contrary. Sellars was quite clear that whenever the scientific and the manifest images of the world are in conflict (as in the “the Sun rises” vs “the Earth rotates” case), then the sensible thing is for us to yield to science. But science simply isn’t in the business of doing a number of other things for which we have developed different tools: philosophy, literature, history, and so forth. These tools are complementary with, not opposed to, scientific ones. Ideally, says Sellars, we want to develop a conceptual stereoscopic vision, whereby we are capable of integrating the manifest and scientific images. Indeed, according to Sellars — and I wholeheartedly agree — developing and constantly updating such a vision is a major task of philosophy, and our discipline is uniquely positioned to carry the task out because of both its methods (empirically-informed critical discourse) and its scope (very, very broad).

In a sense, what emerges from Wittgenstein, but even more so from Sellars’ thought is that there are a number of things about which we can talk at different levels of analysis, and which level(s) make the most sense depends on what it is that we wish to accomplish. While in theory a full integration of all levels may be possible, in practice it is often not desirable, because it doesn’t help with the particular language game we happen to be playing.

Let me then come back to “free will” (or volition), and use my discussion of Stoic philosophy as it compares to the famous experiments by Benjamin Libet to present a specific example of what I have outlined above, attempting to convince you of why I think science is largely irrelevant to moral discourse.

The Stoics thought that we have a faculty of judgment, which they called the hêgemonikon. It was a major goal of Stoic training to improve the way we use it, i.e., to arrive at better and better judgments about whatever life throws at us. In the post at my other blog I suggest that, roughly speaking, the hêgemonikon corresponds to the frontal lobes of the human brain, which are far more developed than in most other mammals, and are known to be associated, in fact, with our capacity for judgment, and in particular with our ability to “veto,” so to speak, certain actions that might otherwise come naturally to us (as in: “There is a strange noise in my house in the middle of the night! Someone is about to kill me!! I need to run the hell out of here!!! … Oh, wait, it’s the cat. Back to sleep.”).

The Stoics themselves were spectacularly wrong about the likely location of the hêgemonikon: they thought it resided in the heart. But pretty much everything else they said about its functioning and how we can improve it was right on the money, as shown by the fact that 23 centuries later Stoic “psychology” still informs a number of evidence-based psychotherapies, such as rational emotive behavior therapy and cognitive behavioral therapy.

How is this possible? Because most of what the Stoics thought about the hêgemonikon was part of the manifest image, and was as useful then as it is now for the simple reason that people still deal with the same basic issues: unhealthy emotions like anger and fear, and the search for better ways to relate to others and improve the human polis. What the Stoics got wrong, not at all surprisingly, is the bit that belongs to the scientific image: as it turns out, our faculty of judgment depends on a particular part of the brain, not the heart. Crucially, though, this has had no effect whatsoever on Stoic philosophy or its usefulness. A modern Stoic simply updates that bit of information, thanks the scientist, and goes back to her practice.

Nowadays, whenever the topic of human volition comes up someone is bound to cite the famous experiments carried out by Benjamin Libet, beginning in 1983. Briefly, he asked subjects to follow the movements of a dot on the screen of an oscilloscope. The dot moved like the hands of a clock, but faster. Libet told his subjects to move a finger at a moment of their choice during the experiment, noting the position of the dot when they became aware of their decision to act. The experiment showed that the decision to move the finger entered conscious awareness about 200 milliseconds before the actual movement. But, stunningly, there was a rise in the so-called “readiness potential,” which is thought to be associated with the preparation for action, about 550 milliseconds before movement. So the subjects appeared to get ready to move the finger a full 350 milliseconds before they became conscious of their decision to do so. (Indeed, in later experiments, the readiness potential has been shown to build up even as long as 1.5 seconds before movement.)

Taken at face value, Libet’s results seem to show that we decide our actions unconsciously, and that what we call consciousness is simply a (late) awareness of a decision that has been made. There are several well known criticisms of this conclusion, beginning with the obvious one, that the experimental conditions have precious little to do with the recursive, complex behavior that we normally label “conscious decision making,” and which is understood as a continuous feedback loop between what Daniel Kahneman calls System 1 (fast, subconscious) and System 2 (slow, deliberate) processing. Moreover, recent research has both amply confirmed, and yet significantly re-interpreted, Libet’s original findings.

But a good reason to think that Libet’s experiments do not mean what so many enthusiasts of the “free will is an illusion” bandwagon seem to think they mean is Libet’s own commentary:

“The finding that the volitional process is initiated unconsciously leads to the question: is there then any role for conscious will in the performance of a voluntary act? The conscious will does appear 150 msec before the motor act, even though it follows the onset of the cerebral action by at least 400 msec. That allows it, potentially, to affect or control the final outcome of the volitional process. An interval of 50 msec before a muscle is activated is the time for the primary motor cortex to activate the spinal motor nerve cells, and through them, the muscles. During this final 50 msec, the act goes to completion with no possibility of its being stopped by the rest of the cerebral cortex. The conscious will could decide to allow the volitional process to go to completion, resulting in the motor act itself. Or, the conscious will could block or ‘veto’ the process, so that no motor act occurs.” (B. Libet, Mind Time: The Temporal Factor in Consciousness, 2004, p. 137)
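Putting the numbers from the original experiment and from Libet’s commentary on a single timeline makes the implied arithmetic explicit. This is an illustrative sketch of my own, not part of Libet’s text; the millisecond values are simply the ones quoted above:

```python
# All times are in milliseconds before the actual movement (t = 0 is the act).
readiness_potential = 550  # unconscious preparation begins (original experiment)
conscious_awareness = 200  # reported awareness of the decision (original experiment)
conscious_will = 150       # appearance of conscious will, per Libet's commentary
point_of_no_return = 50    # final motor-cortex-to-muscle interval; no veto possible

# Unconscious preparation precedes conscious awareness by:
prep_gap = readiness_potential - conscious_awareness  # 350 ms

# Window in which the conscious will can still "veto" the act:
veto_window = conscious_will - point_of_no_return  # 100 ms

print(prep_gap, veto_window)
```

So on Libet’s own numbers there remains a window of roughly 100 msec in which the conscious veto he describes can operate.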

[Once more, to preempt distracting discussions: I do not think we should talk about “free will,” which is a hopelessly metaphysically confused concept. We are talking about what psychologists themselves call volition, i.e., the ability of human beings to make complex decisions informed by conscious thought. Hopefully no one will deny that we do have such ability.]

Interestingly, studies have found very good experimental evidence for the veto power Libet is talking about. But that is “interesting” from within the language game of neuroscience. It makes no difference at all in terms of the language game in which the Stoics — and most of us — are engaged, that of improving ourselves as individuals and of making society a better place for everyone to live.

That is why, as a scientist, I will keep following with interest the undoubtedly fascinating future developments of cognitive science and neuroscience. But it is also why, as a philosopher and human being, I’m not very concerned with how those findings will impact my day to day life in the realm of ethics. As the Stoic philosopher Epictetus aptly put it:

“You are not flesh or hair but volition; if you keep that beautiful, then you will be beautiful.” (Discourses III.1.40)

Know thyself: still excellent advice, after all these years

“gnothi seauton,” know thyself

I have been at Delphi twice already, and I plan on going back again. It is a truly magical place. No, I don’t believe in “magic,” I’m talking about real magic, the sense of awe that strikes you when you arrive there. Despite the tourist shops, the bed and breakfasts, and the restaurants, you cannot avoid being struck by the sheer beauty of the place: a green mountainous peak overlooking a deep valley, from where you can see the Aegean Sea in the distance. No wonder the ancients thought it a place privileged by the gods, as testified today by the beautiful ruins of the temples of Apollo and Athena.

It is in Delphi, of course, that the most famous Oracle of the ancient world resided. Still today you can see the omphalos (i.e., navel), the stone that allowed direct communication between the priestess and the gods. Modern science has suggested that the location is characterized by significant underground quantities of ethylene or methane, which may cause hallucinations in people exposed to them. So far, however, this is speculation, and not really germane to the psychological power of the Oracle. The advice given by the priestess of Apollo, regardless of its natural trigger, was often sound, if not necessarily amenable to an immediate interpretation.

One of my favorite stories is that of Themistocles, the Athenian general who was told that Athens would successfully defend itself from the powerful army of the Persian king Xerxes by building a wall of wood (“Though all else shall be taken, Zeus, the all seeing, grants that the wooden wall only shall not fail”). The notion, of course, is ridiculous on its face. Surely the mighty Persians would not be stopped in their tracks by mere wood. But interpret the advice more creatively, as Themistocles did, and you realize that the wood in question was that of the ships forming the formidable Athenian navy, which did, in fact, annihilate the opposing fleet at the battle of Salamis.

Temple of Athena at Delphi (Photo by the Author)

Delphi was also famous for a list of “commandments” that were allegedly assembled from the wisdom of the Seven Sages, a legendary group of philosophers, statesmen, and law-givers from the early history of Greece. Perhaps the most famous of such commandments was “know thyself,” which has since inspired countless philosophers, most famously informing Socrates’ entire career as a gadfly to the good people of Athens (who repaid him for his trouble, as we know, by putting him to death by hemlock).

Now an article published in Aeon magazine by Bence Nanay (a professor of philosophy at the University of Antwerp, Belgium) tells us not only that “know thyself” is “silly” advice, but that it’s actively dangerous. While Nanay has a point, I will argue that it is his own article that is, in fact, dangerous.

Nanay tells us that the Delphic injunction is based on an untenable picture of the self, and of how we make decisions — though I wonder how he knows which theory of mind and psychological agency was endorsed by whoever chiseled the famous phrase on the entrance to the temple of Apollo.

He invites us to consider a simple situation: “You go to the local cafe and order an espresso. Why? Just a momentary whim? Trying something new? Maybe you know that the owner is Italian and she would judge you if you ordered a cappuccino after 11am? Or are you just an espresso kind of person? I suspect that the last of these options best reflects your choices. You do much of what you do because you think it meshes with the kind of person you think you are. You order eggs Benedict because you’re an eggs Benedict kind of person. It’s part of who you are. And this goes for many of our daily choices.”

The notion is that we have somewhat stable ideas about who we are, which is practically useful, since it saves us a lot of time whenever we have to make decisions. Except if you go to Starbucks, because they have far too many choices. Then again, no self respecting Italian would go to Starbucks. Or order a cappuccino after 11am. (See what I did there? I have an image of myself as a self respecting Italian, hence my choices about where to get my coffee and when it is proper to order a cappuccino. Also, no Parmesan cheese on seafood pasta, please.)

But of course, as Nanay reminds his readers, we also change, all the time. On occasion these changes are sudden and dramatic, and therefore very noticeable. Many people feel and act differently after having had a child, for instance. Or having experienced a trauma, such as a diagnosis of cancer. Many changes, though, are subtle and slow, yet cumulative over time. It is this second kind of change that creates the major problem for the Delphic injunction, apparently: “The problem is this: if we change while our self-image remains the same, then there will be a deep abyss between who we are and who we think we are. And this leads to conflict.”

Not only that. We apparently suffer from what psychologists call the “end of history illusion,” the idea that, right now, we are final, finished products. This, and not our selves of five, ten, or twenty years ago, is who we really are, and who we will keep being until our demise. The end of history illusion is, of course, nonsense. We are never finished, as the only constant throughout our life is precisely that things, including ourselves, change. You can see why Nanay is worried.

The problem concerns much more than your choices of morning java: “Maybe you used to genuinely enjoy doing philosophy, but you no longer do. But as being a philosopher is such a stable feature of your self-image, you keep doing it. There is a huge difference between what you like and what you do. What you do is dictated not by what you like, but by what kind of person you think you are.”

Theater and temple of Apollo at Delphi (Photo by the Author)

In an interesting twist, Nanay even manages to blame our addiction to social media on this alleged incongruence between who we are and who we think we are. That incongruence not only wastes a lot of our time and efforts (because, robotically, we keep doing things we no longer enjoy or think important), it also generates a fair degree of cognitive dissonance between reality and our image of reality. And cognitive dissonance, again the psychologists helpfully remind us, is emotionally costly. “Hiding a gaping contradiction between what we like and what we do takes significant mental effort and this leaves little energy to do anything else. And if you have little mental energy left, it is so much more difficult to switch off the TV or to resist spending half an hour looking at Facebook or Instagram.” Now you tell me!

Nanay concludes that “If we take the importance of change in our lives seriously, [following the Oracle] just isn’t an option. You might be able to know what you think of yourself in this moment. But what you think of yourself is very different from who you are and what you actually like. And in a couple of days or weeks, all of this might change anyway.” He then closes with a pseudo-profound piece of poetry from André Gide, who wrote in Autumn Leaves (1950): “A caterpillar who seeks to know himself would never become a butterfly.”

Right. Then again, caterpillars are too stupid to philosophize about themselves, not to mention that they are profoundly ignorant of their own biology. And does anyone really believe that, except (maybe) for traumatic experiences, we can change a lot in mere days or weeks?

I hope it is clear what the central flaw in Nanay’s argument is: he is assuming an essentialist view of the self, the self conceived as the “true,” unchanging part of who we are, which people are supposed to “discover” in order to live authentic lives. I’m sure some Ancient Greeks did hold to a similar notion (Plato comes to mind), though they were usually far too good observers of human psychology to fall into that trap. It is not at all clear whether whoever came up with the Delphic injunction subscribed to such an untenable theory of the self. What is abundantly clear is that “know thyself” is very good advice regardless, indeed even more so if our selves are dynamic bundles of perceptions, sensations, desires, and deliberations, to paraphrase and build on David Hume.

Let’s consider the more serious of Nanay’s examples, that of the philosopher who doesn’t realize that he doesn’t believe in philosophizing anymore. I don’t know whether that example was autobiographical, but I can certainly counter it with an autobiographical anecdote of my own. Ever since I can remember I wanted to be a scientist, a dream that eventually came true when I was appointed assistant professor of botany and evolutionary biology at the University of Tennessee in Knoxville, back in 1995.

I had a reasonably successful career for several years in my chosen field of specialization, gene-environment interactions, rising through the ranks of associate and then full professor with tenure. My self image had been one of a scientist since I was five or six years old, and it had served me well until my late thirties and early forties.

Then a midlife crisis ensued, partly precisely because my reflections about myself began to alert me to some sort of growing gap between my mental image of me and how I was feeling while doing what I was doing. I realized that I was less and less interested in laboratory and field research, and more and more in theoretical and conceptual issues. And the step from the latter to philosophy of science wasn’t very big. Partly because of such conscious reflections (the “know thyself” part), and partly because of serendipitous events, I was able to enroll as a graduate student in philosophy, publish a book and several papers in the field, and eventually switch career and become a full time philosopher.

That’s where I am now, though other adjustments have occurred in the meantime, like my increased interest in public philosophy, and my novel interest in Stoicism. These changes, too, were made actionable by the fact that I have a habit of reflecting about my feelings and experiences, trying as much as possible to keep adjusting what I actually do and what I want to do, in a never ending exercise of reflective equilibrium.

The bottom line is that my life, I can confidently assert, has been made better and better by trying to follow the Delphic commandment. I suspect the same is true of other people, who can benefit from a monitoring of the evolving “self,” coupled with the occasional redirection and adjustment of what they do or pursue. Contra Nanay, it is this process of self knowledge that reduces, or even preempts, the cognitive dissonance he refers to. And, apparently, it will also save you a lot of wasted time on Facebook and Instagram.

What is truly dangerous is not to follow the not at all “silly” advice that has served Socrates and so many others since. You may end up misspending a good chunk of your life if you ignore it. And if you have the chance, go to Delphi. You’ll thank me for it.

Book Club: The Edge of Reason 8, scientific morality

Here we come to the eighth installment of my running discussion about Julian Baggini’s excellent book on the nature of rationality, The Edge of Reason: A Rational Skeptic in an Irrational World. In this chapter Julian takes on those people — like Sam Harris — who want to reduce moral philosophy to neuroscience or some similarly misguided enterprise. I must admit, there is such a convergence of thinking between Julian and myself on this that reading the chapter was like indulging in philosophical porn…

Right off the bat, Baggini summarizes what is wrong with the scientistic approach (did I mention I have a book on this topic coming out soon?): “Champions of the rational are often their own worst enemies, especially when they happen also to be scientists. … [they push] an excessively narrow understanding of what reason involves, which is essentially evidence-based empiricism, no more and no less. … [this is an] iniquitous intellectual land grab, in which all meaningful discourse is claimed for science and anything else is razed to the ground as useless.” I could stop here, really. But let’s continue. As I said, it was an Epicurean dip for me.

Julian quickly moves on to his favorite example of such malfeasance: Sam Harris’ The Moral Landscape (which I have reviewed, very unfavorably, for Skeptic magazine, even though Michael Shermer censored the final bit of my review, in which I suggested that anyone wanting to learn something about moral philosophy would do better to read Michael Sandel than Sam Harris).

Baggini explains that he is picking on Harris because “the chief value of The Moral Landscape is that it is one of the clearest articulations of the scientistic approach to ethics, which is often less brazenly expressed.” (For another brazen expression, see this discussion I had with the above mentioned Shermer.) Baggini actually interviewed Harris at his home in California, and based his commentary on such interview.

Harris told Julian that “We know that morality has something to do with human well-being and we know that human well-being must be arising from the physiology of the brain and therefore is constrained by whatever psychophysical laws are in fact true of the brain, and therefore we know it falls potentially within the framework of science.”

As Julian immediately points out, a lot hinges on exactly what one means by “something to do,” and “constrained.” Yes, of course morality has to do with human well-being (actually, more broadly, with the well-being of sentient creatures), and it is constrained by human biology and culture — no philosopher would argue otherwise. But that’s far short of what’s needed to establish a science of morality. Sure enough, Baggini immediately acknowledges that empirical evidence, and therefore science, is informative on a number of ethical issues. For instance, the question “how should I raise my children?” does require input from child developmental psychology, among others. But there are a number of ways to raise one’s children given the same understanding of developmental psychology. That is, the science — as always — underdetermines the philosophical options. That’s why values are not straightforwardly reducible to empirical facts, which in turn means that one cannot collapse moral philosophy into science.

Julian again: “It simply does not follow from the fact that some things are objectively bad [for human beings] from a scientific point of view that science can determine all that is right or wrong. Take, for example, the old dispute between Mill and Bentham as to whether the pleasure of playing a simple game like pushpin has as much value as the pleasure derived from playing Chopin. Science cannot resolve this dispute.” And before you suggest it, no, it would be ridiculous to try to settle the matter by measuring the intensity of the activity of the pleasure centers of the brain: if you go that way (which actually Harris does, in his book!) you will have to conclude that the most moral thing to do is to hook everyone up to a drug delivering machine for their entire lives. I hope I don’t have to explain to you why this isn’t the moral thing to do.

Baggini notes that Harris concedes that nobody has yet proposed a way to read morality straight off, say, neuroscans. But Harris then engages in a significant amount of hand waving to argue that not having an answer yet doesn’t mean there is no answer in principle (while at the same time not even giving a hint of what this “in principle” route would look like). Julian’s retort is that “well-being” is not a biologically meaningful category (as a biologist, I wholeheartedly agree), and that there are plenty of instances in which people choose pain and suffering because they think it is the moral thing to do: “The idea that brain scans could reveal to us what form of life is morally better is absurd because brain scans are value-neutral.”

Harris, in the course of the interview, says: “What does it mean to say it’s really true that something is wrong? If you push there, you either have to come down to some truth that falls within the purview of science — that there’s something about our world, human nature or the prospects of human happiness that admits of truth claims — or you’re just left with preferences: wrong just because we don’t like it or a majority of people don’t like it.”

But Julian immediately objects that this is a false dichotomy, that moreover misunderstands the nature of both reason and ethics: “Outlooks, values and beliefs can be more or less reasonable, more or less objective.” (See this old post of mine suggesting as much.)

Moreover, Harris did not invent anything new. Just consider this bit from John Stuart Mill, back in 1872: “The backward state of the Moral Sciences can only be remedied by applying to them the methods of Physical Science, duly extended and generalized.” Mill’s project, however, immediately failed because of his introduction of the distinction between “high” and “low” pleasures, a qualitative dichotomy that simply cannot be backed up by any “physical science,” and yet is the only thing that saves post-Bentham utilitarianism from descending into a search for the minimum common denominator that makes everyone “happy” (which would be the above mentioned drug hook-up).

After taking care of Harris, Julian then moves on to the opposite mistake, in a sense, made this time by scientistic philosophers like Alex Rosenberg, author of The Atheist’s Guide to Reality: Enjoying Life Without Illusions. (I reviewed that one too, again, not positively.)

The connection between Harris and Rosenberg is explained very clearly by Baggini: “Harris is not necessarily representative of mainstream scientific thinking about morality. It is telling, however, that the more common alternative view is equally simplistic and extreme. This is the view that science debunks ethics. Science does not determine human values, it reveals them to be a kind of fiction.”

Here is an example of Rosenberg’s approach: “(i) What is the difference between right and wrong, good and bad? There is no moral difference between them. (ii) Why should I be moral? Because it makes you feel better than being immoral. (iii) Is abortion, euthanasia, suicide, paying taxes, foreign aid or anything else you don’t like forbidden, permissible, or sometimes obligatory? Anything goes.”

Julian finds it hard to believe that people like Rosenberg are serious about this, rather than just playing a (sick, I might add) intellectual game. Here is why: “it is interesting that [Rosenberg] does not add to his list child sexual abuse, rape, torture of the innocents and so on. To say ‘anything goes’ after a list like that would be extremely hard to take seriously.” Indeed.

The problem, concludes Baggini in this section of the chapter, is this: “The mistake is to believe that the methods of science have a monopoly on the practice of reason. From this it follows that morality must either be taken under the wing of science or cast out as irrational.” This mistake, of course, runs contrary to Baggini’s own careful analysis of what reason is, which we have explored in detail in the past several posts.

None of the above, however, means that science is irrelevant to moral questions. One of the most obvious examples is that of abortion — interestingly, one of those I also bring forth in the context of these discussions, and pretty much along the lines sketched by Julian in this chapter.

Let’s say we arrive at a position that says that abortion is permissible up until the moment in which the fetus begins to feel pain, and after that only if the life of the mother is in danger. (This is for the sake of discussion, not necessarily my or Baggini’s position, so don’t get worked up about it.) Well, then it is up to science — and in particular neuroscience and developmental biology — to give us the best estimate of when that is actually the case. But arriving at that specific criterion, rather than other possible ones, is a matter of philosophical dialogue, not (just) empirical evidence.

Julian also says, again, very similarly to what I’ve been writing for a while now, that another scientific input into the question of morality comes in the area of understanding the origin of the human moral sense. Here it is comparative anthropology, evolutionary biology, and primatology that play the crucial role.

Then there is the contribution of neuroscience to our understanding of how the brain arrives at moral decisions. Interesting, scientifically, but again not at all the same thing as a science of morality. Why? Because “people all over the world have the same basic brain circuitry and yet moral norms differ enormously.”

As an example, Julian compares how the Inuits and the Polynesians treat deception on the part of a group member: it is a capital offense in the first case, but only gets you a slap on the wrist in the second case. Why? Because the living conditions of Inuits are such that deception can cost the lives of several group members, or even the survival of the entire group. Not so under the more benign environmental conditions enjoyed by the Polynesians. The brains are the same, and so is their deep evolutionary history. But the cultural conditions are dramatically divergent, because of their very different environments.

Evolutionary psychology too doesn’t really help settle moral questions. For one thing, because the fact that something is natural (rape, for instance, according to evopsychs Randy Thornhill and Craig Palmer) obviously doesn’t make it right (that would be an appeal to nature, an informal fallacy); and second because “to conclude that evolutionary psychology debunks ethics by showing that it is ‘nothing more than’ reciprocal altruism or enlightened self-interest” is an example of “the genetic fallacy: confusing an account of something’s origins with its justification.”

By the end of the chapter Julian arrives at the very same conclusion I have been defending for years, as astonishing as it is that it actually needs defense: “A scientifically informed ethics is to be welcomed, but a purely scientific ethics is an impossibility.”

Book Club: The Edge of Reason 7, rational morality

Let’s continue our discussion of Julian Baggini’s The Edge of Reason: A Rational Skeptic in an Irrational World, a book that attempts to rethink, and broaden, what counts as “reason.” Chapter 7 is about morality and its relationship with reason, broadly construed.

Julian’s discussion in this chapter is a bit of a simplification of a long and complex philosophical debate. But there is something to be said for cutting to the chase, as they say. He sets it up as a dichotomy between rationalists and sentimentalists: in one corner, Kant, who attempted to derive moral oughts from reason alone; in the other corner, Hume, who thought that at bottom morality was about emotions.

Baggini is immediately careful to disabuse people of the common, but mistaken, notion that sentimentalism is the same thing as emotivism: “Sentimentalism should not be equated with emotivism, the crudest version of the theory, which says that moral judgements are no more than expressions of approval or disgust.” He elaborates on this later in the chapter, but for now please keep it in mind, or you will completely misunderstand what follows.

Julian comes down on the side of Hume, and while I have some qualms with this, I think he got it pretty much right, especially his criticism of the Kantian position.

He begins there, using “Kantian” in the broadest possible terms, to indicate anyone who thinks that reason alone is sufficient to generate moral oughts. And he takes one of my favorite modern philosophers, John Searle, to task for his defense of a form of Kantianism.

Searle distinguishes “strong” from “weak” altruism. The latter encompasses situations in which people are naturally inclined to help others; the former is the result of rational analysis. Needless to say, the “weak” form, which is the Humean variety, is the one that will be left standing after Baggini gets through with this.

Julian gives us a preview of the sentimentalist position while he is gearing up for a discussion of Searle’s: “A sentimentalist might believe that it is wrong to cause unnecessary suffering. The ultimate basis of this judgement is not that to do so would be irrational, but simply a recognition, rooted in empathy, that suffering is a bad thing, to be avoided if possible. Having adopted this principle, the sentimentalist might on a given occasion see reasons not to act in a certain way, despite being inclined to do so, such as when confronted with a juicy steak from a factory-farmed animal.” This is contrasted with the rationalist, who ought not to eat the juicy steak, if he recognizes that doing so contributes to the suffering of animals, which he has judged to be contrary to reason in the first place.

Searle’s position is that a combination of empathetic moral sentiment and factual knowledge is insufficient as a reason to act, and that the latter can only be provided by disinterested reason moved solely by facts and logic (sans sentiments). As Baggini says, the Kantian position, then, gets off the ground by assuming a narrow conception of reason, one that excludes a priori any possibility of considering the sentimentalist position to be “reasonable.”

Julian reconstructs Searle’s argument for rationalism as dependent on three steps:

(i) The generality requirement: “to assert that something is the case is to assert that everyone in a similar situation should also assert that it is the case.”

(ii) My pain creates a need: “I am in pain so I need help. Accepting the generality requirement means that I therefore have to accept that if anyone else is in pain, their pain too creates a need. I cannot make a special case of myself.”

(iii) My need for help generates reasons for others to help me: “the generality requirement comes into play and so I have to accept that, to be rational, if my pain creates a reason for other people to help me, then the pain of others creates a reason for me to help them.”

Baggini has no problem with steps (i) and (ii), but thinks (iii) does not follow. Here is part (and I stress, part) of his argument:

“Let us accept that having a need for help is enough to establish that the need is a reason for others to help me. The problem with this is that at any given time innumerable such needs exist. Just in my neighborhood there are people who need help to cope with their grieving, do their shopping, get over their addictions, escape their loneliness, get money for medical treatment. Extend the circle wider and there are billions in need of clean water, good food, basic healthcare, education. On Searle’s view, I ought to accept that these are reasons for me to help them. That is reasonable enough. But that can’t mean those reasons are sufficient to place a moral obligation on me to help them. If it did, we would have the absurdity that at any one time, we would all be morally obliged to help other people meet any need they had.”

This, in a nutshell, is the same argument I have recently used against Effective Altruism over at my other blog, How to Be a Stoic. It is also the same reason I think we should not accept the infamous “repugnant conclusion,” which stems from utilitarianism for reasons very similar to those Baggini criticizes in rationalism (even though utilitarianism is obviously not Kantianism, the two approaches do share the rationalist assumption).

This, then, is Julian’s strong conclusion from his analysis of rationalism, one with which I feel compelled (by reason…) to agree: “the fact that there are rationally binding desire-independent reasons for altruistic action does not in any way place an obligation on me to act on those reasons. What Searle calls ‘strong altruism’ does not, it turns out, place any obligation on me, of any strength. Whichever way you look at it, the fact that there exists a reason to help someone is not sufficient to establish that someone ought to act on that reason. … Unless reason obliges us to behave morally, the Kantian project fails.”

Baggini then moves to an exploration and defense of the sentimentalist position, beginning, of course, with a discussion of the famous is/ought (facts/values) distinction, made explicit by Hume himself:

“In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.” (A Treatise of Human Nature, 1739, book 3, part 1, section 1)

Julian’s reasoning is clear-headed here, so I will simply transcribe a few key lines:

“[Hume warns that] if the premises of an argument contain only statements of fact, then the conclusion must also contain only statements of fact, and must not smuggle in assertions of value, such as ‘oughts.’ In ordinary discourse, we do this all the time. We say, people are starving, they will die unless we send them food, therefore we ought to send them food. The conclusion does not follow logically. In practice, this is usually because premises are not so much absent as unstated. … Philippa Foot argued that it is a kind of fact about the natural world that living things have needs and desires and that therefore it is a matter of fact that certain things are of value to them. Take the proposition: ‘children are born helpless and have to be taught to learn language and so on.’ That, she explains, ‘means already that children have to be looked after.’ Crucially, these reasons are ‘objective and have nothing to do with preferences: some people love children and some people hate them. That doesn’t make any difference.’ On the one hand, this is a pure statement of fact. But it would also appear to contain implicit within it a statement of value: it is better that a child is looked after than not. We could call such statements ‘normative facts,’ meaning they are facts that contain elements of value. … It sounds paradoxical to say that we have interest-neutral reasons to accept the objectivity of interests, but the seeming paradox quickly dissolves when you see that the interest-neutrality of reason only concerns the requirement that we do not allow our interests to cloud our judgement of what is the case. It does not prohibit the recognition of real interests in the world. Given that these interests can be emotional as well as biological, this means that emotions can sometimes number among the reasons of rational argument.”

Time to take a break and go back to why sentimentalism as proposed by Hume-Baggini is not at all the same thing as emotivism, the “crude,” as Julian puts it, proposition that morality boils down to (essentially arbitrary) gut feelings.

First, notice that Baggini is — once again — broadening our conception of reason, to include our natural emotional dispositions (which, of course, ultimately derive from our evolution as social primates). So, contra Hume, it’s not that “reason is, and ought only to be the slave of the passions” (A Treatise of Human Nature, book 2, part 3, section 2), but rather that the passions are part of our reasoning arsenal.

Second, not all emotional responses are created equal: if you don’t care at all about the welfare of others, you are a sociopath or psychopath, and we shall not take your “reasoning” into account; we may instead provide you with the necessary mental care, since you are not a functional human being. Some degree of pro-sociality is characteristic of normal, healthy members of the species Homo sapiens.

Third, both contra and per Hume (I know, I know), the passions can actually be trained by reason, i.e., the two interact developmentally, both within an individual and even in terms of shaping different human cultures. (If you are puzzled by this, read my essay on Hume’s biological-cultural developmental theory of human nature.)

Fourth, the above should also make clear why morality cannot be read off straight from factual knowledge (and hence, there cannot be “scientific” answers to moral questions, pace Sam Harris and Michael Shermer). The facts — both straightforwardly empirical ones and those that Baggini, following Foot, terms “normative” ones — underdetermine moral action, meaning that the same set of facts of both kinds does not pinpoint one and only one possible moral option. Which means one needs philosophy, i.e., the ability to reason about those facts from within one general framework or another, in order to arrive at specific conclusions.

(Incidentally, if anyone is curious about how all of the above fits with my practice of Stoicism, the answer is very well indeed, thank you. The Stoics were the first to propose a developmental theory of morality, according to which we begin as small children with selfish desires about our own well-being; we then naturally expand our concerns to our caretakers and others who immediately surround us; and finally we begin to be able to use reason to further enlarge our circles of concern, shaping our moral character by way of reason and practice.)

Back to Baggini. In the fourth section of this chapter, he elaborates on the idea that the sentimentalist position does not mean that reason has nothing to contribute to morality. On the contrary, it is indispensable to it, once properly understood and broadly reformulated. He summarizes his version of sentimentalism in this fashion:

“If a creature has interests — being able to pursue projects and live a life which it finds meaningful, and/or can feel pain and pleasure, physical and psychic — then we have reasons to take those interests into account and not frustrate them without good reason, nor refrain from assisting them when it is easily in our power to do so.”

And here is a wonderful example of the above mentioned principle of underdetermination: “three people might agree with this and accept that we therefore have a duty to take a farm animal’s interests into account. But one might conclude we ought then not to eat it, another that we simply ought to rear it well, while the third might say as long as we don’t torture it, we’ve done nothing wrong.” The differences among the three people may not get settled, period. However, each of them may be able to present arguments — informed by both empirical and normative facts — for why his position is better than the other two. Sometimes that argument will succeed, at other times it won’t.

Julian is clear that one could very reasonably disagree with his definition of sentimentalism and with what follows from it. As he puts it, such “a dissenter would be heartless, not brainless.”

He adds that someone may feel the moral force of an argument, and yet not feel compelled to act (as in my case with vegetarianism: I recognize its moral force, but I tend to behave like a reducetarian, or at best a pescatarian). “But why should we expect or demand that the only good moral reasons are ones which are beyond all conceivable rational dispute? This is simply too high a demand.” After all, he points out, even the compulsion to accept that 2+2=4 (yes, yes, given certain axioms, you pedantic bastard!) is not absolute: “We can certainly imagine some people who just don’t feel the force of the argument at all. In the mathematical argument, this blindness might suggest a rare cognitive impairment that simply doesn’t allow them to follow logical steps. In the moral argument, the equivalent would be a cognitive impairment such as psychopathy, which makes people indifferent to the interests of others.”

Let me end, then, with an apt quote by American philosopher Thomas Scanlon (cited by Baggini): “To see something as good reason for acting in a certain way and being disposed to do it is not a matter of logic, but it is a matter of rationality.”

Can evolution explain ethics?

The latest conversation with my friend and colleague Dan Kaufman (he of The Electric Agora) was on what, exactly, science can tell us about morality, meaning not the trivially misguided notion that somehow ethics can be reduced to neuroscience, or evolutionary biology, or whatever, but rather the more nuanced question of whether and how science can inform philosophizing about ethics.

Continue reading

Ancient vs modern philosophy

Ancient vs Modern

Did the ancients get it right? Indeed, better than the moderns? No, this conversation between Dan Kaufman and me is not about mystical insights or the secret scientific knowledge of the people who built the pyramids. Rather, it’s about what, if anything, ancient philosophers understood about the human condition that was then lost by the philosophy that developed during and after the Scientific Revolution.

Continue reading

Philosophers who influenced us: David Hume & Arthur Danto

Recently Dan Kaufman and I have had another of our recurrent conversations, this time a second installment of an occasional series that we might call “philosophers who influenced us” (the previous one featured Bertrand Russell, on my part, and Gilbert Ryle for Dan).

This time I picked David Hume, the empiricist and skeptic who famously awoke Kant from his “dogmatic slumber,” and who — I think — is still not appreciated as much as he should be for his impact not just on subsequent philosophy (including epistemology, ethics and aesthetics), but on science as well. Dan’s pick was the philosopher of aesthetics and highly influential art critic Arthur Danto, who developed one of the most compelling recent theories of art.

Continue reading

Should we be fearing death?

Epicurus, National Roman Museum, photo by the author

Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not. (Epicurus, Letter to Menoeceus)

Death is one of the major issues in human life, to put it mildly. Because we are blessed and cursed with self-awareness, we know we are mortal, so one of our problems is how to deal with the prospect of our own demise. A lot of religious and philosophical thinking as well as, lately, scientific research, has gone into this. Seneca famously wrote that the point of philosophy is to learn how to die, since death is the ultimate test of who we are. And things don’t seem to have changed much in that department over the past two thousand years.

Continue reading