Here we come to the eighth installment of my running discussion about Julian Baggini’s excellent book on the nature of rationality, The Edge of Reason: A Rational Skeptic in an Irrational World. In this chapter Julian takes on those people — like Sam Harris — who want to reduce moral philosophy to neuroscience or some similarly misguided enterprise. I must admit, there is such a convergence of thinking between Julian and myself on this that reading the chapter was like indulging in philosophical porn…
Right off the bat, Baggini summarizes what is wrong with the scientistic approach (did I mention I have a book on this topic coming out soon?): “Champions of the rational are often their own worst enemies, especially when they happen also to be scientists. … [they push] an excessively narrow understanding of what reason involves, which is essentially evidence-based empiricism, no more and no less. … [this is an] iniquitous intellectual land grab, in which all meaningful discourse is claimed for science and anything else is razed to the ground as useless.” I could stop here, really. But let’s continue. As I said, it was an Epicurean dip for me.
Julian quickly moves on to his favorite example of such malfeasance: Sam Harris’ The Moral Landscape (which I have reviewed, very unfavorably, for Skeptic magazine, even though Michael Shermer censored the final bit of my review, in which I suggested that anyone who wanted to learn something about moral philosophy would do better to read Michael Sandel than Sam Harris).
Baggini explains that he is picking on Harris because “the chief value of The Moral Landscape is that it is one of the clearest articulations of the scientistic approach to ethics, which is often less brazenly expressed.” (For another brazen expression, see this discussion I had with the above-mentioned Shermer.) Baggini actually interviewed Harris at his home in California, and based his commentary on that interview.
Harris told Julian that “We know that morality has something to do with human well-being and we know that human well-being must be arising from the physiology of the brain and therefore is constrained by whatever psychophysical laws are in fact true of the brain, and therefore we know it falls potentially within the framework of science.”
As Julian immediately points out, a lot hinges on exactly what one means by “something to do,” and “constrained.” Yes, of course morality has to do with human well-being (actually, more broadly, with the well-being of sentient creatures), and it is constrained by human biology and culture — no philosopher would argue otherwise. But that’s far short of what’s needed to establish a science of morality. Sure enough, Baggini immediately acknowledges that empirical evidence, and therefore science, is informative on a number of ethical issues. For instance, the question “how should I raise my children?” does require input from child developmental psychology, among other fields. But there are a number of ways to raise one’s children given the same understanding of developmental psychology. That is, the science — as always — underdetermines the philosophical options. That’s why values are not straightforwardly reducible to empirical facts, which in turn means that one cannot collapse moral philosophy into science.
Julian again: “It simply does not follow from the fact that some things are objectively bad [for human beings] from a scientific point of view that science can determine all that is right or wrong. Take, for example, the old dispute between Mill and Bentham as to whether the pleasure of playing a simple game like pushpin has as much value as the pleasure derived from playing Chopin. Science cannot resolve this dispute.” And before you suggest it, no, it would be ridiculous to try to settle the matter by measuring the intensity of the activity of the pleasure centers of the brain: if you go that way (which Harris actually does, in his book!) you will have to conclude that the most moral thing to do is to hook everyone up to a drug-delivering machine for their entire lives. I hope I don’t have to explain to you why this isn’t the moral thing to do.
Baggini notes that Harris concedes that nobody has yet proposed a way to read morality straight off, say, neuroscans. But Harris then engages in a significant amount of hand-waving to argue that not having an answer yet doesn’t mean there is no answer in principle (while at the same time not even giving a hint of what this “in principle” route would look like). Julian’s retort is that “well-being” is not a biologically meaningful category (as a biologist, I wholeheartedly agree), and that there are plenty of instances in which people choose pain and suffering because they think it is the moral thing to do: “The idea that brain scans could reveal to us what form of life is morally better is absurd because brain scans are value-neutral.”
Harris, in the course of the interview, says: “What does it mean to say it’s really true that something is wrong? If you push there, you either have to come down to some truth that falls within the purview of science — that there’s something about our world, human nature or the prospects of human happiness that admits of truth claims — or you’re just left with preferences: wrong just because we don’t like it or a majority of people don’t like it.”
But Julian immediately objects that this is a false dichotomy, that moreover misunderstands the nature of both reason and ethics: “Outlooks, values and beliefs can be more or less reasonable, more or less objective.” (See this old post of mine suggesting as much.)
Moreover, Harris did not invent anything new. Just consider this bit from John Stuart Mill, back in 1872: “The backward state of the Moral Sciences can only be remedied by applying to them the methods of Physical Science, duly extended and generalized.” Mill’s project, however, immediately failed because of his introduction of the distinction between “high” and “low” pleasures, a qualitative dichotomy that simply cannot be backed up by any “physical science,” and yet is the only thing that saves post-Bentham utilitarianism from descending into a search for the lowest common denominator that makes everyone “happy” (which would be the above-mentioned drug hook-up).
After taking care of Harris, Julian then moves on to the opposite mistake, in a sense, made this time by scientistic philosophers like Alex Rosenberg, author of The Atheist’s Guide to Reality: Enjoying Life Without Illusions. (I reviewed that one too, again, not positively.)
The connection between Harris and Rosenberg is explained very clearly by Baggini: “Harris is not necessarily representative of mainstream scientific thinking about morality. It is telling, however, that the more common alternative view is equally simplistic and extreme. This is the view that science debunks ethics. Science does not determine human values, it reveals them to be a kind of fiction.”
Here is an example of Rosenberg’s approach: “(i) What is the difference between right and wrong, good and bad? There is no moral difference between them. (ii) Why should I be moral? Because it makes you feel better than being immoral. (iii) Is abortion, euthanasia, suicide, paying taxes, foreign aid or anything else you don’t like forbidden, permissible, or sometimes obligatory? Anything goes.”
Julian finds it hard to believe that people like Rosenberg are serious about this, rather than just playing a (sick, I might add) intellectual game. Here is why: “it is interesting that [Rosenberg] does not add to his list child sexual abuse, rape, torture of the innocents and so on. To say ‘anything goes’ after a list like that would be extremely hard to take seriously.” Indeed.
The problem, concludes Baggini in this section of the chapter, is this: “The mistake is to believe that the methods of science have a monopoly on the practice of reason. From this it follows that morality must either be taken under the wing of science or cast out as irrational.” This mistake, of course, runs contrary to Baggini’s own careful analysis of what reason is, which we have explored in detail in the past several posts.
None of the above, however, means that science is irrelevant to moral questions. One of the most obvious examples is that of abortion — interestingly, one of those I also bring forth in the context of these discussions, and pretty much along the lines sketched by Julian in this chapter.
Let’s say we arrive at a position that says that abortion is permissible up until the moment in which the fetus begins to feel pain, and after that only if the life of the mother is in danger. (This is for the sake of discussion, not necessarily my or Baggini’s position, so don’t get worked up about it.) Well, then it is up to science — and in particular neuroscience and developmental biology — to give us the best estimate of when that is actually the case. But arriving at that specific criterion, rather than other possible ones, is a matter of philosophical dialogue, not (just) empirical evidence.
Julian also says, again, very similarly to what I’ve been writing for a while now, that another scientific input into the question of morality comes in the area of understanding the origin of the human moral sense. Here it is comparative anthropology, evolutionary biology, and primatology that play the crucial role.
Then there is the contribution of neuroscience to our understanding of how the brain arrives at moral decisions. Interesting, scientifically, but again not at all the same thing as a science of morality. Why? Because “people all over the world have the same basic brain circuitry and yet moral norms differ enormously.”
As an example, Julian compares how the Inuit and the Polynesians treat deception on the part of a group member: it is a capital offense in the first case, but only gets you a slap on the wrist in the second. Why? Because the living conditions of the Inuit are such that deception can cost the lives of several group members, or even the survival of the entire group. Not so under the more benign environmental conditions enjoyed by the Polynesians. The brains are the same, and so is their deep evolutionary history. But the cultural conditions are dramatically divergent, because of their very different environments.
Evolutionary psychology too doesn’t really help settle moral questions. For one thing, because the fact that something is natural (rape, for instance, according to evopsychs Randy Thornhill and Craig Palmer) obviously doesn’t make it right (that would be an appeal to nature, an informal fallacy); and second, because “to conclude that evolutionary psychology debunks ethics by showing that it is ‘nothing more than’ reciprocal altruism or enlightened self-interest” is an example of “the genetic fallacy: confusing an account of something’s origins with its justification.”
By the end of the chapter Julian arrives at the very same conclusion I have been defending for years, as astonishing as it is that it actually needs defense: “A scientifically informed ethics is to be welcomed, but a purely scientific ethics is an impossibility.”