Category Archives: Ethics

No, science does not provide all the answers to the big questions

From time to time a famous scientist allows himself (in my experience it’s always a man) to write nonchalantly about something of which he demonstrably has only a superficial grasp: philosophy. The list of offenders is a long one, and it includes Lawrence Krauss, Neil deGrasse Tyson, and Stephen Hawking, among several others. (Fortunately, there are also exceptions, scientists who value a constructive intercourse with the humanities, like Sean Carroll.) The latest entry in this dubious pantheon is Peter Atkins, who recently published a sloppy essay in the otherwise excellent Aeon magazine entitled “Why it’s only science that can answer all the big questions.” Oh boy.


Atkins begins by telling us that there are two fundamental kinds of “big questions”:


“One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. … Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame. … The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. … These are all real big questions and, in my view, are open to scientific elucidation.”


This is, of course, not news at all. David Hume — one of my favorite philosophers — made essentially the same argument back in the 18th century, in his case rejecting what he saw as the waste of time associated with the Scholastic metaphysics that had prevailed throughout the Middle Ages:


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” (An Enquiry Concerning Human Understanding)


With all due respect to Hume, it’s a good thing people didn’t follow his advice, or we would have lost his very own Enquiry Concerning Human Understanding, since that book doesn’t contain any abstract reasoning concerning quantity or number, nor does it contain any experimental reasoning concerning matter of fact. And yet, it is — justly — considered to be one of the most important works of modern philosophy.


Atkins apparently realizes that he may come across as a bit too facile, since he acknowledges that he is defining the big questions precisely as those that science can answer, then turning around to “demonstrate” that science is the only discipline equipped to answer such questions. As he drily puts it when considering the obvious charge of circularity: “that might be so.” Which doesn’t stop him from proceeding as if it were not so.


Atkins tells us that science is getting ready to tackle what he considers the next three big questions: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious?


I have no doubt, as a scientist, that those are, indeed, scientific questions. I’m slightly more skeptical, as a philosopher, that science will actually be able to come up with answers. Fundamental physics, after more than a century of uninterrupted success, seems to have entered a period of navel gazing where speculation (admittedly mathematically informed speculation) is poised to replace empirical evidence. So we shall see if and when we’ll actually get a “theory of everything,” and whether that theory will in fact be able to tell us how the universe began from “nothing” (there is some doubt that it will).


Regarding the second question, the origin of life, theories have been piling up for several centuries now, and yet we don’t seem to be particularly close to a resolution. I’m certainly not arguing that a resolution isn’t possible, but it’s a very, very difficult problem, for the simple reason that a lot of the historical traces have been lost. No geological strata survive from the time when the primordial earth was home to the first living organisms, meaning that researchers on the origin of life are like detectives who already know the smoking gun isn’t going to be found. At best, they’ll have to rely on circumstantial evidence. Even if we were able to produce life artificially in the laboratory, that would not solve the problem, since it wouldn’t mean that life on our planet actually followed anything like that particular causal path.


As for consciousness, I remain convinced that the problem is indeed biological in nature, and that therefore developmental, evolutionary, and neuro-biology are the disciplines best positioned to find a solution. But at the moment nobody seems to have much of a clue, and common talk of the brain being a computer is finally beginning to be understood as the shaky and very likely misleading analogy that it is.


So, yes, if any of those three big questions are going to be answered, the answer will be a scientific one. But what about other questions that are arguably just as big (or, for most of us, even bigger)? Here Atkins shifts into full scientistic mode:


“I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics.”


Please notice a number of interesting and revealing things about this sentence. First, Atkins is making the time-honored argument from personal incredulity: “I see no reason why…” Which, of course, is not an argument at all, but an elementary logical fallacy. Second, he is seriously hedging his bets when he immediately qualifies his initial statement: “or at least illuminate…” Ah, well, but philosophers since the Pre-Socratics have understood that empirical evidence (i.e., “science”) can illuminate philosophical questions. That, however, is a far more modest claim than the notion that science can actually answer those questions. Third, Atkins can’t help but deliver a contemptuous dig at the “semi-sciences.” This attitude, common among physicists, reflects a naive understanding of the philosophy of science, according to which physics is the (self-professed) “queen” of the sciences, and every other field will achieve full scientific status only when it finally evolves into something that looks like physics. But an increasingly common view in philosophy is that there is actually a fundamental disunity of science: “science” is only a loosely defined family resemblance term, reflecting the fact that each science has its own goals, methods, and internal standards, and that there is no universal yardstick to which one can appeal in order to make comparative judgments of quality.


Going back to philosophy, the question of “how should I live?” admits of a large number of reasonable (and a lot of unreasonable!) answers, given the very same facts about the universe and human nature. It isn’t so much a question to be answered, as to be explored and clarified. Indeed, this is arguably what most fundamentally distinguishes science from philosophy.
One of my recent morning meditations is pertinent here. It begins with a quote by the Stoic philosopher Epictetus, who says in Discourses II, 11.13:


“Here you have philosophy’s starting point: we find that people cannot agree among themselves, and we go in search of the source of their disagreement.”


As I argue in the podcast episode, there are two broad sources of disagreement among human beings: factual and conceptual. If you and I disagree about, say, the number of moons orbiting around the planet Saturn, one of us is wrong, possibly both. There is a matter of fact about the issue, and we can find out the answer by asking an astronomer. Or more simply by doing a web search. If disagreement remains after that, then one of us is more than a bit obtuse.


The second kind of disagreement concerns how to think about facts, actions, and values. Here the facts are relevant, but insufficient to settle the dispute. Let’s say we have different opinions about the permissibility of assisted suicide. Certain empirical facts are going to be pertinent to the discussion, like information about how the procedure is going to be implemented, what safeguards there may be to avoid abuses, and so forth. But even if we agree on the facts, we may still disagree on the crucial issue: is assisted suicide morally permissible?


That’s the difference between science and philosophy, and why Epictetus says that philosophy begins with the search for why people disagree on things. Notoriously, philosophy does not necessarily settle such disagreements. The joke in philosophy departments is that our profession’s slogan is: “Philosophy: we have all the questions!” But what philosophy does, by means of careful analysis and reasoned argument, is to help us clarify why, exactly, we disagree. That is of huge help to people of good will who wish to honestly pursue discussions in search of better ways to conduct their lives. Atkins may want to take notice.


Is exposing medical pseudoscience unethical?

You would think the answer to the title question is obvious: no, of course it isn’t unethical to expose medical pseudoscience (or any kind of pseudoscience). How can anyone think so? And yet, there are some decent reasons to doubt the certainty of such a conclusion. At least that’s the take of a paper that so far as I know has not yet been published, but is available at the PhilSci (philosophy of science) Archive at the University of Pittsburgh.


The paper is entitled “Exposing medical pseudoscience may be unethical,” and is authored by Ehud Lamm. Here is Lamm’s argument, as he presents it:


(1) Many ill health situations, in particular those involving multi- and comorbidity do not have very effective treatment options.


(2) Alternative medicine, consisting of various pseudo-scientific enterprises, provides repose to some of those suffering from such conditions.


(3) A significant part (if not all) of this benefit can be attributed to the placebo effect.


(4) In current, “Western,” societies, placebos are most effective when they have the symbolic currency of science.


(5) Ergo, exposing these pseudo-sciences will hurt people for whom there are only limited health care alternatives.


This is a reasonably constructed argument, so if one wishes to reject it — as I do — one needs to attack one or more of the premises, or to uncover and then reject some additional, hidden premises. I will follow both strategies here, though I wish to thank Lamm at the outset for having forced me to think more carefully about something I had hitherto taken for granted.


I am going to agree right away with premise (3): most (indeed, very likely, all) of the benefits of alternative medical treatments are the result of placebo effects. But I have varying degrees of issues with premise (1), (2), and (4). And I also think Lamm is not explicitly listing a few other very relevant premises here, though he mentions one of them in passing in the paper, as we shall see, only to discard it far too quickly.


Premise (1): this trades on an ambiguity surrounding the word “treatment.” Yes, modern medicine does not provide effective treatment against certain conditions, for instance a number of types of terminal cancer, or several types of dementia. But alternative medicine does not provide treatments either, according to Lamm’s own premise (3), only palliatives. Which means that the proper comparison here ought to be between standard palliative care and its “alternative” counterpart. Palliative care works on the basis of a combination of actual biomedical effects (e.g., painkillers) and placebo. Other things being equal, then, it is more effective for a doctor to use her prestige (premise 4) to push tested palliative options rather than pseudoscientific ones.


Premise (2): for the reasons just seen, even if alternative medicine does provide some respite to patients by way of a placebo effect, standard palliative care is in any case preferable, because it also works by placebo, but additionally actually delivers (at least in some cases) a real biomedical effect. Again, remember that we are talking about alleviating pain and similar things, not actual cures, which are not achievable via placebo, and Lamm agrees that placebo is pretty much the only mechanism through which pseudo-medicine “works.”


Premise (4): the symbolic currency of science in the Western world seems much less real than Lamm assumes. Witness the widespread antivax movement, or — outside of medicine — so-called “skepticism” of climate change, or of evolutionary theory. If anything, science is suffering a credibility crisis of late, and inducing medical doctors to deploy pseudoscience is hardly going to improve such credibility.


If the reader agrees with even one of my criticisms of Lamm’s premises, that is enough to reject the argument. But there is more.


For instance, Lamm at one point says: “I am putting to one side deontological and consequentialist arguments against the use of placebos in general, and assume that in some situations relying on a placebo effect is legitimate.” That’s a bit too quick, and a rather big thing to put aside (and the bit after the comma somewhat begs the question)! A deontologist, for instance, might argue that it is a violation of the categorical imperative for a doctor to systematically lie to his patients, because that violates the implicit trust between the two (the very same trust on which premise 4 is built, incidentally), and because the doctors themselves would probably rather not be lied to when it is their turn to be patients.


On consequentialist grounds, one could argue that there are long-term negative societal consequences engendered by lying to patients and by pushing pseudoscientific notions. Because, again, patients might eventually lose confidence in their doctors and begin to doubt any sort of medical advice; also because people will be more likely to embrace pseudoscience under other circumstances, for instance when there are, in fact, perfectly good evidence-based treatment options available; and moreover because we would be condoning a multibillion dollar industry based on what is essentially a fraud.


Furthermore, there is a third big ethical framework out there: virtue ethics. It is hardly a good character trait to engage in systematic deception of others, even for their own good (a premise, this last one, that I have already rejected). Virtue ethics does not follow universal rules, so lying can be acceptable under certain circumstances. But the focus is on the character of the moral agent, and repeated lying is going to be deleterious to that character, since character is shaped precisely by repeating the same actions over and over, or by systematically entertaining and agreeing with the same thoughts.


Lamm sees another concern with exposing pseudo-medicine: “the movement toward Evidence Based Medicine (EBM), to the extent that it is successful, may lead people to be less open to treatments and assistance that are not backed by science, such as talk therapies, meeting with alternative medicine practitioners, support from informal social networks, help from clergy, and so on.”


This is an interesting point, but the way it is articulated it carelessly lumps together a number of things that should be considered separately. For instance, homeopathy is clearly and definitely a pseudoscience, so it should be counted as a success if people do not waste their money (and hopes) on very expensive sugar pills. Talk therapies, however, are not necessarily pseudoscientific. We have good evidence that some forms of psychotherapy work (e.g., cognitive behavioral therapy), and there are distinctions to be drawn about what “working” means here. If a patient simply needs to talk to someone, but is under no illusion that this will actually cure him, I don’t see what the problem is, or why such talk therapy should be counted as pseudo-medicine in the first place. Perhaps it won’t work better than getting a drink with your friend, but if chatting is the only thing one needs in order to feel better, it “works.” Much hinges, of course, on what the precise claim of the therapist is going to be. The very same considerations apply to the use of informal social networks, or to help from the clergy. These things are simply not in the same category as homeopathy and other kinds of pseudo-medicine.


And there is more to consider, like the general ideas that truth and honesty are intrinsically valuable, and should be compromised only under very unusual circumstances. Certainly both a deontologist and a virtue ethicist would reason this way, though a consequentialist might disagree (there goes yet another reason to reject consequentialism, in my view).


Lamm does consider some objections to his argument in the second part of the paper, and rejects them with a number of counterarguments. I have already covered part of what he says there in the previous points, but let me add a few pertinent comments.


He writes: “we should strive for alternative channels of social support to be available and determine whether non-science backed alternatives can provide patients with social and economic benefits that they need, such as health insurance coverage, help with the education system, and so on.” Well yes, as acknowledged before, but again this trades on an ambiguity of the term “support.” There is no problem in supporting people via talk therapy, social networks, religious counseling, and so forth. But these are not types of alternative medicine, and so long as the people providing the support are not lying, then there is no ethical issue. (Of course if you are an atheist you will think that all priests are lying by definition, but this is not the case from the point of view of a believer who has already adopted that particular religion’s framework. Admittedly though, that’s a tough case, best explored on its own.)


Lamm also maintains that some pseudoscientific “treatments” may be cheap and effective (again, via placebo) and therefore preferable whenever there are no established medical options for the condition. But we have seen above that at a very minimum modern medicine can provide palliatives, and let’s remember that many alternative medical practices are anything but cheap.


The author acknowledges a big potential downside of his approach: “if the pseudoscience providing the placebo effect causes patients to ignore science backed treatment we have to consider the tradeoff to decide whether belief in the efficacy of pseudoscience is beneficial or harmful.” Indeed, and this may often be the case. Not to mention that so long as the patient is also seeking regular medical treatment then, again, there is no additional benefit of the pseudo-medical one, which will cost money not just to the patient, but also, increasingly (and unfortunately) to the health insurance or government-backed apparatus that will pay for the so-called treatment.


Finally, Lamm argues: “It may be mistakenly suggested that the argument presented here is a justification for fraud (assuming the pseudoscience in question involves fraud). However, if anything, the argument discussed here only applies to exposing fraud, which in some cases may do more harm than good.” I don’t see how this would work: if a medical practitioner knows that a given remedy is pseudoscientific and recommends it nonetheless, he is participating in a fraud, even in cases in which he is not directly pocketing money for it (someone else is, and besides, he is being paid to take care of the patient). So refraining from exposing fraud, it seems to me, amounts to condoning and abetting fraud no matter how one puts it. This may be a case of philosophical distinction without a difference.


Again, I recognize that Lamm’s paper is important because too often people like myself, who have pursued a career of exposing pseudoscience and defending science, make leaps of reasoning from the factual to the ethical, automatically assuming certain value judgments without further argument. But it seems to me that, once the arguments have been examined, there is still a strong preponderance of reasons to expose pseudo-medicine whenever possible.

Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on par with scientific truths, despite much nonsense that has been written about it in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths don’t follow either a correspondence or a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:
“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do) then you are left with the task of explaining how come so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his University of Canberra colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call “truth-apt” (i.e., it looks like it’s referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seem to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” One theologian, Alvin Plantinga, actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser’s paper. They consider evolutionary error theory within a broader framework: the continuum between “folk” (i.e., everyday) and scientific understanding of things. Let’s clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn’t doing any such thing, and in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn’t matter whether it is the sun that rises or sets, or the earth that rotates around its axis.
This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and it is morally wrong if it doesn’t.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to precisely articulate and usually expand what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those notions as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong things, and of reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the third of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.
This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically: for instance, that individuals flourish if they are accorded certain rights, like the right to health care or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, so reasoning built on them is valid in some cases and invalid in others.


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot correctly, in my mind, argued, then the structure of that reasoning is a matter of logic (coherence) while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical, imperatives. They wish for universal, mind-independent moral truths. Too bad: there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It’s not going to happen, because it isn’t a workable concept of morality for human beings. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults, like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funnily enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval within my tribe). If I voice the opinion that while the Israelis are guilty of unlawfully occupying Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobe pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless someone has actually experienced something (be it colonialism, gender discrimination, racism, and so forth) one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as, “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self‐interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, we may as well dismiss reason altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, or vaccination‐deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US), and certain increasingly vocal groups of the left, have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to repeat the Marxist mistake of reducing everything to economic struggle. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it is important to try, as huge amounts of financial resources and time are wasted as a result of logophobia. Occasionally, lives are lost as a direct outcome, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti‐medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or “fascistic.”

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavorable odds of defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And beware of becoming one yourself…)

The morality-as-cooperation hypothesis and the link between evolution and moral philosophy

“Morality” comes from the Latin moralis, which was Cicero’s translation for the Greek ethos. The Greek word is related to our idea of character, the Latin one has to do with habits and customs. In an important sense, then, morality (or ethics, used here interchangeably) is the study of how to conduct your life, and particularly how to constructively deal with other people.

Recently, a group of researchers headed by Oliver Scott Curry has put out a paper (as yet unpublished) entitled “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” which is both interesting and more than a bit irritating. (Here is the site with the full version, and here is a commentary by the lead author, put out by the Evolution Institute.) It is interesting because it provides tantalizing empirical evidence to bolster the case, made by several researchers for a long time now, that the bio-cultural origins of a moral sense lie in the need for a species like ours to foster pro-social behavior, or as Curry et al. put it, “cooperation.” It is irritating because it fails to make the crucial conceptual distinction between the origins of morality and its current function, as well as to recognize the equally important difference between descriptive and prescriptive approaches to ethics. I’ll do my best to highlight both the positives and the negatives in what follows.

First off, a recap of the study as presented by the authors themselves:

“What is morality? And to what extent does it vary around the world? The theory of ‘morality-as-cooperation’ argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and predicts that specific forms of cooperative behaviour — including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession — will be considered morally good wherever they arise, in all cultures. In order to test these predictions, we investigate the moral valence of these seven cooperative behaviours in the ethnographic records of 60 societies. We find that the moral valence of these behaviours is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviours are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.”

Curry’s commentary begins with more than a whiff of scientism: “What is morality? And are there any universal moral values? Scholars have debated these questions for millennia. But now, thanks to science, we have the answers.” Ah! Yes, thanks to science we have the answers! Except that those two questions are quite distinct, of course. The nature of morality has been debated by Socrates (in the as yet unsurpassed Euthyphro) and is of course the topic of the entire field of meta-ethics. Whether there are universal moral values is a rather ambiguous question: do we mean whether there ought to be (prescriptive) or whether there merely happen to be (descriptive)? Philosophers tend to be concerned with the first sense (e.g., Kant), and are more than happy to leave the second one to anthropologists (it is, after all, an empirical question!).

Curry suggests that addressing problems of social behavior is something that has kept natural selection busy, so to speak, for millions of years during the evolution that led to the human lineage, and that moreover, once cultural evolution got started people have added a panoply of norms, rules, and institutions to deal with such problems. So far so good.

The approach used by Curry and his collaborators revolves around seven principles imported from evolutionary biology: “Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains: why we engage in costly displays of prowess such as bravery and generosity; why we defer to our superiors; why we divide disputed resources fairly; and why we recognize prior possession.”

Setting aside a few caveats (like the fact that not all societies actually have a taboo against incest), this theoretical apparatus does, in fact, go a long way toward explaining the origin of the behaviors mentioned. That led the researchers to predict the universality of the following seven moral rules: “love your family, help your group, return favors, be brave, defer to authority, be fair, and respect others’ property.” And their results do show that these rules are present across cultures, though different societies vary in the way they rank or prioritize them. While their data do not explain the reasons for this variation, they also found no case where any of the rules was considered bad.

The overarching conclusion: “so there is a common core of universal moral principles. Morality is always and everywhere a cooperative phenomenon.”

Whoa, slow down a second here. First off, perhaps the seven principles in question are common across modern cultures, but that does not ipso facto mean that they always were, nor that they evolved by natural selection, though that is a plausible scenario. Take, for instance, “be fair.” Are we really to understand that “fairness” was a moral precept in the highly unequal ancient Egyptian, or Roman (or pretty much any other), society? And what do we mean by “fair,” anyway? Fairness may or may not be the same thing as equality, depending on one’s views, so we need a discussion of what the concept means and how it is interpreted in various societies.

“Be brave” is another highly ambiguous idea, and again it is hard to imagine that it means the same thing in, say, modern day Japan, with its penchant for anti-individualism, and ancient Sparta, which was characterized by a cult of individual bravery. And of course there are many different ways of being brave (or cowardly).

Second, jumping to the conclusion that morality is “always and everywhere” about cooperation seems entirely unwarranted. We are not ants or bees, each of us has autonomous projects that we intend to pursue and that have a large impact on the degree to which we flourish, as individuals. Yes, human beings are essentially social animals, and we flourish within a given social framework, but I would argue that morality isn’t about cooperation, but rather about how to balance self-centered vs pro-social needs. The two aspects — myself as an autonomous individual and myself as a member of a group — are in constant tension, and it is that tension that morality addresses, not just the group aspect of it. Cooperation is just one means to the end of survival, reproduction, and flourishing.

And by the way, natural selection doesn’t give a damn about the latter, but we most certainly do. Which is why a lot of our norms, rules, and institutions don’t address mere survival and reproduction, but rather issues such as justice, fairness, and access to resources. That is, they are concerned with flourishing.

Third, one can perfectly coherently object to some of those rules even being on a list of moral dicta. Let’s consider three of them a bit more closely: help your group, defer to authority, and respect others’ property.

At least since the ancient Cynics and Stoics introduced the concept of cosmopolitanism, it has been highly morally questionable to “help your group” if this is taken to mean regardless of circumstances, or at the expense of other groups that have an equal claim to resources. Today, it is not acceptable to engage in wars of conquest, for instance, in order to make our group thrive (the fact that it happens is irrelevant to the moral point). It isn’t even acceptable to exploit other groups economically, or to impose on them environmental damage for our own convenience.

As for deferring to authority, nope, that’s a non-starter. It may be a common cross-cultural practice, but it’s a bad idea, and it is in fact highly immoral if one does so blindly, following the orders of whoever is in charge. That way lies the road to tyranny, which is not morally acceptable, in either ancient or modern societies.

But surely we should respect other people’s property. Well, it depends. If it is acquired unethically, even if legally, no, I don’t think there is any such moral requirement. If your wealth is both disproportionate and arrived at by exploiting others (and let’s be frank, if it is the former, it can hardly not be the latter), then it is just and fair to pass laws to relieve you of much of that burden, through proportional taxation, for instance. My libertarian friends can scream loudly all they want, but that doesn’t make their claims to having a “natural right” to property one bit less nonsense on stilts (in the immortal phrase by Jeremy Bentham).

So, by all means let us inquire into the likely biological and cultural evolution of morality. It is a fascinating topic, somewhat amenable to empirical investigation. But let’s not confuse the descriptive with the normative aspects of the question, just like we wouldn’t for a second confuse, say, an account of the origin of mathematical thinking with the delivery of a sound proof of Fermat’s last theorem. Any time Curry et al. tell me that rule X is universal it is still perfectly meaningful for me to reply, yes, but is it moral?

Michael Shermer on moral philosophy, second round

Skeptic Michael Shermer recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” I found it a confused piece on moral philosophy, and since I agree with my friend Spider-Man that with great power comes great responsibility, I was troubled by Michael’s failure toward the broad public of that august magazine. So I wrote a rather snarky response. Shermer has in turn keyboarded a reply to me, entitled “Moral philosophy and its discontents,” which he published on his own Skeptic online. This is my counter-response, and it will be the last thing I say on the topic for a while, as my experience is that going beyond two rounds in these cases quickly leads to diminishing returns. Of course, Michael is free to write a third piece, if he wishes.

To begin with, I’m going to tone down the rhetoric and focus on the substance, first because it is the right thing to do, and second because otherwise we get into an escalation of hyperboles that doesn’t really help thoughtful discourse (in his second article, for instance, Shermer says that I have become “nearly apoplectic” at his suggestion that witch hunting and the Holocaust were the result of utilitarianism. I assure you, it was a slight exaggeration.). I’ve been guilty of this even in the recent past (mea culpa), so let’s see if I can manage to do better.

I am not the only professional philosopher who has strongly criticized Michael for his original SciAm article. Another good example is Justin Weinberg, of the University of South Carolina (and editor of the Daily Nous), who, among other things, tweeted: “Disappointing that @sciam is contributing to our era’s ever-frequent disrespect of expertise by publishing this ill-informed & confused @michaelshermer column on moral philosophy.” It is not a simplistic argument from authority to point out that when professionals in a field unequivocally say you got things wrong, it is wise to seriously consider that you might, indeed, have done so.

On his part, Shermer chides me for not having read a paper by G. Kahane et al. entitled “Beyond sacrificial harm: A two-dimensional model of utilitarian psychology,” published recently in Psychological Review. Well, guilty of being honest and telling when I have or have not read something. Still, my post was not a critique of Kahane et al.’s paper, but of Michael’s commentary (which, despite his protestations to the contrary, touches only marginally on the paper in question). I have now read Kahane et al., and I still think Shermer is wrong. More on this, of course, in a moment.

In my critique, I said that Michael has taken a very simplistic view of utilitarianism (a philosophy, incidentally, that I do not endorse). He rebutted that one of the examples I labelled as simplistic comes straight out of the Kahane et al. paper. The example in question is meant to measure one’s utilitarian tendencies, and it is formulated as a question: “Would you politically oppress a people for a limited time if it increased the overall well-being of the citizenry?” It does indeed come from the paper, but that’s a negative reflection on the paper, not on my point. No serious utilitarian after J.S. Mill would answer yes to that sort of question, so it is hard to say in what sense this would be helpful to measure one’s utilitarian tendencies.

In response to an admittedly sarcastic comment I made, Shermer states that he knows the difference between act and rule utilitarianism, and moreover that he is not naive about moral philosophy, since he has taken two undergraduate courses on the subject (one in general philosophy, the other one in ethics). He has also read a lot of books by Dan Dennett (not a moral philosopher), and gone through several Teaching Company’s Great Courses in philosophy. After all of which, he felt competent enough to write two books on the subject (The Science of Good and Evil and The Moral Arc), and to teach an undergraduate course at Chapman University. I will leave it to the reader to decide whether Michael’s background is sufficient to invalidate my original observation, but I will note that bringing in the difference between act and rule utilitarianism would have cleared a lot of muddled points in the SciAm article. He didn’t do it.

In my response, I stated unequivocally that Shermer is wrong when he says that trolley problems are an example of utilitarian thinking. And I stand by that statement, see my previous post for relevant links. Here too, Michael’s defense is “Kahane et al. did it,” which of course at best just means that Kahane et al. might have gotten something wrong, and that Shermer failed to correct it. But in fact they did not get it wrong. They say the following, for instance: “researchers have tried to uncover the psychological and even neural underpinnings of the dispute between utilitarians and their opponents — such as defenders of deontological, rights-based views of the kind associated with Immanuel Kant.” Exactly, so trolley dilemmas are used in cognitive science to explore both utilitarian and deontological thinking, and are therefore not an example of the former. Moreover, trolley dilemmas were introduced by moral philosopher Philippa Foot to highlight the limitations of both utilitarian and deontological thinking (in favor of the third way, virtue ethics), and they are still usually discussed in that context in intro philosophy courses. So, yes, Michael is still wrong here.

It gets worse. Shermer writes: “one might argue that trolley dilemmas represent only one form of utilitarianism (sacrificial) … but it is inaccurate to simply assert that trolley problems have nothing to do with utilitarianism.” To begin with, I never claimed that trolley dilemmas have “nothing to do” with utilitarianism. Never. Second, there is no such thing as sacrificial utilitarianism. Look it up, it’s just not a term in moral philosophy. What Michael means is utilitarian thinking applied to sacrificial problems. Not the same thing.

We now get to the part that nearly caused me an apoplectic attack, allegedly, when Shermer stated (in the first article) that witch hunts and genocides like the Holocaust or the one in Rwanda were caused by utilitarian thinking. In his response, Michael quotes himself from The Moral Arc: “It is evident that most of what we think of as our medieval ancestors’ barbaric practices were based on mistaken beliefs about how the laws of nature actually operate. If you — and everyone around you including ecclesiastical and political authorities — truly believe that witches cause disease, crop failures, sickness, catastrophes, and accidents, then it is not only a rational act to burn witches, it is a moral duty. … Medieval witch-burners torched women primarily out of a utilitarian calculus — better to kill the few to save the many. Other motives were present as well, of course, including scapegoating, the settling of personal scores, revenge against enemies, property confiscation, the elimination of marginalized and powerless people, and misogyny and gender politics. But these were secondary incentives grafted on to a system already in place that was based on a faulty understanding of causality.”

Two points here. First off, Shermer is in full speculative mode here. We simply have no idea how to interpret and weigh the various motives of medieval witch-burners. All the factors listed by Michael (and probably more) may have played a role, but anyone who assuredly claims that “this” was the major cause while the others were secondary is pulling the claim out of thin air. There simply is too little empirical evidence bearing on this sort of claim. Second, what I begin to suspect is going on here is a fallacy of equivocation (which will reappear below, when we get to the issue of natural rights). Shermer knows very well that medieval witch-burners could not possibly have deployed Bentham’s or Mill’s philosophy, which had yet to be invented, so he uses the word “utilitarian” in a vaguer, broader sense, which then allows him to implicate the philosophy. Nice try, but this is sophistry, not good reasoning. (I said I wasn’t going to get snarky, not that I wouldn’t be critical.)

Indeed, Michael seems aware of this: “here let me clarify to anyone who thinks I can’t even get my centuries straight that I’m not arguing Torquemada sat down with Pope Sixtus IV to compute the greater good sacrifice of 10,000 Jews in order to save 50,000 Catholics; instead I am aiming to understand the underlying psychological forces behind witch hunts and genocides.” Except that you cannot possibly have empirically substantive evidence of the psychological forces underlying the thinking and acting of Torquemada and Sixtus IV, so why engage in this sort of psycho-historical speculation? It is just as likely, possibly even more so, that Sixtus IV would have killed ten times more Jews in order to save ten times fewer Christians, since Jews and Christians, for him, were simply not comparable in moral value. Good skepticism is about empirical evidence, so why don’t we stick to that?

Shermer continues with another lengthy citation from The Moral Arc: “As in the limbic system with its neural networks for emotions, approach-avoidance moral conflicts have neural circuitry called the behavioral activation system (BAS) and the behavioral inhibition system (BIS) that drive an organism forward or back, as in the case of the rat vacillating between approaching and avoiding the goal region. … These activation and inhibition systems can be measured in experimental settings in which subjects are presented with different scenarios in which they then offer their moral judgment (giving money to a homeless person as prescriptive vs. wearing a sexually suggestive dress to a funeral as proscriptive).”

This is very nice, interesting, even, but utterly irrelevant. Of course animal and human thoughts and actions have specific neural underpinnings. How else would we think or act? But, quite obviously, different people balance the outputs of their BAS and BIS differently, and they end up thinking and acting differently. Some of these differences (though certainly not all of them) may be the result of philosophical reflection on why one should act one way rather than another. And this discussion is about moral philosophy, not neuroscience. As I pointed out in my original review of Sam Harris’ The Moral Landscape (which appeared, partly censored, in e-Skeptic), we may as well discuss the validity of a proof of Fermat’s Last Theorem by doing an fMRI scan of the brain of a mathematician. Interesting, no doubt. But also entirely unhelpful to the question at hand.

Michael briefly touches on my criticism of his treatment of Kant, where I brought up a technical paper by Helga Varden on the famous problem of lying to the Nazi. He says: “There is much more to her analysis of Kant, but it seems to me that in this example lying to Nazis is both a utilitarian/consequentialist decision because it would result in the death of an innocent, and a rule/rights decision that qualifies why we should care about the innocent in the first place: because, say, Kant’s rule about never treating people as an ends to a mean but as an ends in and of themselves, or that all people have a right to their own life.” This is very muddled. First off, lying to the Nazi would save the life of an innocent, not result in his death. Second, one can make that decision within a utilitarian framework, but also within a deontological one (Kant’s deontological system is not the only one on the market, so to speak). Third, Kant did not say that we should not treat other people as a means to an end (not the other way around!); he said we should never treat other people solely as means to an end. Without that qualification, we couldn’t go to a restaurant and be served by a waiter, because that would be using a human being as a means (waiter) to an end (getting my meal). The additional “solely” says that we can do that, but always while keeping in mind that we are interacting with a human being, not a machine. Fourth, nobody believes that all people have an unqualified right to their life. If, instead of the Jew, my basement hid a mass murderer (not because I’m protecting him, but under threat to my life), then I would certainly give him away to the authorities, even if that should result in his death. That’s the thing about good moral philosophy: it’s complicated, and requires precise, nuanced thinking. And that is why we have professionals devoted to it.

Finally we come to the topic that most riled Shermer’s supporters on Twitter: natural rights. He ended his SciAm column with an endorsement of the concept, and I responded that on that topic I was with Jeremy Bentham (the founder of utilitarianism): the notion of natural rights is “nonsense on stilts.”

This is a complex topic, with a long history in philosophy, and despite Bentham’s flippant response, there have been serious defenders of it throughout the centuries. The notion of natural rights is related to, but is not the same as, the notion of natural law. Scholars trace the idea back to Plato and Aristotle, but it is far from clear that either one of them meant anything like the modern version deployed by thinkers from John Locke and Thomas Paine to Robert Nozick. I have a separate essay on natural law on my Stoic blog, since it is pertinent to that philosophy, but here let me simply reiterate my basic point: I don’t think there is any defensible notion of natural rights (as distinct from rights as human constructs) that is not hopelessly entangled with what I think are antiquated or indefensible notions of Platonism (as in Plato), teleology (as in Aristotle), or theology (as in Thomas Aquinas). You may disagree, of course, but then you owe us a defense of Platonism, teleology, or theology.

One counter to my criticism that I have run across a lot on Twitter during my debate with Michael over the past few days is that human beings are part of nature, so obviously there are natural rights. QED. Well, no. This, again, trades on an equivocation. Yes, as a biologist I certainly agree that humanity is just as natural as anything else in the universe. But in the context of this sort of discussion the distinction has always (since Aristotle!) been very clear: natural rights refers to something that is mind-independent but can be discovered by human beings capable of reason; so-called positive rights, by contrast, are the result of human agreements. True, in a sense, positive rights are also “natural,” but it just doesn’t help to talk that way; it muddles a perfectly clear distinction.

This, incidentally, is an area where there is more agreement between Shermer and myself than may at first glance appear. More, but nowhere near total. Before I get to his rebuttal, let me state briefly what my position is. “Rights” are a human construct, the result of agreeing among ourselves, on the basis of moral and political considerations, that certain things qualify as rights and others don’t. The best sort of evidence that rights are of this kind is the complete disagreement among supporters of natural rights on the number and content of these alleged rights. We go from just one right (Richard Cumberland: benevolence toward all rational creatures), to three rights (Locke: life, liberty, and property — this is the one Michael wants, qua libertarian), to a whopping 19 natural laws from which one can derive corresponding rights (Hobbes, in chapters 14 and 15 of Leviathan).

That said, I do agree that rights are not entirely arbitrary, as they are linked to human nature, just like all moral philosophy is (Skye Cleary and I have recently argued this in Aeon). This puts me somewhere in the middle between moral anti-realists, who think that there is no such thing as a moral truth, and moral realists, who think that there is. I am a moral quasi-realist, meaning that for me morality is an evolving set of ideas that strives to regulate social interactions in order to allow people to flourish qua members of a social group. The reason I don’t think — contra both Shermer and Harris — that science can give us answers to moral questions is because I think facts about human nature under-determine moral systems. That is, given human nature as it is, there are several different, possibly incompatible, ways to develop moral codes. The choice among moral philosophies, then, is informed by facts about human nature, but not determined by them. To ask whether, say, utilitarianism or deontology or virtue ethics is “true” is to commit a category mistake. These are frameworks to think about social life. They may be more or less useful and more or less coherent, but not true or false (and hence not falsifiable or verifiable scientifically).

Okay, now back to the last chunk of Michael’s response. He thinks I contradict myself when I say that we all prefer to be alive rather than dead. I don’t see how that follows. Mine is just a statement of a natural desire. One has to do philosophical work to go from there to a right, especially a right that is somehow inalienable. (I also desire gelato, but that does not imply that I have a right to it.) I do think the is/ought gap can be filled, but not by simply stating that what is natural is ipso facto good. That, as Shermer knows, is yet another informal fallacy, the appeal to nature. And it is easily countered by endless examples (aggression and war are natural for human beings, it doesn’t follow that aggression and war are good).

Shermer takes a lot of liberties with evolutionary biology (another field in which I honestly question his qualifications): “Any organism subject to natural selection — which includes all organisms on this planet and most likely on any other planet as well — will by necessity have this drive to survive and flourish.” No, there is no natural selective imperative to flourish, especially if flourishing entails things like owning private property. Indeed, strictly speaking there is no natural imperative to survive either: survival is useful, from the standpoint of natural selection, only if it leads to reproduction. Sure enough, selection often favors short life spans, or rather nasty (i.e., non-flourishing) lives, so long as the reproductive imperative is satisfied. And, again, just because natural selection favors individuals who reproduce, it certainly doesn’t make not reproducing immoral, does it? One of the few times I agreed with Steven Pinker (often quoted by Shermer) is when he wrote, I believe in The Language Instinct, that he made a decision early on in his life not to have children, but to devote his life to research, teaching, friends, and other good things. He commented (I quote from memory, since I no longer have a copy of that book): “and if my genes don’t like it, they can go and jump into the lake.” Indeed.

So when Michael says “I argue, the survival and flourishing of sentient beings is my moral starting point, and it is grounded in principles that are themselves based on nature’s laws and on human nature — principles that can be tested in both the laboratory and in the real world,” he is confusing different things, or at the very least drawing a direct connection between (certain aspects of) human nature and morality. This can’t be done: one needs empirically informed philosophical work to bridge the is/ought gap, not just brute facts.

He says other things that are clearly incorrect from a biological standpoint, like “The singular and separate organism is to biology and society what the atom is to physics — a fundamental unit of nature.” No: plenty of biological organisms are colonial (corals, some jellyfish), or have only temporary individuality (when it’s time to reproduce, as in slime molds), or lack clear boundaries altogether (several species of trees and mushrooms), or are a complex ensemble of multiple organisms that only appear to be one (human beings, see the concept of holobionts).

Shermer approvingly quotes Pinker: “Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.” Yes, as the Stoics had already figured out 23 centuries ago, we are born with a rudimentary sense of pro-social behavior, which we share with other primates. And yes, morality is the result of building on that innate sense by way of reasoning and language (a notion that the Stoics elaborated into their theory of moral development). But we are not forced to one specific set of conclusions, again because there is a relationship of under-determination between facts about human nature and moral frameworks.

Michael counts himself and Pinker as moral realists, and thinks he slam-dunks the case with the following rhetorical question: “Is there anyone (other than slave holders and Nazis) who would argue that slavery and the Holocaust are not really wrong, absolutely wrong, objectively wrong, naturally wrong?” Well, first of all, history is full of slave holders. People before very recent times thought that slavery was natural and just. Indeed, they derived this conclusion from their understanding of human nature, or the divine ordering of things, or whatever. More importantly, Shermer makes a fundamental mistake here: confusing objectivity with absolute truth.

Consider a simple example. Given the (entirely arbitrary) rules of the game of chess, a number of things about the game are objectively true. Heck, one can even demonstrate mathematical theorems about chess. But these truths are not “universal,” they are contingent on the specific set of rules that constitute the game. They don’t exist “out there,” in any kind of mind-independent fashion. And they are, therefore, not inalienable. One can deny them by simply refusing to play the game, or by inventing a different game with even slightly different rules.

Yes, I do believe that slavery is wrong, given my understanding of human nature, which prioritizes individual flourishing and the application of reason to the improvement of pro-sociality. But there are other understandings from which my conclusions do not follow. So I have to argue the point from within whatever moral framework I have chosen (in my case, virtue ethics), I cannot simply and straightforwardly derive it from empirical observations about human behavior. If only it were that simple.

Michael Shermer on utilitarianism, deontology, and “natural rights”

You may have noticed that I don’t opine on quantum mechanics. Or jazz. The reason for this is that — although I’m very interested in both topics — I just don’t know enough about them. Not enough to be able to offer an informed opinion, at any rate. So I sit back, read what other, more knowledgeable people have to say about quantum mechanics and jazz, form my own second-hand opinion, and try to avoid embarrassing myself by pontificating in public.

Apparently, my friend Michael Shermer does not follow the same philosophy. At least, not when it comes to the field of moral philosophy. He has recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” which starts out simply enough (simplistic, really), and ends in a crescendo of nonsense. Let’s take a look.

After asking whether you would politically oppress a people for a limited time, if it increased the overall well-being of the citizenry, Michael explains that that’s utilitarianism, the philosophy started by Jeremy Bentham back at the beginning of the 19th century, often summarized with the slogan “the greatest good for the greatest number.” (Bentham, incidentally, is currently visiting New York, go see him, if you have a chance.)

Well, that is one of many versions of utilitarianism, and it was immediately abandoned by none other than John Stuart Mill, Bentham’s student, who actually wrote the classic 1861 text entitled Utilitarianism. Indeed, before that Mill wrote two important articles, “Remarks on Bentham’s Philosophy” (1833) and “Bentham” (1838), in which he criticized his mentor and began to develop modern utilitarian thought. One of the major distinctions one can draw within utilitarianism still today is that between so-called act utilitarianism (where we must evaluate the morality of each act, a la Bentham) and rule utilitarianism (where we conform to rules that have been shown, overall, to bring about the greatest amount of good, a la Mill). More generally, utilitarianism has a long history, and nowadays it is actually best thought of as a particular type of consequentialist philosophy. I could be wrong, but Shermer seems unaware of these distinctions.

Michael then tells his readers that “modern utilitarianism” is best instantiated by the (in)famous trolley problems. This is just flat out wrong. The original dilemma was introduced by Philippa Foot back in 1967. Here is the first version:

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.”

Contra Shermer, the trolley dilemma was proposed, and it continues to be used (not only in philosophy, but in social psychology), in order to probe people’s moral intuitions, not to “instantiate” utilitarianism. For instance, a deontologist would refuse to frame an innocent or switch the lever, on the basis of the Kantian notion that one ought never to treat others solely as means to an end. The fact that many people switch from utilitarian to deontological responses when considering different versions of the dilemma tells us that they tend to react emotionally, which leads them to deploy an incoherent moral philosophy.

Michael then says that “the problem” with utilitarianism is that there are situations in which following its precepts one would end up endorsing psychopathic behaviors, as in the famous case (which I pose to my intro philosophy students) of the surgeon who has five patients in the emergency room, each with a failing vital organ, and decides to pick up a stranger from the street, cut him up into pieces, and distribute his organs around to save the other five. Too bad that this sort of thing is precisely why Mill (remember, already in 1833) introduced rule utilitarianism, which blocks the psychopathic doctor in his tracks. Again, no mention of this in the SciAm article.

Shermer briefly mentions a recent paper in Psychological Review (which I have not read, so I will not comment on it), mostly to tell us that he took the Oxford Utilitarianism Scale test and scored 17/63. He ain’t no utilitarian, according to the test. Neither am I, apparently (phew!), since I scored 21/63. You can do the test yourself, here.

After a brief mention of Kantian deontology, the article really veers from simplistic to nonsensical: “Historically the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures and accidents — better to incinerate the few to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews: “Aryan” Germans; Tutsi:Hutu), the justification of genocidal murderers.”

What?? No, absolutely not. Setting aside the obvious observation that utilitarianism (the philosophy) did not exist until way after the Middle Ages, no, witch hunts were the result of fear, ignorance and superstition, not of a Bentham- or Mill-style calculus. And this is the first time I heard that Hitler or the Hutu of Rwanda had articulated a utilitarian rationale for their ghastly actions. Again, they were driven by fear, ignorance, superstition, and — in the case of Nazi Germany — a cynical calculation that power could be achieved and maintained in a nation marred by economic chaos by means of the time-tested stratagem of scapegoating. (The latter is also what perpetrators of witch hunting and the Rwandan genocide did: prey on the weak, it’s easy to do and get away with it.)

But Shermer doesn’t let Kant off the hook either. He brings up the famous example (which, again, I confront my intro philosophy students with) of lying: if it is the case — as Kant says in one formulation of the categorical imperative — that we should only accept as moral those principles that we would be willing to make into universal rules, wouldn’t that mean that I should never lie and give up the Jew I’m hiding in the basement if a Nazi officer (it’s always the Nazi!) politely asks me? Or, as Michael updates the scenario: “if you live in Syria and a band of ISIS thugs knocks on your door demanding to know if you are hiding any homosexuals they can murder in the mistaken belief that this fulfills the word of God — and you are — few moralists would object to your lying to save them.”

Notice the pejorative term “moralists,” instead of moral philosophers. Anyway, you would think Kantian philosophers would have something to say about this. Oh, right, they do! A good example is a paper by Helga Varden in the Journal of Social Philosophy, entirely devoted to Kant, lying and the Nazi officer. I do not have the time here to do justice to her analysis, but a couple of points need to be brought to bear: first, in that case Kant was writing explicitly within the context of a discussion of the doctrine of rightful interactions (the original, short paper in which he tackles the case is entitled “On a supposed right to lie from philanthropy”). As Varden says, within that context, “we can make sense of why lying to the murderer, although a wrong, is not to wrong the murderer, why we become responsible for the bad consequences of the lie, and finally why lying is to do wrong in general.”

More to the point, Kant was talking about a murderer (he, obviously, couldn’t have contemplated the Nazi), but when one changes the scenario to a Nazi officer — or an ISIS terrorist — it turns out that the problem dissolves itself, because “the only time doing wrong in general by lying is legally punishable [within Kant’s framework] is when we lie to or as a representative of the public authority. The Nazis, however, did not represent a public authority on Kant’s view and consequently there is no duty to abstain from lying to Nazis.” Or to ISIS. Again, I didn’t notice any of these qualifications in Shermer’s article.

Michael, predictably, makes no mention at all of the third great framework in moral philosophy, virtue ethics, which would actually do a lot of the work he wants to do, against both utilitarianism and deontology — in their philosophically sophisticated versions, not the caricature we get in the SciAm article.

But never mind that. The true nonsense comes right at the end, when Shermer puts forth his preferred view, the one that, in his mind, has allowed for true moral progress throughout the ages: “both utilitarianism and Kantian ethics are trumped by natural-rights theory, which dictates that you are born with the right to life and liberty of both body and mind, rights that must not be violated, not even to serve the greater good or to fulfill a universal rule.”

Setting aside that you get precisely the same result from Mill’s rule utilitarianism, not to mention that natural rights theory has no argument against Kant, “natural rights” are what Jeremy Bentham famously, and correctly, referred to as “nonsense on stilts.” There are no such things as natural rights, and we, therefore, are not born with them (contra the mindless libertarian mantra that Shermer is repeating). Michael is confusing human desires and instincts — some of which are actually culturally dependent (it is empirically not the case that everyone on earth desires liberty of mind, for instance) — with rights. But rights are, obviously, a human creation. Which accounts for why, as Shermer himself notes, they have to be written down in things like the Bill of Rights, and protected by the force of state-enabled law. It’s also why people have come up with different lists of rights at different times. The Universal Declaration of Human Rights, for instance, provides a much more extensive list than the one arrived at by James Madison and co. back in 1789.

To argue that rights are “natural” is to commit the most elementary logical fallacy in ethics, that of the appeal to nature. And even if one were to overlook that little problem, there simply is no consistent empirical evidence for most such alleged rights (i.e., desires, instincts) in Homo sapiens or its recent ancestors. Yeah, we all prefer to be alive rather than dead, other things being equal, but natural selection does not care about mere survival; it only favors survival that leads to reproduction. And it favors it; it doesn’t guarantee it. (So you can’t derive a natural right to sex. Too bad!)

This is the sort of mess one gets when Michael talks about moral philosophy. Or when I talk about quantum mechanics. Or jazz. Please, let us all stick to what we know. It’s hard enough as it is.