Category Archives: Ethics

On the crucial differences between private and public forgiveness


(Stolen Generation apology, Adelaide, Australia, 13 February 2008)

In the aftermath of the Apartheid government in South Africa, a Truth and Reconciliation Commission (TRC) was formed to help the country move forward at the same time as it acknowledged and attempted to redress the injustices of the past. Not everyone was on board with the project, but it turned out to be a success in terms of helping to heal the nation. Of the 7,000 individuals who applied to the commission for amnesty, about 10% were granted it, and the commission proposed a wealth tax to help fund reparation programs. (The South African government, however, never followed through.)


This was not the first TRC, nor would it be the last. An earlier attempt was made by Bolivia in 1982, with its National Commission of Inquiry into Disappearances, which brought together a variety of sectors of society after the end of the military rule there. The very first TRC was the one established in Uganda in 1974. Argentina followed in 1983, Chile and Nepal in 1990, El Salvador and Germany in 1992, Guatemala in 1994, Canada in 2006, and so forth.


Priscilla Hayner in Unspeakable Truths: Transitional Justice and the Challenge of Truth Commissions (Routledge, 2002) defines a TRC in this manner:


“A truth commission (1) is focused on the past, rather than ongoing, events; (2) investigates a pattern of events that took place over a period of time; (3) engages directly and broadly with the affected population, gathering information on their experiences; (4) is a temporary body, with the aim of concluding with a final report; and (5) is officially authorized or empowered by the state under review.”


I’m telling you all this because of a brilliant talk I just heard at City College, entitled “The Broken Promise of Public Forgiveness,” delivered by University of Texas-Austin PhD candidate Simone Gubler. Simone took as her starting point the formal apology proffered by the Parliament of Australia to the indigenous people of that continent, back in 2008. The apology was delivered by Prime Minister Kevin Rudd, who also asked the indigenous people for forgiveness on behalf of the nation.


And here is where things, according to Simone, got problematic. You see, a good argument can be made that forgiveness is an essentially personal process, not one that makes sense at the level of public institutions. Setting aside the obvious observation that the current non-indigenous inhabitants of Australia did not perpetrate the original crimes against the indigenous ones (though, of course, they still take advantage of the aftermath), and setting further aside the fact that groups cannot forgive (only individual members of such groups can), there is an obviously imbalanced power dynamic at play here. Asking for forgiveness at that level in a real sense imposes an implied demand on the other side, along the lines of “hey, I’m being so nice to ask you, and you are going to refuse?”


Individuals are far less likely to feel that pressure. If my partner betrays me and she asks for forgiveness I may or may not grant it. It’s up to me, and it is not a given that I will grant it. But when we scale up from the individual to a social group the dynamics change dramatically, according to Simone, so that forgiveness is no longer about contrite individuals who have come to agree that what they did is wrong, but rather about a political (possibly, though not necessarily, cynical) move in the public arena, meant to elicit a very specific response.


I must admit that I was rather skeptical of Simone’s approach when she outlined what she was going to argue for at the beginning of her talk, but it didn’t take much to convince me that she is right. And then Q&A time came, and my colleague Kate Ritchie upped the ante. She suggested that what Simone said about forgiveness also goes for official apologies. They, too, are something that makes sense at the individual level, but not so much at the social one. And apologies too can be given by the wrong person, on behalf of groups who may not agree, used as a power play, and delivered because of cynical calculations. Even when not cynical in nature, both Simone and Kate agreed, requests for forgiveness as well as apologies quickly become empty. Simone mentioned the institution of a national “Sorry Day” in Australia, which, while probably initially well intentioned, soon turned into a rote, empty gesture, particularly since little or nothing is being done in the meantime to actually improve the lives of indigenous people.


Once again, my initial thought was that surely Kate’s point is problematic. But nope, a few seconds of reflection revealed to me that she was right.


But hold on, then. I began this article by talking in positive terms of the notion of Truth and Reconciliation Commissions, and now I’m agreeing with our guest speaker (it was Philosophy Day, by the way) that officially asking for forgiveness, and even issuing public apologies, is problematic. Doesn’t reconciliation imply apologies and forgiveness?


Probably, again, at the individual level, but not the public one. Consider two contrasting situations. The first one is the case mentioned above of my partner cheating on me. Let’s say that she apologized and asked for forgiveness. While I’m not mandated to grant it, it would seem that if I agree to reconcile, to get back together and resume our life as a couple, I have at least implicitly accepted her apology and forgiven her. Without that, life as a couple would be re-established on very shaky foundations indeed.


The second scenario is that of the CEO of Exxon-Mobil apologizing to the public for yet another environmental catastrophe caused by the greedy practices of his corporation. Again setting aside the degree to which it makes sense for a single individual to apologize on behalf of a large anonymous entity which he certainly does not control beyond a limited extent, what I – as a member of the public – expect from Exxon-Mobil is three things and three things only: (i) an acknowledgement of the facts; (ii) some concrete suggestion on how the corporation can repair the damage; and (iii) reasonable assurances that whatever caused the problem will not happen again in the future. Both apologies and forgiveness are entirely beside the point.


The bottom line from all the above is that there is an important difference between the individual and social levels at which concepts like apologies and forgiveness operate. Neither Simone nor yours truly denies that it makes perfect sense for an individual to apologize to another for some wrongdoing. She also agrees (and so do I) that it makes sense for an individual to ask for forgiveness, so long as it is understood that this is not an imposition on the other party, who may or may not grant it. Yet, as we have seen above, both concepts are problematic when scaled up to the level of social groups. If this is true, why do we do it, and how could we do otherwise?


I believe one source of the particular kind of mistake we are talking about is Plato. In the Republic he famously set out to investigate what makes for a just person. His strategy was to scale things up and ask first what makes for a just state (the Republic of the title). The explicit assumption was that one can go back and forth between the two levels. The ideal Republic will be guided by reason, in the form of a ruling class of philosophers who will direct the other two components (the soldier-auxiliaries and the regular folks, the producers). Likewise, the ideal human being has a tripartite soul, and is well advised to put her rational soul in control of the spirited and appetitive ones.


Yet, most of the ancients rejected this approach, making a clear distinction between individuals and society. Both the Stoics and Cicero talked of ethics (from the Greek êthos, a word related to our idea of character) as distinct from law. Ethics (and morality, from the Latin moralis, which is how Cicero translated the Greek êthos) pertains to the individual and her character; law pertains to how we regulate things in society. A just individual is not the same as a just state. At the individual level the Stoics considered justice a character trait, having to do with treating other people fairly and with respect, but not necessarily equally (e.g., I don’t treat my daughter in the same way as a stranger, even though I try to show respect for everyone); by contrast, at the societal level a just state is one based on just laws, where everyone is treated equally, regardless of personal status.


Simone’s suggestion, then, can be reframed as a call for going back to a meaningful distinction between ethics and law, the first one functioning at the individual, the second at the societal level. It is within this framework that it makes perfect sense to say that apologies and forgiveness have an important role to play in the intercourse between individuals, while at the same time maintaining that they are problematic between groups. So politicians and corporate officers can keep their apologies, and they will not get our forgiveness. What they need to do instead is to acknowledge wrongdoing and put in place workable procedures to redress injustices.

No, science does not provide all the answers to the big questions

From time to time a famous scientist allows himself (in my experience it’s always a man) to write nonchalantly about something of which he demonstrably has only a superficial grasp: philosophy. The list of offenders is a long one, and it includes Lawrence Krauss, Neil deGrasse Tyson, and Stephen Hawking, among several others. (Fortunately, there are also exceptions, scientists who value a constructive intercourse with the humanities, like Sean Carroll.) The latest entry in this dubious pantheon is Peter Atkins, who recently published a sloppy essay in the otherwise excellent Aeon magazine entitled “Why it’s only science that can answer all the big questions.” Oh boy.


Atkins begins by telling us that there are two fundamental kinds of “big questions”:


“One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. … Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame. … The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. … These are all real big questions and, in my view, are open to scientific elucidation.”


This is not news at all, of course. David Hume — one of my favorite philosophers — made essentially the same argument back in the 18th century, in his case rejecting what he saw as the waste of time associated with the Scholastic metaphysics that had prevailed throughout the Middle Ages:


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” (An Enquiry Concerning Human Understanding)


With all due respect to Hume, it’s a good thing people didn’t follow his advice, or we would have lost his very own Enquiry Concerning Human Understanding, since that book doesn’t contain any abstract reasoning concerning quantity or number, nor does it contain any experimental reasoning concerning matter of fact. And yet, it is — justly — considered to be one of the most important works of modern philosophy.


Atkins apparently realizes that he may come across as a bit too facile, since he acknowledges that he is defining the big questions precisely as those that science can answer, then turning around to “demonstrate” that science is the only discipline equipped to answer such questions. As he drily puts it when considering the obvious charge of circularity: “that might be so.” Which doesn’t stop him from proceeding as if it were not so.


Atkins tells us that science is getting ready to tackle what he considers the next three big questions: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious?


I have no doubt, as a scientist, that those are, indeed, scientific questions. I’m slightly more skeptical, as a philosopher, that science will actually be able to come up with answers. Fundamental physics, after more than a century of uninterrupted success, seems to have entered a period of navel gazing where speculation (admittedly mathematically informed speculation) is poised to replace empirical evidence. So we shall see if and when we’ll actually get a “theory of everything,” and whether that theory will in fact be able to tell us how the universe began from “nothing” (there is some doubt that it will).


Regarding the second question, the origin of life, theories have been piling up for several centuries now, and yet we don’t seem to be particularly close to a resolution. I’m certainly not arguing that it isn’t possible, but it’s a very, very difficult problem, for the simple reason that a lot of the historical traces have been lost. No geological strata survive from the time when the primordial earth was home to the first living organisms, meaning that researchers on the origin of life are like detectives who already know the smoking gun isn’t going to be found. At best, they’ll have to rely on circumstantial evidence. Even should we be able to produce life artificially in the laboratory, that would not solve the problem, since it wouldn’t mean that life on our planet actually followed anything like that particular causal path.


As for consciousness, I remain convinced that the problem is indeed biological in nature, and that therefore developmental, evolutionary, and neuro-biology are the disciplines best positioned to find a solution. But at the moment nobody seems to have much of a clue, and common talk of the brain being a computer is finally beginning to be understood as the shaky and very likely misleading analogy that it is.


So, yes, if any of those three big questions are going to be answered, the answer will be a scientific one. But what about other questions that are arguably just as big (or, for most of us, even bigger)? Here Atkins shifts into full scientistic mode:


“I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics.”


Please notice a number of interesting and revealing things about this sentence. First, Atkins is making the time-honored argument from personal incredulity: “I see no reason why…” Which, of course, is not an argument at all, but an elementary logical fallacy. Second, he is seriously hedging his bets when he immediately qualifies his initial statement: “or at least illuminate…” Ah, well, but philosophers since the Pre-Socratics have understood that empirical evidence (i.e., “science”) can illuminate philosophical questions. However, that’s a far more modest claim than the notion that science can actually answer those questions. Third, Atkins can’t help but deliver a contemptuous dig at the “semi-sciences.” This attitude, common among physicists, reflects a naive understanding of the philosophy of science, according to which physics is the (self-professed) “queen” of the sciences, and every other field will achieve full scientific status only when it finally evolves into something that looks like physics. But an increasingly common view in philosophy is that there actually is a fundamental disunity of science, that “science” is only a loosely defined family resemblance term, reflecting the fact that each science has its own goals, methods, and internal standards, and that there is no universal yardstick to be appealed to in order to make comparative judgments of quality.


Going back to philosophy, the question of “how should I live?” admits of a large number of reasonable (and a lot of unreasonable!) answers, given the very same facts about the universe and human nature. It isn’t so much a question to be answered, as to be explored and clarified. Indeed, this is arguably what most fundamentally distinguishes science from philosophy.


One of my recent morning meditations is pertinent here. It begins with a quote by the Stoic philosopher Epictetus, who says in Discourses II, 11.13:


“Here you have philosophy’s starting point: we find that people cannot agree among themselves, and we go in search of the source of their disagreement.”


As I argue in the podcast episode, there are two broad sources of disagreement among human beings: factual and conceptual. If you and I disagree about, say, the number of moons orbiting around the planet Saturn, one of us is wrong, possibly both. There is a matter of fact about the issue, and we can find out the answer by asking an astronomer. Or more simply by doing a web search. If disagreement remains after that, then one of us is more than a bit obtuse.


The second kind of disagreement concerns how to think about facts, actions, and values. Here the facts are relevant, but insufficient to settle the dispute. Let’s say we have different opinions about the permissibility of assisted suicide. Certain empirical facts are going to be pertinent to the discussion, like information about how the procedure is going to be implemented, what safeguards there may be to avoid abuses, and so forth. But even if we agree on the facts, we may still disagree on the crucial issue: is assisted suicide morally permissible?


That’s the difference between science and philosophy, and why Epictetus says that philosophy begins with the search for why people disagree on things. Notoriously, philosophy does not necessarily settle such disagreements. The joke in philosophy departments is that our profession’s slogan is: “Philosophy: we have all the questions!” But what philosophy does, by means of careful analysis and reasoned argument, is to help us clarify why, exactly, we disagree. That is of huge help to people of good will who wish to honestly pursue discussions in search of better ways to conduct their lives. Atkins may want to take notice.

Is exposing medical pseudoscience unethical?

You would think the answer to the title question is obvious: no, of course it isn’t unethical to expose medical pseudoscience (or any kind of pseudoscience). How can anyone think so? And yet, there are some decent reasons to doubt the certainty of such a conclusion. At least that’s the take of a paper that so far as I know has not yet been published, but is available at the PhilSci (philosophy of science) Archive at the University of Pittsburgh.


The paper is entitled “Exposing medical pseudoscience may be unethical,” and is authored by Ehud Lamm. Here is Lamm’s argument, as he presents it:


(1) Many ill health situations, in particular those involving multi- and comorbidity do not have very effective treatment options.


(2) Alternative medicine, consisting of various pseudo-scientific enterprises, provides repose to some of those suffering from such conditions.


(3) A significant part (if not all) of this benefit can be attributed to the placebo effect.


(4) In current, “Western,” societies, placebos are most effective when they have the symbolic currency of science.


(5) Ergo, exposing these pseudo-sciences will hurt people for whom there are only limited health care alternatives.


This is a reasonably constructed argument, so if one wishes to reject it — as I do — one needs to attack one or more of the premises, or to uncover and then reject some additional, hidden premises. I will follow both strategies here, though I wish to thank Lamm at the outset for having forced me to think more carefully about something I had hitherto taken for granted.


I am going to agree right away with premise (3): most (indeed, very likely, all) of the benefits of alternative medical treatments are the result of placebo effects. But I have varying degrees of issues with premises (1), (2), and (4). And I also think Lamm is not explicitly listing a few other very relevant premises here, though he mentions one of them in passing in the paper, as we shall see, only to discard it far too quickly.


Premise (1): this trades on an ambiguity surrounding the word “treatment.” Yes, modern medicine does not provide effective treatment against certain conditions, for instance a number of types of terminal cancer, or several types of dementia. But alternative medicine does not provide treatments either, according to Lamm’s own premise (3), only palliatives. Which means that the proper comparison here ought to be only between standard palliative care and its “alternative” counterpart. Palliative care works on the basis of a combination of actual biomedical effects (e.g., painkillers) and placebo. Other things being equal, then, it is more effective for a doctor to use her prestige (premise 4) to push tested palliative options rather than pseudoscientific ones.


Premise (2): for the reasons just seen, even if alternative medicine does provide some respite to patients by way of a placebo effect, standard palliative care is in any case preferable, because it also works by placebo, but additionally actually delivers (at least in some cases) a real biomedical effect. Again, remember that we are talking about alleviating pain and similar things, not actual cures, which are not achievable via placebo, and Lamm agrees that placebo is pretty much the only mechanism through which pseudo-medicine “works.”


Premise (4): the symbolic currency of science in the Western world seems much less real than Lamm assumes. Witness the widespread antivax movement, or — outside of medicine — so-called “skepticism” of climate change, or of evolutionary theory. If anything, science is suffering a credibility crisis of late, and inducing medical doctors to deploy pseudoscience is hardly going to improve such credibility.


If the reader agrees with even one of my criticisms of Lamm’s premises, that is enough to reject the argument. But there is more.


For instance, Lamm at one point says: “I am putting to one side deontological and consequentialist arguments against the use of placebos in general, and assume that in some situations relying on a placebo effect is legitimate.” That’s a bit too quick, and a rather big thing to put aside (and the bit after the comma somewhat begs the question)! A deontologist, for instance, might argue that it is a violation of the categorical imperative for a doctor to systematically lie to his patients, because that violates the implicit trust between the two (the very same trust on which premise 4 is built, incidentally), and because the doctors themselves would probably rather not be lied to when it is their turn to be patients.


On consequentialist grounds, one could argue that there are long-term negative societal consequences engendered by lying to patients and by pushing pseudoscientific notions. Because, again, patients might eventually lose confidence in their doctors and begin to doubt any sort of medical advice; also because people will be more likely to embrace pseudoscience under other circumstances, for instance when there are, in fact, perfectly good evidence-based treatment options available; and moreover because we would be condoning a multibillion dollar industry based on what is essentially a fraud.


Furthermore, there is a third big ethical framework out there: virtue ethics. It is hardly a good character trait to engage in systematic deception of others, even for their own good (a premise, this last one, that I have already rejected). Virtue ethics does not follow universal rules, so lying can be acceptable under certain circumstances. But the focus is on the character of the moral agent, and repeated lying is going to be deleterious to that character, since character is shaped precisely by repeating the same actions over and over, or by systematically entertaining and agreeing with the same thoughts.


Lamm sees another concern with exposing pseudo-medicine: “the movement toward Evidence Based Medicine (EBM), to the extent that it is successful, may lead people to be less open to treatments and assistance that are not backed by science, such as talk therapies, meeting with alternative medicine practitioners, support from informal social networks, help from clergy, and so on.”


This is an interesting point, but as articulated it carelessly lumps together a number of things that should be considered separately. For instance, homeopathy is clearly and definitely a pseudoscience, so it should be counted as a success if people do not waste their money (and hopes) on very expensive sugar pills. Talk therapies, however, are not necessarily pseudoscientific. We have good evidence that some forms of psychotherapy work (e.g., cognitive behavioral therapy), and there are distinctions to be drawn about what “working” means here. If a patient simply needs to talk to someone, but is under no illusion that this will actually cure him, I don’t see what the problem is, or why such talk therapy should be counted as pseudo-medicine in the first place. Perhaps it won’t work better than getting a drink with your friend, but if chatting is the only thing one needs in order to feel better, it “works.” Much hinges, of course, on what the precise claim of the therapist is going to be. The very same considerations apply to the use of informal social networks, or to help from the clergy. These things are simply not in the same category as homeopathy and other kinds of pseudo-medicine.


And there is more to consider, like the general ideas that truth and honesty are intrinsically valuable, and should be compromised only under very unusual circumstances. Certainly both a deontologist and a virtue ethicist would reason this way, though a consequentialist might disagree (there goes yet another reason to reject consequentialism, in my view).


Lamm does consider some objections to his argument in the second part of the paper, and rejects them with a number of counterarguments. I have already covered part of what he says there in the previous points, but let me add a few pertinent comments.


He writes: “we should strive for alternative channels of social support to be available and determine whether non-science backed alternatives can provide patients with social and economic benefits that they need, such as health insurance coverage, help with the education system, and so on.” Well yes, as acknowledged before, but again this trades on an ambiguity of the term “support.” There is no problem in supporting people via talk therapy, social networks, religious counseling, and so forth. But these are not types of alternative medicine, and so long as the people providing the support are not lying, then there is no ethical issue. (Of course if you are an atheist you will think that all priests are lying by definition, but this is not the case from the point of view of a believer who has already adopted that particular religion’s framework. Admittedly though, that’s a tough case, best explored on its own.)


Lamm also maintains that some pseudoscientific “treatments” may be cheap and effective (again, via placebo) and therefore preferable whenever there are no established medical options for the condition. But we have seen above that at a very minimum modern medicine can provide palliatives, and let’s remember that many alternative medical practices are anything but cheap.


The author acknowledges a big potential downside of his approach: “if the pseudoscience providing the placebo effect causes patients to ignore science backed treatment we have to consider the tradeoff to decide whether belief in the efficacy of pseudoscience is beneficial or harmful.” Indeed, and this may often be the case. Not to mention that so long as the patient is also seeking regular medical treatment then, again, there is no additional benefit of the pseudo-medical one, which will cost money not just to the patient, but also, increasingly (and unfortunately) to the health insurance or government-backed apparatus that will pay for the so-called treatment.


Finally, Lamm argues: “It may be mistakenly suggested that the argument presented here is a justification for fraud (assuming the pseudoscience in question involves fraud). However, if anything, the argument discussed here only applies to exposing fraud, which in some cases may do more harm than good.” I don’t see how this would work: if a medical practitioner knows that a given remedy is pseudoscientific and recommends it nonetheless, he is participating in a fraud, even in cases in which he is not directly pocketing money for it (someone else is, and besides, he is being paid to take care of the patient). So, refraining from exposing fraud is, it seems to me, condoning and abetting fraud no matter how one puts it. This may be a case of a philosophical distinction without a difference.


Again, I recognize that Lamm’s paper is important because too often people like myself, who have pursued a career of exposing pseudoscience and defending science, make leaps of reasoning from the factual to the ethical, automatically assuming certain value judgments without further argument. But it seems to me that, once the arguments have been examined, there is still a strong preponderance of reasons to expose pseudo-medicine whenever possible.

Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on par with scientific truths, despite much nonsense that has been written about it in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths don’t follow either a correspondence or a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:


“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do) then you are left with the task of explaining how come so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his University of Canberra colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call “truth-apt” (i.e., it looks like it’s referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seems to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” The philosopher of religion Alvin Plantinga actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser’s paper. They consider evolutionary error theory within a broader framework, the continuum between “folk” (i.e., everyday) and scientific understanding of things. Let’s clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn’t doing any such thing, and in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn’t matter whether it is the sun that rises or sets, or the earth that rotates around its axis.


This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and it is morally wrong if it doesn’t.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to precisely articulate and usually expand what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those notions as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong things, and of reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the third of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.


This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically. For instance that individuals flourish if they are accorded some rights, like the right to health care, or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, and so reasoning built upon them is valid in some cases and invalid in others.


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot correctly, in my mind, argued, then the structure of that reasoning is a matter of logic (coherence) while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.
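 

To make that structure fully explicit, here is a minimal sketch in propositional form (the rendering and the letters G and W are my own shorthand, not Foot’s notation). Let G stand for “we wish to maximize every agent’s ability to flourish” and W for “the oppression of groups or individuals is immoral”:

\[
\text{(1)}\; G \qquad \text{(2)}\; G \rightarrow W \qquad \therefore\; W \quad \text{(by modus ponens)}
\]

The coherence part is the validity of the inference itself, which is nothing more than modus ponens; the correspondence part is the truth of premise (2), which rests on empirical facts about what oppression does to human flourishing. If human beings were different kinds of creatures, (2) might turn out false, but the logic would be untouched.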


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical imperatives. They wish for universal, mind-independent moral truths. Too bad, there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It’s not going to happen, because it isn’t a good concept of morality for humanity. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults, like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funny enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval within my tribe). If I voice the opinion that while the Israelis are guilty of unlawful occupation of Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobe pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless someone has actually experienced something (be it colonialism, gender discrimination, racism, and so forth), one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as, “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self-interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, one may as well dismiss reason altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, or vaccination‐deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US), and certain increasingly vocal groups of the left, have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to either economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to commit the Marxist mistake, except that in that case everything reduces to economic struggle. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it, to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it’s important to try, as huge amounts of financial resources and time are wasted as a result. Occasionally, lives are lost as a direct outcome of logophobia, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti-medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or “fascistic.”

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavourable odds against defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And beware of not becoming one yourself…)

The morality-as-cooperation hypothesis and the link between evolution and moral philosophy

“Morality” comes from the Latin moralis, which was Cicero’s translation for the Greek ethos. The Greek word is related to our idea of character, the Latin one has to do with habits and customs. In an important sense, then, morality (or ethics, used here interchangeably) is the study of how to conduct your life, and particularly how to constructively deal with other people.

Recently, a group of researchers headed by Oliver Scott Curry has put out a paper (as yet unpublished) entitled “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” which is both interesting and more than a bit irritating. (Here is the site with the full version, and here is a commentary by the lead author, put out by the Evolution Institute.) It is interesting because it provides tantalizing empirical evidence to bolster the case, made by several researchers for a long time now, that the bio-cultural origins of a moral sense lie in the need for a species like ours to foster pro-social behavior, or as Curry et al. put it, “cooperation.” Frustrating because it fails to make the crucial conceptual distinction between the origins of morality and its current function, as well as to recognize the equally important difference between descriptive and prescriptive approaches to ethics. I’ll do my best to highlight both the positives and the negatives in what follows.

First off, a recap of the study as presented by the authors themselves:

“What is morality? And to what extent does it vary around the world? The theory of ‘morality-as-cooperation’ argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and predicts that specific forms of cooperative behaviour — including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession — will be considered morally good wherever they arise, in all cultures. In order to test these predictions, we investigate the moral valence of these seven cooperative behaviours in the ethnographic records of 60 societies. We find that the moral valence of these behaviours is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviours are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.”

Curry’s commentary begins with more than a whiff of scientism: “What is morality? And are there any universal moral values? Scholars have debated these questions for millennia. But now, thanks to science, we have the answers.” Ah! Yes, thanks to science we have the answers! Except that those two questions are quite distinct, of course. The nature of morality has been debated by Socrates (in the as yet unsurpassed Euthyphro) and is of course the topic of the entire field of meta-ethics. Whether there are universal moral values is a rather ambiguous question: do we mean whether there ought to be (prescriptive) or whether there merely happen to be (descriptive)? Philosophers tend to be concerned with the first sense (e.g., Kant), and are more than happy to leave the second one to anthropologists (it is, after all, an empirical question!).

Curry suggests that addressing problems of social behavior is something that has kept natural selection busy, so to speak, for millions of years during the evolution that led to the human lineage, and that moreover, once cultural evolution got started people have added a panoply of norms, rules, and institutions to deal with such problems. So far so good.

The approach used by Curry and his collaborators revolves around seven principles imported from evolutionary biology: “Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains: why we engage in costly displays of prowess such as bravery and generosity; why we defer to our superiors; why we divide disputed resources fairly; and why we recognize prior possession.”

Setting aside a few caveats (like the fact that not all societies actually have a taboo against incest), this theoretical apparatus does, in fact, go a long way toward explaining the origin of the behaviors mentioned. That led the researchers to predict the universality of the following seven moral rules: “love your family, help your group, return favors, be brave, defer to authority, be fair, and respect others’ property.” And their results do show that these rules are present across cultures, though different societies vary in the way they rank or prioritize the rules. While their data do not explain the reasons for this variation, they also found no case where any of the rules was considered bad.

The overarching conclusion: “so there is a common core of universal moral principles. Morality is always and everywhere a cooperative phenomenon.”

Whoa, slow down a second here. First off, perhaps the seven principles in question are common across modern cultures, but that does not ipso facto mean that they always were, nor that they evolved by natural selection, though that is a plausible scenario. Take, for instance, “be fair.” Are we really to understand that “fairness” was a moral precept in the highly unequal ancient Egyptian, or Roman (or pretty much all other) societies? And what do we mean by “fair,” anyway? Fairness may be the same as equality, or not, depending on one’s views, so we have to have a discussion about what it means and how the concept is interpreted in various societies.

“Be brave” is another highly ambiguous idea, and again it is hard to imagine that it means the same thing in, say, modern-day Japan, with its penchant for anti-individualism, and ancient Sparta, which was characterized by a cult of individual bravery. And of course there are many different ways of being brave (or cowardly).

Second, jumping to the conclusion that morality is “always and everywhere” about cooperation seems entirely unwarranted. We are not ants or bees: each of us has autonomous projects that we intend to pursue and that have a large impact on the degree to which we flourish as individuals. Yes, human beings are essentially social animals, and we flourish within a given social framework, but I would argue that morality isn’t about cooperation, but rather about how to balance self-centered vs. pro-social needs. The two aspects — myself as an autonomous individual and myself as a member of a group — are in constant tension, and it is that tension that morality addresses, not just the group aspect of it. Cooperation is just one means to the end of survival, reproduction, and flourishing.

And by the way, natural selection doesn’t give a damn about the latter, but we most certainly do. Which is why a lot of our norms, rules, and institutions don’t address mere survival and reproduction, but rather issues such as justice, fairness, and access to resources. That is, they are concerned with flourishing.

Third, one can perfectly coherently object to some of those rules even being on a list of moral dicta. Let’s consider three of them a bit more closely: help your group, defer to authority, and respect others’ property.

At least since the ancient Cynics and Stoics introduced the concept of cosmopolitanism, it has been highly morally questionable to “help your group” if this is taken to mean regardless of circumstances, or at the expense of other groups that have an equal claim to resources. Today, it is not acceptable to engage in wars of conquest, for instance, in order to make our group thrive (the fact that it happens is irrelevant to the moral point). It isn’t even acceptable to exploit other groups economically, or by imposing on them environmental damage for our own convenience.

As for deferring to authority, nope, that’s a non-starter. It may be a common cross-cultural practice, but it’s a bad idea, and it is in fact highly immoral if one does so blindly, following the orders of whoever is in charge. That way lies the road to tyranny, which is not morally acceptable, in either ancient or modern societies.

But surely we should respect other people’s property. Well, it depends. If it is acquired unethically, even if legally, no, I don’t think there is any such moral requirement. If your wealth is both disproportionate and arrived at by exploiting others (and let’s be frank, if it is the former, it can hardly not be the latter), then it is just and fair to pass laws to relieve you of much of that burden, through proportional taxation, for instance. My libertarian friends can scream loudly all they want, but that doesn’t make their claims to having a “natural right” to property one bit less nonsense on stilts (in the immortal phrase by Jeremy Bentham).

So, by all means let us inquire into the likely biological and cultural evolution of morality. It is a fascinating topic, somewhat amenable to empirical investigation. But let’s not confuse the descriptive with the normative aspects of the question, just like we wouldn’t for a second confuse, say, an account of the origin of mathematical thinking with the delivery of a sound proof of Fermat’s last theorem. Any time Curry et al. tell me that rule X is universal it is still perfectly meaningful for me to reply, yes, but is it moral?

Michael Shermer on moral philosophy, second round

Skeptic Michael Shermer recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” I found it a confused piece on moral philosophy, and since I agree with my friend Spider-Man that with great power comes great responsibility, I was troubled by Michael’s failure toward the broad public of that august magazine. So I wrote a rather snarky response. Shermer has in turn keyboarded a reply to me, entitled “Moral philosophy and its discontents,” which he published on his own Skeptic online. This is my counter-response, and it will be the last thing I will say on the topic for a while, as my experience is that going beyond two rounds in these cases quickly leads to diminishing returns. Of course, Michael is free to write a third piece, if he wishes.

To begin with, I’m going to tone down the rhetoric and focus on the substance, first because it is the right thing to do, and second because otherwise we get into an escalation of hyperboles that doesn’t really help thoughtful discourse (in his second article, for instance, Shermer says that I have become “nearly apoplectic” at his suggestion that witch hunting and the Holocaust were the result of utilitarianism. I assure you, it was a slight exaggeration.). I’ve been guilty of this even in the recent past (mea culpa), so let’s see if I can manage to do better.

I am not the only professional philosopher who has strongly criticized Michael for his original SciAm article. Another good example is Justin Weinberg, of the University of South Carolina (and editor of the Daily Nous), who, among other things, tweeted: “Disappointing that @sciam is contributing to our era’s ever-frequent disrespect of expertise by publishing this ill-informed & confused @michaelshermer column on moral philosophy.” It is not a simplistic argument from authority to point out that when professionals in a field unequivocally say you got things wrong it is wise to seriously consider that you might, indeed, have done so.

For his part, Shermer chides me for not having read a paper by G. Kahane et al. entitled “Beyond sacrificial harm: A two-dimensional model of utilitarian psychology,” published recently in Psychological Review. Well, I am guilty of being honest about when I have or have not read something. Still, my post was not a critique of Kahane et al.’s paper, but of Michael’s commentary (which, despite his protestations to the contrary, touches only marginally on the paper in question). I have now read Kahane et al., and I still think Shermer is wrong. More on this, of course, in a moment.

In my critique, I said that Michael has taken a very simplistic view of utilitarianism (a philosophy, incidentally, that I do not endorse). He responded that one of the examples I labelled as simplistic comes straight out of the Kahane et al. paper. The example in question is meant to measure one’s utilitarian tendencies, and it is formulated as a question: “Would you politically oppress a people for a limited time if it increased the overall well-being of the citizenry?” It does indeed come from the paper, but that’s a negative reflection on the paper, not on my point. No serious utilitarian after J.S. Mill would answer yes to that sort of question, so it is hard to say in what sense it would be helpful in measuring one’s utilitarian tendencies.

In response to an admittedly sarcastic comment I made, Shermer states that he knows the difference between act and rule utilitarianism, and moreover that he is not naive about moral philosophy, since he has taken two undergraduate courses on the subject (one in general philosophy, the other one in ethics). He has also read a lot of books by Dan Dennett (not a moral philosopher), and gone through several of the Teaching Company’s Great Courses in philosophy. After all of which, he felt competent enough to write two books on the subject (The Science of Good and Evil and The Moral Arc), and to teach an undergraduate course at Chapman University. I will leave it to the reader to decide whether Michael’s background is sufficient to invalidate my original observation, but I will note that bringing in the difference between act and rule utilitarianism would have cleared up a lot of the muddled points in the SciAm article. He didn’t do it.

In my response, I stated unequivocally that Shermer is wrong when he says that trolley problems are an example of utilitarian thinking. And I stand by that statement, see my previous post for relevant links. Here too, Michael’s defense is “Kahane et al. did it,” which of course at best just means that Kahane et al. might have gotten something wrong, and that Shermer failed to correct it. But in fact they did not get it wrong. They say the following, for instance: “researchers have tried to uncover the psychological and even neural underpinnings of the dispute between utilitarians and their opponents — such as defenders of deontological, rights-based views of the kind associated with Immanuel Kant.” Exactly, so trolley dilemmas are used in cognitive science to explore both utilitarian and deontological thinking, and are therefore not an example of the former. Moreover, trolley dilemmas were introduced by moral philosopher Philippa Foot to highlight the limitations of both utilitarian and deontological thinking (in favor of the third way, virtue ethics), and they are still usually discussed in that context in intro philosophy courses. So, yes, Michael is still wrong here.

It gets worse. Shermer writes: “one might argue that trolley dilemmas represent only one form of utilitarianism (sacrificial) … but it is inaccurate to simply assert that trolley problems have nothing to do with utilitarianism.” To begin with, I never claimed that trolley dilemmas have “nothing to do” with utilitarianism. Never. Second, there is no such thing as sacrificial utilitarianism. Look it up, it’s just not a term in moral philosophy. What Michael means is utilitarian thinking applied to sacrificial problems. Not the same thing.

We now get to the part that nearly caused me an apoplectic attack, allegedly, when Shermer stated (in the first article) that witch hunts and genocides like the Holocaust or the one in Rwanda were caused by utilitarian thinking. In his response, Michael quotes himself from The Moral Arc: “It is evident that most of what we think of as our medieval ancestors’ barbaric practices were based on mistaken beliefs about how the laws of nature actually operate. If you — and everyone around you including ecclesiastical and political authorities — truly believe that witches cause disease, crop failures, sickness, catastrophes, and accidents, then it is not only a rational act to burn witches, it is a moral duty. … Medieval witch-burners torched women primarily out of a utilitarian calculus — better to kill the few to save the many. Other motives were present as well, of course, including scapegoating, the settling of personal scores, revenge against enemies, property confiscation, the elimination of marginalized and powerless people, and misogyny and gender politics. But these were secondary incentives grafted on to a system already in place that was based on a faulty understanding of causality.”

Two points here. First off, Shermer is in full speculative mode. We simply have no idea how to interpret and weigh the various motives of medieval witch-burners. All factors listed by Michael (and probably more) may have played a role, but anyone who assuredly claims that “this” was the major cause while the others were secondary is pulling one out of thin air. There simply is little empirical evidence bearing on this sort of claim. Second, what I begin to suspect is going on here is a fallacy of equivocation (which will reappear below, when we get to the issue of natural rights). Shermer knows very well that medieval witch-burners could not possibly have deployed Bentham’s or Mill’s philosophy, which had yet to be invented, so he uses the word “utilitarian” in a vaguer, broader sense, which then allows him to implicate the philosophy. Nice try, but this is sophistry, not good reasoning. (I said I wasn’t going to get snarky, not that I wouldn’t be critical.)

Indeed, Michael seems aware of this: “here let me clarify to anyone who thinks I can’t even get my centuries straight that I’m not arguing Torquemada sat down with Pope Sixtus IV to compute the greater good sacrifice of 10,000 Jews in order to save 50,000 Catholics; instead I am aiming to understand the underlying psychological forces behind witch hunts and genocides.” Except you cannot possibly have empirically substantive evidence of the psychological forces underlying the thinking and acting of Torquemada and Sixtus IV, so why engage in this sort of psycho-historical speculation? It is just as likely, possibly even more so, that Sixtus IV would have killed ten times more Jews in order to save ten times fewer Christians, since Jews and Christians, for him, were simply not comparable in moral value. Good skepticism is about empirical evidence, so why don’t we stick to that?

Shermer continues with another lengthy citation from The Moral Arc: “As in the limbic system with it’s neural networks for emotions, approach-avoidance moral conflicts have neural circuitry called the behavioral activation system (BAS) and the behavioral inhibition system (BIS) that drive an organism forward or back, as in the case of the rat vacillating between approaching and avoiding the goal region. … These activation and inhibition systems can be measured in experimental settings in which subjects are presented with different scenarios in which they then offer their moral judgment (giving money to a homeless person as prescriptive vs. wearing a sexually suggestive dress to a funeral as proscriptive).”

This is very nice, interesting, even, but utterly irrelevant. Of course animal and human thoughts and actions have specific neural underpinnings. How else would we think or act? But, quite obviously, different people balance the outputs of their BAS and BIS differently, and they end up thinking and acting differently. Some of these differences (though certainly not all of them) may be the result of philosophical reflection on why one should act one way rather than another. And this discussion is about moral philosophy, not neuroscience. As I pointed out in my original review of Sam Harris’ The Moral Landscape (which appeared, partly censored, in e-Skeptic), we may as well discuss the validity of a proof of Fermat’s Last Theorem by doing an fMRI scan of the brain of a mathematician. Interesting, no doubt. But also entirely unhelpful to the question at hand.

Michael briefly touches on my criticism of his treatment of Kant, where I brought up a technical paper by Helga Varden on the famous problem of lying to the Nazi. He says: “There is much more to her analysis of Kant, but it seems to me that in this example lying to Nazis is both a utilitarian/consequentialist decision because it would result in the death of an innocent, and a rule/rights decision that qualifies why we should care about the innocent in the first place: because, say, Kant’s rule about never treating people as an ends to a mean but as an ends in and of themselves, or that all people have a right to their own life.” This is very muddled. First off, lying to the Nazi would save the life of an innocent, not result in his death. Second, one can make that decision within a utilitarian framework, but also within a deontological one (Kant’s deontological system is not the only one on the market, so to speak). Third, Kant did not say that we should not treat other people as a means to an end (not the other way around!), he said we should never treat other people solely as means to an end. Without that qualification, we couldn’t go to a restaurant and be served by a waiter, because that would be using a human being as a means (waiter) to an end (getting my meal). The additional “solely” says that we can do that, but always while keeping in mind that we are interacting with a human being, not a machine. Fourth, nobody believes that all people have an unqualified right to their life. If instead of the Jew my basement hid a mass murderer (not because I’m protecting him, but under threat to my life) then I would certainly give him away to the authorities, even if that should result in his death. That’s the thing about good moral philosophy: it’s complicated, and requires precise nuanced thinking. And that is why we have professionals devoted to it.

Finally we come to the topic that most riled Shermer’s supporters on Twitter: natural rights. He ended his SciAm column with an endorsement of the concept, and I responded that on that topic I was with Jeremy Bentham (the founder of utilitarianism): the notion of natural rights is “nonsense on stilts.”

This is a complex topic, with a long history in philosophy, and despite Bentham’s flippant response, there have been serious defenders of it throughout the centuries. The notion of natural rights is related to, but is not the same as, the notion of natural law. Scholars trace the idea back to Plato and Aristotle, but it is far from clear that either one of them meant anything like the modern version deployed by thinkers from John Locke and Thomas Paine to Robert Nozick. I have a separate essay on natural law on my Stoic blog, since it is pertinent to that philosophy, but here let me simply reiterate my basic point: I don’t think there is any defensible notion of natural rights (as distinct from rights as human constructs) that is not hopelessly entangled with what I think are antiquated or indefensible notions of Platonism (as in Plato), teleology (as in Aristotle), or theology (as in Thomas Aquinas). You may disagree, of course, but then you owe us a defense of Platonism, teleology, or theology.

One counter to my criticism that I have run across a lot on Twitter during my debate with Michael over the past few days is that human beings are part of nature, so obviously there are natural rights. QED. Well, no. This, again, trades on an equivocation. Yes, as a biologist I certainly agree that humanity is just as natural as anything else in the universe. But in the context of this sort of discussion the distinction has always (since Aristotle!) been very clear: natural rights refer to something that is mind-independent but can be discovered by human beings capable of reason; so-called positive rights, by contrast, are the result of human agreements. True, in a sense positive rights are also “natural,” but it just doesn’t help to talk that way; it muddles a perfectly clear distinction.

This, incidentally, is an area where there is more agreement between Shermer and myself than may at first glance appear. More, but nowhere near total. Before I get to his rebuttal, let me state briefly what my position is. “Rights” are a human construct, the result of agreeing among ourselves, on the basis of moral and political considerations, that certain things qualify as rights and others don’t. The best sort of evidence that rights are of this kind is the complete disagreement among supporters of natural rights on the number and content of these alleged rights. We go from just one right (Richard Cumberland: benevolence toward all rational creatures), to three rights (Locke: life, liberty, and property — this is the one Michael wants, qua libertarian), to a whopping 19 natural laws from which one can derive corresponding rights (Hobbes, in chapters 14 and 15 of Leviathan).

That said, I do agree that rights are not entirely arbitrary, as they are linked to human nature, just like all moral philosophy is (Skye Cleary and I have recently argued this in Aeon). This puts me somewhere in the middle between moral anti-realists, who think that there is no such thing as a moral truth, and moral realists, who think that there is. I am a moral quasi-realist, meaning that for me morality is an evolving set of ideas that strives to regulate social interactions in order to allow people to flourish qua members of a social group. The reason I don’t think — contra both Shermer and Harris — that science can give us answers to moral questions is because I think facts about human nature under-determine moral systems. That is, given human nature as it is, there are several different, possibly incompatible, ways to develop moral codes. The choice among moral philosophies, then, is informed by facts about human nature, but not determined by it. To ask whether, say, utilitarianism or deontology or virtue ethics are “true” is to commit a category mistake. These are frameworks to think about social life. They may be more or less useful and more or less coherent, but not true or false (and hence not falsifiable or verifiable scientifically).

Okay, now back to the last chunk of Michael’s response. He thinks I contradict myself when I say that we all prefer to be alive rather than dead. I don’t see how that follows. Mine is just a statement of a natural desire. One has to do philosophical work to go from there to a right, especially a right that is somehow inalienable. (I also desire gelato, but that does not imply that I have a right to it.) I do think the is/ought gap can be filled, but not by simply stating that what is natural is ipso facto good. That, as Shermer knows, is yet another informal fallacy, the appeal to nature. And it is easily countered by endless examples (aggression and war are natural for human beings, it doesn’t follow that aggression and war are good).

Shermer takes a lot of liberties with evolutionary biology (another field in which I honestly question his qualifications): “Any organism subject to natural selection — which includes all organisms on this planet and most likely on any other planet as well — will by necessity have this drive to survive and flourish.” No, there is no natural selective imperative to flourish, especially if flourishing entails things like owning private property. Indeed, strictly speaking there is no natural imperative to survive either: survival is useful, from the standpoint of natural selection, only if it leads to reproduction. Sure enough, selection often favors short life spans, or rather nasty (i.e., non-flourishing) lives, so long as the reproductive imperative is satisfied. And, again, just because natural selection favors individuals who reproduce, it certainly doesn’t make not reproducing immoral, does it? One of the few times I agreed with Steven Pinker (often quoted by Shermer) is when he wrote, I believe in The Language Instinct, that he made a decision early on in his life not to have children, but to devote his life to research, teaching, friends, and other good things. He commented (I quote from memory, since I no longer have a copy of that book): “and if my genes don’t like it, they can go and jump into the lake.” Indeed.

So when Michael says “I argue, the survival and flourishing of sentient beings is my moral starting point, and it is grounded in principles that are themselves based on nature’s laws and on human nature — principles that can be tested in both the laboratory and in the real world,” he is confusing different things, or at the very least drawing a direct connection between (certain aspects of) human nature and morality. This can’t be done, one needs empirically informed philosophical work to bridge the is/ought gap, not just brute facts.

He says other things that are clearly incorrect from a biological standpoint, like “The singular and separate organism is to biology and society what the atom is to physics — a fundamental unit of nature.” No, as plenty of biological organisms are colonial (corals, some jellyfish), or their individuality is temporary (when it’s time to reproduce, as in slime molds), or don’t have clear boundaries at all (several species of trees and mushrooms), or are a complex ensemble of multiple organisms that only appear to be one (human beings, see the concept of holobionts).

Shermer approvingly quotes Pinker: “Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.” Yes, as the Stoics had already figured out 23 centuries ago, we are born with a rudimentary sense of pro-social behavior, which we share with other primates. And yes, morality is the result of building on that innate sense by way of reasoning and language (a notion that the Stoics elaborated into their theory of moral development). But we are not forced to one specific set of conclusions, again because there is a relationship of under-determination between facts about human nature and moral frameworks.

Michael counts himself and Pinker as moral realists, and thinks he slam dunks the case with the following rhetorical question: “Is there anyone (other than slave holders and Nazis) who would argue that slavery and the Holocaust are not really wrong, absolutely wrong, objectively wrong, naturally wrong?” Well, first of all, history is full of slave holders. People before very recent times thought that slavery was natural and just. Indeed, they derived this conclusion from their understanding of human nature, or the divine ordering of things, or whatever. More importantly, Shermer makes a fundamental mistake here: confusing objectivity with absolute truth.

Consider a simple example. Given the (entirely arbitrary) rules of the game of chess, a number of things about the game are objectively true. Heck, one can even demonstrate mathematical theorems about chess. But these truths are not “universal,” they are contingent on the specific set of rules that constitute the game. They don’t exist “out there,” in any kind of mind independent fashion. And they are, therefore, not inalienable. One can deny them by simply refusing to play the game, or by inventing a different game with even slightly different rules.

Yes, I do believe that slavery is wrong, given my understanding of human nature, which prioritizes individual flourishing and the application of reason to the improvement of pro-sociality. But there are other understandings from which my conclusions do not follow. So I have to argue the point from within whatever moral framework I have chosen (in my case, virtue ethics), I cannot simply and straightforwardly derive it from empirical observations about human behavior. If only it were that simple.

Michael Shermer on utilitarianism, deontology, and “natural rights”

You may have noticed that I don’t opine on quantum mechanics. Or jazz. The reason for this is that — although I’m very interested in both topics — I just don’t know enough about them. Not enough to be able to offer an informed opinion, at any rate. So I sit back, read what other, more knowledgeable people have to say about quantum mechanics and jazz, form my own second-hand opinion, and try to avoid embarrassing myself by pontificating in public.

Apparently, my friend Michael Shermer does not follow the same philosophy. At least, not when it comes to the field of moral philosophy. He has recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” which starts out simple (simplistic, really) enough, and ends in a crescendo of nonsense. Let’s take a look.

After asking whether you would politically oppress a people for a limited time, if it increased the overall well-being of the citizenry, Michael explains that that’s utilitarianism, the philosophy started by Jeremy Bentham back at the beginning of the 19th century, often summarized with the slogan “the greatest good for the greatest number.” (Bentham, incidentally, is currently visiting New York, go see him, if you have a chance.)

Well, that is one of many versions of utilitarianism, and it was immediately abandoned by none other than John Stuart Mill, Bentham’s student, who actually wrote the classic 1861 text entitled Utilitarianism. Indeed, before that Mill wrote two important articles, “Remarks on Bentham’s Philosophy” (1833) and “Bentham” (1838), in which he criticized his mentor and began to develop modern utilitarian thought. One of the major distinctions one can still draw within utilitarianism today is that between so-called act utilitarianism (where we must evaluate the morality of each act, a la Bentham) and rule utilitarianism (where we conform to rules that have been shown overall to bring about the greatest amount of good, a la Mill). More generally, utilitarianism has a long history, and nowadays it is actually best thought of as a particular type of consequentialist philosophy. I could be wrong, but Shermer seems unaware of these distinctions.

Michael then tells his readers that “modern utilitarianism” is best instantiated by the (in)famous trolley problems. This is just flat out wrong. The original dilemma was introduced by Philippa Foot back in 1967. Here is the first version:

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.”

Contra Shermer, the trolley dilemma was proposed, and it continues to be used (not only in philosophy, but in social psychology), in order to probe people’s moral intuitions, not to “instantiate” utilitarianism. For instance, a deontologist would refuse to frame an innocent or switch the lever, on the basis of the Kantian notion that one ought never to treat others solely as means to an end. The fact that many people switch from utilitarian to deontological responses when considering different versions of the dilemma tells us that they tend to react emotionally, which leads them to deploy an incoherent moral philosophy.

Michael then says that “the problem” with utilitarianism is that there are situations in which following its precepts one would end up endorsing psychopathic behaviors, as in the famous case (which I pose to my intro philosophy students) of the surgeon who has five patients in the emergency room, each with a failing vital organ, and decides to pick up a stranger from the street, cut him up into pieces, and distribute his organs around to save the other five. Too bad that this sort of thing is precisely why Mill (remember, already in 1833) introduced rule utilitarianism, which stops the psychopathic doctor in his tracks. Again, no mention of this in the SciAm article.

Shermer briefly mentions a recent paper in Psychological Review (which I have not read, so I will not comment on it), mostly to tell us that he took the Oxford Utilitarianism Scale test and scored 17/63. He ain’t no utilitarian, according to the test. Neither am I, apparently (phew!), since I scored 21/63. You can do the test yourself, here.

After a brief mention of Kantian deontology, the article really veers from simplistic to nonsensical: “Historically the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures and accidents — better to incinerate the few to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews: “Aryan” Germans; Tutsi:Hutu), the justification of genocidal murderers.”

What?? No, absolutely not. Setting aside the obvious observation that utilitarianism (the philosophy) did not exist until way after the Middle Ages, no, witch hunts were the result of fear, ignorance and superstition, not of a Bentham- or Mill-style calculus. And this is the first time I heard that Hitler or the Hutu of Rwanda had articulated a utilitarian rationale for their ghastly actions. Again, they were driven by fear, ignorance, superstition, and — in the case of Nazi Germany — a cynical calculation that power could be achieved and maintained in a nation marred by economic chaos by means of the time-tested stratagem of scapegoating. (The latter is also what perpetrators of witch hunting and the Rwandan genocide did: prey on the weak, it’s easy to do and get away with it.)

But Shermer doesn’t let Kant off the hook either. He brings up the famous example (which, again, I confront my intro philosophy students with) of lying: if it is the case — as Kant says in one formulation of the categorical imperative — that we should only accept as moral those principles that we would be willing to make into universal rules, wouldn’t that mean that I should never lie and give up the Jew I’m hiding in the basement if a Nazi officer (it’s always the Nazi!) politely asks me? Or, as Michael updates the scenario: “if you live in Syria and a band of ISIS thugs knocks on your door demanding to know if you are hiding any homosexuals they can murder in the mistaken belief that this fulfills the word of God — and you are — few moralists would object to your lying to save them.”

Notice the pejorative term “moralists,” instead of moral philosophers. Anyway, you would think Kantian philosophers would have something to say about this. Oh, right, they do! A good example is a paper by Helga Varden in the Journal of Social Philosophy, entirely devoted to Kant, lying and the Nazi officer. I do not have the time here to do justice to her analysis, but a couple of points need to be brought to bear: first, in that case Kant was writing explicitly within the context of a discussion of the doctrine of rightful interactions (the original, short paper in which he tackles the case is entitled “On a supposed right to lie from philanthropy”). As Varden says, within that context, “we can make sense of why lying to the murderer, although a wrong, is not to wrong the murderer, why we become responsible for the bad consequences of the lie, and finally why lying is to do wrong in general.”

More to the point, Kant was talking about a murderer (he, obviously, couldn’t have contemplated the Nazi), but when one changes the scenario to a Nazi officer — or an ISIS terrorist — it turns out that the problem dissolves itself, because “the only time doing wrong in general by lying is legally punishable [within Kant’s framework] is when we lie to or as a representative of the public authority. The Nazis, however, did not represent a public authority on Kant’s view and consequently there is no duty to abstain from lying to Nazis.” Or to ISIS. Again, I didn’t notice any of these qualifications in Shermer’s article.

Michael, predictably, makes no mention at all of the third great framework in moral philosophy, virtue ethics, which would actually do a lot of the work he wants to do, against both utilitarianism and deontology — in their philosophically sophisticated versions, not the caricature we get in the SciAm article.

But never mind that. The true nonsense comes right at the end, when Shermer puts forth his preferred view, the one that, in his mind, has allowed for true moral progress throughout the ages: “both utilitarianism and Kantian ethics are trumped by natural-rights theory, which dictates that you are born with the right to life and liberty of both body and mind, rights that must not be violated, not even to serve the greater good or to fulfill a universal rule.”

Setting aside that you get precisely the same result from Mill’s rule utilitarianism, not to mention that natural rights theory has no argument against Kant, “natural rights” are what Jeremy Bentham famously, and correctly, referred to as “nonsense on stilts.” There is no such thing as a natural right, and we, therefore, are not born with them (contra the mindless libertarian mantra that Shermer is repeating). Michael is confusing human desires and instincts — some of which are actually culturally dependent (it is empirically not the case that everyone on earth desires liberty of mind, for instance) — with rights. But rights are, obviously, a human creation. Which accounts for why, as Shermer himself notes, they have to be written down in things like the Bill of Rights, and protected by the force of state-enabled law. It’s also why people have come up with different lists of rights at different times. The Universal Declaration of Human Rights, for instance, provides a much more extensive list than the one arrived at by James Madison and co. back in 1789.

To argue that rights are “natural” is to commit the most elementary logical fallacy in ethics, that of the appeal to nature. And even if one were to overlook that little problem, there simply is no consistent empirical evidence for most such alleged rights (i.e., desires, instincts) in Homo sapiens or its recent ancestors. Yeah, we all prefer to be alive rather than dead, other things being equal, but natural selection does not care about mere survival; it only favors survival that leads to reproduction. And it favors it, it doesn’t guarantee it. (So you can’t derive a natural right to sex. Too bad!)

This is the sort of mess one gets when Michael talks about moral philosophy. Or when I talk about quantum mechanics. Or jazz. Please, let us all stick to what we know. It’s hard enough as it is.

Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. de Kenessey blames Socrates for the moralistic attitude, while he should blame certain brands of Christianity instead. Here I will not make the positive case against Christian moralism (which is well known among certain politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but in general the phenomenon of what the Greeks called akrasia, or weakness of the will, and which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving down the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = contrary to common opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists, beginning with Paul Samuelson, have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information, and they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which, although usually translated as ignorance, actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia I found myself to be significantly more prone to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and to generally cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from a lack of wisdom but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual case of what Socrates is arguing against Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home to watch Black Mirror rather than going to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in such judgment, just like Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine his judgment was not impaired by being sick, yet. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

de Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt, the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson, they are the desires that align with our judgments of what is valuable; for Michael Bratman, they are the desires that cohere with our stable life plans; and for Susan Wolf, they are the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (so did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and Socratic view pertains to the first, not the second case. And there one has a choice of blaming people for their evildoing, or pitying them for their lack of wisdom. I find the latter course of action far preferable.

Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a Professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself, no point for me to repeat here easily found material. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (but not only: governments are a little better to the extent that they really are accountable to their people by means of a not too dysfunctional democracy. The US Government does not, at this point, qualify, for instance). But I also recognize that these and other tech companies have made possible a wonderful number of things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is by legislation, along the model of the famous breakup of Bell back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near-monopolies, and benefiting from one’s work does not equate to creating a very small class of ultra-billionaires who exploit their workers (like Amazon certainly does), including in other countries (like Apple equally certainly does).

But while we are waiting for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), it doesn’t mean we have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft because of the poor quality of their products, back in 2004. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high-quality e-book readers, and to begin directing readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.
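For readers curious about what “getting it back” might look like in practice, here is a minimal sketch of the conversion step only. It assumes the books have already been decrypted with whatever tool one prefers (that part is deliberately not shown), and that Calibre’s ebook-convert command-line utility is installed; the folder names are hypothetical placeholders, not anything from my actual setup.

```python
# A minimal sketch, not a full solution: it assumes the Kindle files have
# already been decrypted by a separate tool (not shown), and that Calibre's
# ebook-convert command-line utility is installed and on the PATH.
import subprocess
from pathlib import Path

SOURCE = Path("~/kindle-backups").expanduser()   # hypothetical folder of decrypted .azw3/.mobi files
TARGET = Path("~/epub-library").expanduser()     # hypothetical destination folder
TARGET.mkdir(parents=True, exist_ok=True)

for book in list(SOURCE.glob("*.azw3")) + list(SOURCE.glob("*.mobi")):
    epub = TARGET / (book.stem + ".epub")
    if epub.exists():
        continue  # already converted on a previous run
    # ebook-convert infers the output format from the output file's extension
    subprocess.run(["ebook-convert", str(book), str(epub)], check=True)
    print(f"Converted {book.name} -> {epub.name}")
```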

Which brings me to Google. In a sense, I actually increased my use of their products, since my office suite is now Google’s, replacing Apple’s iWork. But this is a temporary transition, dictated by the limited time I have had to search for suitable long-term alternatives and by the need not to disrupt several ongoing collaborative projects. At any rate, I hit Google where it hurts: their web search engine, which drives their advertising revenue and is, of course, highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, again unless I was willing to forgo keeping in touch with (real) friends and family, and to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was to get rid of their obnoxious Messenger app, as well as to “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I now use Facebook with a small and tightly guarded circle of actual friends and family, as well as for the above-mentioned outreach, and nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye-opening. The other eye-opening thing relates to the second point: the comparative lack of support, and the occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and immediately began to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious over the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter of technology, electronics especially. No, it isn’t really that expensive (as I’m sure everyone knows, the alternatives to Apple’s phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who share my objections and have done much of the legwork for me; and at any rate, this is an ethical decision, and it would be a bit too easy if it didn’t cost me any money, time, or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against divesting from Apple. It’s the social-activism equivalent of what in biology is called frequency-dependent selection: you go for the rare phenotype, which eventually becomes the dominant one, at which point you switch to the new rare one, and so on.
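To make the analogy a bit more concrete, here is a toy illustration of negative frequency-dependent selection, in which being rare confers a fitness advantage. It is only a sketch of the biological idea, not anything from Galloway’s book or part of my actual plan, and the numbers and parameter names are made up for the example.

```python
# Toy model of negative frequency-dependent selection: whichever of two types
# is rarer gets a fitness boost, so neither type can take over completely.
# The activism analogy: consistently favoring the smaller player keeps any
# single company from becoming overwhelmingly dominant.
def next_generation(p, strength=0.5):
    """Update the frequency p of type A for one generation."""
    fitness_a = 1 + strength * (0.5 - p)   # advantage grows as A becomes rarer
    fitness_b = 1 + strength * (p - 0.5)   # and symmetrically for type B
    mean_fitness = p * fitness_a + (1 - p) * fitness_b
    return p * fitness_a / mean_fitness

p = 0.05  # start with type A very rare
for _ in range(30):
    p = next_generation(p)

# The frequency settles near 0.5: the rarity advantage prevents either type
# from monopolizing the population.
print(f"Frequency of the initially rare type after 30 generations: {p:.2f}")
```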

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success: as I said above, my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, or you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then take some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I talked about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next-to-last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self-interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well-researched ammunition, as well as the comfort of knowing that there are others who think and act the same way). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong, you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well-known cognitive bias.

Be that as it may. My conscience feels slightly better, in the same way and to the same degree that similar small decisions (not eating meat, trying to shop locally, voluntarily paying carbon offsets when I fly, changing banks because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook, or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented; it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail, or any of a number of alternative third-party clients

Apple’s office productivity suite (iWork) > Google’s office suite as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history); Google Chrome is, obviously, not an acceptable substitute

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of plain SMS or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google Maps navigation > this is a tough one (particularly since Apple Maps is neither acceptable for this project nor, frankly, very good), though Citymapper, for instance, works very well in major cities