Category Archives: Ethics

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults, like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funnily enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval, within my tribe). If I voice the opinion that while the Israelis are guilty of unlawful occupation of Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobe pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless one has actually experienced something (be it colonialism, gender discrimination, racism, and so forth) one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as, “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self‐interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, reason may be dismissed altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, and vaccine deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US) and certain increasingly vocal groups of the left have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to either economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to repeat the Marxist mistake, in which everything instead reduces to economic struggle. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it, to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it’s important to try, as huge amounts of financial resources and time are wasted as a result. Occasionally, lives are lost as a direct outcome of logophobia, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti‐medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or “fascistic.”

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavourable odds against defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And be careful not to become one yourself…)

The morality-as-cooperation hypothesis and the link between evolution and moral philosophy

“Morality” comes from the Latin moralis, which was Cicero’s translation for the Greek ethos. The Greek word is related to our idea of character, the Latin one has to do with habits and customs. In an important sense, then, morality (or ethics, used here interchangeably) is the study of how to conduct your life, and particularly how to constructively deal with other people.

Recently, a group of researchers headed by Oliver Scott Curry has put out a paper (as yet unpublished) entitled “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” which is both interesting and more than a bit irritating. (Here is the site with the full version, and here is a commentary by the lead author, put out by the Evolution Institute.) It is interesting because it provides tantalizing empirical evidence to bolster the case, made by several researchers for a long time now, that the bio-cultural origins of a moral sense lie in the need for a species like ours to foster pro-social behavior, or as Curry et al. put it, “cooperation.” Irritating because it fails to make the crucial conceptual distinction between the origins of morality and its current function, as well as to recognize the equally important difference between descriptive and prescriptive approaches to ethics. I’ll do my best to highlight both the positives and the negatives in what follows.

First off, a recap of the study as presented by the authors themselves:

“What is morality? And to what extent does it vary around the world? The theory of ‘morality-as-cooperation’ argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and predicts that specific forms of cooperative behaviour — including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession — will be considered morally good wherever they arise, in all cultures. In order to test these predictions, we investigate the moral valence of these seven cooperative behaviours in the ethnographic records of 60 societies. We find that the moral valence of these behaviours is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviours are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.”

Curry’s commentary begins with more than a whiff of scientism: “What is morality? And are there any universal moral values? Scholars have debated these questions for millennia. But now, thanks to science, we have the answers.” Ah! Yes, thanks to science we have the answers! Except that those two questions are quite distinct, of course. The nature of morality has been debated by Socrates (in the as yet unsurpassed Euthyphro) and is of course the topic of the entire field of meta-ethics. Whether there are universal moral values is a rather ambiguous question: do we mean whether there ought to be (prescriptive) or whether there merely happen to be (descriptive)? Philosophers tend to be concerned with the first sense (e.g., Kant), and are more than happy to leave the second one to anthropologists (it is, after all, an empirical question!).

Curry suggests that addressing problems of social behavior is something that has kept natural selection busy, so to speak, for millions of years during the evolution that led to the human lineage, and that moreover, once cultural evolution got started people have added a panoply of norms, rules, and institutions to deal with such problems. So far so good.

The approach used by Curry and his collaborators revolves around seven principles imported from evolutionary biology: “Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains: why we engage in costly displays of prowess such as bravery and generosity; why we defer to our superiors; why we divide disputed resources fairly; and why we recognize prior possession.”

Setting aside a few caveats (like the fact that not all societies actually have a taboo against incest), this theoretical apparatus does, in fact, go a long way toward explaining the origin of the behaviors mentioned. That brought the researchers to predict the universality of the following seven moral rules: “love your family, help your group, return favors, be brave, defer to authority, be fair, and respect others’ property.” And their results do show that these rules are present across cultures, though different societies vary in the way they rank or prioritize the rules. While their data do not explain the reasons for this variation, they also found no case where any of the rules was considered bad.

The overarching conclusion: “so there is a common core of universal moral principles. Morality is always and everywhere a cooperative phenomenon.”

Woah, slow down a second here. First off, perhaps the seven principles in question are common across modern cultures, but that does not ipso facto mean that they always were, nor that they evolved by natural selection, though that is a plausible scenario. Take, for instance, “be fair.” Are we really to understand that “fairness” was a moral precept in the highly unequal ancient Egyptian, or Roman (or pretty much all other) societies? And what do we mean by “fair,” anyway? Fairness may be the same as equality, or not, depending on one’s views, so we have to have a discussion about what it means and how the concept is interpreted in various societies.

“Be brave” is another highly ambiguous idea, and again it is hard to imagine that it means the same thing in, say, modern day Japan, with its penchant for anti-individualism, and ancient Sparta, which was characterized by a cult of individual bravery. And of course there are many different ways of being brave (or cowardly).

Second, jumping to the conclusion that morality is “always and everywhere” about cooperation seems entirely unwarranted. We are not ants or bees, each of us has autonomous projects that we intend to pursue and that have a large impact on the degree to which we flourish, as individuals. Yes, human beings are essentially social animals, and we flourish within a given social framework, but I would argue that morality isn’t about cooperation, but rather about how to balance self-centered vs pro-social needs. The two aspects — myself as an autonomous individual and myself as a member of a group — are in constant tension, and it is that tension that morality addresses, not just the group aspect of it. Cooperation is just one means to the end of survival, reproduction, and flourishing.

And by the way, natural selection doesn’t give a damn about the latter, but we most certainly do. Which is why a lot of our norms, rules, and institutions don’t address mere survival and reproduction, but rather issues such as justice, fairness, and access to resources. That is, they are concerned with flourishing.

Third, one can perfectly coherently object to some of those rules even being on a list of moral dicta. Let’s consider three of them a bit more closely: help your group, defer to authority, and respect others’ property.

At least since the ancient Cynics and Stoics introduced the concept of cosmopolitanism, it has actually been highly morally questionable to “help your group” if this is taken to mean regardless of circumstances, or at the expense of other groups that have an equal claim to resources. Today, it is not acceptable to engage in wars of conquest, for instance, in order to make our group thrive (the fact that it happens is irrelevant to the moral point). It isn’t even acceptable to exploit other groups economically, or by imposing on them environmental damages for our own convenience.

As for deferring to authority, nope, that’s a non-starter. It may be a common cross-cultural practice, but it’s a bad idea, and it is in fact highly immoral if one does so blindly, following the orders of whoever is in charge. That way lies the road to tyranny, which is not morally acceptable, in either ancient or modern societies.

But surely we should respect other people’s property. Well, it depends. If it is acquired unethically, even if legally, no, I don’t think there is any such moral requirement. If your wealth is both disproportionate and arrived at by exploiting others (and let’s be frank, if it is the former, it can hardly not be the latter), then it is just and fair to pass laws to relieve you of much of that burden, through proportional taxation, for instance. My libertarian friends can scream loudly all they want, but that doesn’t make their claims to having a “natural right” to property one bit less nonsense on stilts (in the immortal phrase by Jeremy Bentham).

So, by all means let us inquire into the likely biological and cultural evolution of morality. It is a fascinating topic, somewhat amenable to empirical investigation. But let’s not confuse the descriptive with the normative aspects of the question, just like we wouldn’t for a second confuse, say, an account of the origin of mathematical thinking with the delivery of a sound proof of Fermat’s last theorem. Any time Curry et al. tell me that rule X is universal it is still perfectly meaningful for me to reply, yes, but is it moral?

Michael Shermer on moral philosophy, second round

Skeptic Michael Shermer recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” I found it a confused piece on moral philosophy, and since I agree with my friend Spider-Man that with great power comes great responsibility, I was troubled by Michael’s failure of responsibility toward the broad public of that august magazine. So I wrote a rather snarky response. Shermer has in turn keyboarded a reply to me, entitled “Moral philosophy and its discontents,” which he published on his own Skeptic online. This is my counter-response, and it will be the last thing I will say on the topic for a while, as my experience is that going beyond two rounds in these cases quickly leads to diminishing returns. Of course, Michael is free to write a third piece, if he wishes.

To begin with, I’m going to tone down the rhetoric and focus on the substance, first because it is the right thing to do, and second because otherwise we get into an escalation of hyperboles that doesn’t really help thoughtful discourse (in his second article, for instance, Shermer says that I have become “nearly apoplectic” at his suggestion that witch hunting and the Holocaust were the result of utilitarianism. I assure you, it was a slight exaggeration.). I’ve been guilty of this even in the recent past (mea culpa), so let’s see if I can manage to do better.

I am not the only professional philosopher that has strongly criticized Michael for his original SciAm article. Another good example is Justin Weinberg, of the University of South Carolina (and editor of the Daily Nous), who, among other things, tweeted: “Disappointing that @sciam is contributing to our era’s ever-frequent disrespect of expertise by publishing this ill-informed & confused @michaelshermer column on moral philosophy.” It is not a simplistic argument from authority to point out that when professionals in a field unequivocally say you got things wrong it is wise to seriously consider that you might, indeed, have done so.

On his part, Shermer chides me for not having read a paper by G. Kahane et al. entitled “Beyond sacrificial harm: A two-dimensional model of utilitarian psychology,” published recently in Psychological Review. Well, I’m guilty of being honest about when I have or have not read something. Still, my post was not a critique of Kahane et al.’s paper, but of Michael’s commentary (which, despite his protestations to the contrary, touches only marginally on the paper in question). I have now read Kahane et al., and I still think Shermer is wrong. More on this, of course, in a moment.

In my critique, I said that Michael has taken a very simplistic view of utilitarianism (a philosophy, incidentally, that I do not endorse). He rebutted that one of the examples I labelled as simplistic comes straight out of the Kahane et al. paper. The example in question is meant to measure one’s utilitarian tendencies, and it is formulated as a question: “Would you politically oppress a people for a limited time if it increased the overall well-being of the citizenry?” It does indeed come from the paper, but that’s a negative reflection on the paper, not on my point. No serious utilitarian after J.S. Mill would answer yes to that sort of question, so it is hard to say in what sense this would be helpful to measure one’s utilitarian tendencies.

In response to an admittedly sarcastic comment I made, Shermer states that he knows the difference between act and rule utilitarianism, and moreover that he is not naive about moral philosophy, since he has taken two undergraduate courses on the subject (one in general philosophy, the other one in ethics). He has also read a lot of books by Dan Dennett (not a moral philosopher), and gone through several Teaching Company’s Great Courses in philosophy. After all of which, he felt competent enough to write two books on the subject (The Science of Good and Evil and The Moral Arc), and to teach an undergraduate course at Chapman University. I will leave it to the reader to decide whether Michael’s background is sufficient to invalidate my original observation, but I will note that bringing in the difference between act and rule utilitarianism would have cleared a lot of muddled points in the SciAm article. He didn’t do it.

In my response, I stated unequivocally that Shermer is wrong when he says that trolley problems are an example of utilitarian thinking. And I stand by that statement, see my previous post for relevant links. Here too, Michael’s defense is “Kahane et al. did it,” which of course at best just means that Kahane et al. might have gotten something wrong, and that Shermer failed to correct it. But in fact they did not get it wrong. They say the following, for instance: “researchers have tried to uncover the psychological and even neural underpinnings of the dispute between utilitarians and their opponents — such as defenders of deontological, rights-based views of the kind associated with Immanuel Kant.” Exactly, so trolley dilemmas are used in cognitive science to explore both utilitarian and deontological thinking, and are therefore not an example of the former. Moreover, trolley dilemmas were introduced by moral philosopher Philippa Foot to highlight the limitations of both utilitarian and deontological thinking (in favor of the third way, virtue ethics), and they are still usually discussed in that context in intro philosophy courses. So, yes, Michael is still wrong here.

It gets worse. Shermer writes: “one might argue that trolley dilemmas represent only one form of utilitarianism (sacrificial) … but it is inaccurate to simply assert that trolley problems have nothing to do with utilitarianism.” To begin with, I never claimed that trolley dilemmas have “nothing to do” with utilitarianism. Never. Second, there is no such thing as sacrificial utilitarianism. Look it up, it’s just not a term in moral philosophy. What Michael means is utilitarian thinking applied to sacrificial problems. Not the same thing.

We now get to the part that nearly caused me an apoplectic attack, allegedly, when Shermer stated (in the first article) that witch hunts and genocides like the Holocaust or the one in Rwanda were caused by utilitarian thinking. In his response, Michael quotes himself from The Moral Arc: “It is evident that most of what we think of as our medieval ancestors’ barbaric practices were based on mistaken beliefs about how the laws of nature actually operate. If you — and everyone around you including ecclesiastical and political authorities — truly believe that witches cause disease, crop failures, sickness, catastrophes, and accidents, then it is not only a rational act to burn witches, it is a moral duty. … Medieval witch-burners torched women primarily out of a utilitarian calculus — better to kill the few to save the many. Other motives were present as well, of course, including scapegoating, the settling of personal scores, revenge against enemies, property confiscation, the elimination of marginalized and powerless people, and misogyny and gender politics. But these were secondary incentives grafted on to a system already in place that was based on a faulty understanding of causality.”

Two points here. First off, Shermer is in full speculative mode here. We simply have no idea how to interpret and weigh the various motives of medieval witch-burners. All factors listed by Michael (and probably more) may have played a role, but anyone who assuredly claims that “this” was the major cause while the others were secondary is pulling it out of thin air. There simply is little empirical evidence to bear on this sort of claim. Second, what I begin to suspect is going on here is a fallacy of equivocation (which will reappear below, when we get to the issue of natural rights). Shermer knows very well that medieval witch-burners could not possibly have deployed Bentham’s or Mill’s philosophy, which had yet to be invented, so he uses the word “utilitarian” in a vaguer, broader sense, which then allows him to implicate the philosophy. Nice try, but this is sophistry, not good reasoning. (I said I wasn’t going to get snarky, not that I wouldn’t be critical.)

Indeed, Michael seems aware of this: “here let me clarify to anyone who thinks I can’t even get my centuries straight that I’m not arguing Torquemada sat down with Pope Sixtus IV to compute the greater good sacrifice of 10,000 Jews in order to save 50,000 Catholics; instead I am aiming to understand the underlying psychological forces behind witch hunts and genocides.” Except you cannot possibly have empirically substantive evidence of the psychological forces underlying the thinking and acting of Torquemada and Sixtus IV, so why engage in this sort of psycho-historical speculation? It is just as likely, possibly even more, that Sixtus IV would have killed ten times more Jews in order to save ten times fewer Christians, since Jews and Christians, for him, were simply not comparable in moral value. Good skepticism is about empirical evidence, so why don’t we stick to that?

Shermer continues with another lengthy citation from The Moral Arc: “As in the limbic system with its neural networks for emotions, approach-avoidance moral conflicts have neural circuitry called the behavioral activation system (BAS) and the behavioral inhibition system (BIS) that drive an organism forward or back, as in the case of the rat vacillating between approaching and avoiding the goal region. … These activation and inhibition systems can be measured in experimental settings in which subjects are presented with different scenarios in which they then offer their moral judgment (giving money to a homeless person as prescriptive vs. wearing a sexually suggestive dress to a funeral as proscriptive).”

This is very nice, interesting, even, but utterly irrelevant. Of course animal and human thoughts and actions have specific neural underpinnings. How else would we think or act? But, quite obviously, different people balance the outputs of their BAS and BIS differently, and they end up thinking and acting differently. Some of these differences (though certainly not all of them) may be the result of philosophical reflection on why one should act one way rather than another. And this discussion is about moral philosophy, not neuroscience. As I pointed out in my original review of Sam Harris’ The Moral Landscape (which appeared, partly censored, in e-Skeptic), we may as well discuss the validity of a proof of Fermat’s Last Theorem by doing an fMRI scan of the brain of a mathematician. Interesting, no doubt. But also entirely unhelpful to the question at hand.

Michael briefly touches on my criticism of his treatment of Kant, where I brought up a technical paper by Helga Varden on the famous problem of lying to the Nazi. He says: “There is much more to her analysis of Kant, but it seems to me that in this example lying to Nazis is both a utilitarian/consequentialist decision because it would result in the death of an innocent, and a rule/rights decision that qualifies why we should care about the innocent in the first place: because, say, Kant’s rule about never treating people as an ends to a mean but as an ends in and of themselves, or that all people have a right to their own life.” This is very muddled. First off, lying to the Nazi would save the life of an innocent, not result in his death. Second, one can make that decision within a utilitarian framework, but also within a deontological one (Kant’s deontological system is not the only one on the market, so to speak). Third, Kant did not say that we should not treat other people as a means to an end (not the other way around!), he said we should never treat other people solely as means to an end. Without that qualification, we couldn’t go to a restaurant and be served by a waiter, because that would be using a human being as a means (waiter) to an end (getting my meal). The additional “solely” says that we can do that, but always while keeping in mind that we are interacting with a human being, not a machine. Fourth, nobody believes that all people have an unqualified right to their life. If instead of the Jew my basement hid a mass murderer (not because I’m protecting him, but under threat to my life) then I would certainly give him away to the authorities, even if that should result in his death. That’s the thing about good moral philosophy: it’s complicated, and requires precise nuanced thinking. And that is why we have professionals devoted to it.

Finally we come to the topic that most riled Shermer’s supporters on Twitter: natural rights. He ended his SciAm column with an endorsement of the concept, and I responded that on that topic I was with Jeremy Bentham (the founder of utilitarianism): the notion of natural rights is “nonsense on stilts.”

This is a complex topic, with a long history in philosophy, and despite Bentham’s flippant response, there have been serious defenders of it throughout the centuries. The notion of natural rights is related to, but is not the same as, the notion of natural law. Scholars trace the idea back to Plato and Aristotle, but it is far from clear that either one of them meant anything like the modern version deployed by thinkers from John Locke and Thomas Paine to Robert Nozick. I have a separate essay on natural law on my Stoic blog, since it is pertinent to that philosophy, but here let me simply reiterate my basic point: I don’t think there is any defensible notion of natural rights (as distinct from rights as human constructs) that is not hopelessly entangled with what I think are antiquated or indefensible notions of Platonism (as in Plato), teleology (as in Aristotle), or theology (as in Thomas Aquinas). You may disagree, of course, but then you owe us a defense of Platonism, teleology, or theology.

One counter to my criticism I have run across a lot on Twitter during my debate with Michael in the past few days is that human beings are part of nature, so obviously there are natural rights. QED. Well, no. This, again, trades on an equivocation. Yes, as a biologist I certainly agree that humanity is just as natural as anything else in the universe. But in the context of this sort of discussion the distinction has always (since Aristotle!) been very clear: natural rights refers to something that is mind-independent but can be discovered by human beings capable of reason; so-called positive rights, by contrast, are the result of human agreements. True, in a sense, positive rights are also “natural,” but it just doesn’t help to talk that way; it muddles a perfectly clear distinction.

This, incidentally, is an area where there is more agreement between Shermer and myself than may at first glance appear. More, but nowhere near total. Before I get to his rebuttal, let me state briefly what my position is. “Rights” are a human construct, the result of agreeing among ourselves, on the basis of moral and political considerations, that certain things qualify as rights and others don’t. The best sort of evidence that rights are of this kind is the complete disagreement among supporters of natural rights on the number and content of these alleged rights. We go from just one right (Richard Cumberland: benevolence toward all rational creatures), to three rights (Locke: life, liberty, and property — this is the one Michael wants, qua libertarian), to a whopping 19 natural laws from which one can derive corresponding rights (Hobbes, in chapters 14 and 15 of Leviathan).

That said, I do agree that rights are not entirely arbitrary, as they are linked to human nature, just like all moral philosophy is (Skye Cleary and I have recently argued this in Aeon). This puts me somewhere in the middle between moral anti-realists, who think that there is no such thing as a moral truth, and moral realists, who think that there is. I am a moral quasi-realist, meaning that for me morality is an evolving set of ideas that strives to regulate social interactions in order to allow people to flourish qua members of a social group. The reason I don’t think — contra both Shermer and Harris — that science can give us answers to moral questions is because I think facts about human nature under-determine moral systems. That is, given human nature as it is, there are several different, possibly incompatible, ways to develop moral codes. The choice among moral philosophies, then, is informed by facts about human nature, but not determined by them. To ask whether, say, utilitarianism or deontology or virtue ethics is “true” is to commit a category mistake. These are frameworks to think about social life. They may be more or less useful and more or less coherent, but not true or false (and hence not falsifiable or verifiable scientifically).

Okay, now back to the last chunk of Michael’s response. He thinks I contradict myself when I say that we all prefer to be alive rather than dead. I don’t see how that follows. Mine is just a statement of a natural desire. One has to do philosophical work to go from there to a right, especially a right that is somehow inalienable. (I also desire gelato, but that does not imply that I have a right to it.) I do think the is/ought gap can be filled, but not by simply stating that what is natural is ipso facto good. That, as Shermer knows, is yet another informal fallacy, the appeal to nature. And it is easily countered by endless examples (aggression and war are natural for human beings, it doesn’t follow that aggression and war are good).

Shermer takes a lot of liberties with evolutionary biology (another field in which I honestly question his qualifications): “Any organism subject to natural selection — which includes all organisms on this planet and most likely on any other planet as well — will by necessity have this drive to survive and flourish.” No, there is no natural selective imperative to flourish, especially if flourishing entails things like owning private property. Indeed, strictly speaking there is no natural imperative to survive either: survival is useful, from the standpoint of natural selection, only if it leads to reproduction. Sure enough, selection often favors short life spans, or rather nasty (i.e., non-flourishing) lives, so long as the reproductive imperative is satisfied. And, again, just because natural selection favors individuals who reproduce, it certainly doesn’t make not reproducing immoral, does it? One of the few times I agreed with Steven Pinker (often quoted by Shermer) is when he wrote, I believe in The Language Instinct, that he made a decision early on in his life not to have children, but to devote his life to research, teaching, friends, and other good things. He commented (I quote from memory, since I no longer have a copy of that book): “and if my genes don’t like it, they can go and jump into the lake.” Indeed.

So when Michael says “I argue, the survival and flourishing of sentient beings is my moral starting point, and it is grounded in principles that are themselves based on nature’s laws and on human nature — principles that can be tested in both the laboratory and in the real world,” he is confusing different things, or at the very least drawing a direct connection between (certain aspects of) human nature and morality. This can’t be done; one needs empirically informed philosophical work to bridge the is/ought gap, not just brute facts.

He says other things that are clearly incorrect from a biological standpoint, like “The singular and separate organism is to biology and society what the atom is to physics — a fundamental unit of nature.” No, as plenty of biological organisms are colonial (corals, some jellyfish), or their individuality is temporary (when it’s time to reproduce, as in slime molds), or don’t have clear boundaries at all (several species of trees and mushrooms), or are a complex ensemble of multiple organisms that only appear to be one (human beings, see the concept of holobionts).

Shermer approvingly quotes Pinker: “Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.” Yes, as the Stoics had already figured out 23 centuries ago, we are born with a rudimentary sense of pro-social behavior, which we share with other primates. And yes, morality is the result of building on that innate sense by way of reasoning and language (a notion that the Stoics elaborated into their theory of moral development). But we are not forced to one specific set of conclusions, again because there is a relationship of under-determination between facts about human nature and moral frameworks.

Michael counts himself and Pinker as moral realists, and thinks he slam-dunks the case with the following rhetorical question: “Is there anyone (other than slave holders and Nazis) who would argue that slavery and the Holocaust are not really wrong, absolutely wrong, objectively wrong, naturally wrong?” Well, first of all, history is full of slave holders. People before very recent times thought that slavery was natural and just. Indeed, they derived this conclusion from their understanding of human nature, or the divine ordering of things, or whatever. More importantly, Shermer makes a fundamental mistake here: confusing objectivity with absolute truth.

Consider a simple example. Given the (entirely arbitrary) rules of the game of chess, a number of things about the game are objectively true. Heck, one can even demonstrate mathematical theorems about chess. But these truths are not “universal,” they are contingent on the specific set of rules that constitute the game. They don’t exist “out there,” in any kind of mind-independent fashion. And they are, therefore, not inalienable. One can deny them by simply refusing to play the game, or by inventing a different game with even slightly different rules.

Yes, I do believe that slavery is wrong, given my understanding of human nature, which prioritizes individual flourishing and the application of reason to the improvement of pro-sociality. But there are other understandings from which my conclusions do not follow. So I have to argue the point from within whatever moral framework I have chosen (in my case, virtue ethics), I cannot simply and straightforwardly derive it from empirical observations about human behavior. If only it were that simple.

Michael Shermer on utilitarianism, deontology, and “natural rights”

You may have noticed that I don’t opine on quantum mechanics. Or jazz. The reason for this is that — although I’m very interested in both topics — I just don’t know enough about them. Not enough to be able to offer an informed opinion, at any rate. So I sit back, read what other, more knowledgeable people have to say about quantum mechanics and jazz, form my own second-hand opinion, and try to avoid embarrassing myself by pontificating in public.

Apparently, my friend Michael Shermer does not follow the same philosophy. At least, not when it comes to the field of moral philosophy. He has recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” which starts out simple (simplistic, really) enough, and ends in a crescendo of nonsense. Let’s take a look.

After asking whether you would politically oppress a people for a limited time, if it increased the overall well-being of the citizenry, Michael explains that that’s utilitarianism, the philosophy started by Jeremy Bentham back at the beginning of the 19th century, often summarized with the slogan “the greatest good for the greatest number.” (Bentham, incidentally, is currently visiting New York, go see him, if you have a chance.)

Well, that is one of many versions of utilitarianism, and it was immediately abandoned, by none other than John Stuart Mill, Bentham’s student, who actually wrote the classic 1861 text entitled Utilitarianism. Indeed, before that Mill wrote two important articles, “Remarks on Bentham’s Philosophy” (1833) and “Bentham” (1838), in which he criticized his mentor and began to develop modern utilitarian thought. One of the major distinctions one can draw within utilitarianism still today is that between so-called act utilitarianism (where we must evaluate the morality of each act, a la Bentham) and rule utilitarianism (where we conform to rules that have shown overall to bring about the greatest amount of good, a la Mill). More generally, utilitarianism has a long history, and nowadays it is actually best thought of as a particular type of consequentialist philosophy. I could be wrong, but Shermer seems unaware of these distinctions.

Michael then tells his readers that “modern utilitarianism” is best instantiated by the (in)famous trolley problems. This is just flat out wrong. The original dilemma was introduced by Philippa Foot back in 1967. Here is the first version:

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.”

Contra Shermer, the trolley dilemma was proposed, and it continues to be used (not only in philosophy, but in social psychology), in order to probe people’s moral intuitions, not to “instantiate” utilitarianism. For instance, a deontologist would refuse to frame an innocent or switch the lever, on the basis of the Kantian notion that one ought never to treat others solely as means to an end. The fact that many people switch from utilitarian to deontological responses when considering different versions of the dilemma tells us that they tend to react emotionally, which leads them to deploy an incoherent moral philosophy.

Michael then says that “the problem” with utilitarianism is that there are situations in which following its precepts one would end up endorsing psychopathic behaviors, as in the famous case (which I pose to my intro philosophy students) of the surgeon who has five patients in the emergency room, each with a failing vital organ, and decides to pick up a stranger from the street, cut him up into pieces, and distribute his organs around to save the other five. Too bad that this sort of thing is precisely why Mill (remember, already in 1833) introduced rule utilitarianism, which blocks the psychopathic doctor in his tracks. Again, no mention of this in the SciAm article.

Shermer briefly mentions a recent paper in Psychological Review (which I have not read, so I will not comment on it), mostly to tell us that he took the Oxford Utilitarianism Scale test and scored 17/63. He ain’t no utilitarian, according to the test. Neither am I, apparently (phew!), since I scored 21/63. You can do the test yourself, here.

After a brief mention of Kantian deontology, the article really veers from simplistic to nonsensical: “Historically the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures and accidents — better to incinerate the few to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews: “Aryan” Germans; Tutsi:Hutu), the justification of genocidal murderers.”

What?? No, absolutely not. Setting aside the obvious observation that utilitarianism (the philosophy) did not exist until way after the Middle Ages, no, witch hunts were the result of fear, ignorance and superstition, not of a Bentham- or Mill-style calculus. And this is the first time I heard that Hitler or the Hutu of Rwanda had articulated a utilitarian rationale for their ghastly actions. Again, they were driven by fear, ignorance, superstition, and — in the case of Nazi Germany — a cynical calculation that power could be achieved and maintained in a nation marred by economic chaos by means of the time-tested stratagem of scapegoating. (The latter is also what perpetrators of witch hunting and the Rwandan genocide did: prey on the weak, it’s easy to do and get away with it.)

But Shermer doesn’t let Kant off the hook either. He brings up the famous example (which, again, I confront my intro philosophy students with) of lying: if it is the case — as Kant says in one formulation of the categorical imperative — that we should only accept as moral those principles that we would be willing to make into universal rules, wouldn’t that mean that I should never lie and give up the Jew I’m hiding in the basement if a Nazi officer (it’s always the Nazi!) politely asks me? Or, as Michael updates the scenario: “if you live in Syria and a band of ISIS thugs knocks on your door demanding to know if you are hiding any homosexuals they can murder in the mistaken belief that this fulfills the word of God — and you are — few moralists would object to your lying to save them.”

Notice the pejorative term “moralists,” instead of moral philosophers. Anyway, you would think Kantian philosophers would have something to say about this. Oh, right, they do! A good example is a paper by Helga Varden in the Journal of Social Philosophy, entirely devoted to Kant, lying and the Nazi officer. I do not have the time here to do justice to her analysis, but a couple of points need to be brought to bear: first, in that case Kant was writing explicitly within the context of a discussion of the doctrine of rightful interactions (the original, short paper in which he tackles the case is entitled “On a supposed right to lie from philanthropy”). As Varden says, within that context, “we can make sense of why lying to the murderer, although a wrong, is not to wrong the murderer, why we become responsible for the bad consequences of the lie, and finally why lying is to do wrong in general.”

More to the point, Kant was talking about a murderer (he, obviously, couldn’t have contemplated the Nazi), but when one changes the scenario to a Nazi officer — or an ISIS terrorist — it turns out that the problem dissolves itself, because “the only time doing wrong in general by lying is legally punishable [within Kant’s framework] is when we lie to or as a representative of the public authority. The Nazis, however, did not represent a public authority on Kant’s view and consequently there is no duty to abstain from lying to Nazis.” Or to ISIS. Again, I didn’t notice any of these qualifications in Shermer’s article.

Michael, predictably, makes no mention at all of the third great framework in moral philosophy, virtue ethics, which would actually do a lot of the work he wants to do, against both utilitarianism and deontology — in their philosophically sophisticated versions, not the caricature we get in the SciAm article.

But never mind that. The true nonsense comes right at the end, when Shermer puts forth his preferred view, the one that, in his mind, has allowed for true moral progress throughout the ages: “both utilitarianism and Kantian ethics are trumped by natural-rights theory, which dictates that you are born with the right to life and liberty of both body and mind, rights that must not be violated, not even to serve the greater good or to fulfill a universal rule.”

Setting aside that you get precisely the same result from Mill’s rule utilitarianism, not to mention that natural rights theory has no argument against Kant, “natural rights” are what Jeremy Bentham famously, and correctly, referred to as “nonsense on stilts.” There is no such thing as a natural right, and we, therefore, are not born with them (contra the mindless libertarian mantra that Shermer is repeating). Michael is confusing human desires and instincts — some of which are actually culturally dependent (it is empirically not the case that everyone on earth desires liberty of mind, for instance) — with rights. But rights are, obviously, a human creation. Which accounts for why, as Shermer himself notes, they have to be written down in things like the Bill of Rights, and protected by the force of state-enabled law. It’s also why people have come up with different lists of rights at different times. The Universal Declaration of Human Rights, for instance, provides a much more extensive list than the one arrived at by James Madison and co. back in 1789.

To argue that rights are “natural” is to commit the most elementary logical fallacy in ethics, that of the appeal to nature. And even if one were to overlook that little problem, there simply is no consistent empirical evidence for most of such alleged rights (i.e., desires, instincts) in Homo sapiens or its recent ancestors. Yeah, we all prefer to be alive rather than dead, other things being equal, but natural selection does not care about mere survival, it only favors survival that leads to reproduction. And it favors it, it doesn’t guarantee it. (So you can’t derive a natural right to sex. Too bad!)

This is the sort of mess one gets when Michael talks about moral philosophy. Or when I talk about quantum mechanics. Or jazz. Please, let us all stick to what we know. It’s hard enough as it is.

Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. de Kenessey blames Socrates for the moralistic attitude, while he should blame certain brands of Christianity instead. Here I will not make the positive case against Christian moralism (which is well known among certain politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but in general the phenomenon of what the Greeks called akrasia, or weakness of the will, and which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving down the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = uncommon opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists such as Paul Samuelson have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information, they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which although usually translated as ignorance actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia I have found myself significantly more inclined to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and to generally cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from lack of wisdom but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual case of what Socrates is arguing against Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home to watch Black Mirror rather than going to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in such judgment, just like Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine his judgment was not impaired by being sick, yet. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

de Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt, the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson, they are the desires that align with our judgments of what is valuable; for Michael Bratman, the desires that cohere with our stable life plans; and for Susan Wolf, the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (as did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and Socratic views pertains to the first case, not the second. And there one has a choice: blame people for their evildoing, or pity them for their lack of wisdom. I find the latter course of action far preferable.

Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms, including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself; there is no point in my repeating easily found material here. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (but not only: governments are a little better only to the extent that they really are accountable to their people by means of a not-too-dysfunctional democracy; the US government, at this point, does not qualify, for instance). But I also recognize that these and other tech companies have made possible a wonderful number of things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, and Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is legislation, along the model of the famous breakup of Bell back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near monopolies, and benefiting from one’s work does not equate to creating a very small class of ultra-billionaires who exploit their workers (as Amazon certainly does), including in other countries (as Apple equally certainly does).

But while we are waiting for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), it doesn’t mean we have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft because of the poor quality of their products, back in 2004. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high quality e-book readers, and to begin to direct readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.

Which brings me to Google. In a sense, I actually increased my use of their products, since now my office suite is the Google one, replacing Apple’s iWork. But it is a temporary transition, dictated by the limited time available to search for suitable long term alternatives, and by the need not to disrupt several ongoing collaborative works. And at any rate, I hit Google where it hurts: their web search engine, which produces their advertising revenue and is of course highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, again unless I was willing to forgo keeping in touch with (real) friends and family, and also to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was get rid of their obnoxious Messenger app, as well as “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I am now using Facebook with a very small and tightly guarded circle of actual friends and family, as well as for the above mentioned outreach, nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye opening. The other thing that was eye opening relates to the second point: the comparative lack of support, and occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and began immediately to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious during the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter, especially of electronic technology. No, it isn’t really that expensive (as I’m sure everyone knows, the alternatives to Apple phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who have similar objections and have done much of the legwork for me; and at any rate, it’s an ethical decision — it would be a bit too easy if it didn’t cost me money or time or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against disinvesting from Apple. It’s the social activism equivalent of what in biology is called frequency dependent selection: you go for the rare phenotype, which eventually becomes the dominant one, at which point you switch to the new rare one, and so on.

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success, as I said above my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then make some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I were talking about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former, but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next to the last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well researched ammunition, as well as the comfort of knowing that there are others who think and act the same). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong, then you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well known cognitive bias.

Be that as it may. My conscience feels slightly better, in the same way and measure in which similar small decisions (to not eat meat, to try to shop locally, to voluntarily pay carbon footprint offsets when I travel by air, to change bank because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented, it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail (which does not track you), or any of a number of alternative third party clients

Apple’s office productivity suite (iWork) > Google office apps, as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history); Google Chrome is not acceptable

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of simple sms, or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google navigator > this is a tough one (particularly since Apple Maps is neither acceptable for this project, nor good, really), though for instance Citymapper works very well for major cities

The non-problem of moral luck


The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will argue that one can learn more about moral philosophy from watching a single episode of the show than from listening to a technical talk in that same field at the APA’s own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of; you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more skeptical I become of the value of much modern moral philosophy, with its indulgence in ever more convoluted hypothetical situations that seem designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turning-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell) and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sit-com imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of its indulgence in irrelevant logic chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end there; there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details, because they are irrelevant to my argument here. Which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP show. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making network, then fine, he is not to be blamed for his inaction. That’s no skin off Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping score?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t be carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now classical paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame, we have lost sight of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)