Category Archives: Social & Political Philosophy

On the crucial differences between private and public forgiveness


(Stolen Generation apology, Adelaide, Australia, 13 February 2008)

In the aftermath of the Apartheid government in South Africa, a Truth and Reconciliation Commission (TRC) was formed to help the country move forward at the same time as it acknowledged and attempted to redress the injustices of the past. Not everyone was on board with the project, but it turned out to be a success in terms of helping to heal the nation. Of the roughly 7,000 individuals who applied to the commission for amnesty, about 10% were granted it, and the commission proposed a wealth tax to help fund reparation programs. (The South African government, however, never followed through.)


This was not the first TRC, nor would it be the last. An earlier attempt was made by Bolivia in 1982, with its National Commission of Inquiry into Disappearances, which brought together a variety of sectors of society after the end of the military rule there. The very first TRC was the one established in Uganda in 1974. Argentina followed in 1983, Chile and Nepal in 1990, El Salvador and Germany in 1992, Guatemala in 1994, Canada in 2006, and so forth.


Priscilla Hayner in Unspeakable Truths: Transitional Justice and the Challenge of Truth Commissions (Routledge, 2002) defines a TRC in this manner:


“A truth commission (1) is focused on the past, rather than ongoing, events; (2) investigates a pattern of events that took place over a period of time; (3) engages directly and broadly with the affected population, gathering information on their experiences; (4) is a temporary body, with the aim of concluding with a final report; and (5) is officially authorized or empowered by the state under review.”


I’m telling you all this because of a brilliant talk I just heard at City College, entitled “The Broken Promise of Public Forgiveness,” delivered by University of Texas-Austin PhD candidate Simone Gubler. Simone took as her starting point the formal apology proffered by the Parliament of Australia to the indigenous people of that continent, back in 2008. The apology was delivered by Prime Minister Kevin Rudd, who also asked the indigenous people for forgiveness on behalf of the nation.


And here is where things, according to Simone, got problematic. You see, a good argument can be made that forgiveness is an essentially personal process, not one that makes sense at the level of public institutions. Setting aside the obvious observation that the current non-indigenous inhabitants of Australia did not perpetrate the original crimes against the indigenous ones (though, of course, they still take advantage of the aftermath), and setting further aside the fact that groups cannot forgive (only individual members of such groups can), there is an obviously imbalanced power dynamic at play here. Asking for forgiveness at that level in a real sense imposes an implied demand on the other side, along the lines of “hey, I’m being so nice to ask you, and you are going to refuse?”


Individuals are far less likely to feel that pressure. If my partner betrays me and she asks for forgiveness I may or may not grant it. It’s up to me, and it is not a given that I will grant it. But when we scale up from the individual to a social group the dynamics change dramatically, according to Simone, so that forgiveness is no longer about contrite individuals who have come to agree that what they did is wrong, but rather about a political (possibly, though not necessarily, cynical) move in the public arena, meant to elicit a very specific response.


I must admit that I was rather skeptical of Simone’s approach when she outlined what she was going to argue for at the beginning of her talk, but it didn’t take much to convince me that she is right. And then Q&A time came, and my colleague Kate Ritchie upped the ante. She suggested that what Simone said about forgiveness also goes for official apologies. They, too, are something that makes sense at the individual level, but not so much at the social one. And apologies too can be given by the wrong person, on behalf of groups who may not agree, used as a power play, and delivered because of cynical calculations. Even when not cynical in nature, both Simone and Kate agreed, requests for forgiveness as well as apologies quickly become empty. Simone mentioned the institution of a national “Sorry Day” in Australia, which, while probably well intentioned at first, soon turned into a rote, empty gesture, particularly since little or nothing is being done in the meantime to actually improve the lives of indigenous people.


Once again, my initial thought was that surely Kate’s point is problematic. But nope, a few seconds of reflection revealed to me that she was right.


But hold on, then. I began this article by talking in positive terms of the notion of Truth and Reconciliation Commissions, and now I’m agreeing with our guest speaker (it was Philosophy Day, by the way) that officially asking for forgiveness, and even issuing public apologies, is problematic. Doesn’t reconciliation imply apologies and forgiveness?


Probably, again, at the individual level, but not the public one. Consider two contrasting situations. The first one is the case mentioned above of my partner cheating on me. Let’s say that she apologized and asked for forgiveness. While I’m not mandated to grant it, it would seem that if I agree to reconcile, to get back together and resume our life as a couple, I have at least implicitly accepted her apology and forgiven her. Without that, life as a couple would be re-established on very shaky foundations indeed.


The second scenario is that of the CEO of Exxon-Mobil apologizing to the public for yet another environmental catastrophe caused by the greedy practices of his corporation. Again setting aside the degree to which it makes sense for a single individual to apologize on behalf of a large anonymous entity which he certainly does not control beyond a limited extent, what I – as a member of the public – expect from Exxon-Mobil is three things and three things only: (i) an acknowledgement of the facts; (ii) some concrete suggestion on how the corporation can repair the damage; and (iii) reasonable assurances that whatever caused the problem will not happen again in the future. Both apologies and forgiveness are entirely beside the point.


The bottom line from all the above is that there is an important difference between the individual and social levels at which concepts like apologies and forgiveness operate. Simone (or yours truly) does not deny that it makes perfect sense for an individual to apologize to another for some wrongdoing. She also agrees (and so do I) that it makes sense for an individual to ask for forgiveness, so long as it is understood that this is not an imposition on the other party, who may or may not grant it. Yet, as we have seen above, both concepts are problematic when scaled up to the level of social groups. If this is true, why do we do it, and how could we do otherwise?


I believe one source of the particular kind of mistake we are talking about is Plato. In the Republic he famously set out to investigate what makes for a just person. His strategy was to scale things up and ask first what makes for a just state (the Republic of the title). The explicit assumption was that one can go back and forth between the two levels. The ideal Republic will be guided by reason (in the form of a ruling class of philosophers), who will direct the other two components (the soldiers-auxiliaries and the regular folks-producers). Likewise, the ideal human being has a tripartite soul, and is well advised to put her rational soul in control of the spirited and appetitive ones.


Yet, most of the ancients rejected this approach, making a clear distinction between individuals and society. Both the Stoics and Cicero talked of ethics (from the Greek êthos, a word related to our idea of character) as distinct from law. Ethics (and morality, from the Latin moralis, which is how Cicero translated the Greek êthos) pertains to the individual and her character; law pertains to how we regulate things in society. A just individual is not the same as a just state. At the individual level the Stoics considered justice a character trait, having to do with treating other people fairly and with respect, but not necessarily equally (e.g., I don’t treat my daughter in the same way as a stranger, even though I try to show respect for everyone); by contrast, at the societal level a just state is one based on just laws, where everyone is treated equally, regardless of personal status.


Simone’s suggestion, then, can be reframed as a call for going back to a meaningful distinction between ethics and law, the first one functioning at the individual, the second at the societal level. It is within this framework that it makes perfect sense to say that apologies and forgiveness have an important role to play in the intercourse between individuals, while at the same time maintaining that they are problematic between groups. So politicians and corporate officers can keep their apologies, and they will not get our forgiveness. What they need to do instead is to acknowledge wrongdoing and put in place workable procedures to redress injustices.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago a single, sentient species actually has developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story, and while they still sit at the borderlands between science and science fiction, some of them will surely become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed to be desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that there is – other things being equal – a statistical correlation between intelligence and genetic makeup. Nothing more, and that ain’t even remotely close enough to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which would be far more thorny than Torres lets on.


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While surely there is a risk of one such attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us into this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing millions of people, thanks to science and technology. The culmination of that period was the only nuclear attack on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist; the US government, to be specific.
Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) and moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes possible killing at a distance by pushing a button, or that allows many of us to reconcile the otherwise irreconcilable fact that the top 10% of the human population lives by standards historically reserved to Kings and Queens while the rest is below or barely above poverty, subject to preventable disease, or killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would suffer from less empathy if they had a higher IQ, he remembers that some real life examples of such actors, like the Unabomber Ted Kaczynski, actually do have high IQs, and yet they are still deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench in Pinker’s, Bostrom’s and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH; it’s not that I trust others a hell of a lot more) can come pretty darn close. But Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.


Bostrom’s calculations are based on thin air, to be charitable. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more doubtful that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby nova explosion isn’t part of the mix), but the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.


Toward the end of the essay we simply reach Pindaric levels of imaginative flights:


There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth, our only planet, which we are the ones currently destroying, and talk some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) have been unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones apt for the use of large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our mind down and center ourselves so that we can live a better and more meaningful life, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course, that’s always helpful.

Should “the ignorant” be denied access to audiences?

People who ended up voting for Donald Trump were famously characterized by Hillary Clinton as the “basket of deplorables.” And I must admit that I look on in astonishment at the foolishness of US politics, the recent Italian elections, Brexit, and the re-election of Turkish strongman Recep Tayyip Erdogan. Not to mention what seem to be genuinely adoring crowds in Vladimir Putin’s Russia.
How is any of this possible? It’s always a complex combination of factors, of course, ranging from people’s socio-economic situation to their ideological or religious commitments, to deficient education, to the pure and simple human herd instinct that so annoyed Nietzsche. But surely one thing that contributes to the current insane state of affairs is the reach that pernicious ideologues have in the modern era, a reach made far more efficient by the existence of the internet and social media. And by the fact that these people are often offered platforms to address audiences by institutions such as universities, newspapers, television stations and the like.


My colleague Bryan Van Norden, a professor of philosophy at Wuhan University, as well as the author of “Taking Back Philosophy: A Multicultural Manifesto,” has published a thought provoking op-ed about institutional platforms in the New York Times. It is well worth considering in some detail, as I see where Bryan is coming from, but I consider his proposed path dangerous, and his argument self-contradictory.


He begins with a couple of examples. Ultra right-wing commentator Ann Coulter recently appeared on Fox News to say that the crying migrant children separated from their parents by the Trump administration were child actors. Van Norden comments: “Does this groundless claim deserve as much airtime as, for example, a historically informed argument from Ta-Nehisi Coates that structural racism makes the American dream possible?” University of Toronto psychologist, and darling of the alt-right, Jordan Peterson talked about how difficult it is to control “crazy women” and the fact that men naturally can muster respect only for people whom they can threaten with violence. Bryan comments: “Does this adolescent opinion deserve as much of an audience as the nuanced thoughts of Kate Manne, a professor of philosophy at Cornell University, about the role of ‘himpathy’ in supporting misogyny?”


The classical liberal response to these questions is that Ann Coulter and Jordan Peterson ought to be accorded freedom of speech, on grounds famously laid out by John Stuart Mill in his On Liberty, published in 1859. The argument is based on the following considerations: (i) you may think opinion X is clearly wrong, but history is littered with people, even majorities, who were sure that something was wrong when it turned out that it wasn’t (say, that gays should have a right to marry); (ii) if X is indeed wrong, then we learn something from people who defend it, because we need to make clear to ourselves why a given notion is, in fact, wrong (otherwise, we reject it out of prejudice, not knowledge or understanding); (iii) truth is not an all or nothing matter, so we may learn even from partially or largely wrong opinions; (iv) if an opinion offends you, that’s not sufficient reason to suppress it; and (v) who, exactly, ought to be in charge of limiting the expression of unpopular or “offensive” opinions?


Van Norden calls the above line of reasoning “specious,” adding that it is rooted in “a naïve conception of rationality that [Mill] inherited from Enlightenment thinkers like René Descartes.” [Technically, Descartes influenced the Enlightenment, but was not an Enlightenment thinker, since he lived from 1596 to 1650, and the European Enlightenment was an 18th century thing.]
Bryan argues that “If you do have faith in a universal method of reasoning that everyone accepts, then the Millian defense of absolute free speech is sound,” but he very clearly states that there is no such thing as universal reason, so we should reject Mill’s argument. I think that Van Norden’s statement is ambiguous and that what he argues in the remainder of the NYT op-ed flatly contradicts his opening statement.


He writes: “I wish it were self-evident to everyone that we should not discriminate against people based on their sexual orientation, but the current vice president of the United States does not agree. I wish everyone agreed that it is irrational to deny the evidence that there was a mass shooting in Sandy Hook, but a syndicated radio talk show host can make a career out of arguing for the contrary.”


But the fact that Mike Pence does not agree with a given notion does not mean that the notion in question is not self-evident; it may simply be that Pence denies self-evident truths, either because he is too ignorant to see them, or because of bigotry, or political expediency. Similarly, a nutcase radio talk show host, syndicated or not, may deny empirical evidence all he wants, but that doesn’t mean that his denial is reasonable. At all.


Bryan understands why Mill, and Alexis de Tocqueville, made their argument. Mill was a strong proponent of women’s rights and an opponent of slavery, and he knew too well that many people found such topics offensive, resulting in what he famously termed a tyranny of the majority.


But, argues Van Norden, we are in a very different situation from 19th century England and America. We are witnessing the worsening of a scenario already described by the philosopher Herbert Marcuse back in 1965, when he wrote: “In endlessly dragging debates over the media, the stupid opinion is treated with the same respect as the intelligent one, the misinformed may talk as long as the informed, and propaganda rides along with education, truth with falsehood.”
This is quite obviously true, of course (or is it?). Only a foolish society would give “equal time” to the discussion of evolutionary theory and creation “science,” or to a climate researcher and a so-called “skeptic” of global warming, or a medical researcher and Jenny McCarthy. But setting aside that a lot of other cases, especially political opinions (as distinct from scientific theories) are not quite so easy to settle, what is the alternative? Mill wasn’t naive about how difficult it is for most people to wade through public controversies. He just thought that freedom of speech was the least of possible evils.


Marcuse famously advocated the outright suppression of right-wing perspectives, a position that, thankfully, Bryan does not endorse. Instead, he makes an intriguing proposal: to distinguish between free speech and just access: “access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole.”


But that comes perilously close to begging the question against Mill: on what criteria should we apportion the merit of different opinions? How do we figure out what is just? How do we measure the benefit of an opinion for the community as a whole? Recall that Van Norden denies that there is such a thing as universal reason. It follows that all such judgments are bound to be arbitrary, and therefore simply to reflect the will of the people who happen to be wielding power by virtue of controlling the limited resources Bryan is referring to. This may not be quite a tyranny of the majority, but it is still a tyranny (of the elite, perhaps?).


Let’s take a look at some of the specific examples Van Norden brings up. In 2004 one Nathaniel Abraham was fired by the Woods Hole Oceanographic Institute because he admitted to his employer that he did not believe in evolution. Correctly, Bryan asserts that Abraham has a right to his wacky opinion, but that Woods Hole has a right to fire him on the grounds that he holds such an opinion. But this has nothing to do with freedom of speech or institutional access: Woods Hole is a preeminent research laboratory that carries out a lot of work on evolution, so Abraham had simply admitted that he was incompetent to work there. It would be like NASA firing a flat-earth believer, or a hospital firing a doctor who did not “believe” in vaccines.


The next example is more pertinent, but far less clear: Van Norden claims that a number of universities, including Columbia and NYU, should not have invited Charles Murray, the co-author of The Bell Curve: Intelligence and Class Structure in American Life to speak on their campuses. Why? Because Murray’s notions are junk science. That is true, I think (for a variety of reasons, including those explained here and here), but there are two additional factors to consider. First off, “universities” don’t invite anyone; often it is specific faculty or student associations that do. And to bar invited speakers in either case amounts to an infringement of academic freedom or students’ rights. Second, I am of the opinion that a significant chunk of what goes on in a number of legitimate university departments is either questionable or downright junk (no, I will not mention names). But, again, I don’t get to decide which is which. I do get, however, to argue — in perfectly Millian fashion — in favor or against certain programs, positions, claims, and so forth.


Bryan’s third example is the recent firing by ABC of their television star, Roseanne Barr, because of her racist public remarks. But that’s yet another situation altogether. Barr did not make her remarks on television, and she was fired from ABC because the network was (rightly, I think) embarrassed by her behavior, and feared a public backlash. Of course, had the episode happened, say, in the 1950s, ABC would have likely not moved a finger about it. I assume it is a rationally objective fact that we have made (some) improvements in our thinking about race and gender since then, but of course Van Norden cannot claim so, because he does not believe in universal reason.


Bryan mentions recent research in social psychology showing that if a falsehood is repeated, even when it is in order to debunk it, people are more likely to believe it. This is both true (maybe, since there is a replication crisis ongoing in that field) and worrisome, but is it — as Van Norden claims — reason to cheer MSNBC’s “Morning Joe” decision never again to invite Kellyanne Conway because of her bizarre notion of “alternative facts”? I don’t know. It is very unfortunate that someone like Conway is currently a high profile government official, but isn’t a journalist supposed to challenge that sort of notion, rather than suppress it? Besides, by way of similar actions MSNBC has now gathered the reputation (deservedly or not) of the left’s Fox, which makes their decision about Conway come across to many as naked partisanship. Is this really helpful to public discourse? I’m not so sure.


Bryan says that “right to free speech is not the right to an audience,” and he is correct. But in philosophy we make a distinction between negative and positive rights. You may have, say, the negative right of being allowed to leave the country whenever you wish. But if things are such that you could never muster the means to actually leave, you do not have a corresponding positive right, and negative rights by themselves are largely useless. To pick a more concrete example, in the US (for now) women have a right to abortion. But such right is meaningless if local state legislatures make it so difficult for abortion clinics to practice that for all effective purposes a woman in Texas or Alabama has to drive hundreds of miles, or even go out of state, to get an abortion. Ironically, it is a typical tactic of the right that whenever they cannot eliminate a negative right (like abortion, again, for now) they go after its positive counterpart, thus making it difficult or impossible for people to enjoy that right. The same goes for speech: if I have a “right” to it, but I am then systematically denied audiences by a small number of gatekeepers, I might as well shout in the void. And, again, who gets to make such decisions, and on what grounds, given that there is no universal reason?


Van Norden concludes his op-ed by stating: “These views [that he criticizes] are specious, and those who espouse them are, at best, ignorant, at worst, sophists,” calling people who hold those views “invincibly ignorant and intellectual hucksters.” It sounds to me like Bryan thinks he has good reasons to think that these people’s opinions are, in fact, wrong. I agree with his assessment. And so should any reasonable person, because reason isn’t a matter of your personal opinion — across time and cultures. There are standards of evidence and argument that have been worked out over the past two and a half millennia of philosophy and science, way before the European Enlightenment came about. On my part, I prefer by far a society where we do our utmost so that more and more people are familiar with such standards and apply them properly, rather than one in which whoever happens to be in charge is going to decide which resources to apportion to whom. Call me an old fashioned Millian, in that sense.

 

 

Should “the ignorant” be denied access to audiences?

John Stuart Mill

People who ended up voting for Donald Trump were famously characterized by Hillary Clinton as the “basket of deplorables.” And I must admit that I look on in stupor at the foolishness of US politics, the recent Italian elections, Brexit, and the re-election of Turkish strongman Recep Tayyip Erdogan. Not to mention what seem to be genuinely adoring crowds in Vladimir Putin’s Russia.

How is any of this possible? It’s always a complex combination of factors, of course, ranging from people’s socio-economic situation to their ideological or religious commitments, to deficient education, to the pure and simple human herd instinct that so annoyed Nietzsche. But surely one thing that contributes to the current insane state of affairs is the reach that pernicious ideologues have in the modern era, a reach made far more efficient by the existence of the internet and social media. And by the fact that these people are often offered platforms to address audiences by institutions such as universities, newspapers, television stations and the like.

My colleague Bryan Van Norden, a professor of philosophy at Wuhan University, as well as the author of “Taking Back Philosophy: A Multicultural Manifesto,” has published a thought-provoking op-ed about institutional platforms in the New York Times. It is well worth considering in some detail: I see where Bryan is coming from, but I consider his proposed path dangerous, and his argument self-contradictory.

He begins with a couple of examples. Ultra right-wing commentator Ann Coulter recently appeared on Fox News to say that the crying migrant children separated from their parents by the Trump administration were child actors. Van Norden comments: “Does this groundless claim deserve as much airtime as, for example, a historically informed argument from Ta-Nehisi Coates that structural racism makes the American dream possible?” University of Toronto psychologist, and darling of the alt-right, Jordan Peterson talked about how difficult it is to control “crazy women” and claimed that men can naturally muster respect only for people whom they can threaten with violence. Bryan comments: “Does this adolescent opinion deserve as much of an audience as the nuanced thoughts of Kate Manne, a professor of philosophy at Cornell University, about the role of ‘himpathy’ in supporting misogyny?”

The classical liberal response to these questions is that Ann Coulter and Jordan Peterson ought to be accorded freedom of speech, on grounds famously laid out by John Stuart Mill in his On Liberty, published in 1859. The argument is based on the following considerations: (i) you may think opinion X is clearly wrong, but history is littered with people, even majorities, who were sure that something was wrong when it turned out that it wasn’t (say, that gays should have a right to marry); (ii) if X is indeed wrong, then we learn something from people who defend it, because we need to make clear to ourselves why a given notion is, in fact, wrong (otherwise, we reject it out of prejudice, not knowledge or understanding); (iii) truth is not an all or nothing matter, so we may learn even from partially or largely wrong opinions; (iv) if an opinion offends you, that’s not sufficient reason to suppress it; and (v) who, exactly, ought to be in charge of limiting the expression of unpopular or “offensive” opinions?

Van Norden calls the above line of reasoning “specious,” adding that it is rooted in “a naïve conception of rationality that [Mill] inherited from Enlightenment thinkers like René Descartes.” [Technically, Descartes influenced the Enlightenment, but was not an Enlightenment thinker, since he lived from 1596 to 1650, and the European Enlightenment was an 18th century thing.]

Bryan argues that “If you do have faith in a universal method of reasoning that everyone accepts, then the Millian defense of absolute free speech is sound,” but he very clearly states that there is no such thing as universal reason, so we should reject Mill’s argument. I think that Van Norden’s position here is ambiguous, and that what he argues in the remainder of the NYT op-ed flatly contradicts this opening statement.

He writes: “I wish it were self-evident to everyone that we should not discriminate against people based on their sexual orientation, but the current vice president of the United States does not agree. I wish everyone agreed that it is irrational to deny the evidence that there was a mass shooting in Sandy Hook, but a syndicated radio talk show host can make a career out of arguing for the contrary.”

But the fact that Mike Pence does not agree with a given notion does not mean that the notion in question is not self-evident, it may simply be that Pence denies self-evident truths, either because he is too ignorant to see them, or because of bigotry, or political expediency. Similarly, a nutcase radio talk show host, syndicated or not, may deny empirical evidence all he wants, but that doesn’t mean that his denial is reasonable. At all.

Bryan understands why Mill, and Alexis de Tocqueville, made their argument. Mill was a strong proponent of women’s rights and an opponent of slavery, and he knew too well that many people found such topics offensive, resulting in what he famously termed a tyranny of the majority.

But, argues Van Norden, we are in a very different situation from 19th century England and America. We are witnessing the worsening of a scenario already described by the philosopher Herbert Marcuse back in 1965, when he wrote: “In endlessly dragging debates over the media, the stupid opinion is treated with the same respect as the intelligent one, the misinformed may talk as long as the informed, and propaganda rides along with education, truth with falsehood.”

This is quite obviously true, of course (or is it?). Only a foolish society would give “equal time” to the discussion of evolutionary theory and creation “science,” or to a climate researcher and a so-called “skeptic” of global warming, or a medical researcher and Jenny McCarthy. But setting aside that a lot of other cases, especially political opinions (as distinct from scientific theories) are not quite so easy to settle, what is the alternative? Mill wasn’t naive about how difficult it is for most people to wade through public controversies. He just thought that freedom of speech was the least of possible evils.

Marcuse famously advocated the outright suppression of right-wing perspectives, a position that, thankfully, Bryan does not endorse. Instead, he makes an intriguing proposal: to distinguish between free speech and just access: “access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole.”

But that comes perilously close to begging the question against Mill: on what criteria should we apportion the merit of different opinions? How do we figure out what is just? How do we measure the benefit of an opinion for the community as a whole? Recall that Van Norden denies that there is any such thing as universal reason. It follows that all such judgments are bound to be arbitrary, and therefore simply to reflect the will of the people who happen to be wielding power by virtue of controlling the limited resources Bryan is referring to. This may not be quite a tyranny of the majority, but it is still a tyranny (of the elite, perhaps?).

Let’s take a look at some of the specific examples Van Norden brings up. In 2004 one Nathaniel Abraham was fired by the Woods Hole Oceanographic Institute because he admitted to his employer that he did not believe in evolution. Correctly, Bryan asserts that Abraham has a right to his wacky opinion, but that Woods Hole has a right to fire him on the grounds that he holds such opinion. But this has nothing to do with freedom of speech or institutional access: Woods Hole is a preeminent research laboratory that carries out a lot of work on evolution, so Abraham had simply admitted to his incompetence at working there. It would be like NASA firing a flat-earth believer. Or a hospital a doctor who did not “believe” in vaccines.

The next example is more pertinent, but far less clear: Van Norden claims that a number of universities, including Columbia and NYU, should not have invited Charles Murray, the co-author of The Bell Curve: Intelligence and Class Structure in American Life, to speak on their campuses. Why? Because Murray’s notions are junk science. That is true, I think (for a variety of reasons, including those explained here and here), but there are two additional factors to consider. First off, “universities” don’t invite anyone; often it is specific faculty or student associations that do. And to bar invited speakers in either case amounts to an infringement of academic freedom or of students’ rights. Second, I am of the opinion that a significant chunk of what goes on in a number of legitimate university departments is either questionable or downright junk (no, I will not mention names). But, again, I don’t get to decide which is which. I do get, however, to argue — in perfectly Millian fashion — for or against certain programs, positions, claims, and so forth.

Bryan’s third example is the recent firing by ABC of their television star, Roseanne Barr, because of her racist public remarks. But that’s yet another situation altogether. Barr did not make her remarks on television, and she was fired from ABC because the network was (rightly, I think) embarrassed by her behavior, and feared a public backlash. Of course, had the episode happened, say, in the 1950s, ABC would likely not have lifted a finger over it. I assume it is a rationally objective fact that we have made (some) improvements in our thinking about race and gender since then, but of course Van Norden cannot claim so, because he does not believe in universal reason.

Bryan mentions recent research in social psychology showing that if a falsehood is repeated, even when it is in order to debunk it, people are more likely to believe it. This is both true (maybe, since there is a replication crisis ongoing in that field) and worrisome, but is it — as Van Norden claims — reason to cheer the decision by MSNBC’s “Morning Joe” never again to invite Kellyanne Conway because of her bizarre notion of “alternative facts”? I don’t know. It is very unfortunate that someone like Conway is currently a high-profile government official, but isn’t a journalist supposed to challenge that sort of notion, rather than suppress it? Besides, through similar actions MSNBC has now acquired the reputation (deservedly or not) of being the left’s Fox, which makes its decision about Conway come across to many as naked partisanship. Is this really helpful to public discourse? I’m not so sure.

Bryan says that the “right to free speech is not the right to an audience,” and he is correct. But in philosophy we make a distinction between negative and positive rights. You may have, say, the negative right of being allowed to leave the country whenever you wish. But if things are such that you could never muster the means to actually leave, you do not have the corresponding positive right, and negative rights by themselves are largely useless. To pick a more concrete example, in the US (for now) women have a right to abortion. But such a right is meaningless if local state legislatures make it so difficult for abortion clinics to operate that for all practical purposes a woman in Texas or Alabama has to drive hundreds of miles, or even go out of state, to get an abortion. Ironically, it is a typical tactic of the right that whenever they cannot eliminate a negative right (like abortion, again, for now) they go after its positive counterpart, thus making it difficult or impossible for people to enjoy that right. The same goes for speech: if I have a “right” to it, but I am then systematically denied audiences by a small number of gatekeepers, I might as well shout into the void. And, again, who gets to make such decisions, and on what grounds, given that there is no universal reason?

Van Norden concludes his op-ed by stating: “These views [that he criticizes] are specious, and those who espouse them are, at best, ignorant, at worst, sophists,” calling people who hold those views “invincibly ignorant and intellectual hucksters.” It sounds to me like Bryan thinks he has good reasons to conclude that these people’s opinions are, in fact, wrong. I agree with his assessment. And so should any reasonable person, because reason isn’t just a matter of personal opinion that varies across time and cultures. There are standards of evidence and argument that have been worked out over the past two and a half millennia of philosophy and science, well before the European Enlightenment came about. For my part, I much prefer a society where we do our utmost so that more and more people are familiar with such standards and apply them properly, rather than one in which whoever happens to be in charge decides which resources to apportion to whom. Call me an old-fashioned Millian, in that sense.

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults, like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funny enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval, within my tribe). If I voice the opinion that while the Israelis are guilty of unlawful occupation of Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobe pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless someone has actually experienced something (be it colonialism, gender discrimination, racism, and so forth) then one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old-fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So now I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as, “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self‐interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, we may as well dismiss reason altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, or vaccination‐deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US), and certain increasingly vocal groups of the left, have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to either economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to repeat the Marxist mistake, except that in that case everything reduces to economic struggle. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it, to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it’s important to try, as huge amounts of financial resources and time are wasted as a result. Occasionally, lives are lost as a direct outcome of logophobia, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti‐medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or “fascistic.”

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavourable odds of defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And beware of becoming one yourself…)

The morality-as-cooperation hypothesis and the link between evolution and moral philosophy

“Morality” comes from the Latin moralis, which was Cicero’s translation for the Greek ethos. The Greek word is related to our idea of character, the Latin one has to do with habits and customs. In an important sense, then, morality (or ethics, used here interchangeably) is the study of how to conduct your life, and particularly how to constructively deal with other people.

Recently, a group of researchers headed by Oliver Scott Curry has put out a paper (as yet unpublished) entitled “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” which is both interesting and more than a bit irritating. (Here is the site with the full version, and here is a commentary by the lead author, put out by the Evolution Institute.) It is interesting because it provides tantalizing empirical evidence to bolster the case, made by several researchers for a long time now, that the bio-cultural origins of a moral sense lie in the need for a species like ours to foster pro-social behavior, or, as Curry et al. put it, “cooperation.” It is irritating because it fails to make the crucial conceptual distinction between the origins of morality and its current function, as well as to recognize the equally important difference between descriptive and prescriptive approaches to ethics. I’ll do my best to highlight both the positives and the negatives in what follows.

First off, a recap of the study as presented by the authors themselves:

“What is morality? And to what extent does it vary around the world? The theory of ‘morality-as-cooperation’ argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and predicts that specific forms of cooperative behaviour — including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession — will be considered morally good wherever they arise, in all cultures. In order to test these predictions, we investigate the moral valence of these seven cooperative behaviours in the ethnographic records of 60 societies. We find that the moral valence of these behaviours is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviours are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.”

Curry’s commentary begins with more than a whiff of scientism: “What is morality? And are there any universal moral values? Scholars have debated these questions for millennia. But now, thanks to science, we have the answers.” Ah! Yes, thanks to science we have the answers! Except that those two questions are quite distinct, of course. The nature of morality has been debated at least since Socrates (in the as yet unsurpassed Euthyphro) and is of course the topic of the entire field of meta-ethics. Whether there are universal moral values is a rather ambiguous question: do we mean whether there ought to be (prescriptive) or whether there merely happen to be (descriptive)? Philosophers tend to be concerned with the first sense (e.g., Kant), and are more than happy to leave the second one to anthropologists (it is, after all, an empirical question!).

Curry suggests that addressing problems of social behavior is something that has kept natural selection busy, so to speak, for millions of years during the evolution that led to the human lineage, and that moreover, once cultural evolution got started people have added a panoply of norms, rules, and institutions to deal with such problems. So far so good.

The approach used by Curry and his collaborators revolves around seven principles imported from evolutionary biology: “Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains: why we engage in costly displays of prowess such as bravery and generosity; why we defer to our superiors; why we divide disputed resources fairly; and why we recognize prior possession.”

Setting aside a few caveats (like the fact that not all societies actually have a taboo against incest), this theoretical apparatus does, in fact, go a long way toward explaining the origin of the behaviors mentioned. That led the researchers to predict the universality of the following seven moral rules: “love your family, help your group, return favors, be brave, defer to authority, be fair, and respect others’ property.” And their results do show that these rules are present across cultures, though different societies vary in the way they rank or prioritize the rules. Their data do not explain the reasons for this variation, but they also found no case where any of the rules was considered bad.

The overarching conclusion: “so there is a common core of universal moral principles. Morality is always and everywhere a cooperative phenomenon.”

Whoa, slow down a second here. First off, perhaps the seven principles in question are common across modern cultures, but that does not ipso facto mean that they always were, nor that they evolved by natural selection, though that is a plausible scenario. Take, for instance, “be fair.” Are we really to understand that “fairness” was a moral precept in the highly unequal ancient Egyptian, or Roman (or pretty much all other) societies? And what do we mean by “fair,” anyway? Fairness may be the same as equality, or not, depending on one’s views, so we have to have a discussion about what it means and how the concept is interpreted in various societies.

“Be brave” is another highly ambiguous idea, and again it is hard to imagine that it means the same thing in, say, modern day Japan, with its penchant for anti-individualism, and ancient Sparta, which was characterized by a cult of individual bravery. And of course there are many different ways of being brave (or cowardly).

Second, jumping to the conclusion that morality is “always and everywhere” about cooperation seems entirely unwarranted. We are not ants or bees: each of us has autonomous projects that we intend to pursue and that have a large impact on the degree to which we flourish as individuals. Yes, human beings are essentially social animals, and we flourish within a given social framework, but I would argue that morality isn’t about cooperation, but rather about how to balance self-centered vs. pro-social needs. The two aspects — myself as an autonomous individual and myself as a member of a group — are in constant tension, and it is that tension that morality addresses, not just the group aspect of it. Cooperation is just one means to the end of survival, reproduction, and flourishing.

And by the way, natural selection doesn’t give a damn about the latter, but we most certainly do. Which is why a lot of our norms, rules, and institutions don’t address mere survival and reproduction, but rather issues such as justice, fairness, and access to resources. That is, they are concerned with flourishing.

Third, one can perfectly coherently object to some of those rules even being on a list of moral dicta. Let’s consider three of them a bit more closely: help your group, defer to authority, and respect others’ property.

At least since the ancient Cynics and Stoics introduced the concept of cosmopolitanism, it is actually highly morally questionable to “help your group” if this is taken to mean regardless of circumstances, or at the expense of other groups that have an equal claim to resources. Today, it is not acceptable to engage in wars of conquest, for instance, in order to make our group thrive (the fact that it happens is irrelevant to the moral point). It isn’t even acceptable to exploit other groups economically, or by imposing on them environmental damage for our own convenience.

As for deferring to authority, nope, that’s a non-starter. It may be a common cross-cultural practice, but it’s a bad idea, and it is in fact highly immoral if one does so blindly, following the orders of whoever is in charge. That way lies the road to tyranny, which is not morally acceptable, in either ancient or modern societies.

But surely we should respect other people’s property. Well, it depends. If it is acquired unethically, even if legally, no, I don’t think there is any such moral requirement. If your wealth is both disproportionate and arrived at by exploiting others (and let’s be frank, if it is the former, it can hardly not be the latter), then it is just and fair to pass laws to relieve you of much of that burden, through proportional taxation, for instance. My libertarian friends can scream loudly all they want, but that doesn’t make their claims to having a “natural right” to property one bit less nonsense on stilts (in the immortal phrase by Jeremy Bentham).

So, by all means let us inquire into the likely biological and cultural evolution of morality. It is a fascinating topic, somewhat amenable to empirical investigation. But let’s not confuse the descriptive with the normative aspects of the question, just like we wouldn’t for a second confuse, say, an account of the origin of mathematical thinking with the delivery of a sound proof of Fermat’s last theorem. Any time Curry et al. tell me that rule X is universal it is still perfectly meaningful for me to reply, yes, but is it moral?

Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. de Kenessey blames Socrates for the moralistic attitude, while he should blame certain brands of Christianity instead. Here I will not make the positive case against Christian moralism (which is well known among certain politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but in general the phenomenon of what the Greeks called akrasia, or weakness of the will, and which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving through the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = uncommon opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists, beginning with Paul Samuelson, have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information, they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which although usually translated as ignorance actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia, I have found myself significantly more prone to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and to generally cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from lack of wisdom but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual case of what Socrates is arguing against Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home to watch Black Mirror rather than going to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in such judgment, just like Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine his judgment was not yet impaired by being sick. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

de Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson they are the desires that align with our judgments of what is valuable; for Michael Bratman they are the desires that cohere with our stable life plans; and for Susan Wolf they are the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (as did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and Socratic views pertains to the first, not the second case. And there one has a choice: blame people for their evildoing, or pity them for their lack of wisdom. I find the latter course of action far preferable.