Category Archives: Social & Political Philosophy

On the crucial differences between private and public forgiveness


(Stolen Generation apology, Adelaide, Australia, 13 February 2008)

In the aftermath of the Apartheid government in South Africa, a Truth and Reconciliation Commission (TRC) was formed to help the country move forward at the same time as it acknowledged and attempted to redress the injustices of the past. Not everyone was on board with the project, but it turned out to be a success in terms of helping to heal the nation. Of the 7,000 individuals who applied to the commission for amnesty, about 10% were granted it, and the commission proposed a wealth tax to help fund reparation programs. (The South African government, however, never followed through.)


This was not the first TRC, nor would it be the last. The very first was established in Uganda in 1974. Bolivia followed in 1982, with its National Commission of Inquiry into Disappearances, which brought together a variety of sectors of society after the end of military rule there. Then came Argentina in 1983, Chile and Nepal in 1990, El Salvador and Germany in 1992, Guatemala in 1994, Canada in 2006, and so forth.


Priscilla Hayner in Unspeakable Truths: Transitional Justice and the Challenge of Truth Commissions (Routledge, 2002) defines a TRC in this manner:


“A truth commission (1) is focused on the past, rather than ongoing, events; (2) investigates a pattern of events that took place over a period of time; (3) engages directly and broadly with the affected population, gathering information on their experiences; (4) is a temporary body, with the aim of concluding with a final report; and (5) is officially authorized or empowered by the state under review.”


I’m telling you all this because of a brilliant talk I just heard at City College, entitled “The Broken Promise of Public Forgiveness,” delivered by University of Texas-Austin PhD candidate Simone Gubler. Simone took as her starting point the formal apology proffered by the Parliament of Australia to the indigenous people of that continent, back in 2008. The apology was delivered by Prime Minister Kevin Rudd, who also asked the indigenous people for forgiveness on behalf of the nation.


And here is where things, according to Simone, got problematic. You see, a good argument can be made that forgiveness is an essentially personal process, not one that makes sense at the level of public institutions. Setting aside the obvious observation that the current non-indigenous inhabitants of Australia did not perpetrate the original crimes against the indigenous ones (though, of course, they still take advantage of the aftermath), and setting further aside the fact that groups cannot forgive (only individual members of such groups can), there is an obviously imbalanced power dynamic at play here. Asking for forgiveness at that level in a real sense imposes an implied demand on the other side, along the lines of “hey, I’m being so nice to ask you, and you are going to refuse?”


Individuals are far less likely to feel that pressure. If my partner betrays me and she asks for forgiveness I may or may not grant it. It’s up to me, and it is not a given that I will grant it. But when we scale up from the individual to a social group the dynamics change dramatically, according to Simone, so that forgiveness is no longer about contrite individuals who have come to agree that what they did is wrong, but rather about a political (possibly, though not necessarily, cynical) move in the public arena, meant to elicit a very specific response.


I must admit that I was rather skeptical of Simone’s approach when she outlined what she was going to argue for at the beginning of her talk, but it didn’t take much to convince me that she is right. And then Q&A time came, and my colleague Kate Ritchie upped the ante. She suggested that what Simone said about forgiveness also goes for official apologies. They, too, are something that makes sense at the individual level, but not so much at the social one. And apologies too can be given by the wrong person, on behalf of groups who may not agree, used as a power play, and delivered because of cynical calculations. Even when not cynical in nature, both Simone and Kate agreed, requests for forgiveness as well as apologies quickly become empty. Simone mentioned the institution of a national “Sorry Day” in Australia, which, while probably initially well intentioned, soon turned into a rote, empty gesture, particularly since little or nothing is being done in the meantime to actually improve the lives of indigenous people.


Once again, my initial thought was that surely Kate’s point is problematic. But nope, a few seconds of reflection revealed to me that she was right.


But hold on, then. I began this article by talking in positive terms of the notion of Truth and Reconciliation Commissions, and now I’m agreeing with our guest speaker (it was Philosophy Day, by the way) that officially asking for forgiveness, and even issuing public apologies, is problematic. Doesn’t reconciliation imply apologies and forgiveness?


Probably, again, at the individual level, but not the public one. Consider two contrasting situations. The first one is the case mentioned above of my partner cheating on me. Let’s say that she apologized and asked for forgiveness. While I’m not mandated to grant it, it would seem that if I agree to reconcile, to get back together and resume our life as a couple, I have at least implicitly accepted her apology and forgiven her. Without that, life as a couple would be re-established on very shaky foundations indeed.


The second scenario is that of the CEO of Exxon-Mobil apologizing to the public for yet another environmental catastrophe caused by the greedy practices of his corporation. Again setting aside the degree to which it makes sense for a single individual to apologize on behalf of a large anonymous entity which he certainly does not control beyond a limited extent, what I – as a member of the public – expect from Exxon-Mobil is three things and three things only: (i) an acknowledgement of the facts; (ii) some concrete suggestion on how the corporation can repair the damage; and (iii) reasonable assurances that whatever caused the problem will not happen again in the future. Both apologies and forgiveness are entirely beside the point.


The bottom line from all the above is that there is an important difference between the individual and social levels at which concepts like apologies and forgiveness operate. Neither Simone nor yours truly denies that it makes perfect sense for an individual to apologize to another for some wrongdoing. She also agrees (and so do I) that it makes sense for an individual to ask for forgiveness, so long as it is understood that this is not an imposition on the other party, who may or may not grant it. Yet, as we have seen above, both concepts are problematic when scaled up to the level of social groups. If this is true, why do we do it, and how could we do otherwise?


I believe one source of the particular kind of mistake we are talking about is Plato. In the Republic he famously set out to investigate what makes for a just person. His strategy was to scale things up and ask first what makes for a just state (the Republic of the title). The explicit assumption was that one can go back and forth between the two levels. The ideal Republic will be guided by reason (in the form of a ruling class of philosophers), who will direct the other two components (the soldiers-auxiliaries and the regular folks-producers). Likewise, the ideal human being has a tripartite soul, and is well advised to put her rational soul in control of the spirited and appetitive ones.


Yet, most of the ancients rejected this approach, making a clear distinction between individuals and society. Both the Stoics and Cicero talked of ethics (from the Greek êthos, a word related to our idea of character) as distinct from law. Ethics (and morality, from the Latin moralis, which is how Cicero translated the Greek êthos) pertains to the individual and her character; law pertains to how we regulate things in society. A just individual is not the same as a just state. At the individual level the Stoics considered justice a character trait, having to do with treating other people fairly and with respect, but not necessarily equally (e.g., I don’t treat my daughter in the same way as a stranger, even though I try to show respect for everyone); by contrast, at the societal level a just state is one based on just laws, where everyone is treated equally, regardless of personal status.


Simone’s suggestion, then, can be reframed as a call for going back to a meaningful distinction between ethics and law, the first one functioning at the individual, the second at the societal level. It is within this framework that it makes perfect sense to say that apologies and forgiveness have an important role to play in the intercourse between individuals, while at the same time maintaining that they are problematic between groups. So politicians and corporate officers can keep their apologies, and they will not get our forgiveness. What they need to do instead is to acknowledge wrongdoing and put in place workable procedures to redress injustices.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago a single, sentient species actually has developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story: while they still lie in the borderlands between science and science fiction, some of them will surely become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed to be desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that there is – other things being equal – a statistical correlation between intelligence and genetic makeup. Nothing more, and that ain’t even remotely close to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which would be far thornier than Torres lets on.


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While there is surely a risk of such an attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us into this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing millions of people, thanks to science and technology. The culmination of that period was the only nuclear attack on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist; the US government, to be specific.

Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) and moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes possible killing at a distance by pushing a button. It is what allows many of us to reconcile the otherwise irreconcilable fact that the top 10% of the human population lives by standards historically reserved for kings and queens, while the rest is below or barely above poverty, subject to preventable disease, or killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would suffer less from a lack of empathy if they had a higher IQ, he remembers that some real-life examples of such actors, like the Unabomber Ted Kaczynski, actually do have high IQs, and yet are still deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench in Pinker’s, Bostrom’s and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH; it’s not that I trust others a hell of a lot more) can come pretty darn close. Yet Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi, at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.


Bostrom’s calculations are based on thin air, to be charitable. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more doubtful that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby nova explosion isn’t part of the mix), but the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.


Toward the end of the essay we simply reach Pindaric flights of imagination:

There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth, our only planet, which we are the ones currently destroying, and talk some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) have remained unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones fit for use by large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our minds and center ourselves so that we can live a better and more meaningful life, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course, that’s always helpful.

Should “the ignorant” be denied access to audiences?

John Stuart Mill

People who ended up voting for Donald Trump were famously characterized by Hillary Clinton as the “basket of deplorables.” And I must admit that I look on in bewilderment at the foolishness of US politics, the recent Italian elections, Brexit, and the re-election of Turkish strongman Recep Tayyip Erdogan. Not to mention what seem to be genuinely adoring crowds in Vladimir Putin’s Russia.

How is any of this possible? It’s always a complex combination of factors, of course, ranging from people’s socio-economic situation to their ideological or religious commitments, to deficient education, to the pure and simple human herd instinct that so annoyed Nietzsche. But surely one thing that contributes to the current insane state of affairs is the reach that pernicious ideologues have in the modern era, a reach made far more efficient by the existence of the internet and social media. And by the fact that these people are often offered platforms to address audiences by institutions such as universities, newspapers, television stations and the like.

My colleague Bryan Van Norden, a professor of philosophy at Wuhan University, as well as the author of “Taking Back Philosophy: A Multicultural Manifesto,” has published a thought-provoking op-ed about institutional platforms in the New York Times. It is well worth considering in some detail: I see where Bryan is coming from, but I consider his proposed path dangerous, and his argument self-contradictory.

He begins with a couple of examples. Ultra right-wing commentator Ann Coulter recently appeared on Fox News to say that the crying migrant children separated from their parents by the Trump administration were child actors. Van Norden comments: “Does this groundless claim deserve as much airtime as, for example, a historically informed argument from Ta-Nehisi Coates that structural racism makes the American dream possible?” University of Toronto psychologist, and darling of the alt-right, Jordan Peterson talked about how difficult it is to control “crazy women” and the fact that men naturally can muster respect only for people whom they can threaten with violence. Bryan comments: “Does this adolescent opinion deserve as much of an audience as the nuanced thoughts of Kate Manne, a professor of philosophy at Cornell University, about the role of ‘himpathy’ in supporting misogyny?”

The classical liberal response to these questions is that Ann Coulter and Jordan Peterson ought to be accorded freedom of speech, on grounds famously laid out by John Stuart Mill in his On Liberty, published in 1859. The argument is based on the following considerations: (i) you may think opinion X is clearly wrong, but history is littered with people, even majorities, who were sure that something was wrong when it turned out that it wasn’t (say, that gays should have a right to marry); (ii) if X is indeed wrong, then we learn something from people who defend it, because we need to make clear to ourselves why a given notion is, in fact, wrong (otherwise, we reject it out of prejudice, not knowledge or understanding); (iii) truth is not an all or nothing matter, so we may learn even from partially or largely wrong opinions; (iv) if an opinion offends you, that’s not sufficient reason to suppress it; and (v) who, exactly, ought to be in charge of limiting the expression of unpopular or “offensive” opinions?

Van Norden calls the above line of reasoning “specious,” adding that it is rooted in “a naïve conception of rationality that [Mill] inherited from Enlightenment thinkers like René Descartes.” [Technically, Descartes influenced the Enlightenment, but was not an Enlightenment thinker, since he lived from 1596 to 1650, and the European Enlightenment was an 18th century thing.]

Bryan argues that “If you do have faith in a universal method of reasoning that everyone accepts, then the Millian defense of absolute free speech is sound,” but he very clearly states that there is no such thing as universal reason, so we should reject Mill’s argument. I think that Van Norden’s statement is ambiguous and that what he argues in the remainder of the NYT op-ed flatly contradicts his opening statement.

He writes: “I wish it were self-evident to everyone that we should not discriminate against people based on their sexual orientation, but the current vice president of the United States does not agree. I wish everyone agreed that it is irrational to deny the evidence that there was a mass shooting in Sandy Hook, but a syndicated radio talk show host can make a career out of arguing for the contrary.”

But the fact that Mike Pence does not agree with a given notion does not mean that the notion in question is not self-evident, it may simply be that Pence denies self-evident truths, either because he is too ignorant to see them, or because of bigotry, or political expediency. Similarly, a nutcase radio talk show host, syndicated or not, may deny empirical evidence all he wants, but that doesn’t mean that his denial is reasonable. At all.

Bryan understands why Mill, and Alexis de Tocqueville, made their argument. Mill was a strong proponent of women’s rights and an opponent of slavery, and he knew too well that many people found such topics offensive, resulting in what he famously termed a tyranny of the majority.

But, argues Van Norden, we are in a very different situation from 19th century England and America. We are witnessing the worsening of a scenario already described by the philosopher Herbert Marcuse back in 1965, when he wrote: “In endlessly dragging debates over the media, the stupid opinion is treated with the same respect as the intelligent one, the misinformed may talk as long as the informed, and propaganda rides along with education, truth with falsehood.”

This is quite obviously true, of course (or is it?). Only a foolish society would give “equal time” to the discussion of evolutionary theory and creation “science,” or to a climate researcher and a so-called “skeptic” of global warming, or a medical researcher and Jenny McCarthy. But setting aside that a lot of other cases, especially political opinions (as distinct from scientific theories) are not quite so easy to settle, what is the alternative? Mill wasn’t naive about how difficult it is for most people to wade through public controversies. He just thought that freedom of speech was the least of possible evils.

Marcuse famously advocated the outright suppression of right-wing perspectives, a position that, thankfully, Bryan does not endorse. Instead, he makes an intriguing proposal: to distinguish between free speech and just access: “access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole.”

But that comes perilously close to begging the question against Mill: on what criteria should we apportion the merit of different opinions? How do we figure out what is just? How do we measure the benefit of an opinion for the community as a whole? Recall that Van Norden denies that there is any such thing as universal reason. It follows that all such judgments are bound to be arbitrary, and therefore simply to reflect the will of the people who happen to be wielding power by virtue of controlling the limited resources Bryan is referring to. This may not be quite a tyranny of the majority, but it is still a tyranny (of the elite, perhaps?).

Let’s take a look at some of the specific examples Van Norden brings up. In 2004 one Nathaniel Abraham was fired by the Woods Hole Oceanographic Institution because he admitted to his employer that he did not believe in evolution. Correctly, Bryan asserts that Abraham has a right to his wacky opinion, but that Woods Hole has a right to fire him on the grounds that he holds such an opinion. But this has nothing to do with freedom of speech or institutional access: Woods Hole is a preeminent research laboratory that carries out a lot of work on evolution, so Abraham had simply admitted to his incompetence at working there. It would be like NASA firing a flat-earth believer, or a hospital firing a doctor who does not “believe” in vaccines.

The next example is more pertinent, but far less clear: Van Norden claims that a number of universities, including Columbia and NYU, should not have invited Charles Murray, the co-author of The Bell Curve: Intelligence and Class Structure in American Life, to speak on their campuses. Why? Because Murray’s notions are junk science. That is true, I think (for a variety of reasons, including those explained here and here), but there are two additional factors to consider. First off, “universities” don’t invite anyone; often it is specific faculty or student associations that do. And to bar invited speakers in either case amounts to an infringement of academic freedom or students’ rights. Second, I am of the opinion that a significant chunk of what goes on in a number of legitimate university departments is either questionable or downright junk (no, I will not mention names). But, again, I don’t get to decide which is which. I do get, however, to argue — in perfectly Millian fashion — in favor of or against certain programs, positions, claims, and so forth.

Bryan’s third example is the recent firing by ABC of their television star, Roseanne Barr, because of her racist public remarks. But that’s yet another situation altogether. Barr did not make her remarks on television, and she was fired from ABC because the network was (rightly, I think) embarrassed by her behavior, and feared a public backlash. Of course, had the episode happened, say, in the 1950s, ABC would likely not have lifted a finger about it. I assume it is a rationally objective fact that we have made (some) improvements in our thinking about race and gender since then, but of course Van Norden cannot claim so, because he does not believe in universal reason.

Bryan mentions recent research in social psychology showing that if a falsehood is repeated, even when it is in order to debunk it, people are more likely to believe it. This is both true (maybe, since there is a replication crisis ongoing in that field) and worrisome, but is it — as Van Norden claims — reason to cheer MSNBC’s “Morning Joe” decision never again to invite Kellyanne Conway because of her bizarre notion of “alternative facts”? I don’t know. It is very unfortunate that someone like Conway is currently a high profile government official, but isn’t a journalist supposed to challenge that sort of notion, rather than suppress it? Besides, by way of similar actions MSNBC has now gathered the reputation (deservedly or not) of the left’s Fox, which makes their decision about Conway come across to many as naked partisanship. Is this really helpful to public discourse? I’m not so sure.

Bryan says that “right to free speech is not the right to an audience,” and he is correct. But in philosophy we make a distinction between negative and positive rights. You may have, say, the negative right of being allowed to leave the country whenever you wish. But if things are such that you could never muster the means to actually leave, you do not have a corresponding positive right, and negative rights by themselves are largely useless. To pick a more concrete example, in the US (for now) women have a right to abortion. But such a right is meaningless if local state legislatures make it so difficult for abortion clinics to practice that for all practical purposes a woman in Texas or Alabama has to drive hundreds of miles, or even go out of state, to get an abortion. Ironically, it is a typical tactic of the right that whenever they cannot eliminate a negative right (like abortion, again, for now) they go after its positive counterpart, thus making it difficult or impossible for people to enjoy that right. The same goes for speech: if I have a “right” to it, but I am then systematically denied audiences by a small number of gatekeepers, I might as well shout into the void. And, again, who gets to make such decisions, and on what grounds, given that there is no universal reason?

Van Norden concludes his op-ed by stating: “These views [that he criticizes] are specious, and those who espouse them are, at best, ignorant, at worst, sophists,” calling people who hold those views “invincibly ignorant and intellectual hucksters.” It sounds to me like Bryan thinks he has good reasons to think that these people’s opinions are, in fact, wrong. I agree with his assessment. And so should any reasonable person, because reason isn’t a matter of personal opinion that varies across time and cultures. There are standards of evidence and argument that have been worked out over the past two and a half millennia of philosophy and science, way before the European Enlightenment came about. For my part, I prefer by far a society where we do our utmost so that more and more people are familiar with such standards and apply them properly, rather than one in which whoever happens to be in charge gets to decide which resources to apportion to whom. Call me an old-fashioned Millian, in that sense.

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults, like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funnily enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval, within my tribe). If I voice the opinion that while the Israelis are guilty of unlawful occupation of Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobic pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless one has actually experienced something (be it colonialism, gender discrimination, racism, and so forth), one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self‐interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, we may as well dismiss reason altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, or vaccination‐deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US), and certain increasingly vocal groups of the left, have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to commit the Marxist mistake, except that there everything reduces to economic struggle. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it’s important to try, as huge amounts of financial resources and time are wasted as a result. Occasionally, lives are lost as a direct outcome of logophobia, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti‐medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or “fascistic.”

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavourable odds of defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And beware of becoming one yourself…)

The morality-as-cooperation hypothesis and the link between evolution and moral philosophy

“Morality” comes from the Latin moralis, which was Cicero’s translation for the Greek ethos. The Greek word is related to our idea of character, the Latin one has to do with habits and customs. In an important sense, then, morality (or ethics, used here interchangeably) is the study of how to conduct your life, and particularly how to constructively deal with other people.

Recently, a group of researchers headed by Oliver Scott Curry has put out a paper (as yet unpublished) entitled “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” which is both interesting and more than a bit irritating. (Here is the site with the full version, and here is a commentary by the lead author, put out by the Evolution Institute.) It is interesting because it provides tantalizing empirical evidence to bolster the case, made by several researchers for a long time now, that the bio-cultural origins of a moral sense lie in the need for a species like ours to foster pro-social behavior, or as Curry et al. put it, “cooperation.” It is irritating because it fails to make the crucial conceptual distinction between the origins of morality and its current function, as well as to recognize the equally important difference between descriptive and prescriptive approaches to ethics. I’ll do my best to highlight both the positives and the negatives in what follows.

First off, a recap of the study as presented by the authors themselves:

“What is morality? And to what extent does it vary around the world? The theory of ‘morality-as-cooperation’ argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and predicts that specific forms of cooperative behaviour — including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession — will be considered morally good wherever they arise, in all cultures. In order to test these predictions, we investigate the moral valence of these seven cooperative behaviours in the ethnographic records of 60 societies. We find that the moral valence of these behaviours is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviours are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.”

Curry’s commentary begins with more than a whiff of scientism: “What is morality? And are there any universal moral values? Scholars have debated these questions for millennia. But now, thanks to science, we have the answers.” Ah! Yes, thanks to science we have the answers! Except that those two questions are quite distinct, of course. The nature of morality has been debated by Socrates (in the as yet unsurpassed Euthyphro) and is of course the topic of the entire field of meta-ethics. Whether there are universal moral values is a rather ambiguous question: do we mean whether there ought to be (prescriptive) or whether there merely happen to be (descriptive)? Philosophers tend to be concerned with the first sense (e.g., Kant), and are more than happy to leave the second one to anthropologists (it is, after all, an empirical question!).

Curry suggests that addressing problems of social behavior is something that has kept natural selection busy, so to speak, for millions of years during the evolution that led to the human lineage, and that moreover, once cultural evolution got started people have added a panoply of norms, rules, and institutions to deal with such problems. So far so good.

The approach used by Curry and his collaborators revolves around seven principles imported from evolutionary biology: “Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains: why we engage in costly displays of prowess such as bravery and generosity; why we defer to our superiors; why we divide disputed resources fairly; and why we recognize prior possession.”

Setting aside a few caveats (like the fact that not all societies actually have a taboo against incest), this theoretical apparatus does, in fact, go a long way toward explaining the origin of the behaviors mentioned. That led the researchers to predict the universality of the following seven moral rules: “love your family, help your group, return favors, be brave, defer to authority, be fair, and respect others’ property.” And their results do show that these rules are present across cultures, though different societies vary in the way they rank or prioritize the rules. While their data do not explain the reasons for this variation, they also found no case where any of the rules was considered bad.

The overarching conclusion: “so there is a common core of universal moral principles. Morality is always and everywhere a cooperative phenomenon.”

Whoa, slow down a second here. First off, perhaps the seven principles in question are common across modern cultures, but that does not ipso facto mean that they always were, nor that they evolved by natural selection, though that is a plausible scenario. Take, for instance, “be fair.” Are we really to understand that “fairness” was a moral precept in the highly unequal ancient Egyptian, or Roman (or pretty much all other) societies? And what do we mean by “fair,” anyway? Fairness may be the same as equality, or not, depending on one’s views, so we have to have a discussion about what it means and how the concept is interpreted in various societies.

“Be brave” is another highly ambiguous idea, and again it is hard to imagine that it means the same thing in, say, modern day Japan, with its penchant for anti-individualism, and ancient Sparta, which was characterized by a cult of individual bravery. And of course there are many different ways of being brave (or cowardly).

Second, jumping to the conclusion that morality is “always and everywhere” about cooperation seems entirely unwarranted. We are not ants or bees, each of us has autonomous projects that we intend to pursue and that have a large impact on the degree to which we flourish, as individuals. Yes, human beings are essentially social animals, and we flourish within a given social framework, but I would argue that morality isn’t about cooperation, but rather about how to balance self-centered vs pro-social needs. The two aspects — myself as an autonomous individual and myself as a member of a group — are in constant tension, and it is that tension that morality addresses, not just the group aspect of it. Cooperation is just one means to the end of survival, reproduction, and flourishing.

And by the way, natural selection doesn’t give a damn about the latter, but we most certainly do. Which is why a lot of our norms, rules, and institutions don’t address mere survival and reproduction, but rather issues such as justice, fairness, and access to resources. That is, they are concerned with flourishing.

Third, one can perfectly coherently object to some of those rules even being on a list of moral dicta. Let’s consider three of them a bit more closely: help your group, defer to authority, and respect others’ property.

At least since the ancient Cynics and Stoics introduced the concept of cosmopolitanism, it is actually highly morally questionable to “help your group” if this is taken to mean regardless of circumstances, or at the expense of other groups that have an equal claim to resources. Today, it is not acceptable to engage in wars of conquest, for instance, in order to make our group thrive (the fact that it happens is irrelevant to the moral point). It isn’t even acceptable to exploit other groups economically, or by imposing on them environmental damage for our own convenience.

As for deferring to authority, nope, that’s a non-starter. It may be a common cross-cultural practice, but it’s a bad idea, and it is in fact highly immoral if one does so blindly, following the orders of whoever is in charge. That way lies the road to tyranny, which is not morally acceptable, in either ancient or modern societies.

But surely we should respect other people’s property. Well, it depends. If it is acquired unethically, even if legally, no, I don’t think there is any such moral requirement. If your wealth is both disproportionate and arrived at by exploiting others (and let’s be frank, if it is the former, it can hardly not be the latter), then it is just and fair to pass laws to relieve you of much of that burden, through proportional taxation, for instance. My libertarian friends can scream loudly all they want, but that doesn’t make their claim to a “natural right” to property any less nonsense on stilts (in Jeremy Bentham’s immortal phrase).

So, by all means let us inquire into the likely biological and cultural evolution of morality. It is a fascinating topic, somewhat amenable to empirical investigation. But let’s not confuse the descriptive with the normative aspects of the question, just like we wouldn’t for a second confuse, say, an account of the origin of mathematical thinking with the delivery of a sound proof of Fermat’s last theorem. Any time Curry et al. tell me that rule X is universal it is still perfectly meaningful for me to reply, yes, but is it moral?

Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. De Kenessey blames Socrates for the moralistic attitude, when he should blame certain brands of Christianity instead. Here I will not make the positive case against Christian moralism (which is common among politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but more generally about the phenomenon of what the Greeks called akrasia, or weakness of the will, which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving down to the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = contrary to common opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists such as Paul Samuelson have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information; they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which although usually translated as ignorance actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia I have found myself significantly more inclined to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and to generally cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from lack of wisdom but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual instance of the point Socrates is making against Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home and watch Black Mirror rather than go to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in such judgment, just like Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine, his judgment was not yet impaired by the sickness. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

De Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt, the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson, they are the desires that align with our judgments of what is valuable; for Michael Bratman, they are the desires that cohere with our stable life plans; and for Susan Wolf, they are the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (as did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and the Socratic view pertains to the first, not the second case. And there one has a choice between blaming people for their evildoing and pitying them for their lack of wisdom. I find the latter course of action far preferable.

Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a Professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms, including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left-wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself; there is no point in my repeating easily found material here. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (but not only: governments are a little better to the extent that they really are accountable to their people by means of a not too dysfunctional democracy. The US Government does not, at this point, qualify, for instance). But I also recognize that these and other tech companies have made possible a number of wonderful things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is by legislation, along the model of the famous breakup of Bell back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near monopolies, and benefiting from one’s work does not equate to creating a very small class of ultra-billionaires who exploit their workers (like Amazon certainly does), including in other countries (like Apple equally certainly does).

But while we wait for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), we don’t have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft because of the poor quality of their products, back in 2004. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high quality e-book readers, and to begin to direct readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.

Which brings me to Google. In a sense, I actually increased my use of their products, since now my office suite is the Google one, replacing Apple’s iWork. But it is a temporary transition, dictated by the limited time available to search for suitable long-term alternatives, and by the need not to disrupt several ongoing collaborative works. And at any rate, I hit Google where it hurts: their web search engine, which generates their advertising revenue and is of course highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, again unless I was willing to forgo keeping in touch with (real) friends and family, and also to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was to get rid of their obnoxious Messenger app, as well as “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I am now using Facebook with a very small and tightly guarded circle of actual friends and family, as well as for the above mentioned outreach, nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye-opening. The other thing that was eye-opening relates to the second point: the comparative lack of support, and occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and began immediately to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious during the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter of technology, especially electronics. No, it isn’t really that expensive (as I’m sure everyone knows, alternatives to Apple’s phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who have similar objections and have done much of the legwork for me; and at any rate, it’s an ethical decision, and it would be a bit too easy if it didn’t cost me money or time or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against divesting from Apple. It’s the social activism equivalent of what in biology is called frequency-dependent selection: you go for the rare phenotype, which eventually becomes dominant, at which point you switch to the new rare one, and so on.

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success, as I said above my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then make some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I were talking about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former, but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next-to-last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well-researched ammunition, as well as the comfort of knowing that there are others who think and act the same). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong then you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well-known cognitive bias.

Be that as it may. My conscience feels slightly better, in the same way and measure in which similar small decisions (to not eat meat, to try to shop locally, to voluntarily pay carbon footprint offsets when I travel by air, to change bank because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented, it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail (which does not track your search history), or any of a number of alternative third party clients

Apple’s office productivity suite (iWork) > Google’s office suite, as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history), Google Chrome not acceptable

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of simple SMS, or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google Maps > this is a tough one (particularly since Apple Maps is neither acceptable for this project, nor good, really), though for instance Citymapper works very well for major cities

Five big philosophical questions: my modest take

An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim; I am simply offering my informed opinion about them, in my dual role of philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not given us, yet, an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon revolves around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusion is logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them; we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid at checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence; it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functioning of our physiology and metabolism. Death is a natural phenomenon, everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly a reflection of contemporary debates about justice and inequality more than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with mind, if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as a personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

In defense of the indefensible humanities

The University of Bologna, the most ancient in the world.

We keep hearing that the humanities — meaning things like literature, philosophy, history and so forth — are in crisis. Which is undeniably true, as measured in terms of dollars invested in them (including number of faculty positions, courses offered, etc.) in many contemporary universities, especially, but not only, in the United States and the UK. Many reasons have been adduced to explain this phenomenon, and there have been a number of calls to defend the humanistic disciplines on a variety of grounds.

I have my own take on this, which crystallized in my mind several years ago, during a dinner with the Chair of the Philosophy Department at the University of Notre Dame. He was bragging that Notre Dame has the largest philosophy department in the country, possibly the world (I think the former statement is correct, the latter is doubtful, but still). I was then myself Chair of the Department of Philosophy at Lehman College in the Bronx, and I asked my host what accounted for their success. His response was simple and obvious: “we are a Catholic university. You simply don’t graduate from here unless you have taken a minimum of two philosophy courses.”

It is as simple as that, really. The “crisis” is an artifact of the fact that universities — especially public ones in the US — are increasingly run like businesses, where the “customers” (they used to be called students) get to pick what they want to study and how. The problem, of course, is that students, by definition, don’t know enough about what is good for them, and so should be institutionally limited in their choices. When I learned how to drive I patiently listened to my instructor and followed his lead; I didn’t design my own curriculum at driving school. The same when I learned Judo. Oh, and when I went to college, obviously. To run universities the way they are run now is purely and simply to abdicate the responsibility of teaching the next generation. Faculty and administrators, instead, become retail sellers, competing with each other to attract the highest number of customers in order to boost enrollment and bring in the tuition money that is increasingly needed because States have cut funding for “public” education, in many cases to ridiculously low levels.

I could end this post here, surely having pissed off or outraged countless students and administrators. Which is okay, since I’ve got tenure. But I recently read a refreshingly different essay on the subject, which I want to comment on. It’s titled “There is no case for the humanities,” published in American Affairs Journal, and authored by Justin Stover, a quondam fellow of All Souls College, Oxford University, and a lecturer at the University of Edinburgh. Stover provides a scholarly informed background about the history of the very concept of a university, makes excellent points, gets most of the facts right, and yet is — I maintain — spectacularly wrong in his conclusions. Or so I am going to argue.

Stover begins by arguing that there is deep conceptual confusion about what the humanities are and the reasons for studying them. He then immediately tells his readers that he will ignore the first part of the issue (what constitutes the humanities) and devote his piece to the second one (why study them). Not necessarily a good move, in my opinion, because the reader is left — right off the bat, so to speak — having to guess what Stover means by “humanities.” Still, let’s assume that we all know what he is talking about, a la Justice Potter Stewart.

Stover’s first excellent point concerns the strange critique, and support, that both conservatives and leftists have for the humanities. The conservatives first. On the one hand, they attempt to use the coercive power of the state, and the financial muscle of private donors, in order to correct what they see as the ideological bias of the academy. On the other hand, in so doing, they are contributing to the destruction of the very professoriate that they claim to be defending. As Stover puts it:

“It is self-defeating to make common cause with corporate interests looking to co-opt the university and its public subsidy to outsource their job training and research, just for the sake of punishing the political sins of liberal professors.”

This is without even counting the fact that university professors tend to be liberal within the humanities, but certainly not in the social sciences, or even in the natural sciences — which are by far more powerful and influential on modern campuses.

The left doesn’t do much better, according to Stover. Progressives want to use the humanities as a force for social change and a training camp for citizen-activists, which right there is in flagrant contradiction with the mission of a university. Worse, they impose ideological litmus tests, despite their vocal protestations of being in favor of critical thinking and freedom of expression.

Stover tells us that most faculty are caught in the middle of this struggle, and that what they want to do, mostly, is to mind their business and carry out their research and scholarship on tiny, and often entirely irrelevant, domains of human knowledge. In other words, they want to do precisely what universities were originally designed to do, from the establishment of the world’s first university (in Bologna, Italy) back in 1088, onwards. This is an interesting — and as far as I know correct — point:

“The critics, often well-meaning [well, I don’t know about that], think they are attacking the decadence and excess of contemporary humanities scholarship, when in fact they are striking at the very heart of the humanities as they have existed for centuries.”

One large caveat here, coming from my more extensive experience as someone who has worked in, and is familiar with the history of, not just the humanities but the sciences as well. Everything that Stover has said so far, and that he says in the rest of the article, applies mutatis mutandis to the sciences. Which pretty much disposes of his entire argument, since he is assuming from the beginning that the humanities are somehow different from the rest of the academy. They are most certainly not, at least not by the light of the parameters he uses in his discussion.

The central part of the article is structured around a series of obviously provocative sections, boldly making entirely counterintuitive claims. The first one is “in praise of overspecialization,” addressing the criticism that today’s humanistic scholarship is too narrowly focused, and often concerned with minutiae that seem hardly worth bothering with. Here Stover is absolutely right that this is nothing new:

“No Scholastic ever argued how many angels could dance on the head of a pin — it takes the fevered imagination of a philosophe to come up with that question — but popular depictions of scholars in the Middle Ages indicate that their specialized pursuits were not always fully appreciated.”

Indeed, as Stover points out with dismay, it is the modern expectation that is new and way out of proportion. If you were to write, for instance, a paper or book on French clothing from 1650 to 1699, reviewers would demand that you situate your work within the broader area of literary theory, and moreover provide analyses of your material within the framework generated by the cultural milieu of the modern world. No Scholastic was ever asked to do anything like that at all.

This demand for broad context and up to date framing, according to Stover, simply results in bad scholarship:

“Take an important subject, say, democracy in classical Athens. If you ever want to go beyond a silly nursery story about Athens as the cradle of democracy … if you actually want to understand the political and social system of fifth-century Athens, you would have to delve into everything from epigraphy to the minor Attic orators, to comedy and tragedy, the Greek economy, trade relationships in Greece and the Mediterranean, coinage, ship construction, material supply chains, colonies, gender roles, even clothing and food.”

In other words, you would have to rely on a lot of narrow, “useless” scholarship.

The next section is “in defense of overproduction.” Here too, Stover’s strategy is to show that this isn’t a new problem, but a feature that has been with us from the dawn of (scholarly) time. He quotes an unspecified 13th-century scholar who complained that “Aristotle offers the key to wisdom, but he hid that key in so many books.” Tens of thousands of commentaries on Peter Lombard exist, unread for hundreds of years, scattered across European universities, the reason being that writing one was once a standard exercise to go through to become a reputable (and licensed) teacher of theology. Overproduction hardly seems a sufficient term here!

Then we have “against teaching,” where Stover reminds us that scholars have always eschewed teaching, and that universities were never meant primarily as teaching (as opposed to scholarly) enterprises. I remember reading a biography of Galileo (not a humanist, but a scientist!) that commented on a letter he wrote to a friend explaining why he was moving back to Florence from Padua: the wine is better, and the teaching load is smaller. I can relate. Stover puts it this way:

“These critiques, whether from the right or left, betray a rather limited horizon of imagination. They can only see the university as a fee-for-service corporation, a vendor hawking knowledge. … A school — be it a gymnasium or realschule, a college or a lycee, a grammar school or comprehensive, a preparatory academy or a public school — exists to teach. But the difference between a university and a school is not the mere difference of the age of the student or the level of instruction. The university is a different kind of thing.”

Indeed. Throughout its history the university has been a locus of scholarship, where the students benefit from the proximity with scholars, more a workshop than a school, at least ideally. That role has now shifted to graduate schools, in the process degrading colleges to glorified high schools, in part because actual high schools no longer do a proper job of teaching the next generation.

So Stover is right that the modern critics of the university, if they had their way, would destroy the very concept of a university, turning it instead into a slightly refined high school. He sees the contemporary university as a bizarre chimaera, and he is not wrong in this:

“The contemporary university … has become an institution for teaching undergraduates, a lab for medical and technological development in partnership with industry, a hospital, a museum (or several), a performance hall, a radio station, a landowner, a big-money (or money-losing) sports club, a research center competing for government funding, often the biggest employer for a hundred miles around, and, for a few institutions, a hedge fund.”

Which brings him finally to what he sees as the misguided attempts of late to defend the humanities. He accuses his colleagues of uttering words in which they don’t, really, believe, such as “skills,” “relevance,” “changing economy,” “engagement,” and “values.” I think he is a bit too harsh here, but I have certainly experienced, both as a faculty member and as an administrator (five years as a Chair), part of what he is talking about. I can’t tell you how many useless strategic and rebranding meetings I have participated in, realizing full well that they were going to be a waste of everyone’s time.

Stover tells us that, in the end, what academic humanists really value is that their scholarship gives them participation in a particular community that they appreciate, a community in which other scholars typically share their values and interests. He rejects what he sees — rightly, mostly — as conservative paranoia about sinister plots to brainwash students with liberal dogma. Which leads him to conclude that the only justification for the humanities is within a humanistic framework, and that outside of such framework there is no case to be made:

“The humanities do not need to make a case within the university because the humanities are the heart of the university. Golfers do not need to justify the rationale for hitting little white balls to their golf clubs; philatelists do not need to explain what makes them excited about vintage postage at their local stamp collecting society.”

This is utterly wrong, and quite obviously so. The analogies simply do not hold. Golfers pay for their club memberships, and philatelists buy their own stamps. Academics, by contrast, are paid, often with public funds. So justification is most definitely needed.

Stover is correct, however, when he says that what distinguishes universities from technical schools is precisely the presence of the humanities:

“The most prestigious universities in the West are still those defined by their humanities legacy, which surrounds them with an aura of cultural standing that their professional purpose no longer justifies. … That is why every technical institute with higher aspirations has added humanities programs: accounting or law or engineering can be learned in many places, but courtoisie is passed along only in the university, and only through the humanities — and everyone knows it. … It is the lingering presence of the humanities that allows the modern university to think better of itself, and to imagine itself to be above commercial or political vulgarity.”

In the end, Stover tells us that the current weak defense of the humanities will fail, and the crisis of the university will deepen. Luckily, he says, this is not the first time, and will probably not be the last one. The university, and the humanities, will survive to fight another day:

“The way to defend the arts [and humanities] is to practice them. … Scholarship has built institutions before, and will do so again.”

Perhaps, but I’m not willing to wait and see how history unfolds. And — contra Stover — I don’t find most (though not all) of the current defenses of the humanities to be weak at all. Of course the humanities teach valuable skills to students, and there is plenty of empirical evidence to substantiate that claim. No, the sciences don’t teach “critical thinking,” by and large, and they certainly don’t teach how to think broadly and write well. And those are much more crucial, and portable, skills than learning how to run a chemical reaction or dissect a frog.

Of course the humanities teach about values. You don’t learn much about the human polis by studying astronomy or biology (as important as those disciplines are), or even engineering and medicine. You learn that from reading Shakespeare, engaging with Aristotle and Kant, seeing (and even better acting in, or producing) a play by Aristophanes. (Feel free to substitute the examples above with equivalent ones from China, Japan, Africa, South America, and so forth.)

If we yield to the neo-liberal project for the university it will not only destroy the university, it will also destroy the hope of providing the kind of public education that helps to form the next generation of intelligent, informed, critical human beings and citizens. Again, this is not something the STEM disciplines are equipped to do, with all due respect to my colleagues in science, computer science, engineering, and mathematics. I know this not just because I read widely, but from personal experience: my philosophy classes are so much more important and impactful than the ones I used to teach in biology that the comparison is simply laughable.

Against teaching? The hell with that. Teaching is by far the most important thing we do (when we do it well, not as a glorified high school). And to argue that it is not so today because it was not so during the Middle Ages is a complete non sequitur. Plenty of things were different in the past, but we have learned to do them better, or not to do them at all, if they turned out to be useless. And we are better off for it.

In praise of over-specialization and over-production? My arse. My heart aches at the immense waste of human potential represented by those tens of thousands of commentaries on Peter Lombard. What a gigantic load of lost opportunities! No, please, let’s not use that as a model for modern scholarship. Again, just because it has always been so it doesn’t mean it is a good idea to continue doing it that way. Yes, specialization is the inevitable name of the scholarly game, and Stover’s example of what is needed to develop a deep understanding of ancient Athenian democracy is a very good one. But let’s go a little lighter on additional commentaries on the philosopher or dramatist du jour, please.

Unlike Stover (whom I thank for his cogent analysis, which really pushed me to reflect more carefully on all of this), I think that a defense of the humanities, right here and right now, is synonymous with a defense of the very idea of a liberal education. Which in turn is synonymous with a defense of the possibility and hope for a vibrant democracy. Or at least a democracy that doesn't produce the sort of obscene politics and social policies currently on display in a number of Western countries, especially the US and UK. We can do better, we ought to do better, we will do better.

Book Club: On Inequality 2, Equality and respect

Last time we saw that Harry Frankfurt advances a number of critiques of the commonly accepted idea that inequality is morally objectionable per se, as opposed to being objectionable for derivative reasons (e.g., because the accumulation of wealth in the hands of a few leads to undue influence in terms of political power). I now turn to the second (much shorter) part of his On Inequality, which discusses the relationship between equality and respect.

Frankfurt begins by stating that his analysis is compatible with a range of social policies and political viewpoints, i.e., it is neither limited to a left-leaning agenda nor opposed to it; and by rejecting the presumption that egalitarianism is an ideal of intrinsic moral importance, even though he personally supports a number of policies aimed at reducing inequality (for reasons other than the intrinsic immorality of the latter).

His claim, then, is that the moral appeal of economic egalitarianism is an illusion, since:

“Whenever it is morally important to strive for equality, it is always because doing so will promote some other value rather than because equality itself is morally desirable.” (p. 66)

Frankfurt takes on Thomas Nagel's famous question: “How could it not be an evil that some people's life prospects at birth are radically inferior to others'?” and responds that, empirically, inequality is compatible with people having quite a bit, and that doing less well than others does not, in fact, entail doing badly.

As I argued in the course of the discussion following the first post on this book, Frankfurt is surely right if we are talking about modest discrepancies in wealth. The fact that there are some millionaires in New York City in no way implies that my life as a non-millionaire is bad, or that I am in no position to pursue my own projects and live a fulfilling life. Nor is anything of the sort implied for people who are a little less wealthy than I am. Of course, if people are truly poor (and that is, indeed, the case for many in the Big Apple) then their ability to live a good life is seriously hampered. But that poses no problem for Frankfurt's position, since the issue is that such people simply do not have enough, not that they have less.

But what about those, also to be found in good numbers in New York, who have so much more than is necessary even for a very comfortable life, and who frequently use their wealth to gain unfair access to the levers of power? There, Frankfurt argues, the issue, again, isn't inequality per se, but rather the lack, or non-enforcement, of laws that block the coupling of wealth and political power. The fact that these two are empirically correlated, and often causally connected, is not a logical necessity, says Frankfurt. Indeed, his point is that by focusing on inequality per se we miss the real problem, which is, for instance, the corruption of the political system. But despite Frankfurt's claim that his approach does not prescribe any specific social or political reform, it actually does, since it shifts our focus from one kind of intervention (directly on inequality) to another (on corruption, or other undesirable empirical correlates of inequality).

Frankfurt again seeks to shift the attention of moral philosophers and people concerned with social justice:

“Surely what is of genuine moral concern is not formal but substantive. It is whether people have good lives, and not how their lives compare with the lives of others. … What makes it an evil that certain people have bad lives is not that some other people have better lives. The evil lies simply in the conspicuous fact that bad lives are bad.” (pp. 71-73)

Frankfurt asks us to consider what is really important in a person's concern for her rights, for respect, and for consideration. Enjoying certain rights, or being treated with consideration and respect, has inherently nothing to do with how much more or less wealthy one is compared to others, because rights, respect, and the like are accorded to members of society qua human beings, not in proportion to their wealth (at least in theory).

That latter point needs a bit of commentary. Frankfurt is not being naive here, I am guessing. He is not saying that, as a matter of fact, people enjoy the same rights and respect. That is patently, empirically false. But it is also obviously true that we live in a society bound by laws, and more broadly by a Constitution, that are designed to apply equally to people regardless of their race, gender, religion, and socio-economic status. That in and of itself makes Frankfurt's point that inequality is logically distinct from other social issues having to do with injustice and unfair treatment.

Consider an analogy with the ongoing issue of police brutality against minorities, and blacks in particular. When people claim that the problem is not limited to individual policemen who may be racists, but is “systemic,” what do they mean, exactly? If they mean that police departments across the country tend to be characterized (with due exceptions) by a culture of implicit or explicit racism, and that it is this culture that results in the disproportionate killing of black men, they are probably correct. But if they mean that the laws of this country are inherently racist, then that is obviously false. We are very clearly, very explicitly, all equal under the law. Recognizing this distinction, which is analogous to the one Frankfurt seeks to draw between inequality per se and its undesirable empirical correlates, leads to very different types of action: in one case one should seek to reform police departments, in the other to change the law of the land. They are not at all the same thing.

A bit later on Frankfurt makes another distinction that is worth pondering:

“It is easy to confuse being treated with … respect … with being treated equally. However, the two are not the same. I believe that the widespread tendency to exaggerate the moral importance of egalitarianism is due, at least in part, to a misunderstanding of the relationship between treating people equally and treating them with respect.” (p. 76)

A simple example: I make a concerted effort to treat my students with respect, qua human beings. But they are certainly not my equals (I’m the teacher, they are the students), nor are they equal to each other in all pertinent respects, as some of them are smarter, have better background knowledge, or try harder than others. Some of my students may need to be treated differently precisely because they have different needs. But they all ought to be treated with the same respect.

Frankfurt quotes Isaiah Berlin, one of the defenders of the idea that equality should be the default moral position: “The assumption is that equality needs no reasons, only inequality does so. … If I have a cake and there are ten persons among whom I wish to divide it, then if I give exactly one tenth to each, this will not, at any rate automatically, call for justification; whereas if I depart from this principle of equal division I am expected to produce a special reason.”

This, Frankfurt comments, is surely right, but only because in the hypothetical case imagined by Berlin we have no knowledge whatsoever of the people involved, their needs, and their differences. Under such conditions of total lack of information (what John Rawls famously called a “veil of ignorance”) equality and respect coincide. But, argues Frankfurt, this is a happenstance, not a logical necessity, “for the only characteristics of each person that are relevant [in this case] — to wit, simply those that constitute his humanity — are necessarily shared by every other human being. Therefore, the criteria of impartiality and of equality must inescapably yield, in this case, the same result.” (p. 82) But, crucially, only in this case.

Frankfurt is perfectly aware that being ignored, discounted, or not taken seriously is disturbing and has real consequences. But he insists that demands for respect should be based on the realities of a person’s own conditions, and especially on her status as a human being, and not simply on the amount of wealth that she happens, or does not happen, to have. Which means that, even in terms of respect, the issue isn’t equality per se, but a recognition of the worth and dignity of being human.

_____

Heads up: our next book club series will focus on a completely different topic and discipline. We will tackle my colleague Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind. Stay tuned.