Category Archives: Public Philosophy

Welcome!

Welcome to Footnotes to Plato! It began back in August ’15 as a blog on general philosophy, with a large component of philosophy of science. The blog has now moved to Patreon and I hope you will follow me there.

However, the full archive of 354 posts and a whopping 31,414 comments will remain permanently available for free. You will also find here a list of my books, all my technical papers in philosophy, links to columns I wrote for a variety of magazines (Skeptical Inquirer, Philosophy Now, The Philosophers’ Magazine), several downloadable collections of essays, and a number of both public and technical talks I have given. You will also find links to my various online presences (Twitter, Facebook, etc.). These pages will keep being updated as new material becomes available.

I hope you will enjoy this site and that it will help you in your continuing quest for understanding and practicing philosophy.

cheers,

Massimo Pigliucci

(the City College of New York)

How to stop a conversation: with facts and definitions

I really ought to know better, after decades of activism on behalf of science and critical thinking, than to engage in ideologically loaded discussions with friends or family. Indeed, the opening chapter of the second edition of my Nonsense on Stilts: How to Tell Science from Bunk features two rather frustrating conversations I’ve had with a relative (I call him “Ostinato,” Italian for stubborn) and an acquaintance (“Curiosa,” Italian for curious). Neither episode led to either side moving a fraction of an inch away from their initial position, resulting mostly in aggravation and emotional distress on both sides. Still, as I explain in the book, it wasn’t time entirely wasted, since I came to a couple of important realizations while engaging in such discussions.


For instance, from Ostinato I learned that a common problem in these cases is the confusion between probability and possibility. Whenever I would explain why I don’t think it likely, say, that 9/11 was an inside job, or that the Moon landing was a hoax, Ostinato would reply: “but isn’t it possible?” Out of intellectual honesty I would reply, yes, of course it’s possible, in the narrow sense that those scenarios do not entail a logical contradiction. But they are extremely unlikely, and there really aren’t sufficient reasons to take them seriously. Ostinato clearly thought he had scored a major point by extracting my admission of logical possibility, but such glee reflects a fundamental misunderstanding not just of how science works, but of how common sense does as well. Is it possible that you will jump from the window and fly rather than crash to the ground? Yes, it is. Would you take the chance?


As for Curiosa, she taught me that a little bit of knowledge is a dangerous thing. I nicknamed her that way because she was genuinely curious and intelligent, reading widely about evolution, quantum mechanics, and everything in between. Reading, yes; understanding, no. She took even extremely marginal disagreement among scientists as, again, evidence that it is possible that what is claimed to be a well-established notion (evolution, climate change) is, in fact, false. Again, yes, it is possible; but no, finding the occasional contrarian scientist (often ideologically motivated, as in the case of anti-evolution biochemist Michael Behe) is absolutely no reason to seriously question an established scientific theory.


You would think that Ostinato and Curiosa had taught me a good lesson, and that I wouldn’t fall for it again. Sure enough, recently a close relative of mine wanted to engage me “as a scientist and a philosopher” in a discussion of chemtrails and 9/11 truthism, sending me a long list of the “reasons” she believed both. I respectfully declined, explaining that my experience had shown me very clearly that nothing good comes out of such discussions. People talk past each other, get upset, and nobody changes his mind. My relative was taken aback by my refusal, but I felt pretty good. Part of Stoic training is the notion that one does not control other people’s opinions, motivations, and reasoning. It is okay to try to teach them, within limits (and I do: that’s why I literally teach courses on this stuff, and write books about it), but failing that, one just has to put up with them.


And yet, Stoicism also reminds me that I ain’t no sage, and that I am liable to slip back on the next occasion. Which I did, a couple of days after Thanksgiving! This time I was having dinner with someone we’ll call Sorprendente (Italian for surprising; the reason for the nickname will become apparent in a moment). She is a very intelligent and highly educated person, who, moreover, is involved in a profession that very much requires critical thinking and intellectual acumen.


Imagine then my astonishment when I discovered that Sorprendente flat out denies the existence of a patriarchy, both historically and in contemporary America. I care enough about this sort of thing that I immediately felt the adrenaline rush to my head, which meant – unfortunately – that I had to fight what I already knew was an impossible battle: to explain certain things to Sorprendente without losing my temper. Anger, as Seneca famously put it, is temporary madness, and should not be indulged under any circumstances. Let alone when you are trying to convince someone you know of a notion that she is adamantly opposed to.


This post isn’t about convincing you that we do live in a patriarchal society. If you don’t think so already there probably is little I can do in a blog post to change your mind. Besides, there are plenty of excellent resources out there (like this one; or this one; or, if you are more practically minded, this one). Rather, I want to reflect on a new (to me) strategy deployed by Sorprendente, a strategy that I didn’t expect in general, and certainly not from someone who very much relies for her job on using the two concepts she dismissed at dinner with me.


Said two concepts are: definitions and facts. When Sorprendente admitted that most positions of power in our society are held by men, I made the comment that that’s part of the definition of a patriarchy. Indeed, here is how the Merriam-Webster puts it:


“Patriarchy (noun). Social organization marked by the supremacy of the father in the clan or family, the legal dependence of wives and children, and the reckoning of descent and inheritance in the male line. Broadly: control by men of a disproportionately large share of power.”


While, thankfully, we are slowly moving away from the first group of markers of a patriarchy (in the West and in some other parts of the world, certainly not everywhere, by a long shot), the second one (the bit after “broadly”) very much applies, even according to Sorprendente herself.


And yet she curtly informed me that “definitions are conversation stoppers.” Wait, what? Definitions of words are, it seems to me, crucial to any kind of discourse. Yes, it is true that dictionaries are both descriptive and prescriptive. They are descriptive in the sense that if the common usage of a word changes they will update accordingly; prescriptive because they tell us what currently counts as correct usage. “It’s just semantics” is one of the most irritating responses one can get in the middle of a discussion. Of course semantics (and definitions) are important. If we don’t agree on the meaning of the words we use we are talking past each other, with no possibility whatsoever of understanding. All I was trying to say was that – according to Sorprendente’s own admission – the facts on the ground correspond to the definition of a patriarchy, which means that it becomes inherently contradictory to agree with those facts and yet insist on denying that we live in a patriarchy.


Speaking of facts. Apparently, bringing those up also is a conversation stopper, and it is therefore highly impolite. Here things got truly bizarre. To begin with, it was Sorprendente who brought up a fact, in the form of a statistic: she claimed, as partial evidence that women are not oppressed, that their average life span is 10 years longer than men’s. This is biology, one of my areas of expertise, and the facts can readily be checked.


First off, the 10-year figure is false. The true figure, as it happens, varies from country to country: 6.7 years in the US, a whopping 12 in Russia, and a mere 0.1 in Bangladesh. Second, part of the gap is due to biological reasons: women have two copies of the X chromosome, while men only have one copy (because we have the tiny Y instead). As a result, men are exposed to hundreds more genetically influenced diseases than women, and their mortality is higher, both early in life and throughout. Apparently, however, bringing up these obviously pertinent facts on my part was a rude conversation stopper. Translated: I should be free to bring up whatever false information I want, but you are not allowed to contradict me on the basis of factually correct information. Remember that Sorprendente’s job deals with the accurate verification and interpretation of facts. Oh boy.


Regardless, why would she think that a longer life span is proof that we don’t live in a patriarchy? (Indeed, according to her logic, since women have the statistical advantage, we should conclude that we live in a matriarchal society.) Because women have been, and to some extent still are, “shielded” from dangerous jobs, like joining the military, which is an “obvious” example of concern on the part of men. No patriarchy. QED.


This makes little sense on a number of levels. A military career has always (since the time of the ancient Greeks) been considered a manly job precisely because women have been thought of as inferior or inadequate for that sort of activity. This is exactly what one would expect in a patriarchy. Moreover, it is likely true that most men “care” for women and want to protect them. This is in no way incompatible with the notion of sexism; indeed, being patronizing toward someone who doesn’t actually need to be protected is one of the symptoms of sexism and other discriminatory attitudes. Not to mention that women are now increasingly accepted in the military. This is true both for the US (average life span gap 6.7 years) and Bangladesh (average life span gap 0.1 years). It doesn’t take a rocket scientist to figure out that this is simply not a factor in explaining why women live longer than men.


Ah, said Sorprendente, but then if we live in a patriarchal society, how do you explain that there are millions more men than women in prison? This, I tried to respond, actually confuses two different issues, since the majority of men in American prisons are minorities, particularly Blacks and Hispanics. The differential is a result of a combination of racism, poverty, and lack of education and therefore job opportunities. It appears, again, to have nothing to do with the issue of patriarchy.


Very clearly, I wasn’t getting anywhere, and both Sorprendente and I were becoming increasingly upset. At which point a thought suddenly struck me and I asked: are you by any chance into Jordan Peterson? Yes, came the response, I think he makes some good points. And that, my friends, was the real conversation stopper.

Why I’m still a (non-card-carrying) Skeptic

I just came back from Las Vegas, where I had a lovely time at the annual CSICon event, organized by the folks that bring you Skeptical Inquirer magazine, among other things. As I’ve done almost since the beginning of my involvement with the skeptic movement, back in, gasp, 1997, I delivered a bit of a gadfly talk. This one was about scientism, reminding my fellow skeptics that they have a tendency to overdo it with the science thing, at times coming across as nearly as evangelical, and even as obtuse, as their usual targets, from creationists to UFO believers. After asking the audience to be patient with me and not to serve me hemlock for lunch, I minced no words and criticized by name some of the big shots in the field, from Neil deGrasse Tyson to Richard Dawkins, from Sam Harris to Steven Pinker. And of course several of those people were giving talks at the same conference, either right before or right after me.


No hemlock was served, and I got less resistance to my chastising than usual from the audience. Some people even approached me later on telling me how much they appreciated my reminder that our community is not perfect and we need to do better. It was all very congenial, set against the perfect backdrop of the ultimate fake city in the world, and accompanied by the occasional dirty martini.


On my way back to New York I then got a tweet from a follower linking to yet another “I resign from the skeptic movement and hand in my skeptic card” article, written by a prominent (former) skeptic. It doesn’t matter who. The list of complaints by that author is familiar: a tendency toward scientism, a certain degree of sexism within the movement, and a public failure to lead by some of the de facto leaders. The same issues that I have been complaining about for years (for instance, here). But I have not quit, and do not intend to quit. Why?


The uncharitable answer would be because I’m part of the privileged elite. I doubt anyone would seriously consider me a “leader” in the movement, but I have certainly been prominent enough. And I am a male. White. Heterosexual. The problem is, uncharitable views are highly unhelpful, and I’m on record advocating on behalf of diversity in the movement, against sexual harassment, and – as I mentioned above – have made a mini-career of stinging the big shots every time I think they deserve it, which is rather often. So I’m afraid a casual dismissal based on my gender, sexual preference and ethnicity will not do. Quite apart from the fact that it would be obviously hypocritical on the part of anyone who claims that gender, sexual preference and ethnicity should not be grounds for blanket statements of any kind.


No, I stay because I believe in the fundamental soundness of the ideas that define modern skepticism, and also because I think quitting to create another group is an example of an all too common fallacy: the notion that, despite all historical evidence to the contrary, next time we’ll definitely get it right and finally create utopia on earth. Let me elaborate on each point in turn.


“Skepticism,” of course, has a long history in philosophy and science. The original Skeptics of ancient Greece and Rome were philosophers who maintained that human knowledge is either highly fallible or downright impossible (depending on which teacher of the school you refer to). Consequently, they figured that the reasonable thing to do was to either abstain entirely from any opinion, or at least to hold on to such opinions as lightly as possible. Theirs wasn’t just an epistemological stance: they turned this into a style of life, whereby they sought serenity of mind by way of detaching themselves emotionally from those opinions (political, religious) that others held so strongly and often died for. Not my cup of tea, but if you think about it, it’s not a bad approach to good living at all.


The philosopher who embodies modern skepticism most closely, however, is the Scottish Enlightenment figure par excellence, David Hume. He held an attitude of open inquiry, considering every notion worth investigating and leaving the (provisional) verdict of such investigations to the empirical evidence. He famously said that a reasonable person proportions his beliefs to the available facts, a phrase later turned by Carl Sagan into his hallmark motto: extraordinary claims require extraordinary evidence.


The contemporary skeptic movement was the brainchild of people like philosopher Paul Kurtz (the founder of the organizations that preceded CSI, as well as of Skeptical Inquirer), magician James “the Amazing” Randi (organizer of the long running conference that preceded CSICon, known as TAM, The Amazing Meeting), Carl Sagan himself, and a number of others. Initially, the movement was rather narrowly devoted to the debunking of pseudoscientific claims ranging from UFOs to telepathy, and from Bigfoot to astrology.


More recently, mainly through the efforts of a new generation of leaders – including but not limited to Steve Novella and his group, Michael Shermer, Barry Karr, and so forth – the scope of skeptical analysis has broadened to include modern challenges like those posed by the anti-vax movement and, of course, climate change. Even more recently, young people from a more diverse crowd, finally including several women like Rebecca Watson, Susan Gerbic, Kavin Senapathy, Julia Galef, and many others, have further expanded the discourse to include an evidence-based treatment of political issues, such as gender rights and racism.


The values of the skeptic movement, therefore, encompass a broad set that I am definitely on board with. At its best, the community is about reason broadly construed, critical but open-minded analysis of extraordinary claims, support for science-based education and critical thinking, and welcoming diversity within its ranks.


Of course, the reality is, shall we say, more complex. There have been plenty of sexual harassment scandals, involving high-profile members of the community. There is that pesky tendency toward closing one’s mind and dismissing rather than investigating claims of the paranormal. And there is a new, annoying vogue of rejecting philosophy, despite the fact that a skepticism (or even a science) without philosophical foundations is simply impossible.


But this leads me to the second point: I think it far more sensible to stay and fight for reform and improvement rather than to “hand in my skeptic card” (there is no such thing, of course) and walk away. Because those who have walked away have, quite frankly, gone nowhere. Some have attempted to create a better version of what they have left, like the thankfully short-lived “Atheism+” experiment of a few years ago.


The problem with leaving and creating an alternative is that the new group will soon enough inevitably be characterized by the same or similar issues, because people are people. They diverge in their opinions, they get vehemently attached to those opinions, and they fight tooth and nail for them. Moreover, people are also fallible, so they will in turn engage in the same or similar behaviors as the ones that led to the splintering of the group in the first place, including discrimination and harassment. So the whole “I’m leaving and creating a new church over there” kind of approach ends up being self-defeating, dispersing resources and energy that could far better be used to improve our own household from within while continuing to fight the good fights we inherited from the likes of Kurtz and Sagan.


So, no, I’m not leaving the skeptic movement. I will keep going to CSICon, NECSS, the CICAP Fest, and wherever else they’ll invite me. I will keep up my self-assigned role of gadfly, annoying enough people and hopefully energizing a larger number, so that we keep getting things more and more right. After all, this is about making the world into an at least slightly better place, not into our personal utopia tailored to our favorite political ideology.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But – unlike Sagan – he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently affecting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great power comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity give your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.


Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above-mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by the Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentaries on comic book superheroes qualify as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.


For another thing, “debunking” strongly conveys the impression that one thinks that the people who believe in the notion to be debunked are simpletons barely worth consideration. Perhaps some are, but I’ve met plenty of really smart creationists, for instance, a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as on display in the short Colbert video that got me started with this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, though that isn’t going to do anyone any good.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to the establishment of an emotional connection with your audience. This is something that scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But this is the case only if the agent is unscrupulous and manipulative, not if he’s acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly, not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, which Neil deGrasse Tyson most certainly is. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?

The non-problem of moral luck

The Good Place - trolley dilemma

The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will make an argument that one can learn more about moral philosophy from watching a single episode of the show than by listening to a technical talk in that same field while attending the APA’s own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of; you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more skeptical I become of the value of much modern moral philosophy, with its indulgence in ever more convoluted hypothetical situations that seem designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turning-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell) and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sitcom-imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of its indulgence in irrelevant logic chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end here; there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details because they are irrelevant to my argument here. Which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP show. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making brain network, then fine, he is not to be blamed for his inaction. That’s no skin off of Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping scores?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t be carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now classic paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above-mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame we have lost track of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)

Five big philosophical questions: my modest take


An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim; I am simply offering my informed opinion about them, in my dual role as a philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then, gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not yet given us an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon rotates around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them; we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid in checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence, it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functions of our physiology and metabolism. Death is a natural phenomenon, everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly a reflection of contemporary debates about justice and inequality, more than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there it goes. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with mind, if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

In defense of the indefensible humanities

Università di Bologna

The University of Bologna, the most ancient in the world.

We keep hearing that the humanities — meaning things like literature, philosophy, history and so forth — are in crisis. Which is undeniably true, as measured in terms of dollars invested in them (including number of faculty positions, courses offered, etc.) in many contemporary universities, especially, but not only, in the United States and the UK. Many reasons have been adduced to explain this phenomenon, and there have been a number of calls to defend the humanistic disciplines on a variety of grounds.

I have my own take on this, which crystallized in my mind several years ago, during a dinner with the Chair of the Philosophy Department at the University of Notre Dame. He was bragging that Notre Dame has the largest philosophy department in the country, possibly the world (I think the former statement is correct, the latter is doubtful, but still). I was then myself Chair of the Department of Philosophy at Lehman College in the Bronx, and I asked my host what accounted for their success. His response was simple and obvious: “we are a Catholic university. You simply don’t graduate from here unless you have taken a minimum of two philosophy courses.”

It is as simple as that, really. The “crisis” is an artifact of the fact that universities — especially public ones in the US — are increasingly run like businesses, where the “customers” (they used to be called students) get to pick what they want to study and how. The problem, of course, is that students, by definition, don’t know enough about what is good for them, and so should be institutionally limited in their choices. When I learned how to drive I patiently listened to my instructor and followed his lead; I didn’t design my own curriculum at driving school. The same when I learned Judo. Oh, and when I went to college, obviously. To run universities the way they are run now is purely and simply to abdicate the responsibility of teaching the next generation. Faculty and administrators, instead, become retail sellers, competing with each other to attract the highest number of customers in order to boost enrollment and bring in the tuition money that is increasingly needed because States have cut funding for “public” education, in many cases to ridiculously low levels.

I could end this post here, surely having pissed off or outraged countless students and administrators. Which is okay, since I’ve got tenure. But I recently read a refreshingly different essay on the subject, which I want to comment on. It’s titled “There is no case for the humanities,” published in American Affairs Journal, and authored by Justin Stover, a quondam fellow of All Souls College, Oxford University, and a lecturer at the University of Edinburgh. Stover provides a scholarly informed background about the history of the very concept of a university, makes excellent points, gets most of the facts right, and yet is — I maintain — spectacularly wrong in his conclusions. Or so I am going to argue.

Stover begins by arguing that there is deep conceptual confusion about what the humanities are and the reasons for studying them. He then immediately tells his readers that he will ignore the first part of the issue (what constitutes the humanities) and devote his piece to the second one (why study them). Not necessarily a good move, in my opinion, because the reader is left — off the bat, so to speak — having to guess what Stover means by “humanities.” Still, let’s assume that we all know what he is talking about, a la Justice Potter Stewart.

Stover’s first excellent point concerns the strange critique, and support, that both conservatives and leftists have for the humanities. The conservatives first. On the one hand, they attempt to use the coercive power of the state, and the financial muscle of private donors, in order to correct what they see as the ideological bias of the academy. On the other hand, in so doing, they are contributing to the destruction of the very professoriate that they claim to be defending. As Stover puts it:

“It is self-defeating to make common cause with corporate interests looking to co-opt the university and its public subsidy to outsource their job training and research, just for the sake of punishing the political sins of liberal professors.”

This is without counting the fact that university professors tend to be liberal within the humanities, but certainly not in the social sciences, or even in the natural sciences — which are by far more powerful and influential on modern campuses.

The left doesn’t do much better, according to Stover. Progressives want to use the humanities as a force for social change and a training camp for citizen-activists, which right there is in flagrant contradiction with the mission of a university. Worse, they impose ideological litmus tests, despite their vocal protestations of being in favor of critical thinking and freedom of expression.

Stover tells us that most faculty are caught in the middle of this struggle, and that what they want to do, mostly, is to mind their business and carry out their research and scholarship on tiny, and often entirely irrelevant, domains of human knowledge. In other words, they want to do precisely what universities were originally designed to do, from the establishment of the world’s first university (in Bologna, Italy) back in 1088 onwards. This is an interesting — and as far as I know correct — point:

“The critics, often well-meaning [well, I don’t know about that], think they are attacking the decadence and excess of contemporary humanities scholarship, when in fact they are striking at the very heart of the humanities as they have existed for centuries.”

One large caveat here, coming from my more extensive experience as someone who has worked in, and is familiar with the history of, not just the humanities but the sciences as well. Everything that Stover has said so far, and that he says in the rest of the article, applies mutatis mutandis to the sciences. Which pretty much disposes of his entire argument, since he is assuming from the beginning that the humanities are somehow different from the rest of the academy. They are most certainly not, at least not by the lights of the parameters he uses in his discussion.

The central part of the article is structured around a series of obviously provocative sections, boldly making entirely counterintuitive claims. The first one is “in praise of overspecialization,” addressing the criticism that today’s humanistic scholarship is too narrowly focused, and often concerned with minutiae that seem hardly worth bothering with. Here Stover is absolutely right that this is nothing new:

“No Scholastic ever argued how many angels could dance on the head of a pin — it takes the fevered imagination of a philosophe to come up with that question — but popular depictions of scholars in the Middle Ages indicate that their specialized pursuits were not always fully appreciated.”

Indeed, as Stover points out with dismay, it is the modern expectation that is new and way out of proportion. If you were to write, for instance, a paper or book on French clothing from 1650 to 1699, reviewers would demand that you situate your work within the broader area of literary theory, and moreover provide analyses of your material within the framework generated by the cultural milieu of the modern world. No Scholastic was ever asked to do anything of the sort.

This demand for broad context and up to date framing, according to Stover, simply results in bad scholarship:

“Take an important subject, say, democracy in classical Athens. If you ever want to go beyond a silly nursery story about Athens as the cradle of democracy … if you actually want to understand the political and social system of fifth-century Athens, you would have to delve into everything from epigraphy to the minor Attic orators, to comedy and tragedy, the Greek economy, trade relationships in Greece and the Mediterranean, coinage, ship construction, material supply chains, colonies, gender roles, even clothing and food.”

In other words, you would have to rely on a lot of narrow, “useless” scholarship.

The next section is “in defense of overproduction.” Here too, Stover’s strategy is to show that this isn’t a new problem, but a feature that has been with us from the dawn of (scholarly) time. He quotes an unspecified 13th-century scholar who complained that “Aristotle offers the key to wisdom, but he hid that key in so many books.” Tens of thousands of commentaries on Peter Lombard exist, unread for hundreds of years and scattered across European universities, because writing one was once a standard exercise to go through in order to become a reputable (and licensed) teacher of theology. Overproduction hardly seems a sufficient term here!

Then we have “against teaching,” where Stover reminds us that scholars have always eschewed teaching, and that universities were never meant primarily as teaching (as opposed to scholarly) enterprises. I remember reading a biography of Galileo (not a humanist, but a scientist!) that commented on a letter he wrote to a friend explaining why he was moving back to Florence from Padua: the wine is better, and the teaching load is smaller. I can relate. Stover puts it this way:

“These critiques, whether from the right or left, betray a rather limited horizon of imagination. They can only see the university as a fee-for-service corporation, a vendor hawking knowledge. … A school — be it a gymnasium or realschule, a college or a lycee, a grammar school or comprehensive, a preparatory academy or a public school — exists to teach. But the difference between a university and a school is not the mere difference of the age of the student or the level of instruction. The university is a different kind of thing.”

Indeed. Throughout its history the university has been a locus of scholarship, where students benefit from proximity to scholars, more a workshop than a school, at least ideally. That role has now shifted to graduate schools, in the process degrading colleges to glorified high schools, in part because actual high schools no longer do a proper job of teaching the next generation.

So Stover is right that the modern critics of the university, if they had their way, would destroy the very concept of a university, turning it instead into a slightly refined high school. He sees the contemporary university as a bizarre chimaera, and he is not wrong in this:

“The contemporary university … has become an institution for teaching undergraduates, a lab for medical and technological development in partnership with industry, a hospital, a museum (or several), a performance hall, a radio station, a landowner, a big-money (or money-losing) sports club, a research center competing for government funding, often the biggest employer for a hundred miles around, and, for a few institutions, a hedge fund.”

Which brings him finally to what he sees as the misguided attempts of late to defend the humanities. He accuses his colleagues of uttering words in which they don’t, really, believe, such as “skills,” “relevance,” “changing economy,” “engagement,” and “values.” I think he is a bit too harsh here, but I have certainly experienced, both as a faculty member and as an administrator (five years as a Chair), part of what he is talking about. I can’t tell you how many useless strategic and rebranding meetings I have participated in, realizing full well that they were going to be a waste of everyone’s time.

Stover tells us that, in the end, what academic humanists really value is that their scholarship gives them participation in a particular community that they appreciate, a community in which other scholars typically share their values and interests. He rejects what he sees — rightly, mostly — as conservative paranoia about sinister plots to brainwash students with liberal dogma. Which leads him to conclude that the only justification for the humanities is within a humanistic framework, and that outside of such framework there is no case to be made:

“The humanities do not need to make a case within the university because the humanities are the heart of the university. Golfers do not need to justify the rationale for hitting little white balls to their golf clubs; philatelists do not need to explain what makes them excited about vintage postage at their local stamp collecting society.”

This is utterly wrong, and quite obviously so. The analogies simply do not hold. Golfers pay for their club memberships, and philatelists buy their own stamps. Academics, by contrast, are paid, often with public funds. So justification is most definitely needed.

Stover is correct, however, when he says that what distinguishes universities from technical schools is precisely the presence of the humanities:

“The most prestigious universities in the West are still those defined by their humanities legacy, which surrounds them with an aura of cultural standing that their professional purpose no longer justifies. … That is why every technical institute with higher aspirations has added humanities programs: accounting or law or engineering can be learned in many places, but courtoisie is passed along only in the university, and only through the humanities — and everyone knows it. … It is the lingering presence of the humanities that allows the modern university to think better of itself, and to imagine itself to be above commercial or political vulgarity.”

In the end, Stover tells us that the current weak defense of the humanities will fail, and the crisis of the university will deepen. Luckily, he says, this is not the first time, and will probably not be the last one. The university, and the humanities, will survive to fight another day:

“The way to defend the arts [and humanities] is to practice them. … Scholarship has built institutions before, and will do so again.”

Perhaps, but I’m not willing to wait and see how history unfolds. And — contra Stover — I don’t find most (though not all) of the current defenses of the humanities to be weak at all. Of course the humanities teach valuable skills to students, and there is plenty of empirical evidence to substantiate that claim. No, the sciences don’t teach “critical thinking,” by and large, and they certainly don’t teach how to think broadly and write well. And those are much more crucial, and portable, skills than learning how to run a chemical reaction or dissect a frog.

Of course the humanities teach about values. You don’t learn much about the human polis by studying astronomy or biology (as important as those disciplines are), or even engineering and medicine. You learn that from reading Shakespeare, engaging with Aristotle and Kant, seeing (and even better acting in, or producing) a play by Aristophanes. (Feel free to substitute the examples above with equivalent ones from China, Japan, Africa, South America, and so forth.)

If we yield to the neo-liberal project for the university it will not only destroy the university, it will also destroy the hope of providing the kind of public education that helps to form the next generation of intelligent, informed, critical human beings and citizens. Again, this is not something the STEM disciplines are equipped to do, with all due respect to my colleagues in science, computer science, engineering, and mathematics. I know this not just because I read widely, but from personal experience: my philosophy classes are so much more important and impactful than the ones I used to teach in biology that the comparison is simply laughable.

Against teaching? The hell with that. Teaching is by far the most important thing we do (when we do it well, not as a glorified high school). And to argue that it is not so today because it was not so during the Middle Ages is a complete non sequitur. Plenty of things were different in the past, but we have learned to do them better, or not to do them at all, if they turned out to be useless. And we are better off for it.

In praise of over-specialization and over-production? My arse. My heart aches at the immense waste of human potential represented by those tens of thousands of commentaries on Peter Lombard. What a gigantic load of lost opportunities! No, please, let’s not use that as a model for modern scholarship. Again, just because it has always been so doesn’t mean it is a good idea to continue doing it that way. Yes, specialization is the inevitable name of the scholarly game, and Stover’s example of what is needed to develop a deep understanding of ancient Athenian democracy is a very good one. But let’s go a little lighter on additional commentaries on the philosopher or dramatist du jour, please.

Unlike Stover — whom I thank for his cogent analysis, which really pushed me to reflect more carefully on all of this — I think that a defense of the humanities, right here and right now, is synonymous with a defense of the very idea of a liberal education. Which in turn is synonymous with a defense of the possibility and hope for a vibrant democracy. Or at least a democracy that doesn’t produce the sort of obscene politics and social policies that a number of Western countries, especially the US and UK, are currently producing. We can do better, we ought to do better, we will do better.

Clickbaiting and the evils of Western philosophy

The title of this blog is Footnotes to Plato. This is not because I am inordinately fond of Plato (among the ancients I prefer the Stoics, as many readers know), nor because I literally believe the famous phrase by Alfred North Whitehead from which the blog title derives: “The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato” (Process and Reality, p. 39, Free Press, 1979). I’m pretty sure he didn’t mean it seriously, or at least I hope so.

The quip, however, does hint at a historical reality: Plato is — for good and for ill — the single most influential Western philosopher, in good part because he touched on pretty much every major topic that subsequent philosophers have been preoccupied with. The reason for this, in turn, is arguably twofold: on the one hand, he truly was a towering figure, who had a lot to say about all sorts of things; on the other hand, he was one of the earliest philosophers, which means that the field was completely open and ripe with low-hanging fruit. This isn’t peculiar to philosophy: Galileo made a huge number of discoveries, from the craters of the Moon to the rings of Saturn, simply because he was the first one to use a telescope.

[Yes, I’m aware that we still study Plato in philosophy, but we don’t study Galileo in science. There are good reasons for this, which have nothing to do with an alleged superiority of science and everything to do with the fact that science and philosophy are different kinds of disciplines, with different methods and concerns. So is mathematics. And literary criticism. See here for an entire book devoted to that topic.]

Back to Whitehead: notice that the phrase specifically refers to the European philosophical tradition. An obvious acknowledgment of the existence of several other traditions, over which Plato had little or no influence. Which brings me to the point of the current post. During the last several weeks I’ve been sparring on Twitter with Bryan van Norden, a self-described “leading scholar” of Confucianism, Daoism, and Buddhism, based in Singapore. He has written a book, just out, entitled Taking Back Philosophy: A Multicultural Manifesto, in support of which he has published a piece in Aeon magazine. It is that piece that has triggered our back and forth, which has, unfortunately, reached rare levels of unpleasantness.

The title of the Aeon article is “Why the Western philosophical canon is xenophobic and racist,” a rare instance of vilification of an entire field and of an indiscriminate attack on every professional working within it. Is van Norden justified in his accusations? Is such obvious clickbait the best way to foster a constructive dialogue about the problem? Let’s take a look.

First though, let me make clear that I agree with some of the substance of van Norden’s article (and, presumably, book). Philosophy departments the world over — not just in North America or Europe — should indeed be teaching as many of the varied philosophical traditions as logistically possible. Then again, that goes also for history departments, or literature, and so forth, I would think.

Second, the crucial kernel of truth in van Norden’s argument is the problem famously identified by Edward W. Said in his 1978 book, Orientalism. Said defined Orientalism as a patronizing attitude on the part of “the West” in its representations of “the East,” an attitude inextricably tied to the Colonialism of the 16th through 19th centuries. Some of the victims, according to Said, have been complicit with the West, as for instance in the case of the romantic aura surrounding descriptions of Arab culture, which originated with French, British, and American writers, but was then deployed by Arab elites for their own repressive purposes.

Said’s work is important and well known in Western departments, though it has to be noted that it targeted primarily literature (not philosophy), and that it has in turn been criticized, in part because of its over-reliance on questionable poststructuralist methods of analysis. Be that as it may, I know of no one in contemporary philosophy or literature departments in the West who is not aware of Said’s work and very sensitive to potential charges of Orientalism. van Norden clearly disagrees, so let’s take a look at what he says in some detail.

He begins his article with a statement just as bald (and as false) as its title: “Mainstream philosophy in the so-called West is narrow-minded, unimaginative, and even xenophobic. I know I am levelling a serious charge. But how else can we explain the fact that the rich philosophical traditions of China, India, Africa, and the Indigenous peoples of the Americas are completely ignored by almost all philosophy departments in both Europe and the English-speaking world?”

Of course, van Norden provides little empirical evidence for that gross generalization. He relies heavily on a survey of graduate programs in American departments, according to which 10% of those programs have a specialist in Chinese philosophy, and many don’t have courses on other non-Western philosophies. That’s not good, but note that it refers to graduate programs (which tend to be highly specialized), and that we are not given comparative numbers of how many Chinese or African graduate programs feature specialists in Western philosophy. Many graduate programs in the US also lack specialists in philosophy of science, say, or in aesthetics, and so forth. Moreover, at the undergraduate level things are certainly better, with many departments featuring regular offerings in Chinese, Indian, and African philosophy.

While the situation can and should be improved, this is hardly good evidence of racism and xenophobia. More likely, it is the result of a lack of training (until recently) in those areas, as well as of budget cuts in the humanities in general, which make it increasingly difficult to hire full-time faculty in any specialty. And of course, while American society is indeed culturally diverse, it is still made up mostly of “Western” students (and faculty), which is the simplest explanation for why similar biases exist in history and literature departments as well. Simply put, the charge of racism and xenophobia is vicious, smells of moral grandstanding, and is entirely counterproductive. I’m squarely on van Norden’s side when it comes to increasing multicultural courses, but my case is hardly going to be helped by indiscriminately accusing my colleagues of racism and xenophobia.

The bulk of the Aeon article, in fact, is a concession that Eastern philosophy has been taken seriously by a lot of authors in the Western tradition, from the translations of Confucius curated by the Jesuits to Leibniz’s interest in Chinese philosophy. Who, then, is the culprit for the current sorry state of affairs? Kant, of course. van Norden presents his own version of the recent history of Western philosophy, in which Kant is made out to be a racist uber-villain. There is no doubt that Kant was “racist” by our standards, and racist comments are easy to find in the writings of Hume and Mill as well, to mention just a couple of other prominent figures of modern philosophy. This is not surprising, because they were all products of the Enlightenment, and the Enlightenment was the time when “scientific racism” was developed: the notion, allegedly based on the best science of the time, according to which most non-Western “races” were clearly intellectually inferior.

Now, I completely agree with van Norden that scientific racism was shameful, though many progressive thinkers endorsed it at the time, just as in the early 20th century it was mostly progressives who powered the eugenics movement. Moreover, I do think that race is not, in fact, a biological category (as I’ve written on several occasions, for instance here). I also certainly do not deny that there is racism in our society, and I don’t think that individual philosophers are exceptional in that respect.

But it is as if, for van Norden, time has stopped at the Enlightenment. The Romantic backlash never happened. Continental philosophy is ignored, even though many of its exponents have been influenced by Eastern writers. And the postmodern (mostly, but not entirely, unfortunate) uprising never took place either. In truth, it is only the analytic tradition that downplays non-Western (and Continental) contributions, and that is largely because those are hardly compatible styles of doing philosophy. But I for one fervently hope that analytic philosophy is on its way out, so that we can get on with the business of doing relevant (as opposed to logic-chopping) philosophy.

Back to Kant. van Norden writes: “Kant is easily one of the four or five most influential philosophers in the Western tradition. He asserted that the Chinese, Indians, Africans and the Indigenous peoples of the Americas are congenitally incapable of philosophy. And contemporary Western philosophers take it for granted that there is no Chinese, Indian, African or Native American philosophy. If this is a coincidence, it is a stunning one.” It is not a coincidence, because a crucial part of that statement is utterly false. Yes, Kant certainly is one of the most influential modern philosophers. But definitely not because of that sort of statement, which he did, unfortunately, make. That said, where on earth did van Norden get the idea that “contemporary Western philosophers take it for granted that there is no Chinese, Indian, African or Native American philosophy”? What planet is he living on? Surely not the one in which the philosophers and departments I know of actually exist.

van Norden offers no evidence for that sweeping statement, of course, except a couple of anecdotes, one of which features Derrida going to China in 2001 and telling his stunned hosts that “China does not have any philosophy, only thought.” Well, I never had a high opinion of Derrida (to put it mildly), and this is one more confirmation that I was right. But so what? Why not focus instead on people like my CUNY colleague Graham Priest, one of the top logicians in the world, who has been blending Eastern and Western philosophy in his work on paraconsistent logic? Because that wouldn’t fit the clickbait narrative, of course.

Let’s analyze for a moment how the issue of, shall we call it, “great men’s blunders” is treated outside of philosophy. Take physics, and in particular Newton, a figure who, ironically, strongly influenced Kant, who famously wanted to put moral philosophy on the same firm footing as the sort of natural philosophy being done by Newton. It turns out that Newton was a nasty little man, prone to vengeance and abuse of power, and that moreover he spent (wasted would be a better term) a large portion of his life on alchemy and Biblical criticism rather than physics. But nobody today focuses on Newton’s personal failures, nor do we read what he wrote about alchemy and the Old Testament. Why not? Because a healthy approach to people’s personal and professional failures is to acknowledge them while at the same time focusing on whatever good they produced in their fields. I’m not about to discard Newtonian mechanics because of Newton’s failures in other respects. Similarly, we shouldn’t revise the history of (Western) philosophy and downplay the positive contributions of Hume, Kant, and Mill, among others, because they also said things that by contemporary standards are racist.

Moreover, and this goes conveniently unmentioned in van Norden’s article, Kant was also an anti-colonialist and endorsed a general philosophy of cosmopolitanism. Similarly, Mill did make racist comments, and yet he wrote On Liberty, as well as — with his wife Harriet Taylor Mill — The Subjection of Women. Hume did utter racist remarks, but also wrote cogent, and very modern sounding, essays on moral and political philosophy. Go figure, people are complicated! (Want one more example? Plato did not question slavery, unlike, say, Zeno of Citium. But he advocated for the intellectual equality of women. Which part should we discard and which adopt, you think?)

Perhaps feeling a bit short on overtly “Orientalist” philosophers, van Norden even mentions Antonin Scalia, who apparently referred to the thought of Confucius as “the mystical aphorisms of the fortune cookie.” I’ve got news for van Norden: besides being a first-class asshole, Scalia was not a philosopher, nor was his thought representative of philosophy departments.

van Norden ends his article by suggesting that we should add more coverage of non-Western philosophies to the curricula offered by American universities. Yes! And that is precisely what we are doing. But we also have to deal with the realities on the ground, meaning mostly that a lot of philosophy departments simply do not have the resources necessary to do a good job as it is, let alone to branch out in new directions. And yet, I don’t know a single colleague who is not both aware of and sympathetic to van Norden’s worry. If we really want to make progress, are clickbait titles along the lines of “the Western philosophical canon is racist and xenophobic” going to be helpful? Do we really think that conversations get started and progress gets made that way?

In response to one of my tweets asking for hard data, van Norden replied by quoting an article in the LA Times that reports the following statistics: “African Americans constitute 13% of the US population, 7% of PhD recipients across fields, 2% of PhD recipients in philosophy, and less than 0.5% of authors in the most prominent philosophy journals.”

This is bad, obviously. But van Norden’s face-value reading of what the numbers mean is naive at best, willfully ignorant at worst. Let us set aside the obvious observation that correlation does not imply any particular causal scenario. (I mean, the ratio of female to male nurses in the US is a whopping 9.5:1. Surely nobody in their right mind is going to argue from that figure that hospitals engage in reverse sexism and discriminate against male applicants, right?) The most likely explanation for the philosophy figures isn’t structural racism within the profession, but rather a combination of two other factors: structural racism at the pre-college level, and culture. I have been on plenty of search committees hiring faculty, as well as on admissions committees looking for graduate students. You have no idea how much at pains my colleagues and I have always been to look for minorities (and women). Every. Single. Time. The problem is that blacks and Hispanics are at a structural disadvantage from the very beginning, meaning from kindergarten, and things hardly get better in grade school. That’s a major reason why, by the time we get to graduate school and tenure-track positions, the numbers are abysmal. The issue is not structural racism within the philosophical profession; it is structural racism in society at large.

The second reason for those numbers is culture, as in many of my minority students telling me that they experience strong pressure, both from peers and from their families, to drop philosophy and major instead in a “real” field, like engineering, pre-med, or pre-law. There are good reasons for this, having to do with the increasingly stratospheric cost of a college education, even at so-called public schools (which nowadays get only a fraction of their budget from the states), and with the fact that many of these students are the first in their family to actually go to college. If I were one of their parents I would be concerned as well about “wasting” my tuition money on something as “useless” as philosophy. (Even though, as it turns out, majoring in philosophy is an excellent bet in terms of post-graduation employment.)

Does any of the above prove that philosophy, as a profession, does not have a problem with racism (and sexism)? No, it doesn’t. But van Norden has done very little to show that it does, relying on selected anecdotal evidence and hastily interpreted surveys to level what he himself recognizes as a “serious charge.” As Hume would have put it — in his frequent non-racist moments — a wise person’s belief should be proportionate to the evidence, and van Norden’s most certainly is not.

By all means, let us fix whatever is wrong with the philosophical profession. But let’s do it by engaging in constructive and nuanced discourse, not in blatant clickbaiting for the sake of selling books. Let’s do it because we are genuinely concerned about future generations, avoiding the temptation of putting ourselves on a high moral pedestal. And above all let’s do it fairly, without tainting countless people with broad accusations of racism.

Know thyself: still excellent advice, after all these years

“gnothi seauton,” know thyself

I have been to Delphi twice already, and I plan on going back again. It is a truly magical place. No, I don’t believe in “magic,” I’m talking about real magic, the sense of awe that strikes you when you arrive there. Despite the tourist shops, the bed and breakfasts, and the restaurants, you cannot avoid being struck by the sheer beauty of the place: a green mountainous peak overlooking a deep valley, from which you can see the Aegean Sea in the distance. No wonder the ancients thought it a place privileged by the gods, as testified today by the beautiful ruins of the temples of Apollo and Athena.

It is in Delphi, of course, that the most famous Oracle of the ancient world resided. Still today you can see the omphalos (i.e., navel), the stone that allowed direct communication between the priestess and the gods. Modern science has suggested that the location is characterized by significant underground quantities of ethylene or methane, which may cause hallucinations in people exposed to them. So far, however, this is speculation, and not really germane to the psychological power of the Oracle. The advice given by the priestess of Apollo, regardless of its natural trigger, was often sound, if not necessarily amenable to an immediate interpretation.

One of my favorite stories is that of Themistocles, the Athenian general who was told that Athens would successfully defend itself from the powerful army of the Persian king Xerxes by building a wall of wood (“Though all else shall be taken, Zeus, the all seeing, grants that the wooden wall only shall not fail”). The notion, of course, is ridiculous on its face. Surely the mighty Persians would not be stopped in their tracks by mere wood. But interpret the advice more creatively, as Themistocles did, and you realize that the wood in question was that of the ships forming the formidable Athenian navy, which did, in fact, annihilate the opposing fleet at the battle of Salamis.

Temple of Athena at Delphi (Photo by the Author)

Delphi was also famous for a list of “commandments” that were allegedly assembled from the wisdom of the Seven Sages, a legendary group of philosophers, statesmen, and law-givers from the early history of Greece. Perhaps the most famous of such commandments was “know thyself,” which has since inspired countless philosophers, most famously informing Socrates’ entire career as a gadfly to the good people of Athens (who repaid him for his trouble, as we know, by putting him to death by hemlock).

Now an article published in Aeon magazine by Bence Nanay (a professor of philosophy at the University of Antwerp, Belgium) tells us not only that “know thyself” is “silly” advice, but that it’s actively dangerous. While Nanay has a point, I will argue that it is his own article that is, in fact, dangerous.

Nanay tells us that the Delphic injunction is based on an untenable picture of the self, and of how we make decisions — though I wonder how he knows which theory of mind and psychological agency was endorsed by whoever chiseled the famous phrase on the entrance to the temple of Apollo.

He invites us to consider a simple situation: “You go to the local cafe and order an espresso. Why? Just a momentary whim? Trying something new? Maybe you know that the owner is Italian and she would judge you if you ordered a cappuccino after 11am? Or are you just an espresso kind of person? I suspect that the last of these options best reflects your choices. You do much of what you do because you think it meshes with the kind of person you think you are. You order eggs Benedict because you’re an eggs Benedict kind of person. It’s part of who you are. And this goes for many of our daily choices.”

The notion is that we have somewhat stable ideas about who we are, which is practically useful, since it saves us a lot of time whenever we have to make decisions. Except if you go to Starbucks, because they have far too many choices. Then again, no self respecting Italian would go to Starbucks. Or order a cappuccino after 11am. (See what I did there? I have an image of myself as a self respecting Italian, hence my choices about where to get my coffee and when it is proper to order a cappuccino. Also, no Parmesan cheese on seafood pasta, please.)

But of course, as Nanay reminds his readers, we also change, all the time. On occasion these changes are sudden and dramatic, and therefore very noticeable. Many people feel and act differently after having had a child, for instance. Or having experienced a trauma, such as a diagnosis of cancer. Many changes, though, are subtle and slow, yet cumulative over time. It is this second kind of change that creates the major problem for the Delphic injunction, apparently: “The problem is this: if we change while our self-image remains the same, then there will be a deep abyss between who we are and who we think we are. And this leads to conflict.”

Not only that. We apparently suffer from what psychologists call the “end of history illusion,” the idea that, right now, we are final, finished products. This, and not our selves of five, ten, or twenty years ago, is who we really are, and who we will keep being until our demise. The end of history illusion is, of course, nonsense. We are never finished, as the only constant throughout our life is precisely that things, including ourselves, change. You can see why Nanay is worried.

The problem concerns much more than your choices of morning java: “Maybe you used to genuinely enjoy doing philosophy, but you no longer do. But as being a philosopher is such a stable feature of your self-image, you keep doing it. There is a huge difference between what you like and what you do. What you do is dictated not by what you like, but by what kind of person you think you are.”

Theater and temple of Apollo at Delphi (Photo by the Author)

In an interesting twist, Nanay even manages to blame our addiction to social media on this alleged incongruence between who we are and who we think we are. That incongruence not only wastes a lot of our time and efforts (because, robotically, we keep doing things we no longer enjoy or think important), it also generates a fair degree of cognitive dissonance between reality and our image of reality. And cognitive dissonance, again the psychologists helpfully remind us, is emotionally costly. “Hiding a gaping contradiction between what we like and what we do takes significant mental effort and this leaves little energy to do anything else. And if you have little mental energy left, it is so much more difficult to switch off the TV or to resist spending half an hour looking at Facebook or Instagram.” Now you tell me!

Nanay concludes that “If we take the importance of change in our lives seriously, [following the Oracle] just isn’t an option. You might be able to know what you think of yourself in this moment. But what you think of yourself is very different from who you are and what you actually like. And in a couple of days or weeks, all of this might change anyway.” He then closes with a pseudo-profound piece of poetry from André Gide, who wrote in Autumn Leaves (1950): “A caterpillar who seeks to know himself would never become a butterfly.”

Right. Then again, caterpillars are too stupid to philosophize about themselves, not to mention that they are profoundly ignorant of their own biology. And does anyone really believe that, except (maybe) for traumatic experiences, we can change a lot in mere days or weeks?

I hope it is clear what the central flaw in Nanay’s argument is: he is assuming an essentialist view of the self, the self conceived as the “true,” unchanging part of who we are, which people are supposed to “discover” in order to live authentic lives. I’m sure some Ancient Greeks did hold to a similar notion (Plato comes to mind), though they were usually far too good observers of human psychology to fall into that trap. It is not at all clear whether whoever came up with the Delphic injunction subscribed to such an untenable theory of the self. What is abundantly clear is that “know thyself” is very good advice regardless, indeed even more so if our selves are dynamic bundles of perceptions, sensations, desires, and deliberations, to paraphrase and build on David Hume.

Let’s consider the more serious of Nanay’s examples, that of the philosopher who doesn’t realize that he doesn’t believe in philosophizing anymore. I don’t know whether that example was autobiographical, but I can certainly counter it with an autobiographical anecdote of my own. Ever since I can remember I wanted to be a scientist, a dream that eventually came true when I was appointed assistant professor of botany and evolutionary biology at the University of Tennessee in Knoxville, back in distant 1995.

I had a reasonably successful career for several years in my chosen field of specialization, gene-environment interactions, rising through the ranks of associate and then full professor with tenure. My self-image had been that of a scientist since I was five or six years old, and it had served me well until my late thirties and early forties.

Then a midlife crisis ensued, partly precisely because my reflections about myself began to alert me to some sort of growing gap between my mental image of me and how I was feeling while doing what I was doing. I realized that I was less and less interested in laboratory and field research, and more and more in theoretical and conceptual issues. And the step from the latter to philosophy of science wasn’t very big. Partly because of such conscious reflections (the “know thyself” part), and partly because of serendipitous events, I was able to enroll as a graduate student in philosophy, publish a book and several papers in the field, and eventually switch careers and become a full-time philosopher.

That’s where I am now, though other adjustments have occurred in the meantime, like my increased interest in public philosophy, and my novel interest in Stoicism. These changes, too, were made actionable by the fact that I have a habit of reflecting about my feelings and experiences, trying as much as possible to keep adjusting what I actually do and what I want to do, in a never ending exercise of reflective equilibrium.

The bottom line is that my life, I can confidently assert, has been made better and better by trying to follow the Delphic commandment. I suspect the same is true of other people, who can benefit from a monitoring of the evolving “self,” coupled with the occasional redirection and adjustment of what they do or pursue. Contra Nanay, it is this process of self knowledge that reduces, or even preempts, the cognitive dissonance he refers to. And, apparently, it will also save you a lot of wasted time on Facebook and Instagram.

What is truly dangerous is not to follow the not at all “silly” advice that has served Socrates and so many others ever since. You may end up misspending a good chunk of your life if you ignore it. And if you have the chance, go to Delphi. You’ll thank me for it.

An embarrassing moment for the skeptical movement

Twenty-one years ago physicist Alan Sokal perpetrated his famous hoax at the expense of the postmodernist journal Social Text. It was at the height of the so-called “science wars” of the ’90s, and Sokal, as a scientist fed up with a lot of extreme statements about the social construction of science, thought of scoring a rhetorical point by embarrassing the other side. He wrote a fake paper entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” full of scientific-sounding nonsense, and submitted it to the editors of Social Text. They didn’t send it out for peer review and published it as a welcome example of a scientist embracing the postmodernist cause.

Sokal then proceeded to unveil the hoax in the now defunct Lingua Franca, a magazine devoted to academic affairs, thus exposing the sloppy practices of the editors of Social Text while at the same time embarrassing the postmodernist community.

Sokal, however, is no intellectual lightweight, and he wrote a sober assessment of the significance of his stunt, for instance stating:

“From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty.”

Move forward to the present. Philosopher Peter Boghossian (not to be confused with NYU’s Paul Boghossian) and author James Lindsay (henceforth, B&L) attempted to replicate the Sokal hoax by trick-publishing a silly paper entitled “The Conceptual Penis as a Social Construct.” The victim, in this case, was the journal Cogent Social Sciences, which sent out the submission for review and accepted it in record time (one month). After which, B&L triumphantly exposed their stunt in Skeptic magazine.

But the similarities between the two episodes end there. Rather than showing Sokal’s restraint on the significance of the hoax, B&L went full blast. They see themselves as exposing a “deeply troubling” problem with the modern academy:

“The echo-chamber of morally driven fashionable nonsense coming out of the postmodernist social ‘sciences’ in general, and gender studies departments in particular … As we see it, gender studies in its current form needs to do some serious housecleaning.”

And (a large chunk of especially influential people in) the skeptic community joined the victory parade:

“We are proud to publish this exposé of a hoaxed article published in a peer-reviewed journal today.” (Michael Shermer)

“This is glorious. Well done!” (Sam Harris)

“Sokal-style satire on pretentious ‘gender studies.'” (Richard Dawkins)

“New academic hoax: a bogus paper on ‘the conceptual penis’ gets published in a ‘high-quality peer-reviewed’ journal.” (Steven Pinker)

“Cultural studies, including women’s studies, are particularly prone to the toxic combinations of jargon and ideology that makes for such horrible ‘scholarship.'” (Jerry Coyne)

Except that a mildly closer look shows that Boghossian and Lindsay are no Sokals, and that the hoax should actually be treated as an embarrassment for the skeptic community. Let’s do a bit of, ahem, deconstructing of the conceptual penis affair.

(i) Like the Sokal hoax, the sample size is n=1. Since Boghossian teaches critical thinking, he ought to know that pretty much nothing can be concluded from that sort of “sampling” of the relevant population. That’s why Sokal properly understood his hoax as a rhetorical success, a way to put the spotlight on the problem, not a way of showing anything broader than “that the editors of one rather marginal journal were derelict in their intellectual duty.”

(ii) The B&L paper was actually rejected by the first journal it was submitted to, NORMA: International Journal for Masculinity Studies. Boghossian and Lindsay admit this, but add that they were “invited” to resubmit to Cogent Social Sciences, which is handled by the same prestigious Taylor & Francis publishing group that handles NORMA. The reality is that NORMA itself doesn’t even make it onto the list of the top 115 publications in gender studies, which makes it an unranked journal, not a “top” one. Also, if you check Cogent Social Sciences’ web site you will see that it operates independently of Taylor & Francis. Oh, fun fact: NORMA’s impact factor is a whopping zero… And remember, it actually rejected the paper.

(iii) The “invitation” to resubmit to Cogent Social Sciences was likely an automated email directing the authors to an obvious pay-to-publish vanity journal. See if you can spot the clues from the journal’s description of their acceptance policies. First, authors are invited to “pay what they can” in order to publish their papers; second, they say they are very “friendly” to prospective authors; lastly, they say that they do not “necessarily reject” papers with no impact. Does that sound to you like a respectable outlet, in any field?

(iv) But isn’t Cogent Social Sciences said to be “high quality” by the Directory of Open Access Journals (DOAJ)? It may be, but the DOAJ is community run, has no official standing, and to make it on its list of recommended publications a journal “must exercise peer-review with an editor and an editorial board or editorial review…. carried out by at least two editors.” Even vanity journals easily meet those criteria.

All of the above said, I am indeed wary of “studies” fields, of which women’s and gender studies are just a couple of examples. As I’ve written in the past, my experience actually interacting with some faculty and students in those programs has been that they do have a tendency toward insularity, which could be remedied by integrating them into the appropriate classic departments, like philosophy, history, comparative literature, and the like. That, in fact, was the original intention when these programs first appeared decades ago, and my understanding is that it was the traditional departments that did not want to go down that route, in order to protect their turf, faculty lines, and students’ tuition money.

It is also the case that many in “X Studies” programs embrace left-leaning politics and see themselves as activists first, scholars second. This is a problem, as the two roles may come into conflict, with activism prevailing at the expense of sound scholarship. But the problem isn’t confined to X Studies, as it is found, for instance, in ecology (where a lot of practitioners are also involved with environmentalist organizations), cultural anthropology (protection, not just study, of indigenous populations), and frankly even critical thinking and philosophy. I have made a career of studying pseudoscience (academically) while at the same time advocating on behalf of science and reason (blogs, books, articles, podcasts). So the two activities shouldn’t be seen as ipso facto incompatible (as, for instance, social psychologist Jonathan Haidt does). But one does need to tread cautiously nonetheless.

Finally, my observation from talking to colleagues in X Studies and reading some of their papers (an approach that Boghossian and Lindsay boast of having rejected, because they apparently know a priori that it’s all bullshit) is that there is a tendency to embrace a form of environmental determinism — as opposed to its genetic counterpart — about human cognitive and cultural traits. This attitude is not scientifically sound, and it even generates internal conflict, as in the case of some radical feminists who reject any talk of being “trapped in the wrong body” by transgender people. As someone who has actually studied gene-environment interactions I am extremely skeptical of any simplistic claim of either genetic or environmental determination. Human beings are exceedingly complex and inherently cultural organisms, and the best bet is to assume that pretty much everything we do is the highly intricate result of a continuous interplay among genes, developmental systems, and environments.

So yes, X Studies are potentially problematic, and they probably ought to undergo academic review as a concept, as well as be subjected to sustained, external scholarly criticism. But this is absolutely not what the B&L stunt has done. Not even close.

And of course, for balance, let’s remember that science too is subject to disturbingly similar problems (thanks to Ketan Joshi for this brief summary, to which many, many more entries could easily be added — here is a similarly good take):

* Andrew Wakefield, a British anti-vaccination campaigner, notoriously managed to publish a fraudulent paper in the (really) prestigious medical journal Lancet in 1998.

* A US nuclear physics conference accepted a paper written entirely in autocomplete.

* A trio of MIT graduate students created an algorithm that produces fake scientific papers, and in 2013 IEEE and Springer Publishing (really seriously academic publishers) found a whopping 120 published papers that had been generated by the program.

* A paper entitled “Get me off your fucking mailing list” was accepted for publication by a computer science journal.

* A 2013 hoax saw a scientific paper about anti-cancer properties in a chemical extracted from a fictional lichen published in several hundred journals.

And of course let’s not forget the current, very serious, replication crisis in both medical research and psychology. Or the fact that the pharmaceutical industry has created entire fake journals in order to publish studies “friendly” to their bottom line. And these are fields that — unlike gender studies — actually attract millions of dollars in funding and whose “research” affects people’s lives directly.

But I don’t see Boghossian, Lindsay, Shermer, Dawkins, Coyne, Pinker or Harris flooding their Twitter feeds with news of the intellectual bankruptcy of biology, physics, computer science, and medicine. Why not?

Well, here is one possibility:

“American liberalism has slipped into a kind of moral panic about racial gender and sexual identity that has distorted liberalism’s message” — Michael Shermer, 18 November 2016

“Gender Studies is primarily composed of radical ideologues who view indoctrination as their primary duty. These departments must be defunded” –Peter Boghossian, 25 April 2016

Turns out that a good number of “skeptics” are actually committed to the political cause of libertarianism. This is fine in and of itself, since we are all entitled to our political opinions. But it becomes a problem when it is used as a filter to inform your allegedly critical thinking. And it becomes particularly problematic when libertarian skeptics go on a rampage accusing others of ideological bias and calling for their defunding. Self-criticism before other-criticism, people — it’s the virtuous thing to do.

This latest episode does not, unfortunately, surprise me at all. It fits a pattern that has concerned me for years, as someone who has been very active within the movement and who still identifies with its core tenets. When Steven Pinker openly embraces scientism, turning an epistemic vice into a virtue; or when atheists think that their position amounts to anything more than a negative metaphysical stance — and think that being nasty about it is the way forward; or when atheism, skepticism and scientism are confused with each other for ideological purposes; then I get seriously worried about the future of a movement that has so much potential to help keep the light of reason alive in a society that desperately needs it.

The Boghossian and Lindsay hoax falls far short of the goal of demonstrating that gender studies is full of nonsense. But it does expose, for all the world to see, the problematic condition of the skeptic movement. Someone should try to wrest it away from the ideologues currently running it, returning it to its core mission of critical analysis, including, and indeed beginning with, self-criticism. Call it Socratic Skepticism(TM).

_____

Update: Steven Pinker has admitted on Twitter that the hoax was a bad idea: “‘Gender studies’ is an academic field that deserves criticism, but The ‘Conceptual Penis’ hoax missed the mark.”