Category Archives: Public Philosophy

Welcome!

Welcome to Footnotes to Plato! It began back in August ’15 as a blog on general philosophy, with a large component of philosophy of science. The blog has now moved to Patreon and I hope you will follow me there.

However, the full archive of 354 posts and a whopping 31,414 comments will remain permanently available for free. You will also find here a list of my books, all my technical papers in philosophy, links to columns I wrote for a variety of magazines (Skeptical Inquirer, Philosophy Now, The Philosophers’ Magazine), several downloadable collections of essays, and a number of both public and technical talks I have given. You will also find links to my various online presences (Twitter, Facebook, etc.). These pages will keep being updated as new material becomes available.

I hope you will enjoy this site and that it will help you in your continuing quest for understanding and practicing philosophy.

cheers,

Massimo Pigliucci

(the City College of New York)

How to stop a conversation: with facts and definitions

I really ought to know better, after decades of activism on behalf of science and critical thinking, than to engage in ideologically loaded discussions with friends or family. Indeed, the opening chapter of the second edition of my Nonsense on Stilts: How to Tell Science from Bunk features two rather frustrating conversations I’ve had with a relative (I call him “Ostinato,” Italian for stubborn) and an acquaintance (“Curiosa,” Italian for curious). Neither episode led to either side moving a fraction of an inch away from their initial position, resulting mostly in aggravation and emotional distress on both sides. Still, as I explain in the book, it wasn’t time entirely wasted, since I came to a couple of important realizations while engaging in such discussions.


For instance, from Ostinato I learned that a common problem in these cases is the confusion between probability and possibility. Whenever I would explain why I don’t think it likely, say, that 9/11 was an inside job, or that the Moon landing was a hoax, Ostinato would reply: “but isn’t it possible?” Out of intellectual honesty I would reply, yes, of course it’s possible, in the narrow sense that those scenarios do not entail a logical contradiction. But they are extremely unlikely, and there really aren’t sufficient reasons to take them seriously. Ostinato clearly thought he had scored a major point by wringing that admission of logical possibility out of me, but such glee reflects a fundamental misunderstanding not just of how science works, but of how common sense does as well. Is it possible that you will jump from the window and fly rather than crash to the ground? Yes, it is. Would you take the chance?


As for Curiosa, she taught me that a little bit of knowledge is a dangerous thing. I nicknamed her that way because she was genuinely curious and intelligent, reading widely about evolution, quantum mechanics, and everything in between. Reading, yes; understanding, no. She took any sign of even extremely marginal disagreement among scientists as, again, evidence that it is possible that what people claim is a well-established notion (evolution, climate change) is, in fact, false. Again, yes, it is possible; but no, finding the occasional contrarian scientist (often ideologically motivated, as in the case of anti-evolution biochemist Michael Behe) is absolutely no reason to seriously question an established scientific theory.


You would think that Ostinato and Curiosa had taught me a good lesson, and that I wouldn’t fall for it again. Sure enough, recently a close relative of mine wanted to engage me “as a scientist and a philosopher” in a discussion of chemtrails and 9/11 trutherism, sending me a long list of the “reasons” she believed both. I respectfully declined, explaining that my experience had shown me very clearly that nothing good comes out of such discussions. People talk past each other, get upset, and nobody changes their mind. My relative was taken aback by my refusal, but I felt pretty good. Part of Stoic training is the notion that one does not control other people’s opinions, motivations, and reasoning. It is okay to try to teach them, within limits (and I do: that’s why I literally teach courses on this stuff, and write books about it), but failing that, one just has to put up with them.


And yet, Stoicism also reminds me that I ain’t no sage, and that I am liable to slip back on the next occasion. Which I did, a couple of days after Thanksgiving! This time I was having dinner with someone we’ll call Sorprendente (Italian for surprising; the reason for the nickname will become apparent in a moment). She is a very intelligent and highly educated person, who, moreover, is involved in a profession that very much requires critical thinking and intellectual acumen.


Imagine then my astonishment when I discovered that Sorprendente flat out denies the existence of a patriarchy, both historically and in contemporary America. I care enough about this sort of thing that I immediately felt the adrenaline rush to my head, which meant – unfortunately – that I had to fight what I already knew was an impossible battle: to explain certain things to Sorprendente without losing my temper. Anger, as Seneca famously put it, is temporary madness, and should not be indulged under any circumstances. Let alone when you are trying to convince someone you know of a notion that she is adamantly opposed to.


This post isn’t about convincing you that we do live in a patriarchal society. If you don’t think so already there probably is little I can do in a blog post to change your mind. Besides, there are plenty of excellent resources out there (like this one; or this one; or, if you are more practically minded, this one). Rather, I want to reflect on a new (to me) strategy deployed by Sorprendente, a strategy that I didn’t expect in general, and certainly not from someone who very much relies for her job on using the two concepts she dismissed at dinner with me.


Said two concepts are: definitions and facts. When Sorprendente admitted that most positions of power in our society are held by men, I made the comment that that’s part of the definition of a patriarchy. Indeed, here is how the Merriam-Webster puts it:


“Patriarchy (noun). Social organization marked by the supremacy of the father in the clan or family, the legal dependence of wives and children, and the reckoning of descent and inheritance in the male line. Broadly: control by men of a disproportionately large share of power.”


While, thankfully, we are slowly moving away from the first group of markers of a patriarchy (in the West and in some other parts of the world, certainly not everywhere, by a long shot), the second one (the bit after “broadly”) very much applies, even according to Sorprendente herself.


And yet she curtly informed me that “definitions are conversation stoppers.” Wait, what? Definitions of words are, it seems to me, crucial to any kind of discourse. Yes, it is true that dictionaries are both descriptive and prescriptive. They are descriptive in the sense that if the common usage of a word changes they will update accordingly; prescriptive because they tell us what currently counts as correct usage. “It’s just semantics” is one of the most irritating responses one can get in the middle of a discussion. Of course semantics (and definitions) are important. If we don’t agree on the meaning of the words we use, we are talking past each other, with no possibility whatsoever of understanding. All I was trying to say was that – according to Sorprendente’s own admission – the facts on the ground correspond to the definition of a patriarchy, which means that it becomes inherently contradictory to agree with those facts and yet insist on denying that we live in a patriarchy.


Speaking of facts. Apparently, bringing those up also is a conversation stopper, and it is therefore highly impolite. Here things got truly bizarre. To begin with, it was Sorprendente who brought up a fact, in the form of a statistic: she claimed, as partial evidence that women are not oppressed, that their average life span is 10 years longer than men’s. This is biology, one of my areas of expertise, and the facts can readily be checked.


First off, the 10-year figure is false. The true figure, as it happens, varies from country to country: 6.7 years in the US, a whopping 12 in Russia, and a mere 0.1 in Bangladesh. Second, part of the gap is due to biological reasons: women have two copies of the X chromosome, while men have only one (because we have the tiny Y instead). As a result, men are exposed to hundreds more genetically influenced diseases than women, and their mortality is higher, both early in life and throughout. Apparently, however, bringing up these obviously pertinent facts on my part was a rude conversation stopper. Translated: I should be free to bring up whatever false information I want, but you are not allowed to contradict me on the basis of factually correct information. Remember that Sorprendente’s job deals with the accurate verification and interpretation of facts. Oh boy.


Regardless, why would she think that a longer life span is proof that we don’t live in a patriarchy? (Indeed, according to her logic, since women have the statistical advantage, we should conclude that we live in a matriarchal society.) Because women have been, and to some extent still are, “shielded” from dangerous jobs, like joining the military, which is an “obvious” example of concern on the part of men. No patriarchy. QED.


This makes little sense on a number of levels. A military career has always (since the time of the ancient Greeks) been considered a manly job precisely because women have been thought of as inferior or inadequate for that sort of activity. This is exactly what one would expect in a patriarchy. Moreover, it is likely true that most men “care” for women and want to protect them. This is in no way incompatible with the notion of sexism; indeed, being patronizing toward someone who doesn’t actually need to be protected is one of the symptoms of sexism and other discriminatory attitudes. Not to mention that women are now increasingly accepted in the military. This is true both for the US (average life span gap 6.7 years) and Bangladesh (average life span gap 0.1 years). It doesn’t take a rocket scientist to figure out that this is simply not a factor in explaining why women live longer than men.


Ah, said Sorprendente, but then if we live in a patriarchal society, how do you explain that there are millions more men than women in prison? This, I tried to respond, actually confuses two different issues, since the majority of men in American prisons are minorities, particularly blacks and Hispanics. The differential is a result of a combination of racism, poverty, and lack of education and therefore job opportunities. It appears, again, to have nothing to do with the issue of patriarchy.


Very clearly, I wasn’t getting anywhere, and both Sorprendente and I were becoming increasingly upset. At which point a thought suddenly struck me and I asked: are you by any chance into Jordan Peterson? Yes, came the response, I think he makes some good points. And that, my friends, was the real conversation stopper.

Why I’m still a (non-card-carrying) Skeptic

I just came back from Las Vegas, where I had a lovely time at the annual CSICon event, organized by the folks that bring you Skeptical Inquirer magazine, among other things. As I’ve done almost since the beginning of my involvement with the skeptic movement, back in, gasp, 1997, I delivered a bit of a gadfly talk. This one was about scientism, reminding my fellow skeptics that they have a tendency to overdo it with the science thing, at times coming across as nearly as evangelical and obtuse as their usual targets, from creationists to UFO believers. After asking the audience to be patient with me and not to serve me hemlock for lunch, I minced no words and criticized by name some of the big shots in the field, from Neil deGrasse Tyson to Richard Dawkins, from Sam Harris to Steven Pinker. And of course several of those people were giving talks at the same conference, either right before or right after me.


No hemlock was served, and I got less resistance to my chastising than usual from the audience. Some people even approached me later on telling me how much they appreciated my reminder that our community is not perfect and we need to do better. It was all very congenial, set against the perfect backdrop of the ultimate fake city in the world, and accompanied by the occasional dirty martini.


On my way back to New York I then got a tweet from a follower linking to yet another “I resign from the skeptic movement and hand in my skeptic card” article, written by a prominent (former) skeptic. It doesn’t matter who. The list of complaints by that author is familiar: a tendency toward scientism, a certain degree of sexism within the movement, and a public failure to lead by some of the de facto leaders. The same issues that I have been complaining about for years (for instance, here). But I have not quit, and do not intend to quit. Why?


The uncharitable answer would be because I’m part of the privileged elite. I doubt anyone would seriously consider me a “leader” in the movement, but I have certainly been prominent enough. And I am a male. White. Heterosexual. The problem is, uncharitable views are highly unhelpful, and I’m on record advocating on behalf of diversity in the movement, against sexual harassment, and – as I mentioned above – have made a mini-career of stinging the big shots every time I think they deserve it, which is rather often. So I’m afraid a casual dismissal based on my gender, sexual preference and ethnicity will not do. Quite apart from the fact that it would be obviously hypocritical on the part of anyone who claims that gender, sexual preference and ethnicity should not be grounds for blanket statements of any kind.


No, I stay because I believe in the fundamental soundness of the ideas that define modern skepticism, and also because I think quitting to create another group is an example of an all too common fallacy: the notion that, despite all historical evidence to the contrary, next time we’ll definitely get it right and finally create utopia on earth. Let me elaborate on each point in turn.

“Skepticism,” of course, has a long history in philosophy and science. The original Skeptics of ancient Greece and Rome were philosophers who maintained that human knowledge is either highly fallible or downright impossible (depending on which teacher of the school you refer to). Consequently, they figured that the reasonable thing to do was either to abstain entirely from any opinion, or at least to hold on to such opinions as lightly as possible. Theirs wasn’t just an epistemological stance: they turned this into a style of life, whereby they sought serenity of mind by way of detaching themselves emotionally from those opinions (political, religious) that others held so strongly and often died for. Not my cup of tea, but if you think about it, it’s not a bad approach to good living at all.


The philosopher who embodies modern skepticism most closely, however, is the Scottish Enlightenment figure par excellence, David Hume. He held an attitude of open inquiry, considering every notion worth investigating and leaving the (provisional) verdict of such investigations to the empirical evidence. He famously said that a reasonable person proportions his beliefs to the available facts, a phrase later turned by Carl Sagan into his hallmark motto: extraordinary claims require extraordinary evidence.


The contemporary skeptic movement was the brainchild of people like philosopher Paul Kurtz (the founder of the organizations that preceded CSI, as well as of Skeptical Inquirer), magician James “the Amazing” Randi (organizer of the long-running conference that preceded CSICon, known as TAM, The Amazing Meeting), Carl Sagan himself, and a number of others. Initially, the movement was rather narrowly devoted to the debunking of pseudoscientific claims, ranging from UFOs to telepathy, and from Bigfoot to astrology.


More recently, mainly through the efforts of a new generation of leaders – including but not limited to Steve Novella and his group, Michael Shermer, Barry Karr, and so forth – the scope of skeptical analysis has broadened to include modern challenges like those posed by the anti-vax movement and, of course, climate change. Even more recently, young people from a more diverse crowd, finally including several women like Rebecca Watson, Susan Gerbic, Kavin Senapathy, Julia Galef, and many others, have further expanded the discourse to include an evidence-based treatment of political issues, such as gender rights and racism.


The values of the skeptic movement, therefore, encompass a broad set that I am definitely on board with. At its best, the community is about reason broadly construed, critical but open-minded analysis of extraordinary claims, support for science-based education and critical thinking, and welcoming diversity within its ranks.


Of course, the reality is, shall we say, more complex. There have been plenty of sexual harassment scandals, involving high-profile members of the community. There is that pesky tendency toward closing one’s mind and dismissing rather than investigating claims of the paranormal. And there is a new, annoying vogue of rejecting philosophy, despite the fact that a skepticism (or even a science) without philosophical foundations is simply impossible.


But this leads me to the second point: I think it far more sensible to stay and fight for reform and improvement rather than to “hand in my skeptic card” (there is no such thing, of course) and walk away. Because those who have walked away have, quite frankly, gone nowhere. Some have attempted to create a better version of what they have left, like the thankfully short-lived “Atheism+” experiment of a few years ago.


The problem with leaving and creating an alternative is that the new group will soon enough inevitably be characterized by the same or similar issues, because people are people. They diverge in their opinions, they get vehemently attached to those opinions, and they fight tooth and nail for them. Moreover, people are also fallible, so they will in turn engage in the same or similar behaviors as the ones that led to the splintering of the group in the first place, including discrimination and harassment. So the whole “I’m leaving and creating a new church over there” approach ends up being self-defeating, dispersing resources and energy that could far better be used to improve our own household from within while we keep fighting the good fights we inherited from the likes of Kurtz and Sagan.


So, no, I’m not leaving the skeptic movement. I will keep going to CSICon, NECSS, the CICAP Fest, and wherever else they’ll invite me. I will keep up my self-assigned role of gadfly, annoying enough people and hopefully energizing a larger number, so that we keep getting things more and more right. After all, this is about making the world into an at least slightly better place, not into our personal utopia tailored to our favorite political ideology.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But – unlike Sagan – he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently affecting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great power comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity gives your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.

Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by the Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentaries on comic book superheroes qualify as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.


For another thing, “debunking” strongly conveys the impression that one thinks the people who believe in the notion to be debunked are simpletons barely worth consideration. Perhaps some are, but I’ve met plenty of really smart creationists, for instance, a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as on display in the short Colbert video that got me started on this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized some 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Jay Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, not that this is going to do anyone any good.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to the establishment of an emotional connection with your audience. This is something that scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But that is the case only if the agent is unscrupulous and manipulative, not if he’s acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, which Neil deGrasse Tyson most certainly is. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?

The non-problem of moral luck

The Good Place - trolley dilemma

The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will make an argument that one can learn more about moral philosophy from watching a single episode of the show than by listening to a technical talk in that same field while attending the APA’s own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of; you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more skeptical I become of the value of much modern moral philosophy, with its indulgence in ever more convoluted hypothetical situations that seem designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turning-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell) and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sitcom-imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of its indulgence in irrelevant logic chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end here; there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details because they are irrelevant to my argument here. Which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP show. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making brain network, then fine, he is not to be blamed for his inaction. That’s no skin off of Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping scores?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t be carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now classic paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above-mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame, we have lost track of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)

Five big philosophical questions: my modest take

An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim, I am simply offering my informed opinion about them, in my dual role of a philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then, gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not yet given us an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon goes around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them; we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid in checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence, it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, and that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functions of our physiology and metabolism. Death is a natural phenomenon; everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP blog post, possibly more a reflection of contemporary debates about justice and inequality than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with “mind,” if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as a personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it, folks! I have just solved five of the all-time great philosophical questions! You can thank me by buying me a drink the next time you see me…

In defense of the indefensible humanities

Università di Bologna

The University of Bologna, the oldest in the world.

We keep hearing that the humanities — meaning things like literature, philosophy, history and so forth — are in crisis. Which is undeniably true, as measured in terms of dollars invested in them (including number of faculty positions, courses offered, etc.) in many contemporary universities, especially, but not only, in the United States and the UK. Many reasons have been adduced to explain this phenomenon, and there have been a number of calls to defend the humanistic disciplines on a variety of grounds.

I have my own take on this, which was crystallized in my mind several years ago, during a dinner with the Chair of the Philosophy Department at Notre Dame University. He was bragging that Notre Dame has the largest philosophy department in the country, possibly the world (I think the former statement is correct, the latter is doubtful, but still). I was then myself Chair of the Department of Philosophy at Lehman College in the Bronx, and I asked my host what accounted for their success. His response was simple and obvious: “we are a Catholic university. You simply don’t graduate from here unless you have taken a minimum of two philosophy courses.”

It is as simple as that, really. The “crisis” is an artifact of the fact that universities — especially public ones in the US — are increasingly run like businesses, where the “customers” (they used to be called students) get to pick what they want to study and how. The problem, of course, is that students, by definition, don’t know enough about what is good for them, and so should be institutionally limited in their choices. When I learned how to drive I patiently listened to my instructor and followed his lead; I didn’t design my own curriculum at driving school. The same when I learned Judo. Oh, and when I went to college, obviously. To run universities the way they are run now is purely and simply to abdicate the responsibility of teaching the next generation. Faculty and administrators, instead, become retail sellers, competing with each other to attract the highest number of customers in order to boost enrollment and bring in the tuition money that is increasingly needed because states have cut funding for “public” education, in many cases to ridiculously low levels.

I could end this post here, surely having pissed off or outraged countless students and administrators. Which is okay, since I’ve got tenure. But I recently read a refreshingly different essay on the subject, which I want to comment on. It’s titled “There is no case for the humanities,” published in American Affairs Journal, and authored by Justin Stover, a quondam fellow of All Souls College, Oxford University, and a lecturer at the University of Edinburgh. Stover provides a scholarly and informed background on the history of the very concept of a university, makes excellent points, gets most of the facts right, and yet is — I maintain — spectacularly wrong in his conclusions. Or so I am going to argue.

Stover begins by arguing that there is deep conceptual confusion about what the humanities are and the reasons for studying them. He then immediately tells his readers that he will ignore the first part of the issue (what constitutes the humanities) and devote his piece to the second one (why study them). Not necessarily a good move, in my opinion, because the reader is left — off the bat, so to speak — having to guess what Stover means by “humanities.” Still, let’s assume that we all know what he is talking about, a la Justice Potter Stewart.

Stover’s first excellent point concerns the strange mix of critique and support that both conservatives and leftists offer the humanities. The conservatives first. On the one hand, they attempt to use the coercive power of the state, and the financial muscle of private donors, to correct what they see as the ideological bias of the academy. On the other hand, in so doing, they are contributing to the destruction of the very professoriate that they claim to be defending. As Stover puts it:

“It is self-defeating to make common cause with corporate interests looking to co-opt the university and its public subsidy to outsource their job training and research, just for the sake of punishing the political sins of liberal professors.”

This is without counting the fact that university professors tend to be liberal within the humanities, but certainly not in the social sciences, or even in the natural sciences — which are far more powerful and influential on modern campuses.

The left doesn’t do much better, according to Stover. Progressives want to use the humanities as a force for social change and a training camp for citizen-activists, which right there is in flagrant contradiction with the mission of a university. Worse, they impose ideological litmus tests, despite their vocal protestations of being in favor of critical thinking and freedom of expression.

Stover tells us that most faculty are caught in the middle of this struggle, and that what they want to do, mostly, is to mind their business and carry out their research and scholarship on tiny, and often entirely irrelevant, domains of human knowledge. In other words, they want to do precisely what universities were originally designed to do, from the establishment of the world’s first university (in Bologna, Italy) back in 1088 onwards. This is an interesting — and as far as I know correct — point:

“The critics, often well-meaning [well, I don’t know about that], think they are attacking the decadence and excess of contemporary humanities scholarship, when in fact they are striking at the very heart of the humanities as they have existed for centuries.”

One large caveat here, coming from my more extensive experience as someone who has worked in, and is familiar with the history of, not just the humanities but the sciences as well. Everything that Stover has said so far, and that he says in the rest of the article, applies mutatis mutandis to the sciences. Which pretty much disposes of his entire argument, since he is assuming from the beginning that the humanities are somehow different from the rest of the academy. They are most certainly not, at least not by the lights of the parameters he uses in his discussion.

The central part of the article is structured around a series of obviously provocative sections, boldly making entirely counterintuitive claims. The first one is “in praise of overspecialization,” addressing the criticism that today’s humanistic scholarship is too narrowly focused, and often concerned with minutiae that seem hardly worth bothering with. Here Stover is absolutely right that this is nothing new:

“No Scholastic ever argued how many angels could dance on the head of a pin — it takes the fevered imagination of a philosophe to come up with that question — but popular depictions of scholars in the Middle Ages indicate that their specialized pursuits were not always fully appreciated.”

Indeed, as Stover points out with dismay, it is the modern expectation that is new and way out of proportion. If you were to write, for instance, a paper or book on French clothing from 1650 to 1699, reviewers would demand that you situate your work within the broader area of literary theory, and moreover provide analyses of your material within the framework generated by the cultural milieu of the modern world. No Scholastic was ever asked to do anything of the sort.

This demand for broad context and up-to-date framing, according to Stover, simply results in bad scholarship:

“Take an important subject, say, democracy in classical Athens. If you ever want to go beyond a silly nursery story about Athens as the cradle of democracy … if you actually want to understand the political and social system of fifth-century Athens, you would have to delve into everything from epigraphy to the minor Attic orators, to comedy and tragedy, the Greek economy, trade relationships in Greece and the Mediterranean, coinage, ship construction, material supply chains, colonies, gender roles, even clothing and food.”

In other words, you would have to rely on a lot of narrow, “useless” scholarship.

The next section is “in defense of overproduction.” Here too, Stover’s strategy is to show that this isn’t a new problem, but a feature that has been with us since the dawn of (scholarly) time. He quotes an unnamed 13th-century scholar who complained that “Aristotle offers the key to wisdom, but he hid that key in so many books.” Tens of thousands of commentaries on Peter Lombard exist, unread for hundreds of years and scattered across European universities, because writing one was once a standard exercise for becoming a reputable (and licensed) teacher of theology. “Overproduction” hardly seems a strong enough term!

Then we have “against teaching,” where Stover reminds us that scholars have always eschewed teaching, and that universities were never meant primarily as teaching (as opposed to scholarly) enterprises. I remember reading a biography of Galileo (not a humanist, but a scientist!) that commented on a letter he wrote to a friend explaining why he was moving back to Florence from Padua: the wine was better and the teaching load lighter. I can relate. Stover puts it this way:

“These critiques, whether from the right or left, betray a rather limited horizon of imagination. They can only see the university as a fee-for-service corporation, a vendor hawking knowledge. … A school — be it a gymnasium or realschule, a college or a lycee, a grammar school or comprehensive, a preparatory academy or a public school — exists to teach. But the difference between a university and a school is not the mere difference of the age of the student or the level of instruction. The university is a different kind of thing.”

Indeed. Throughout its history the university has been a locus of scholarship, where students benefit from proximity to scholars; it is more a workshop than a school, at least ideally. That role has now shifted to graduate schools, in the process degrading colleges to glorified high schools, in part because actual high schools no longer do a proper job of teaching the next generation.

So Stover is right that the modern critics of the university, if they had their way, would destroy the very concept of a university, turning it instead into a slightly refined high school. He sees the contemporary university as a bizarre chimaera, and he is not wrong in this:

“The contemporary university … has become an institution for teaching undergraduates, a lab for medical and technological development in partnership with industry, a hospital, a museum (or several), a performance hall, a radio station, a landowner, a big-money (or money-losing) sports club, a research center competing for government funding, often the biggest employer for a hundred miles around, and, for a few institutions, a hedge fund.”

Which brings him, finally, to what he sees as the misguided recent attempts to defend the humanities. He accuses his colleagues of uttering words in which they don’t really believe, such as “skills,” “relevance,” “changing economy,” “engagement,” and “values.” I think he is a bit too harsh here, but I have certainly experienced, both as a faculty member and as an administrator (five years as a Chair), part of what he is talking about. I can’t tell you how many useless strategic and rebranding meetings I have participated in, knowing full well that they were going to be a waste of everyone’s time.

Stover tells us that, in the end, what academic humanists really value is that their scholarship gives them participation in a particular community that they appreciate, a community in which other scholars typically share their values and interests. He rejects, mostly rightly, what he sees as conservative paranoia about sinister plots to brainwash students with liberal dogma. Which leads him to conclude that the only justification for the humanities is within a humanistic framework, and that outside such a framework there is no case to be made:

“The humanities do not need to make a case within the university because the humanities are the heart of the university. Golfers do not need to justify the rationale for hitting little white balls to their golf clubs; philatelists do not need to explain what makes them excited about vintage postage at their local stamp collecting society.”

This is utterly wrong, and quite obviously so. The analogies simply do not hold. Golfers pay for their club memberships, and philatelists buy their own stamps. Academics, by contrast, are paid, often with public funds. So justification is most definitely needed.

Stover is correct, however, when he says that what distinguishes universities from technical schools is precisely the presence of the humanities:

“The most prestigious universities in the West are still those defined by their humanities legacy, which surrounds them with an aura of cultural standing that their professional purpose no longer justifies. … That is why every technical institute with higher aspirations has added humanities programs: accounting or law or engineering can be learned in many places, but courtoisie is passed along only in the university, and only through the humanities — and everyone knows it. … It is the lingering presence of the humanities that allows the modern university to think better of itself, and to imagine itself to be above commercial or political vulgarity.”

In the end, Stover tells us that the current weak defense of the humanities will fail, and the crisis of the university will deepen. Luckily, he says, this is not the first such crisis, and it will probably not be the last. The university, and the humanities, will survive to fight another day:

“The way to defend the arts [and humanities] is to practice them. … Scholarship has built institutions before, and will do so again.”

Perhaps, but I’m not willing to wait and see how history unfolds. And — contra Stover — I don’t find most (though not all) of the current defenses of the humanities to be weak at all. Of course the humanities teach valuable skills to students, and there is plenty of empirical evidence to substantiate that claim. No, the sciences don’t teach “critical thinking,” by and large, and they certainly don’t teach how to think broadly and write well. And those are much more crucial, and portable, skills than learning how to run a chemical reaction or dissect a frog.

Of course the humanities teach about values. You don’t learn much about the human polis by studying astronomy or biology (as important as those disciplines are), or even engineering and medicine. You learn that from reading Shakespeare, engaging with Aristotle and Kant, seeing (or, even better, acting in or producing) a play by Aristophanes. (Feel free to substitute the examples above with equivalent ones from China, Japan, Africa, South America, and so forth.)

If we yield to the neo-liberal project for the university, it will not only destroy the university, it will also destroy the hope of providing the kind of public education that helps to form the next generation of intelligent, informed, critical human beings and citizens. Again, this is not something the STEM disciplines are equipped to do, with all due respect to my colleagues in science, computer science, engineering, and mathematics. I know this not just because I read widely, but from personal experience: my philosophy classes are so much more important and impactful than the ones I used to teach in biology that the comparison is simply laughable.

Against teaching? The hell with that. Teaching is by far the most important thing we do (when we do it well, not as a glorified high school). And to argue that it is not so today because it was not so during the Middle Ages is a complete non sequitur. Plenty of things were different in the past, but we have learned to do them better, or not to do them at all, if they turned out to be useless. And we are better off for it.

In praise of overspecialization and overproduction? My arse. My heart aches at the immense waste of human potential represented by those tens of thousands of commentaries on Peter Lombard. What a gigantic load of lost opportunities! No, please, let’s not use that as a model for modern scholarship. Again, just because it has always been so doesn’t mean it is a good idea to keep doing things that way. Yes, specialization is the inevitable name of the scholarly game, and Stover’s example of what is needed to develop a deep understanding of ancient Athenian democracy is a very good one. But let’s go a little lighter on additional commentaries on the philosopher or dramatist du jour, please.

Unlike Stover — whom I thank for his cogent analysis, which really pushed me to reflect more carefully on all of this — I think that a defense of the humanities, right here and right now, is synonymous with a defense of the very idea of a liberal education. Which in turn is synonymous with a defense of the possibility and hope for a vibrant democracy. Or at least a democracy that doesn’t produce the sort of obscene politics and social policies that a number of Western countries, especially the US and UK, are currently producing. We can do better, we ought to do better, we will do better.