Category Archives: Epistemology

How to stop a conversation: with facts and definitions

I really ought to know better, after decades of activism on behalf of science and critical thinking, than to engage in ideologically loaded discussions with friends or family. Indeed, the opening chapter of the second edition of my Nonsense on Stilts: How to Tell Science from Bunk features two rather frustrating conversations I’ve had with a relative (I call him “Ostinato,” Italian for stubborn) and an acquaintance (“Curiosa,” Italian for curious). Neither episode led to either side moving a fraction of an inch away from their initial position, resulting mostly in aggravation and emotional distress on both sides. Still, as I explain in the book, the time wasn’t entirely wasted, since I came to a couple of important realizations while engaging in such discussions.

For instance, from Ostinato I learned that a common problem in these cases is the confusion between probability and possibility. Whenever I would explain why I don’t think it likely, say, that 9/11 was an inside job, or that the Moon landing was a hoax, Ostinato would reply: “but isn’t it possible?” Out of intellectual honesty I would reply, yes, of course it’s possible, in the narrow sense that those scenarios do not entail a logical contradiction. But they are extremely unlikely, and there really aren’t sufficient reasons to take them seriously. Ostinato clearly thought he had scored a major point by wringing from me an admission of logical possibility, but such glee reflects a fundamental misunderstanding not just of how science works, but of how common sense does as well. Is it possible that you will jump from the window and fly rather than crash to the ground? Yes, it is. Would you take the chance?

As for Curiosa, she taught me that a little bit of knowledge is a dangerous thing. I nicknamed her that way because she was genuinely curious and intelligent, reading widely about evolution, quantum mechanics, and everything in between. Reading yes, understanding, no. She took any evidence of even extremely marginal disagreement among scientists as, again, evidence that it is possible that what people claim is a well established notion (evolution, climate change) is, in fact, false. Again, yes, it is possible; but no, finding the occasional contrarian scientist (often ideologically motivated, as in the case of anti-evolution biochemist Michael Behe) is absolutely no reason to seriously question an established scientific theory.

You would think that Ostinato and Curiosa had taught me a good lesson, and that I wouldn’t fall for it again. Sure enough, recently a close relative of mine wanted to engage me “as a scientist and a philosopher” in a discussion of chemtrails and 9/11 trutherism, sending me a long list of the “reasons” she believed both. I respectfully declined, explaining that my experience had shown me very clearly that nothing good comes of such discussions. People talk past each other, get upset, and nobody changes their mind. My relative was taken aback by my refusal, but I felt pretty good. Part of Stoic training is the notion that one does not control other people’s opinions, motivations, and reasoning. It is okay to try to teach them, within limits (and I do: that’s why I literally teach courses on this stuff, and write books about it), but failing that, one just has to put up with them.

And yet, Stoicism also reminds me that I ain’t no sage, and that I am liable to slip back at the next occasion. Which I did, a couple of days after Thanksgiving! This time I was having dinner with someone we’ll call Sorprendente (Italian for surprising; the reason for the nickname will become apparent in a moment). She is a very intelligent and highly educated person, who, moreover, is involved in a profession that very much requires critical thinking and intellectual acumen.

Imagine then my astonishment when I discovered that Sorprendente flat out denies the existence of a patriarchy, both historically and in contemporary America. I care enough about this sort of thing that I immediately felt the adrenaline rush to my head, which meant – unfortunately – that I had to fight what I already knew was an impossible battle: to explain certain things to Sorprendente without losing my temper. Anger, as Seneca famously put it, is temporary madness, and should not be indulged under any circumstances. Let alone when you are trying to convince someone you know of a notion that she is adamantly opposed to.

This post isn’t about convincing you that we do live in a patriarchal society. If you don’t think so already there probably is little I can do in a blog post to change your mind. Besides, there are plenty of excellent resources out there (like this one; or this one; or, if you are more practically minded, this one). Rather, I want to reflect on a new (to me) strategy deployed by Sorprendente, a strategy that I didn’t expect in general, and certainly not from someone who very much relies for her job on using the two concepts she dismissed at dinner with me.

Said two concepts are: definitions and facts. When Sorprendente admitted that most positions of power in our society are held by men, I made the comment that that’s part of the definition of a patriarchy. Indeed, here is how Merriam-Webster puts it:

“Patriarchy (noun). Social organization marked by the supremacy of the father in the clan or family, the legal dependence of wives and children, and the reckoning of descent and inheritance in the male line. Broadly: control by men of a disproportionately large share of power.”

While, thankfully, we are slowly moving away from the first group of markers of a patriarchy (in the West and in some other parts of the world, certainly not everywhere, by a long shot), the second one (the bit after “broadly”) very much applies, even according to Sorprendente herself.

And yet she curtly informed me that “definitions are conversation stoppers.” Wait, what? Definitions of words are, it seems to me, crucial to any kind of discourse. Yes, it is true that dictionaries are both descriptive and prescriptive. They are descriptive in the sense that if the common usage of a word changes they will update accordingly; prescriptive because they tell us what currently counts as correct usage. “It’s just semantics” is one of the most irritating responses one can get in the middle of a discussion. Of course semantics (and definitions) are important. If we don’t agree on the meaning of the words we use, we are talking past each other, with no possibility whatsoever of understanding. All I was trying to say was that — according to Sorprendente’s own admission — the facts on the ground correspond to the definition of a patriarchy, which means that it becomes inherently contradictory to agree with those facts and yet insist on denying that we live in a patriarchy.

Speaking of facts. Apparently, bringing those up also is a conversation stopper, and it is therefore highly impolite. Here things got truly bizarre. To begin with, it was Sorprendente who brought up a fact, in the form of a statistic: she claimed, as partial evidence that women are not oppressed, that their average life span is 10 years longer than men’s. This is biology, one of my areas of expertise, and the facts can readily be checked.

First off, the 10-year figure is false. The true figure, as it happens, varies from country to country: 6.7 years in the US, a whopping 12 in Russia, and a mere 0.1 in Bangladesh. Second, part of the gap is due to biological reasons: women have two copies of the X chromosome, while men have only one (because we have the tiny Y instead). As a result, men are exposed to hundreds more genetically influenced diseases than women, and their mortality is higher, both early in life and throughout. Apparently, however, bringing up these obviously pertinent facts on my part was a rude conversation stopper. Translated: I should be free to bring up whatever false information I want, but you are not allowed to contradict me on the basis of factually correct information. Remember that Sorprendente’s job deals with the accurate verification and interpretation of facts. Oh boy.

Regardless, why would she think that a longer life span is proof that we don’t live in a patriarchy? (Indeed, according to her logic, since women have the statistical advantage, we should conclude that we live in a matriarchal society.) Because women have been, and to some extent still are, “shielded” from dangerous jobs, like joining the military, which is an “obvious” example of concern on the part of men. No patriarchy. QED.

This makes little sense on a number of levels. A military career has always (since the time of the ancient Greeks) been considered a manly job precisely because women have been thought of as inferior or inadequate for that sort of activity. This is exactly what one would expect in a patriarchy. Moreover, it is likely true that most men “care” for women and want to protect them. This is in no way incompatible with the notion of sexism; indeed, being patronizing toward someone who doesn’t actually need to be protected is one of the symptoms of sexism and other discriminatory attitudes. Not to mention that women are now increasingly accepted in the military. This is true both for the US (average life span gap 6.7 years) and Bangladesh (average life span gap 0.1 years). It doesn’t take a rocket scientist to figure out that this is simply not a factor in explaining why women live longer than men.

Ah, said Sorprendente, but then if we live in a patriarchal society, how do you explain that there are millions more men than women in prison? This, I tried to respond, actually confuses two different issues, since the majority of men in American prisons are minorities, particularly blacks and Hispanics. The differential is a result of a combination of racism, poverty, and lack of education and therefore job opportunities. It appears, again, to have nothing to do with the issue of patriarchy.

Very clearly, I wasn’t getting anywhere, and both Sorprendente and I were becoming increasingly upset. At which point a thought suddenly struck me and I asked: are you by any chance into Jordan Peterson? Yes, came the response, I think he makes some good points. And that, my friends, was the real conversation stopper.

Should “the ignorant” be denied access to audiences?

John Stuart Mill

People who ended up voting for Donald Trump were famously characterized by Hillary Clinton as the “basket of deplorables.” And I must admit that I look on in stupor at the foolishness of US politics, the recent Italian elections, Brexit, or the re-election of Turkish strongman Recep Tayyip Erdogan. Not to mention what seem to be genuinely adoring crowds in Vladimir Putin’s Russia.

How is any of this possible? It’s always a complex combination of factors, of course, ranging from people’s socio-economic situation to their ideological or religious commitments, to deficient education, to the pure and simple human herd instinct that so annoyed Nietzsche. But surely one thing that contributes to the current insane state of affairs is the reach that pernicious ideologues have in the modern era, a reach made far more efficient by the existence of the internet and social media. And by the fact that these people are often offered platforms to address audiences by institutions such as universities, newspapers, television stations and the like.

My colleague Bryan Van Norden, a professor of philosophy at Wuhan University, as well as the author of “Taking Back Philosophy: A Multicultural Manifesto,” has published a thought-provoking op-ed about institutional platforms in the New York Times. It is well worth considering in some detail: I see where Bryan is coming from, but I consider his proposed path dangerous and his argument self-contradictory.

He begins with a couple of examples. Ultra right-wing commentator Ann Coulter recently appeared on Fox News to say that the crying migrant children separated from their parents by the Trump administration were child actors. Van Norden comments: “Does this groundless claim deserve as much airtime as, for example, a historically informed argument from Ta-Nehisi Coates that structural racism makes the American dream possible?” University of Toronto psychologist, and darling of the alt-right, Jordan Peterson talked about how difficult it is to control “crazy women” and the fact that men naturally can muster respect only for people whom they can threaten with violence. Bryan comments: “Does this adolescent opinion deserve as much of an audience as the nuanced thoughts of Kate Manne, a professor of philosophy at Cornell University, about the role of ‘himpathy’ in supporting misogyny?”

The classical liberal response to these questions is that Ann Coulter and Jordan Peterson ought to be accorded freedom of speech, on grounds famously laid out by John Stuart Mill in his On Liberty, published in 1859. The argument is based on the following considerations: (i) you may think opinion X is clearly wrong, but history is littered with people, even majorities, who were sure that something was wrong when it turned out that it wasn’t (say, that gays should have a right to marry); (ii) if X is indeed wrong, then we learn something from people who defend it, because we need to make clear to ourselves why a given notion is, in fact, wrong (otherwise, we reject it out of prejudice, not knowledge or understanding); (iii) truth is not an all or nothing matter, so we may learn even from partially or largely wrong opinions; (iv) if an opinion offends you, that’s not sufficient reason to suppress it; and (v) who, exactly, ought to be in charge of limiting the expression of unpopular or “offensive” opinions?

Van Norden calls the above line of reasoning “specious,” adding that it is rooted in “a naïve conception of rationality that [Mill] inherited from Enlightenment thinkers like René Descartes.” [Technically, Descartes influenced the Enlightenment, but was not an Enlightenment thinker, since he lived from 1596 to 1650, and the European Enlightenment was an 18th century thing.]

Bryan argues that “If you do have faith in a universal method of reasoning that everyone accepts, then the Millian defense of absolute free speech is sound,” but he very clearly states that there is no such thing as universal reason, so we should reject Mill’s argument. I think that Van Norden’s statement is ambiguous and that what he argues in the remainder of the NYT op-ed flatly contradicts his opening statement.

He writes: “I wish it were self-evident to everyone that we should not discriminate against people based on their sexual orientation, but the current vice president of the United States does not agree. I wish everyone agreed that it is irrational to deny the evidence that there was a mass shooting in Sandy Hook, but a syndicated radio talk show host can make a career out of arguing for the contrary.”

But the fact that Mike Pence does not agree with a given notion does not mean that the notion in question is not self-evident, it may simply be that Pence denies self-evident truths, either because he is too ignorant to see them, or because of bigotry, or political expediency. Similarly, a nutcase radio talk show host, syndicated or not, may deny empirical evidence all he wants, but that doesn’t mean that his denial is reasonable. At all.

Bryan understands why Mill, and Alexis de Tocqueville, made their argument. Mill was a strong proponent of women’s rights and an opponent of slavery, and he knew too well that many people found such topics offensive, resulting in what he famously termed a tyranny of the majority.

But, argues Van Norden, we are in a very different situation from 19th century England and America. We are witnessing the worsening of a scenario already described by the philosopher Herbert Marcuse back in 1965, when he wrote: “In endlessly dragging debates over the media, the stupid opinion is treated with the same respect as the intelligent one, the misinformed may talk as long as the informed, and propaganda rides along with education, truth with falsehood.”

This is quite obviously true, of course (or is it?). Only a foolish society would give “equal time” to the discussion of evolutionary theory and creation “science,” or to a climate researcher and a so-called “skeptic” of global warming, or a medical researcher and Jenny McCarthy. But setting aside that a lot of other cases, especially political opinions (as distinct from scientific theories) are not quite so easy to settle, what is the alternative? Mill wasn’t naive about how difficult it is for most people to wade through public controversies. He just thought that freedom of speech was the least of possible evils.

Marcuse famously advocated the outright suppression of right-wing perspectives, a position that, thankfully, Bryan does not endorse. Instead, he makes an intriguing proposal: to distinguish between free speech and just access: “access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole.”

But that comes perilously close to begging the question against Mill: on what criteria should we apportion the merit of different opinions? How do we figure out what is just? How do we measure the benefit of an opinion for the community as a whole? Recall that Van Norden denies that there is any such thing as universal reason. It follows that all such judgments are bound to be arbitrary, and therefore simply to reflect the will of the people who happen to be wielding power by virtue of controlling the limited resources Bryan is referring to. This may not be quite a tyranny of the majority, but it is still a tyranny (of the elite, perhaps?).

Let’s take a look at some of the specific examples Van Norden brings up. In 2004 one Nathaniel Abraham was fired by the Woods Hole Oceanographic Institution because he admitted to his employer that he did not believe in evolution. Correctly, Bryan asserts that Abraham has a right to his wacky opinion, but that Woods Hole has a right to fire him on the grounds that he holds such an opinion. But this has nothing to do with freedom of speech or institutional access: Woods Hole is a preeminent research laboratory that carries out a lot of work on evolution, so Abraham had simply admitted his incompetence to work there. It would be like NASA firing a flat-earth believer. Or a hospital firing a doctor who did not “believe” in vaccines.

The next example is more pertinent, but far less clear: Van Norden claims that a number of universities, including Columbia and NYU, should not have invited Charles Murray, the co-author of The Bell Curve: Intelligence and Class Structure in American Life, to speak on their campuses. Why? Because Murray’s notions are junk science. That is true, I think (for a variety of reasons, including those explained here and here), but there are two additional factors to consider. First off, “universities” don’t invite anyone; often it is specific faculty or student associations that do. And to bar invited speakers in either case amounts to an infringement of academic freedom or students’ rights. Second, I am of the opinion that a significant chunk of what goes on in a number of legitimate university departments is either questionable or downright junk (no, I will not mention names). But, again, I don’t get to decide which is which. I do get, however, to argue — in perfectly Millian fashion — in favor of or against certain programs, positions, claims, and so forth.

Bryan’s third example is the recent firing by ABC of their television star, Roseanne Barr, because of her racist public remarks. But that’s yet another situation altogether. Barr did not make her remarks on television, and she was fired from ABC because the network was (rightly, I think) embarrassed by her behavior, and feared a public backlash. Of course, had the episode happened, say, in the 1950s, ABC would have likely not moved a finger about it. I assume it is a rationally objective fact that we have made (some) improvements in our thinking about race and gender since then, but of course Van Norden cannot claim so, because he does not believe in universal reason.

Bryan mentions recent research in social psychology showing that if a falsehood is repeated, even when it is in order to debunk it, people are more likely to believe it. This is both true (maybe, since there is a replication crisis ongoing in that field) and worrisome, but is it — as Van Norden claims — reason to cheer MSNBC’s “Morning Joe” decision never again to invite Kellyanne Conway because of her bizarre notion of “alternative facts”? I don’t know. It is very unfortunate that someone like Conway is currently a high profile government official, but isn’t a journalist supposed to challenge that sort of notion, rather than suppress it? Besides, by way of similar actions MSNBC has now gathered the reputation (deservedly or not) of the left’s Fox, which makes their decision about Conway come across to many as naked partisanship. Is this really helpful to public discourse? I’m not so sure.

Bryan says that the “right to free speech is not the right to an audience,” and he is correct. But in philosophy we make a distinction between negative and positive rights. You may have, say, the negative right of being allowed to leave the country whenever you wish. But if things are such that you could never muster the means to actually leave, you do not have a corresponding positive right, and negative rights by themselves are largely useless. To pick a more concrete example, in the US (for now) women have a right to abortion. But such a right is meaningless if local state legislatures make it so difficult for abortion clinics to operate that for all practical purposes a woman in Texas or Alabama has to drive hundreds of miles, or even go out of state, to get an abortion. Ironically, it is a typical tactic of the right that whenever they cannot eliminate a negative right (like abortion, again, for now) they go after its positive counterpart, thus making it difficult or impossible for people to enjoy that right. The same goes for speech: if I have a “right” to it, but I am then systematically denied audiences by a small number of gatekeepers, I might as well shout into the void. And, again, who gets to make such decisions, and on what grounds, given that there is no universal reason?

Van Norden concludes his op-ed by stating: “These views [that he criticizes] are specious, and those who espouse them are, at best, ignorant, at worst, sophists,” calling people who hold those views “invincibly ignorant and intellectual hucksters.” It sounds to me like Bryan thinks he has good reasons to think that these people’s opinions are, in fact, wrong. I agree with his assessment. And so should any reasonable person, because reason isn’t a matter of personal opinion. There are standards of evidence and argument that have been worked out, across time and cultures, over the past two and a half millennia of philosophy and science, well before the European Enlightenment came about. For my part, I much prefer a society where we do our utmost so that more and more people are familiar with such standards and apply them properly, rather than one in which whoever happens to be in charge gets to decide which resources to apportion to whom. Call me an old-fashioned Millian, in that sense.

We are all invited to the conversation, and if you disagree you may be logophobic

“There is no greater evil one can suffer than to hate reasonable discourse.” (Socrates, in the Phaedo)

You may have noticed that these days it has become really difficult to have a reasonable conversation. With almost anyone. I’m not referring to the obvious culprits: Trumpists, purveyors of alternative facts, or those who dismiss critical journalism as “fake news.” I’m not even talking about the classical examples, from creationists to believers in all sorts of alternative medicine, the paranormal, UFOs, and so forth (I wrote a whole book about those). I’m talking about my own “tribe,” if you will: progressive, left-leaning liberals.

I have been one of those since I was young, convinced that the fascism that had gripped my own country, Italy, for the two decades leading to World War II was the true evil of the world. Yet, even back in high school I experienced the disturbing intolerance of certain quarters of the left, where any thought that did not align exactly with the orthodoxy of the moment was automatically branded as either “fascist” (thus emptying that word of meaning), or, at best, “qualunquista” (literally “one with random opinions”).

I profess cosmopolitanism, the notion — going back to the ancient Cynics and Stoics — that we are all human beings, endowed by that very fact with a degree of dignity that deserves respect and, most importantly, equal rights. I regularly donate to organizations that fight on behalf of civil rights, of immigrants, and of the freedom of the press. I think that women ought to be paid equally to men and ought to be afforded exactly the same educational and working opportunities. I think it monstrous to deny rights to lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. I am not religious, but I do not believe that religion is the root of all evil, nor do I believe that any particular religion is ethically worse than any other (with the exception of some cults like Scientology).

Yet, I have been told to go fuck myself, you transphobic bastard, simply because I tweeted — without comment — an article that explored some interesting, and to me rather puzzling, conflicts among segments of the feminist, lesbian, and transgender communities. According to this otherwise thoughtful and in-depth piece, my sexual aesthetic preference for slender women cannot but be political, which, obviously, makes me “fatphobic.” If I raise the issue of Islamic terrorism, in a reasonable and historically informed fashion, I risk being branded an Islamophobe (though, funny enough, if I mention Christian terrorism linked to abortion clinics or white supremacy, I get plenty of nods of approval within my tribe). If I voice the opinion that while the Israelis are guilty of unlawful occupation of Palestinian territory and of slaughtering unarmed innocents, it is also true that too many Palestinians are intolerant and have systematically undermined their own case, then I am, of course, a fascist (again) Islamophobe pig. And so on and so forth.

And then there is this rather new, entirely postmodernist malady: the idea that unless someone has actually experienced something (be it colonialism, gender discrimination, racism, and so forth) then one is not entitled to express an opinion about it, especially if one happens to be a “privileged” white male.

Let me be clear about the latter two points. First, yes, I amply recognize my privilege. I just don’t call it that, because it’s not helpful. I happen to think that everyone, regardless of gender or ethnicity, ought to have a somewhat nurturing family, a decent education, job opportunities, and be safe from discrimination and police violence, like I have been throughout my life. It’s not a fucking privilege, it’s the way all human beings should live. Period. And that’s going to happen only if we stand united to fight discrimination and injustice, rather than wasting energy and time shouting down some of our own allies, shaming them for having gotten lucky at the gene-cultural lottery. That sort of shouting — especially anonymously on social media — may make you feel virtuous, but it is positively damaging to the causes you allegedly care about.

Second, yes, it is certainly the case that people who directly experience discrimination are in a unique position to talk about it, and that they should be afforded a primary place in public discourse in virtue of that very fact. But we also know from plenty of research in social psychology (see here and here, for example) that first person experience is extremely unreliable, as people filter such experiences through all sorts of cognitive biases that inevitably distort their perceptions of things. That is why we need the rest of society to also come to the discussion table. Social progress is made possible by the constructive, and yes, even critical, dialogue among many voices, the by now increasingly old fashioned value of pluralism.

At this point, some readers will have already put down this essay and rushed to social media to begin a campaign of shaming, outraged by the few paragraphs above. So I can keep talking to those who have stuck with it so far. Those who have left the conversation, I maintain, suffer from a peculiar disease described by philosopher Nicholas Shackel in a delightful paper published back in 2005 (Metaphilosophy 36: 295–320), entitled “The vacuity of postmodernist methodology: logophobia.”

The Greek roots of the term are obvious: “logos” means “thought,” particularly rational thought, and “phobos” means “fear.” Logophobia is defined by Shackel as, “a skeptical doctrine about rationality … [where] rationality cannot be an objective constraint on us but is just whatever we make it, and what we make it depends on what we value.” He adds, “[opponents] are held to disguise their self‐interested construction of rationality behind a metaphysically inflated view of rationality in which Reason with a capital R is supposed to transcend the merely empirical selves of rational beings.” In other words, logophobics claim that since reason cannot possibly solve every problem, one may as well dismiss reason altogether.

Shackel’s paper is actually a serious, and at times rather technical, critique of the modus operandi of postmodernist (and deconstructionist, and anti‐rationalist feminist) authors such as Michel Foucault, Jacques Derrida, Jean‐Francois Lyotard, David Bloor, Richard Rorty and Stanley Fish, to mention just a few of the major offenders (though I personally make distinctions of degree among the named people). At one point I added to the list a number of pseudoscientists, such as the Intelligent Design proponents Bill Dembski and Michael Behe, alternative medicine “gurus” such as Deepak Chopra, or vaccination‐deniers such as Jenny McCarthy. And now, to my deep regret, it is becoming painfully obvious that both the extreme right (which, at the moment, appears to include most of the Republican party in the US), and certain increasingly vocal groups of the left, have been struck by the disease as well.

According to Shackel, logophobics have developed an arsenal of strategies to obfuscate clear thinking, which they deploy whenever pressed by a skeptic. For instance, consider Foucault’s classic thesis that there is no such thing as scientific truth, because truth is a reflection of local socioeconomic and political power: “The essential political problem for the intellectual is not to criticise the ideological contents supposedly linked to science, or to ensure that his own scientific practice is accompanied by a correct ideology, but that of ascertaining the possibility of constituting a new politics of truth.” This is apparently strong stuff, but it becomes a much milder statement when Foucault later tells us what he means by “truth”: “‘Truth’ is to be understood as a system of ordered procedures for the production, regulation, distribution, circulation and operation of statements.” Really? And here I was thinking that truth means as accurate a description of a subject‐independent reality as possible. Silly me.

Shackel calls this “troll truism,” which he defines as, “a mildly ambiguous statement by which an exciting falsehood may trade on a trivial truth.” It is a particular form of so‐called Humpty‐Dumptying, named after the character in Lewis Carroll’s Through the Looking Glass. Humpty Dumpty famously tells Alice, after she points out that he is arbitrarily switching word meanings in the course of their discussion: “When I use a word, it means just what I choose it to mean — neither more nor less.”

Now, to be fair, Foucault did have a point: science is a social activity, and scientists are human beings, prone to the same cognitive and cultural biases as everyone else. Also, some science has been in thrall to economic or political agendas, or both. Similarly, when Amia Srinivasan, in the article linked above, claims that even our sexual preferences have a political dimension, the thought is neither absurd nor easy to dismiss. But the political, or the economic, are not the beginning and end of everything we do or think. There are (Zeus forbid!) biological instincts and constraints on our behavior, there are matters of taste, and there are simple reasonable disagreements. To reduce everything to the political is to commit the Marxist mistake, except that in the latter case everything reduces to economic struggle instead. No, the human experience is just too varied and complex to be explained away by mono-factorial accounts, no matter how good the intentions of those who push for such accounts.

Logophobia, I maintain, is a broad condition that can strike adult humans of all ages, genders and cultural backgrounds, especially when they have never been exposed to the basics of critical reasoning, or when they have grown up in thrall to a powerful ideological system. And no, being a scientist, or a philosopher, does not make you immune, unfortunately. The effects of the disease can probably never be eliminated, as we all suffer from it, to an extent. But they can be reduced by means of sound education, although that requires painful effort on the part of educators and subjects alike. Once the subject is past middle school, it becomes increasingly difficult, though not quite impossible, to overcome the malady. But it's important to try, as huge amounts of financial resources and time are wasted as a result. Occasionally, lives are lost as a direct outcome of logophobia, especially when the logophobic is a politician with the power to start a war, or a celebrity pushing an anti-medical health practice. And important political agendas may unwittingly be delayed or even undermined when logophobic activists manage to shut down discourse they find threatening, insulting, or "fascistic."

Needless to say, fighting the spread of logophobia is a primary responsibility of every reasonable thinking person, practicing scientist, and professional philosopher, despite the highly unfavourable odds of defeating it — which is why a thorough knowledge of the disease and of its symptoms is so crucial. Spread the word, and watch out for logophobics near you! (And beware of becoming one yourself…)

Five big philosophical questions: my modest take


An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled "5 great unsolved philosophical questions." How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim, I am simply offering my informed opinion about them, in my dual role as philosopher and scientist. Of course, I'm also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many "philosophies of" as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not given us, yet, an ultimate theory describing the basic building blocks of existence, and we don't know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called "strong emergence" is impossible, though at the moment I'm betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon rotates around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them, we can only make probabilistic statements. So, in the strict sense, no, we can't know anything (outside of logical-mathematical truths). But this isn't worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid in checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence, it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal "worm."

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I'm a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functions of our physiology and metabolism. Death is a natural phenomenon, everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don't actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly a reflection of contemporary debates about justice and inequality, more than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don't be bothered by the use of the word "soul," which can simply be replaced with mind, if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

On arrogance (with notes on souls and cosmic consciousness)

The NYAS panel, left to right: Emily Esfahani Smith, yours truly, Michael Ruse, host Steve Paulson, and Jay Lombard

Last week I participated in an interesting panel discussion at the New York Academy of Science, on "Seeking the why of our existence." We were supposed to talk about meaning and purpose. I am usually somewhat wary of these sorts of panels, as the topic is often vague and open to far too much interpretation, and you never know what the other panelists' take is going to be until you are on stage and find yourself thinking: "how do I respond to that??"

Nevertheless, I accepted, partly at the prospect of enjoying the stunning view of Manhattan from n. 7 World Trade Center, where the Academy is located, partly because my esteemed colleague Michael Ruse was also on the panel, and partly because, well, how bad could it possibly be? Joining Michael and me were Jay Lombard, MD, Clinical Director of Neuroscience at LifeSpan Medicine; creator, co-founder, and Chief Scientific Officer at Genomind; author of "The Mind of God: Neuroscience, Faith, and a Search for the Soul." And Emily Esfahani Smith, MAPP, writer, journalist, and author of "The Power of Meaning: Crafting a Life That Matters." The whole thing was moderated by journalist Steve Paulson, Executive Producer of Wisconsin Public Radio's "To the Best of Our Knowledge."

Sure enough, it was not bad at all, it actually turned out to be a pleasant evening with good questions from the audience and very able moderation by Paulson. Still, I managed to get called "arrogant" twice, by two fellow panelists (and a third time by a member of the audience), one of whom was Michael himself! (If you know Michael, genial Brit that he is, you may think it odd that he would hurl such an insult at someone else.) My Stoic training has taught me not to get offended, so I responded with humor rather than resentment. But the whole episode made me think about why I so often receive such a label. Immediately discarding the admittedly possible answer that I really am arrogant, I figured something else must be afoot.

Let’s begin with the basics, that is, with the dictionary definition of arrogant:

1. Making claims or pretensions to superior importance or rights; overbearingly assuming; insolently proud.
2. Characterized by or proceeding from arrogance, or a sense of superiority, self-importance, or entitlement.

Now let me tell you why first Jay Lombard, MD, and then Michael himself, thought it appropriate to use the epithet with me. You will be the judge of whether they were justified.

Lombard, pretty early on in the evening, said that — as a neuroscientist — he thinks it is obvious that souls exist. I did a double take, shook my head, and asked what he meant by that. He was ambivalent. On the one hand, it seems, he meant what most people mean by that term: some sort of vaguely defined, incorporeal thing that survives our bodily death and decay, and that in some way carries our “essence” to whatever “next stage” of existence. But when I pressed him, he said that the soul was “the same as” the self.

Well, for one, those two definitions are not at all compatible, unless by “self” one means an incorporeal thing that survives our bodily death and decay — and most people, especially cognitive scientists, definitely do not mean that when they use the word. Moreover, as I pointed out to Lombard, the “self” is, at best, a dynamic “bundle of sensations,” as David Hume so perceptively described it back in the 18th century. And modern cognitive science is squarely behind this notion, as opposed to some Platonic conception of the self as being an unchangeable essence of who we “really” are.

Moreover, I told my interlocutor that he was engaging in a bit of bait and switch: if by "soul" he truly meant the self, why was he using a word so metaphysically and theologically loaded? If, conversely, he meant the above mentioned immaterial essence, then I was pretty confident that there is no such thing. That, of course, is when I got the label of "arrogant."

Arguably, I should have been more careful with my language. I should have said that there is no scientific evidence for the existence of souls (defined as above), and that, moreover, there is no particular reason to think they exist. Hence, it is perfectly reasonable to take the provisional position that they don’t, until proven wrong. To invoke Hume again, a reasonable person proportions her beliefs to the evidence. No evidence, no justification for belief. It’s as Bayesian as that. But these qualifications should have been obvious from the context of the conversation, with no need to spell them out. Lombard, instead of seeking clarifications of my position (as I had done of his) chose to interpret it in the least charitable way possible, a good rhetorical move, perhaps, but a bad philosophical one.
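The Humean-Bayesian point about proportioning belief to evidence can be made concrete with a worked example. The sketch below is mine, not from the post; the probabilities are illustrative assumptions chosen only to show the mechanics of Bayes' theorem, namely that a low-prior claim (like the existence of immaterial souls) stays improbable unless the evidence is substantially more likely under the claim than under its negation.

```python
# A minimal sketch of "proportioning belief to evidence" via Bayes' theorem.
# All numbers below are illustrative assumptions, not data from any study.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E): the posterior probability of hypothesis H after evidence E."""
    numerator = p_evidence_given_h * prior
    # Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Hypothetical low-prior claim. If the "evidence" is equally likely whether or
# not the claim is true, the posterior simply equals the prior: no evidential
# support, no change in belief.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.5,
                         p_evidence_given_not_h=0.5)
print(round(posterior, 2))  # → 0.01 (same as the prior)

# Only evidence that discriminates between the claim and its negation
# raises the posterior, and even then a low prior keeps it modest.
posterior_strong = bayes_update(prior=0.01,
                                p_evidence_given_h=0.9,
                                p_evidence_given_not_h=0.1)
print(round(posterior_strong, 3))  # → 0.083
```

The asymmetry is the point: "no evidence" leaves a low prior untouched, which is why the provisional position that souls do not exist is the epistemically modest one, not the arrogant one.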

The discussion went off in a number of other directions, and then I got a second "that's arrogant" accusation, near the end of the evening, this time by Michael Ruse. I do not, unfortunately, recall the precise wording of that bit of the conversation, but what I was arguing was that human mental powers — including consciousness — are of a degree the like of which is nowhere to be found in the animal world. Again, I probably should have been very careful to clarify that what I meant by that was that the quantitative differences between us and every other living organism are such that they essentially amount to qualitative differences, not that they are, in fact, qualitative. But Michael — a philosopher! — decided to use the same rhetorical strategy adopted by Lombard, rather than actually engage in a conversation. Scoring points, apparently, is more essential than understanding.

Now, my position on this is far from radical or unsubstantiated, and is very well defended, for instance, by one of the scientists who has actually spent decades of his career studying cognition in humans, as well as its evolution: Kevin Laland, the author of Darwin's Unfinished Symphony: How Culture Made the Human Mind, the book we are going to tackle next in our book club series. He has tons of evidence that licenses the conclusion that human beings are incredibly different from anything else on earth, when it comes to the mind.

I know that in these times of revived interest in panpsychism it is not cool to say that humans are special, even though researchers who actually work on these issues agree that they are (in the so-quantitative-that-it-becomes-qualitative sense just described). Hell, some people even think that bacteria and plants are conscious, though of course there is not a shred of evidence that they are (invoking Mr. Hume again). For my part, I simply think that one ought to be careful about making those claims, if nothing else because vegetarians and vegans are going to be really upset. (I'm not kidding: I have vegetarian friends who are very concerned by the possibility that the carrots they eat may be sentient.)

So I fully expected a negative reaction from Lombard, but not from Michael! And yet he accused me of going "Cartesian," as in assuming that animals are simply robots, while only humans have the divine spark. This would be comical except for the fact that Michael ought to have known better. We have frequented each other, and known about each other's work, and in fact even collaborated on a number of projects, for literally decades. He knows I am an atheist (which means I don't believe in divine sparks of any kind), and that I am an evolutionary biologist (which means that I don't believe in any sort of qualitative exceptionality of Homo sapiens). And yet, I was the arrogant one because I stated the obvious, scientifically grounded, reality, while he got away with waxing poetic about the entirely implausible, and certainly completely lacking in evidence, notion that rocks and atoms have degrees of consciousness!

Back to the definition of arrogance: it seems to me pretty clear that I wasn't "making claims or pretensions to superior importance or rights," was not "overbearingly assuming," and certainly not "insolently proud." But I was reminding the good doctor Lombard, as well as my colleague and endowed chair professor of philosophy, that honest intellectualism is bound by reason and evidence. If there is anything that could reasonably qualify as arrogant, it is precisely what both Lombard and Ruse were doing: making sweeping ontological claims, i.e., claims about what is real, without a shred of empirical evidence to back them up. This, after all, was a panel discussion held at the NY Academy of Science, not of science fiction, fantasy, or wild speculation. It is a disservice to the public to lend credence — with impressive titles such as MD and PhD — to notions that are speculative at best, and incoherent or false at worst.

Do I know for a fact that atoms are not conscious, or that souls do not exist? Nope. But in both cases the burden of proof is squarely on the shoulders of those who claim such things are real. It is not arrogant to proportion one's beliefs to the evidence. On the contrary, it is the only epistemically modest thing to do.

The Greeks were right about love

An Ancient Greek symposium

Love is a fundamental aspect of the human experience. When people are incapable of it we think there is something seriously wrong with them, something that makes them almost inhuman. And yet there are many different types of love, for one’s partner, one’s children, one’s parents, even one’s nation or God. Each carries with it its own version of pains and pleasures. Yet, our vocabulary may not be well suited to such an important phenomenon. When I first came to the United States I was struck by the fact that the English language, the tongue of Shakespeare, has only one word for the notion. In Italian, for instance, I would never say to my daughter “I love you,” it would sound weird and incestuous. And that’s only one problem.

Even if we limit ourselves to "romantic" love, i.e., to love for one's partner or spouse, the modern notion is confused compared to both what sound psychology and modern neuroscience tell us. A classical study on the progression of love in human beings found major differences between the neural underpinnings of intense romantic love (see also this, more recent, study), of the type we feel at the beginning of a relationship, and the more subdued but deeper attachment (what psychologists often call companionate love) that we may or may not develop for a partner with whom we were initially "in love." (If you are curious about the details, the first phase activates mostly the striatum, part of the nucleus accumbens, the pleasure center of the brain; the second one activates also the insula, which the brain uses to assign value to our activities so that we continue to engage in them.)

Insisting on using the same word to describe these two conditions confuses two very different biological and psychological (and hence also social and cultural) phenomena. More crucially, expecting — both individually and as a society — that once two people embark on the first phase they will necessarily move to the second, which will itself automatically last a lifetime (the famous "and they lived happily ever after" of Disney-style tales) is absurd and arguably the cause of a lot of unnecessary pain and suffering.

The modern idea of romantic love probably traces back to the medieval concept of chivalry, made popular as well as more specific by French, Italian and Spanish troubadours, and finally canonized for Western audiences during the Romantic era that followed, as a backlash, the Enlightenment. However, it is arguably the Ancient Greeks and Romans who got it right, or at least closer to the mark. Interestingly, the Greeks used three different words that translate to "love," none of which actually corresponds to our (confused) conception of romantic love. (A good and short introduction can be found in this video.)

To begin with, there is agápē, which means love that comes with an aspect of charity, in the sense of benevolence, embedded into it. This is the sort of love we have for our children, but also for our spouse or partner. Early Christians adopted the term to mean the unconditional love that God has for his children. As Thomas Aquinas put it, agápē means “to will the good of another” (in Summa Theologiae I-II, 26, 4).

Second, we have érōs, which in part does mean, as the modern word “erotic” indicates, sexual attraction for someone (the sort of stuff that stimulated the striatum in the neuro-studies linked above). However, Plato for one expanded the concept to indicate, after maturation and contemplation, love for beauty itself. This is the origin of the phrase “Platonic love,” which does not mean love without sex, necessarily, but rather love of the ideal Form of Beauty itself. This may begin with erotic attraction, but eventually transcends it.

In the Symposium we are treated to a lesson on love by none other than Socrates, who says that érōs allows the soul to recall knowledge of beauty and thus to arrive at an understanding of spiritual truth. In this sense, then, both lovers and philosophers are inspired by érōs.

Finally, we have philía, which describes a sense of affection and regard among equals. Aristotle uses this word to characterize love between friends, for family members, or of community. It is a virtuous type of love, often cast as of a brotherly sort, but with a component of enjoyment.

Notice that what distinguishes the three types of love recognized by the Greeks is not the object of love, but rather the modality of the sentiment. Take the specific instance of a long-term “romantic” relationship. Ideally, what one wants in that case is a particular combination of the three modes: we unconditionally (i.e., not because it is to our advantage) want the good of the other person (agápē); we want not just physical attraction, but a deeper appreciation of the “beauty” of our partner, in terms of his or her character traits (érōs); and we want to be their friends and to enjoy their company for its own sake (philía).

What distinguishes love of a partner from love of our children, friends, country or God, then, is the specific modulation of all three types of Greek "love." Notice that none of them obviously maps to the initial phase of romantic love in the modern conception. That's because, arguably, that phase is not, in fact, love itself, but (potentially) preliminary to love. If it lasts, and if it matures, then it becomes love. If this is even approximately correct, then a lot of what our literature, movies and advertisements describe as love is — rightly conceived — no such thing.


Postscriptum: various readers have commented on the fact that I have left out a fourth Greek word for love, storgē. That’s because it was actually rarely used in ancient texts, but still, it does help add even more nuance to the concept. Storgē means affection, especially (but not only) of the kind one has toward parents and children, and includes a component of empathy of the type felt naturally toward one’s children. Storgē was also used to indicate love for a country, or even a sports team, and — interestingly — in situations when one has to put up with unpleasant things, as in the oxymoronic phrase “love for a tyrant.”

Book Club: The Edge of Reason 6, the five characteristics of rational discourse

The Pioneer plaque

Julian Baggini's The Edge of Reason, of which we have so far examined the first five chapters, ends its second part with a discussion of the distinguishing characteristics of objective rational discourse. He begins it by suggesting that the problem with the classic (Platonic, really) view of reason is that it treats reason as a heteronomous entity, something coming from the outside, imposed on us by eternal laws of logic. Instead, human reason is, well, human, i.e., autonomous, shaped from the inside, shaped by the characteristics and limitations of what it means to be human in the first place.

That said, Julian immediately qualifies, reason does have a component of heteronomy, in that it cannot simply be a self-serving instrument entirely detached from how the world actually is, but rather has to account for the brute facts of external reality. Reason, he says, is nothing if it doesn't aspire to objectivity, and this brings him to propose a definition of rational argument: "the giving of objective reasons for belief."

However, if you recall our previous discussions of Baggini’s book, you will immediately notice a tension here: he has been arguing for a somewhat deflated, human, view of reason, and now he’s going to ask for objectivity? Well, yes, but he puts forth a deflated view of objectivity itself, one that he derives from the philosopher Thomas Nagel.

Nagel wrote a famous book, back in 1986, in which he argued that objectivity is often conceived in terms that he summarized with the oxymoronic phrase “the view from nowhere.” Science in particular aspires to such a view, which both Nagel and Baggini see as hopelessly misguided.

Julian’s intriguing example of an attempt to achieve a view from nowhere is the famous plaques that were put onboard the two Pioneer spacecraft launched in the early ‘70s, and which are now outside the confines of the solar system (see top figure). The plaques were designed by astronomers Carl Sagan and Frank Drake as a symbolic attempt to communicate with possible alien beings. (Symbolic because there is pretty much no chance in hell that the Pioneers will ever actually reach another habitable world, given their speed and their cosmic trajectories.)

The plaques convey stylized graphic information about the solar system and humanity itself, attempting to provide reading keys for the aliens, for instance by representing the position of the solar system with reference to fourteen nearby pulsar stars, as well as by drawing the symbol of the hydrogen atom, the most abundant element in the universe.

But for all that, the plaque is going to be very difficult to decipher for a non-human intelligence. For one thing, of course, it assumes that the aliens are capable of perceiving visual information, which is far from obviously true. Notice also the use of an arrow to indicate the trajectory of the Pioneer probe itself, as well as the male human being drawn with a raised hand and an open palm, in the universal sign of greeting. Universal, of course, for human beings. Just like the arrow indicates direction for us, because our prehistory included hunters who used arrows to catch their prey. There is no reason whatsoever to imagine that an alien culture would recognize either one of these human graphic conventions. One of the reasons it will be difficult to communicate with alien intelligence is precisely because no species truly speaks a universal language. There is no view from nowhere.

As Julian summarizes the problem: “Even if the truths we latch on to are indeed objective, they are always framed within our human ways of understanding, by our language and our senses. … Truth has to be seen from some perspective or other, even if it is in itself purely objective.”

Nagel’s view of knowledge is nuanced. There is a continuum from the entirely subjective (I know that I like chocolate) to the purely objective (1+1=2), with much of interest to us lying somewhere in the middle. The more our knowledge of things is linked to our particular framework, the more it depends on the idiosyncrasies of human senses and reasoning, the more it veers towards the subjective, however much it aspires to reach the other end of the spectrum.

Baggini points out that it is a mistake to confuse objectivity with truth. I can report a subjective fact in a truthful manner (it is true that I like chocolate), and I can state an objective fact untruthfully (the distance between the Sun and the Earth is 300 million kilometers — which it isn’t). That’s why talk of “objective truth” is not redundant.

Julian goes on to say that “rationality and objectivity are usually seen as natural bedfellows. My suggestion is that their link is more intimate than this. To offer a rational argument just is to provide objective reasons for belief, reasons which can include both evidence and argumentative moves.” Notice the inclusion of argumentative moves, not just evidence. The “facts” never speak for themselves, they need to be framed within a given argument, and that argument can be rational, and yet fail.

Which brings him to discuss what he sees as the five characteristics of objective reasons and arguments: comprehensibility, assessability, defeasibility, interest-neutrality, and compulsion (that is, being compelling).

Let’s begin with comprehensibility. The quintessential example of subjective judgment is art: I can tell you that I like, say, Van Gogh’s Starry Night, and you can reply that you don’t. But we can then move from simple statements of likes and dislikes to a discussion of aesthetics if I begin to tell you why I like the painting, describing its vivid colors and evocative shapes, and so forth. My argument may not convince you, but it may prompt you to take a second look. Importantly, by giving you reasons for my subjective aesthetic judgment I have made a (small) move toward the objective end of Nagel’s spectrum, because I have made comprehensible to you my reasons for liking Starry Night.

Van Gogh’s Starry Night

For Julian, “an argument that is in principle comprehensible by any rational agent is more rational than one that is comprehensible only by certain types of rational agent,” even though any given argument may not be comprehensible by just any rational agent. For instance, in the case of art, it helps to know something about its history to better appreciate how to situate certain paintings that may at first appear strange and puzzling.

For something to be objective, though, it has also to be assessable. If others do not have any way to judge the truth of your assertions, then those assertions are hopelessly stuck right at the subjective end of Nagel’s spectrum.

Baggini notes that “whenever people appeal to inner convictions, esoteric revelations or diktats from authorities they are evading objective scrutiny by keeping key elements of their justification hidden.” But he acknowledges a kind of circularity in his reasoning: he invokes the concept of a rational agent in explaining what rationality is. Yet this sort of circularity is not vicious, since any definition turns out to be, if pushed far enough, circular. He is just saying that “a rational agent is one who can understand and assess objective arguments, and an objective argument is rational if it can be understood and assessed. These terms all hang together.”

After a brief discussion of several attempts to make sense of rational argument, from David Hume’s distinction between “matters of facts and relations of ideas,” to the logical positivists’ verifiability criterion, to Popper’s falsificationism, Julian says: “if an idea is too vague it will be dismissed as woolly and hand-waving. Too precise, however, and the logic-choppers will be out to unpick its contradictions and inconsistencies. As Aristotle’s immortal adage states, ‘It is the mark of the trained mind never to expect more precision in the treatment of any subject than the nature of that subject permits’ — nor less, we might add. The Goldilocks state of philosophy is to be precise enough to be saying something substantive but not so precise as to ride roughshod over the complexities and ambiguities of the real world.”

Which leads us to a discussion of the third criterion: defeasibility. One of the reasons Popper’s famous criterion of falsifiability for demarcating science from non-science did not work is that “a rational argument is always in principle defeasible — open to revision or rejection — by public criteria of argument and evidence,” and in turn “defeasibility is a property of all propositions with any degree of objectivity, however small.” The history of science is littered with theories that at some point ran into apparent empirical falsification, only to be rescued by scientists through the adoption of suitably modified versions of the original theory. The Copernican view of the solar system did not work particularly well because Copernicus had assumed the orbits of the planets to be circular. When Kepler made the modification of treating them as elliptical, the calculations matched the empirical evidence far better.

The fourth criterion is interest neutrality. Here Baggini helps himself to a thought experiment. Imagine a philosopher who is captured by a super-villain and is presented with the following ultimatum: either believe, genuinely believe, that 1+1=3, or I’ll destroy the world. (The super-villain can check by way of brain imaging whether the philosopher really believes something or not.)

This would seem to be a case of a belief that meets our first three criteria: it is assessable, comprehensible, and defeasible. And yet, it now seems that the philosopher has objective reasons to believe something false.

The way out is that we need to distinguish between two modes of rationality: in the service of an end, or as an end in itself. The super-villain’s threat “does not provide a rational argument to believe that 1+1=3, but a rational argument why it is prudent to believe 1+1=3.” There is a huge difference between the two. Julian calls one practical rationality (yeah, I’ll try to believe that 1+1=3 if it saves the world) and the other epistemic rationality (in reality, 1+1≠3). Both require assessability, comprehensibility, and defeasibility, but epistemic rationality also requires interest-neutrality, which the super-villain vs the philosopher case clearly lacks.

Importantly, practical rationality rests on epistemic rationality, as an agent, “in order to make the right decision, must assess the evidence in an interest-neutral way, and only then decide what she ought to do in order to serve the interests she takes to be most important.” It follows that it may, in some cases, be practically rational to believe something that is not epistemically rational.

Baggini acknowledges that these days there is a lot of skepticism about the very possibility of interest-neutral rationality. He mentions the sort of postmodern criticism advanced by the likes of the philosopher Michel Foucault, for instance, the idea that claims to knowledge are always connected to the striving for power or the serving of particular interests. But he may as well have also cited the sort of research conducted by contemporary social psychologists like Jonathan Haidt, which highlights motivated reasoning and rationalization over rationality.

But Julian makes a good argument that — pace both Foucault and Haidt (both of whom do make good points) — if one insists on arguing that all rationality is practical / motivated and not interest-neutral one ends up in absurdity (and, I would add, in self-defeat).

He mentions an interview he conducted with radical postmodern philosopher Luce Irigaray, who “notoriously suggested that perhaps even E = mc^2 is a ‘sexed equation,’ expressing masculine dominance. Why? Because ‘it privileges the speed of light over other speeds that are vitally necessary to us.’” I hope you appreciate the ridiculousness of this, without the need of further explanation.

Baggini distinguishes between the claim that there is no such thing as a value-free practice of science (true) from the stronger claim that there are no value-free scientific claims (false, E really does equal mc^2). Another example of this is provided by philosopher Tim Lewens, who said that Marx and Engels were right when they argued that Darwin’s thought was “steeped in the industrial capitalist milieu.” It does not follow at all, however, that the Darwinian theory of evolution is not, in fact, a good account of why we see such a bewildering variety of living organisms on our planet.

Finally, we get to the fifth element of a rational claim: compulsion. “Turned over and examined on all sides, any rational agent who understands the argument should find herself feeling forced – or at least strongly pushed – to accept the conclusion, whether she likes it or not.”

The case of believing that 1+1=2 is an obvious one. Baggini also says that a rational agent should feel compelled to accept the existence of a causal link between smoking and cancer, on the basis of the available evidence. I would add the theory of evolution, climate change, and the fact that vaccines do not cause autism, among many others.

Of course, it is painfully true that not everyone accepts rational arguments (remember that an argument can be rational and still wrong, by the way). To which Julian sensibly replies:

“If you can see that an argument is assessable by all, and that assessment has not raised any major problems with it; if you also understand it; and if you can see that it does not require you to share any particular agent’s interests to accept it: such an argument must carry with it a certain degree of force. There is a sense here in which there is simply nothing left for a rational agent to say to someone who claims to have followed all these steps but is still not convinced.”

As I often want to tell some of my students (but I don’t because I’m too polite and because it would be counterproductive): I can explain it to you, but I cannot understand it for you…

Baggini points out that the failure to translate the rational force of an argument into the psychological force of accepting and acting on it is particularly evident in cases of ethics. For instance, many people readily acknowledge that vegetarians have the better moral argument, and yet refuse to follow through and switch to a vegetarian diet (I include myself here, as I am a pescatarian, and sometimes even just a reducetarian).

After having discussed his five criteria for rational arguments, Julian turns to what he calls the boundaries of rationality. His approach can be applied, he suggests, to common cases where people think they are providing rational arguments, but in fact fail to do so, as in instances of reasoning relying on anecdotal evidence, or of claims based on mystical insight.

Take the case of someone who says that homeopathy works because it has worked for him and for several people he knows. Baggini’s analysis goes like this:

“We can see why these reasons deserve to be considered as attempts to provide a rational case for the efficacy of homeopathy. They appear to be comprehensible and assessable, and in turn defeasible. Those who offer them also see them as interest-neutral and compelling. We should reject them, however, because when we examine these supposed characteristics more carefully, they are not present to a sufficient degree. Assessability is the key here.”

Julian distinguishes between arguments that are rational or irrational (their mode), and arguments that are good or bad (their quality), and further adds that some arguments are simply non-rational, as in when people say that they have acquired special insights into the nature of the world by virtue of taking LSD. Mystical experiences do provide reasons for belief, since we believe all sorts of things as a result of direct experience. But they don’t provide rational reasons for belief.

In the section of this chapter entitled “Rational catholicism,” Baggini reiterates his idea that good (human) reasoning requires judgment, but again clarifies that “good judgement is much more than just opinion, and something less than the mere following of logical rules.” He uses this to elaborate on his take about why philosophy, unlike science, does not lead to agreement about the issues being discussed: “Philosophy relies entirely on rationality and nothing but [i.e., empirical evidence does not directly enter into it, unlike in science]. This involves a high degree of commitment to the rigors of argument but also, ultimately, an acceptance that rational argument does not lead linearly to only one answer, since you cannot take judgement away from rationality.” I think he is partially correct here, but see my full treatment of why philosophy makes progress in a way different from science in my book devoted to that topic.

The last section of the chapter is a call for ending the “truth wars”: “[there now is] a stand-off between what Bernard Williams called the ‘deniers’ — those who deny that there is such a thing as ‘the truth’ that reason aims at — and the ‘party of common sense,’ those who claim that the truth really is out there. Williams wrote that ‘the deniers and the party of common sense, with their respective styles of philosophy, pass each other by.’”

But there is a greater cause that should be common to both parties: “that greater cause is a commitment to reason, no matter what reservations some may have about the history, use and connotations of that term. Despite their apparent differences, it should be obvious that both ‘deniers’ and the ‘party of common sense’ share something like the thin conception of reason and rationality that I have been defending.”

To keep arguing against each other in the face of post-truth and alternative facts would be to end up in the tragicomic situation of the strife among the People’s Front of Judea, the Judean People’s Front, and the Judean Popular People’s Front…

Book Club: The Edge of Reason 5, the challenge of psychology

Let us continue our in-depth discussion of Julian Baggini’s The Edge of Reason, a book that aims, in a sense, at striking a balance between the Scylla of scientistic rationalism and the Charybdis of anti-rational relativism. Chapter 5 concerns what Julian calls “the challenge of psychology,” the idea that since much of our thinking is unconscious, we are not really rational beings so much as rationalizing ones.

The chapter begins with a short introduction to the famous trolley dilemma, introduced by philosopher Philippa Foot as a tool to bring out our moral intuitions. I will not summarize the thought experiment, since it is well known. Baggini says that it is obvious that when many people “go consequentialist” in one version of the dilemma, and “Kantian” in another, this is because different psychological intuitions, not any explicit moral reasoning, are at play. Which immediately brings him to Daniel Kahneman’s famous distinction between “System 1” and “System 2” reasoning: the version of the dilemma that involves a more personal interaction with others is likely to trigger our emotional responses (System 1), while the impersonal version activates our thinking in terms of large numbers and consequences (System 2).

The problem, of course, is that it may be difficult, philosophically speaking, to make sense of one’s diverging reactions to the different situations posed by the trolley dilemma: “if asked why we should not push the person, we don’t say, ‘I don’t know, it just feels wrong.’ Rather, we come up with various rational justifications, such as the idea that it is wrong to use a person as a means to an end — even when this is just what we were prepared to do in the lever case.”

Kahneman himself seems pretty pessimistic about the sort of inference about human reasoning that we should make from his research: “when asked if his 45 years of study had changed the way that he makes decisions, [Kahneman] had to reply, ‘They haven’t really, very little, because System 1, the intuitive system, the fast thinking, is really quite immune to change. Most of us just go to our graves with the same perceptual system we were born with.’”

Setting aside that even the interviewer had a hard time taking Kahneman’s words at face value, Baggini says “not so fast,” so to speak. He points out that System 1 is an “enemy of reason” only if we conceptualize reason as identical to formal logic, which, he has been at pains to argue in the previous chapters, is far too narrow a conception.

Julian maintains that the sort of “gut feelings” we sometimes have, especially, but not only, when it comes to moral situations, are in fact the result of quick heuristics embedded into System 1: “Heuristics are cognitive shortcuts, and the key is that they wouldn’t have evolved if they didn’t work more often than not. The problem is that they are so deep rooted that we often find ourselves using them even when we don’t need a quick, snappy solution but cool, calm reasoning.”

Julian seems to hint, in the passage above, that these System 1-based heuristics are the result of biologically rooted instincts, and surely in part that is the case. But I don’t see why they cannot also be the outcome of accumulated experiences, and more likely a deeply intertwined combination of both.

Baggini goes on to suggest that it isn’t at all obvious — as utilitarians, or Kantian deontologists, would argue — that moral questions ought to be analyzed solely on the basis of “cold” (i.e., impartial) reason. The most obvious case, he maintains, is that of parental love. As parents we are partial to our children, and given a choice between intervening on behalf of our child or on behalf of a stranger’s child, we do not hesitate and choose the former. And rightly so, says Julian, as the world wouldn’t likely be a better place if everyone treated their kids as random members of the population. That, of course, generates a tension between “local” ethics (i.e., our personal moral decisions) and “universal” ethics (what we should do when we think of humanity at large). Welcome to the human condition, where sound judgment (which, remember, for Baggini is what defines reason in the broadest terms) is a necessary component of our existence. And where Systems 1 and 2 constantly interplay.

Julian then moves to the perilous territory of “gendered” reason: what if it turns out that people of different genders think in significantly, if not radically, different ways, ways that are deeply rooted in their gender identity? Should we then not talk about reason(s), in the plural, instead of the singular term, and concept, we inherited from the Enlightenment?

He reports a strange conversation he had with the French philosopher Luce Irigaray, who has been influenced by the Lacanian school of psychotherapy, and who thinks of gender differences in a somewhat radical fashion: “When I interviewed her, I suggested that [her position] means that in a sense I was not meeting her at all, since we could not share the same understanding. She agreed. ‘In this moment we seem to be in the same place, inhabiting the same space, the same time, the same country, the same culture, the same language. In a way it is only an illusion.’”

Julian labels this an “extreme” position, “frankly not supported by the best evidence of psychology.” I’m slightly more blunt: it’s nonsense on stilts.

He elaborates along lines that seem eminently sound to me: “Feminist philosophy, for instance, is not separate from all other philosophy. A feminist critique of epistemology (theory of knowledge) has its force because it suggests there is something epistemology is missing because of distortions rooted in gender, distortions it seeks to remedy. Such a critique would lack any power if it amounted to the claim that there is male epistemology and female epistemology, and each of the two should mind their own business.” Exactly, though the latter is, indeed, the position of some radical feminists and gender studies scholars.

Baggini goes on to analyze the gender gap within the philosophical profession, ascribing it to the profession’s intellectual culture: the assumption that discussions have to be value-neutral (while feminism, most obviously, isn’t), and especially the encouragement of a confrontational approach toward colleagues, which makes a number of women feel very uncomfortable.

All of this certainly does play a role (and indeed, I’ve seen it with my own eyes), but I would like to remind people that a comparable gender gap exists within plenty of other fields where there is no such (special) culture of confrontation, and where there are no approaches to technical matters that depart from value neutrality: mathematics, chemistry, physics and engineering come to mind. So I dispute the idea that the gender gap in philosophy is peculiar to the field, or that the profession itself should undergo some kind of radical change in order to resolve the problem. The problem is going to be resolved in the same way in which it is being addressed in other fields: by encouraging young girls to embrace areas that have been seen as traditionally “male,” on the simple ground that there is no reason at all why they shouldn’t succeed in them. And of course by an explicitly fair treatment of women undergraduate and graduate students, as well as faculty at different ranks. Something, incidentally, that philosophy as a profession is very aware of and has been implementing for years through the efforts of the American Philosophical Association.

So what does psychology tell us about human reason? Baggini suggests a revision of Plato’s famous analogy between the human mind and a chariot led by two horses: “we would do better not to think of the human soul as comprising two wildly different horses and a controlling charioteer, but as being one single equine which draws on all sorts of cognitive tools, from the conscious, systemic and deliberative to the automatic, unconscious and affective.” It’s more a mule than a thoroughbred, he says. The image may be less ennobling, but it is “better to be a many-skilled mule than one-trick pony.”

Prove it! The burden of proof in science vs pseudoscience disputes

Time to look back at one of my technical papers, this one published in 2013 with my friend and collaborator Maarten Boudry in the journal Philosophia, and entitled “Prove it! The burden of proof in science vs. pseudoscience disputes.” (As with all my technical papers, it can be downloaded from my DropBox, by going to this link.)

The starting point for the paper is that the concept of burden of proof is used in a wide range of discourses, from philosophy to law, science, skepticism, and even everyday reasoning. Maarten and I provide an analysis of the proper deployment of the burden of proof, focusing in particular on skeptical discussions of pseudoscience and the paranormal, where burden of proof assignments are most pressing and relatively clear-cut. We argue that the burden of proof is often misapplied or used as a mere rhetorical gambit, with little appreciation of the underlying principles. The paper elaborates on an important distinction between evidential and prudential varieties of burdens of proof, which is cashed out in terms of Bayesian probabilities and error management theory. Finally, we explore the relationship between burden of proof and several (alleged) informal logical fallacies. This allows us to get a firmer grip on the concept and its applications in different domains, and also to clear up some confusions with regard to when exactly some fallacies (ad hominem, ad ignorantiam, and petitio principii) may or may not occur.

To begin with, an important distinction needs to be made between prudential and evidential burden of proof (BoP). The prudential BoP is applicable when there are cost asymmetries in arriving at two judgments about whatever matter is under dispute, whereas the evidential burden of proof applies when there are no such cost asymmetries involved.

Consider, for instance, the question of the safety of food additives. If approached as a straightforward scientific question, then the relevant concept is that of evidential BoP: there is no “cost” associated with arriving at the right judgment, other than the symmetric cost in getting a chunk of reality wrong. But if we approach the issue of food additives from the standpoint of its potential consequences for public health, there is a differential cost in getting the wrong answer, so the idea of prudential BoP seems more appropriate.

The (controversial) precautionary principle, which is an application of the prudential burden of proof, states that — if a certain action or policy is suspected to be harmful — the burden falls on those who believe that a new policy or course of action is not harmful. The status quo is perceived as less costly than a potentially dangerous new policy or course of action. In more general terms, the prudential BoP can be applied in situations where the cost of a false positive is significantly different (greater or smaller) from the cost of a false negative.

Examples of prudential BoP where the cost associated with a false negative outweighs that of a false positive include smoke detection alarms, environmental hazards, cancer screening, etc. An example of the opposite case, where false positives are perceived as more costly, is the presumption of innocence in a court of law. This principle in American criminal law clearly skews things in favor of the defendant, but this is done because the risk of a false positive (convicting an innocent) is treated as much less acceptable than the risk of a false negative (exonerating a guilty party).

Of course, cases of prudential BoP always involve an evidential dimension as well, while the opposite is not the case. In prudential BoP, cost asymmetries have to be taken into account in addition to prior probabilities. For example, in discussions about cancer and cell phones, the initial plausibility of low-energy electromagnetic radiation being carcinogenic has to be taken into account in addition to cost asymmetries. If prior probabilities are ignored, the precautionary principle is misused and can have paralyzing effects on public policy. Conversely, one cannot just invoke a Bayesian perspective (as useful as it is) to settle issues where cost asymmetries are involved, since even when competing claims have equal priors, a prudential approach (but not an evidential one) could easily tip the balance in favor of one claim over the other.
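The interplay between priors and cost asymmetries described above can be made concrete with a toy expected-cost calculation. This is a minimal sketch of my own; all probabilities and costs below are hypothetical numbers chosen for illustration, not figures from the paper:

```python
# Toy expected-cost comparison illustrating evidential vs. prudential
# burden of proof. All numbers are hypothetical, for illustration only.

def expected_cost(p_harm, cost_fn, cost_fp, decision):
    """Expected cost of a decision about a possibly harmful additive.

    p_harm:  prior probability that the additive is harmful
    cost_fn: cost of a false negative (approving a harmful additive)
    cost_fp: cost of a false positive (banning a harmless additive)
    """
    if decision == "approve":
        # Approving only goes wrong if the additive is in fact harmful.
        return p_harm * cost_fn
    else:  # "ban"
        # Banning only goes wrong if the additive is in fact harmless.
        return (1 - p_harm) * cost_fp

p = 0.05  # hypothetical low prior that the additive is harmful

# Evidential framing: symmetric costs, so the low prior favors approval.
print(expected_cost(p, 100, 100, "approve"))  # 5.0
print(expected_cost(p, 100, 100, "ban"))      # 95.0

# Prudential framing: a false negative is 100x more costly, so the
# same low prior now favors the cautious option.
print(expected_cost(p, 10_000, 100, "approve"))  # 500.0
print(expected_cost(p, 10_000, 100, "ban"))      # 95.0
```

Under symmetric costs the prior alone settles the matter, while a sufficiently lopsided cost asymmetry can rationally tip the decision even when the prior for harm is low. This is also why ignoring priors altogether can make the precautionary principle paralyzing: with a zero or negligible prior factored in, even a huge false-negative cost yields a tiny expected cost for inaction on the worry.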

There are a number of important discussions in science, pseudoscience, and even in straightforward philosophical argumentation, that can reasonably be approached either from an evidential or from a prudential perspective, depending on the interests of the parties involved. For instance, the force of the philosophical argument behind Pascal’s wager is supposed to be that the risk of a false negative (you don’t believe there is a god, but it turns out there is one) is much higher than that of a false positive (because of the threat of eternal damnation in Hell). By contrast, to take another philosophical example dealing with the import of paranormal or supernatural hypotheses: the risk (in terms of practical consequences) of falsely accepting the existence of Bertrand Russell’s teapot orbiting the sun (false positive) seems to be the same as the risk of rejecting the teapot when there really is one (false negative).

Maarten and I then discuss a number of sources of subjectivity in the judgment of where the burden of proof lies, as well as a distinction between “global” and “local” burdens of proof, where the global BoP is fixed throughout a discussion, because it is related to what a discussant ultimately wishes to establish (or her opponent wishes to deny). Within that broad goal, however, a number of local burdens of proof may arise, which shift during the debate itself, as they pertain to smaller pieces of the overall puzzle.

We move on to consider how BoP should be assigned. In 1970, Brown characterized the request for meeting the burden by a given side in a debate as amounting to the claim that, prima facie, that side’s position is more initially plausible than the alternative(s). Brown’s framework does not involve the costs associated with different judgments, and can thus be seen as a characterization of evidential BoP. A major exponent of modern skepticism, Michael Shermer, describes the principle of (evidential) BoP as follows: “The person making the extraordinary claim has the burden of proving to the experts and to the community at large that his or her belief has more validity than the one almost everyone else accepts.” Psychologist Terence Hines, in another compendium on pseudoscience, agrees that the burden should fall on the claimant of the extraordinary, because “it is often impossible to disprove even a clearly ridiculous claim,” such as that Santa Claus exists.

We discuss a formal analysis of these ideas, carried out by Larry Laudan (details in the paper), but warn that such an analysis should not be cause for too much complacency on the part of the skeptic of pseudoscience, since it doesn’t license an automatic rejection of any claim of the paranormal or extranormal, except when the prior probability of the paranormal hypothesis is exactly zero (e.g., when it is logically incoherent). The reason why BoP rests on the believers is also often misconstrued in the skeptical community. The evidential BoP is not on “whoever makes the positive claim.”
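The caveat about zero priors can be illustrated with a one-line application of Bayes’ theorem. This is a minimal sketch of my own, with hypothetical numbers, not the formal analysis from the paper:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis h after evidence e (Bayes' theorem)."""
    # Total probability of observing the evidence at all.
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A small but nonzero prior can be raised substantially by strong evidence...
print(posterior(0.001, 0.99, 0.01))  # roughly 0.09

# ...but a prior of exactly zero (e.g., a logically incoherent claim)
# stays zero no matter how strong the evidence is.
print(posterior(0.0, 0.99, 0.01))  # 0.0
```

The upshot is that only hypotheses with a prior probability of exactly zero can be dismissed automatically; everything else, however implausible, requires actually weighing the evidence against its (possibly tiny) prior.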

First, it is very easy to turn any positive claim into a negative one, and vice versa, by simple application of basic logical rules. In general, affirming P is exactly the same as denying ~P. Any existential claim can be translated into a negative universal, and vice versa. Resorting to such moves would merely amount to sophistic word play rather than a substantive consideration of epistemic burden.
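These equivalences are easy to verify mechanically. Here is a minimal sketch (not from the paper; the domain and predicate are hypothetical) checking that affirming P is the same as denying ~P, and that an existential claim is the denial of a negative universal:

```python
# Check P <-> ~~P by exhausting the two truth values.
for P in (True, False):
    assert P == (not (not P))

# "Some x is even" is logically the denial of "no x is even".
domain = [1, 2, 3, 4]
def is_even(x):
    return x % 2 == 0

exists_even = any(is_even(x) for x in domain)        # Ex F(x)
no_x_is_even = all(not is_even(x) for x in domain)   # Ax ~F(x)
assert exists_even == (not no_x_is_even)             # Ex F(x) <-> ~Ax ~F(x)
print("equivalences hold")
```

The point is that the positive/negative surface form of a claim is a matter of phrasing, which is why it cannot by itself settle who carries the burden.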

Second, there are cases in which the BoP rests on those who are putting forth what may most plausibly be construed as the “negative” claim, in the sense of denying the material existence of some X. For example, the burden of proof is no longer on historians to provide evidence of Zyklon B use in the Nazi concentration camps, although, apart from logical sophistries, they are the ones making a “positive” claim. In this case, then, the BoP rests on those making the “negative” claim.

In most discussions of pseudoscience and the paranormal, admittedly, the believers in pseudoscientific notions are making positive claims, in the sense of affirming the existence of entities (spaceships, psi force, qi energy lines, auras) that are rejected by modern science, but this — per se — is not the reason why the BoP rests on them. Evidential BoP assignment always reflects substantial background knowledge and prior probabilities, and these assumptions of plausibility, we argue, should be based on the expert consensus on the matter.

Maarten and I go on to explore the role of Occam’s razor in this debate and then tackle what we call the technical burden. Believers in the paranormal and supernatural have often tried to turn the tables on skeptics, finding various ways to shift the BoP back to the latter. In particular, rhetorical moves of the type “you can’t prove it wrong” are unfair requests that fail to appreciate the proper BoP procedure. In some cases, such requests can be straightforwardly fulfilled (e.g., it is very easy to prove that the co-authors of this paper, at this very moment, have far less than $1 million in their pockets), but even then, the skeptic is doing the accuser a favor in taking on a BoP that does not really fall on him (we are under no obligation to empty our pockets after each such gratuitous insinuation).

Similarly, if ufologists claim that some crop circle was left by a space ship, the BoP is firmly on their side to come up with extraordinary evidence. If the skeptic chooses to take on their sophistic challenge to “prove that there was no spaceship” by way of providing direct or circumstantial evidence that that particular crop circle was in fact a human hoax, they are indulging the believers by taking on a BoP that, rationally speaking, does not pertain to them at all.

For most actual para/extranormal claims, however, the space of possibilities cannot be exhausted in a finite (and suitably short) time. For instance, to arrive at proof that there are no alien spaceships visiting earth — at any moment, not just in the case of a specific alleged incident — would require a type of temporally protracted exhaustive monitoring of the entire planet’s surface, something that is so far beyond current technological possibility that the request can easily be dismissed as a simple debating trick.

This, however, leaves the skeptic with a dilemma. Although it may sometimes be rhetorically persuasive for her to take on a BoP that, strictly speaking, does not fall on her (for example, providing a natural explanation of a given UFO sighting), this may be perceived as an implicit acknowledgement that skeptics do carry the negative BoP for every single anomaly that believers come up with. The result is a mug’s game for skeptics: all believers have to do is throw around challenges for the skeptic, who will surely not be able to answer every single one of them. To refer again to the ufological literature, even ardent skeptics do admit that a small percentage (at most 10%, and likely significantly less than that) of alleged UFOs cannot be turned into IFOs (Identified Flying Objects), even after direct investigation of the available evidence.

There are at least three replies the skeptic has available here. To begin with, investigative resources are limited, especially when it comes to likely pseudoscientific claims, so it should not be surprising that on a certain number of occasions the researcher simply does not have sufficient means to carry out a positive identification of the allegedly unexplained phenomenon.

Second, even in the case of genuinely scientific questions one has to contend with limited epistemic access to the relevant phenomena, access that can be affected by the lack of sufficient empirical traces or by the intrinsic epistemic limits of human reason. Think of the long — and so far still largely unsuccessful — quest for an explanation for the origin of life, for instance.

Third, as Thomas Kuhn reminded us, even successful “normal” science constantly has to deal with a number of unsolved “puzzles,” and it is only when the puzzles become numerous and widespread that they genuinely begin to threaten the reigning paradigm, forcing scientists to seek alternative theoretical frameworks. Even if skeptics cannot provide a complete explanation for every single anomaly, what they often can do is to offer promissory notes for explanations, speculating about potential natural interpretations. Given that the BoP really falls on believers to come up with convincing evidence, this is all that can be expected from skeptics under these circumstances.

Intelligent Design proponents and assorted creationists, for instance, have often pointed to alleged instances of “irreducible complexity” in the living world: biological systems that are so intricate that they could not possibly have evolved. In dealing with such challenges, evolutionary biologists can suggest possible evolutionary pathways leading to a given complex biological structure. When they have done so, there is an extra BoP on ID advocates to rule out all of the proposed natural explanations. Contrary to what believers think, the BoP is not on skeptics to demonstrate which one of the natural explanations is the correct one. Given the overwhelming evidence for the power of natural selection to produce adaptive complexity, and the difficulty of garnering information about a distant evolutionary past, this kind of informed speculation is all that is needed to put ID arguments to rest (of course, evidence of specific mutations and selection processes further strengthens the case for evolution, but its fate no longer depends on it). The number of anomalies (in this case, evolutionary puzzles) has simply not come even close to the Kuhnian threshold for a paradigm shift, though of course this says nothing about whether it might do so in the future.

At this point the paper changes direction somewhat, and Maarten and I provide a discussion of so-called informal logical fallacies. I strongly suggest that interested readers check the paper for the details, but we basically argue that too often skeptics (and now, increasingly, believers) throw out “you committed logical fallacy X” as if that were the end of all discussion. Sometimes an informal fallacy is not a fallacy at all, but actually a good heuristic, or a relevant piece of information.

Let’s say, for instance, that during court proceedings a lawyer for the defense points out that a hostile witness has a history of being unreliable and of lying, or perhaps has something to gain if the accused is convicted. The prosecution can’t just shout “ad hominem!” and be done with it, since information about the character and/or personal interests of the witness is, in fact, germane to the case, even though of course it doesn’t prove that the witness is lying on this particular occasion.

In conclusion, the word “skepticism” has, of course, a long and venerable history in philosophy. When it comes to disputes about allegedly pseudoscientific notions, though, the term may refer to one of two distinct attitudes: one corresponds to someone who knows that the para- or extra-normal claim is wrong and is out to prove it. Although this may in fact be the case in many actual instances, such a figure is not at all intellectually interesting. The second meaning is the Humean sense in which “a wise man proportions his belief to the evidence.” If we are to be honest Humean skeptics, though, we need to set the bar for evidence of extraordinary claims at the right level, not as low as a gullible believer would wish it, but not as high as for the BoP to be impossible to meet.

Modern skeptics are fond of quoting Carl Sagan’s rendition of the Humean dictum mentioned above: “Extraordinary claims require extraordinary evidence.” This is fine as far as it goes, but we clearly need criteria to credibly establish when a claim is indeed “extraordinary,” and what would count as commensurate evidence. Hume’s own famous argument against miracles is sometimes (uncharitably, we think) interpreted as amounting to a statement of the impossibility, not just very low likelihood, of miracles, and people who believe in ufological or paranormal phenomena echo that sentiment when they claim that skeptics will never be satisfied no matter how compelling the evidence is going to be.

However, Hume’s approach in Of Miracles can be reasonably reformulated in Bayesian terms, with the priors — and consequently the BoP — being set by the accepted background conditions pertinent to the dispute at hand. Seen from this perspective, all we need to avoid are the extremes of setting our priors to 0 (complete skepticism) or to 1 (complete belief), since no amount of data can possibly move us away from those limit cases. Indeed, there are some instances in the skeptical literature on pseudoscience where priors have significantly moved over time. For instance, while acupuncture is still criticized in terms of both the underlying theory and the exaggerated claims of its supporters, there may now be sufficient evidence of its limited efficacy that a skeptic needs to reconsider outright rejection. This is even more so for a variety of transcendental meditation techniques, where again one may reasonably reject the underlying metaphysics while agreeing that qua techniques they do work for a range of claimed effects.

If anything, it is harder to find prominent exponents of para- or extra-normal beliefs who have changed their minds in the face of skeptical arguments (though even those can be found, if one digs deep enough). Which brings us to the last point in this paper (which I haven’t discussed above): discussions of BoP in the context of science vs pseudoscience disputes are, of course, a type of Wittgensteinian language game that presupposes a minimum commonality of standards. People cannot agree on how to fairly allocate BoP unless they find themselves at the least in the same ballpark when it comes to the type of background knowledge that constrains the priors pertinent to the dispute at hand. And that is precisely the most common obstacle in debates between skeptics and believers: the former too often simply reject out of hand even the possibility of an anomalous phenomenon turning out to be real, while the latter are equally quick to label the entire scientific enterprise as “too reductionist” or narrow-minded to be able to come to terms with novel phenomena. This sort of impasse depends on a widespread lack of appreciation for the sort of epistemic issues Maarten and I have described in this paper, but it also boils down at least in part to individual psychological attitudes, whereof a philosopher is better served not to speak.

The most important philosopher you never heard of

The latest video in the Sophia “Dan & Massimo” series covered a philosopher you have likely never heard of, and yet should know. We talked about Wilfrid Sellars (1912-1989), who had a big influence on Dan and whom I discovered only relatively recently, to my delight.

Sellars is perhaps most famous for his distinction between what he called the “scientific image” and the “manifest image” of the world, meaning our understanding of how things are from, respectively, the scientific and the commonsense standpoints.
