Category Archives: Philosophy of Science

Vindicating science by bringing it down, part II

As we have seen in part I, scientists and early philosophers of science adopted an idealized conception of science, which focuses on the successful end result of scientific activity, and in which there is no place for any influence of the social, or indeed, for any of the actors involved in the scientific endeavor. All of that is swept under the carpet. But the fact that the eventual goal of science is to eliminate the social does not imply that social factors have no important role to play in the process. Science, after all, is nothing but the concerted effort of (sometimes not so) humble human brains, none of which was designed to unravel the mysteries of the world on its own.


In the past couple of decades, science has been brought down to earth again by sociologists, cognitive scientists, evolutionary psychologists, and historians. Unfortunately, the opposition between the rational and the social is still besetting the naturalization of science. The backlash against the traditional conception of science, epitomized by the logical positivists and their intellectual heirs, has swung the pendulum in the opposite direction. Still under the spell of the dichotomy between rational and social which we discussed in part I, many science naturalizers have assumed that, as they bring science down to earth, its pretensions will start to unravel.


The truth of the matter, however, is that all beliefs, the true and the false ones, or the scientific and the pseudoscientific ones, have a causal history, involving cognitive and social factors. If we want to understand how people come to believe stuff, even simple and obvious propositions (e.g., dolphins are mammals) are in need of an explanation. Likewise, if we want to understand how scientists have been able to unearth all sorts of true beliefs about the world, we need to understand what kinds of people scientists are, what kind of cognitive strategies they bring to bear on their research questions, what the social organization of science is, and how hypotheses are tested and evaluated within a scientific community.


The development of a cognitively and socially rich account of science has been delayed by the widespread misconception that such an account would compromise the epistemic standing of science. Because of our habit of pitting social and rational explanations against each other, we assume that the intrusion of sociology and psychology into the citadel of science will eat away at its foundations.


At the heart of this opposition between the social and the rational lies an individualist conception of reason, shared by both camps in the science wars. The notion of the social as a contaminant of the rational, to which even social constructivists seem to subscribe, is more indebted to logical positivism than the latter would like to admit. Radical sociologists during the science wars of the ‘90s were led astray by the very same intuition that made the logical positivists allergic to social explanations – only now they were welcoming the opposite conclusion. As philosopher David Hull put it:


“Because science did not possess the ideal characteristics that the ‘positivists’ insisted that it should, knowledge-claims made by scientists had no more warrant than those of magicians, faith healers, and politicians.”


Both camps are wrong. The simple opposition between the rational and the social-psychological explanations goes against the grain of naturalism. Scientific knowledge does not drop out of thin air: it is embodied in real human beings. If our best scientific theories in some way reflect the world out there, this must have come about through the usual perceptual capacities and cognitive operations, with available technological equipment, and in a complex network of social interactions. How else could it have come about?


The sociologists are right that science is a deeply social endeavor, and that all scientific knowledge is in this sense “socially constructed.” No single individual marooned on a desert island, no matter how brilliant, would be capable of finding out any of the significant truths about the universe that we currently possess. Though the history of science has known some solitary geniuses, working in relative isolation from their peers, even they were still engaged in a collective enterprise, in the sense that they were building on the work of numerous predecessors.


The realization that science is a deeply social enterprise, and that scientific consensus is reached through coalition forming and competition, should not surprise us. The question is what particular social organization is exemplified by science, and whether this is conducive to its epistemic aspirations. Scientists are human beings, warts and all. If scientists collectively succeed in finding out significant truths about the universe, while other endeavors have failed in this regard, this must have come about through the particular social dynamics of science.


Many scientists believe that being objective and impartial are the cardinal virtues of science, and that bias and prejudice make one unsuitable for scientific work. Although the culture of science rightly encourages these virtues, they are by no means necessary for the success of science. Indeed, a certain modicum of bias in this or that direction may actually facilitate the progress of science.


It is not a problem that an individual scientist is biased, or emotionally attached to a particular hypothesis. The social organization of science makes sure that these biases will be balanced by others tilting in different directions. A standard example of this is the reorientation of aspects of medical research as a result of feminist epistemological critiques: it is now increasingly acknowledged that, for example, we cannot conduct drug tests solely on a population of (mostly white, middle aged) men and simply assume that the results can be extrapolated to other human biological populations. Again, Hull:


“The objectivity that matters so much in science is not primarily a characteristic of individual scientists but of scientific communities. Scientists rarely refute their own pet hypotheses, especially after they have appeared in print, but that is all right. Their fellow scientists will be happy to expose these hypotheses to severe testing.”


A desire for fame and success is often viewed as unworthy of a real scientist. The goal of science is truth for its own sake. Although such base motives may indeed compromise one’s scientific work if allowed to go unchecked, there is no convincing reason why they would stand in the way of significant discoveries. Even spite, jealousy, and the desire to humiliate a rival can result in excellent scientific work, if the competing parties know that they have to abide by certain rules, and will be called out whenever they violate them.


In a good number of cases, social influences are not an impediment to the epistemic ambition of science, but rather a facilitator of scientific progress. Science harnesses some of the baser motives of human behavior in the service of truth, making sure that the interplay of scientists’ individual interests and biases mostly align with epistemic progress.


Darwin may have been right from the start about the fact of common ancestry, but his theory would not have carried the day as swiftly as it did without his indefatigable efforts to enlist allies to the cause and to engage and negotiate with his critics. All the parties in the dispute were trying to enlist nature as their ally, but Darwin of course had one big advantage: nature really was on his side all along. In the long run, therefore, as evidence accumulates and factions wax and wane, the social influences on science will be filtered out, and rightly so.

Vindicating science by bringing it down, part I

What is the role of the social in science? If one consults science textbooks, one will find that the social dimension of scientific knowledge is conspicuously absent. Science is supposed to reflect the way the world really is, independent of our petty human lives. It is, in the classical view, the epitome of a rational endeavor, free from social influences. Of course, science is carried out by human beings, but their individual backgrounds and social lives are simply taken to be irrelevant. What matters are the intellectual merits of a theory, not who conceived it. What matters is the evidence, not who gathered it. This stark contrast between the social and the rational can be found in philosophical accounts of science as well. Because social factors are rendered invisible in the end products of science, many philosophers have underestimated their constructive role in the acquisition of scientific knowledge.


In recent decades, sociologists and historians have tried to bring science back to earth, but many of them have unwittingly bought into the same simplistic opposition. Social influences on science have been relished by its cynical critics and resisted by its admirers, and for the same reason: the fear (or hope) that they would destroy the credentials of science. In a paper I co-wrote with my frequent partner in crime, Maarten Boudry (published in Perspectives in Science and Culture, edited by K. Rutten, S. Blancke, and R. Soetaert, Purdue University Press) we discuss the historical roots of this opposition, culminating in the sorry spectacle of the science wars. This post provides extensive excerpts from that paper; I hope you’ll enjoy it.


When do we feel the need to explain why someone believes something? Not all beliefs held by our fellow human beings appear to produce an epistemic itch. People believe that dolphins are mammals, that the earth orbits around the sun, and that World War II ended in 1945, but we rarely wonder how they arrived at such homely truths. Beliefs such as these are just obvious, and no sane person would dispute them. That said, who told you when WWII ended? Where did you acquire the belief that dolphins are mammals, or that the earth goes around the sun? Your sources for these convictions are hard to track down.


Memories of these facts are called semantic by psychologists, to distinguish them from episodic memories, which relate to things that happen to us. Episodic memories carry a tag with the time, place, and situation in which we acquired them. Not so for semantic memories, likely because tagging them would be a waste of brain resources.


Take the belief that coal is black. If we ask you what your reasons are for believing that, you would probably be puzzled. The first answer that comes to mind is: “Why, because it is black, of course!” It doesn’t matter how you came to know that. You could have learned it in any number of ways. Anyone in doubt about the color of coal can quickly retrieve the answer through any number of sources.


Because the truth of such beliefs is obvious, we rarely question how other people acquired them, or how they can justify them. It seems as if such beliefs just drop out of thin air, without much in the way of a causal history.


That said, how do we account for other kinds of beliefs (as held by others, of course)? Beliefs that are false, quirky, idiosyncratic, or plainly irrational produce an epistemic itch. We want to explain how people end up embracing them. Who told him such nonsense? Did he fall for one of those conspiracy theories circulating on the internet?


We resort to special explanations only when something goes wrong. True beliefs that are part of common knowledge are taken at face value, but false and foolish beliefs cry out for an explanation. This is where social and cultural explanations come in. Such explanations, however, are not invoked when we account for true and justified beliefs. Only when rationality breaks down, it seems, is a space opened up for psychological explanations to fill. We seem to think that there is an association between the irrational and the social, but not between the rational and the social.


In the classical view, science is the epitome of reason. It is objective and impartial. It is ruthless in its indifference to what we fear or fancy. When it comes to the content of science, nature has the final say in the matter. Social, political, and ideological influences on science are anathema. When writing science textbooks, and for many other purposes, the social influences on the development of scientific theories can be safely ignored, just like with many of our mundane beliefs about the world. Sure, there is a story to be told about how scientists pooled their efforts to acquire this or that piece of knowledge, who published it first, who convinced whom, and so on. But the details of this story make no difference: an alternative history of science would ultimately have led to the same result.


As a result, especially in the natural sciences, students are simply taught scientific theories as if they descended down from some Platonic heaven. The vagaries of scientific history, the false starts, wrong turns, and dead ends, the protracted controversies between rival views, the forerunners and pioneers of scientific ideas – all of this is rendered invisible.


For long, philosophers of science have also treated science in splendid isolation from the social world. Hans Reichenbach, one of the major proponents of logical positivism, taught us to strictly separate the context of discovery from the context of justification. The first deals with the historical conception of a scientific hypothesis, and is of little interest to philosophers trying to understand the logic of science. Philosophers of science should be solely concerned with how a scientific hypothesis, once it appears on the scene, relates to observations, whether it is internally consistent, whether it is falsifiable, and so on.


(next: bringing science down to earth)

Why Alex Rosenberg is wrong just about everything

Philosophy is my second academic career. My first one was in science, evolutionary biology, to be specific. Depending on how you look at it, this makes me either unusually competent in two normally widely distinct areas of academic scholarship, or barely making the passing grade in both. Be that as it may, I have made it a personal hobby to observe my new profession from the outside, as much as possible, sort of like an anthropologist looking into a different yet sufficiently familiar culture.


One of the things I’ve noticed is that philosophers are unusually critical of their own field, with a number of prominent ones, both now and historically, actually arguing that it should be dismantled, usually in favor of science (or linguistics). I will not get into that debate here, as I’ve covered it in detail before.


Another frequent observation concerns the number of colleagues who are fascinating for being very smart, well regarded in the field, and yet – in my admittedly non-humble opinion – completely wrong. Perhaps the quintessential example is David Chalmers, he of “philosophical zombies,” “hard problem of consciousness,” “singularity,” “mind uploading,” “panpsychism,” and similar inane notions. But this post isn’t about David.


It’s about Alex Rosenberg. Alex is on the faculty at the prestigious Duke University in North Carolina, and someone I think should get a medal (together with Chalmers, of course) for the highest number of wrongheaded papers in a philosophical career. I met him a few years ago during a two-day conference on “Moving naturalism forward,” organized by cosmologist Sean Carroll. The conference was fun, but Alex kept trying to convince us of a notion that he called “happy nihilism,” according to which the universe is devoid of meaning (of course it is, meaning is a human construct), free will doesn’t exist (of course it doesn’t, if one uses the term in the contra-causal sense), and yet, somehow, we can still decide to take all of this on board and be happy.


Setting aside the devastating criticism Alex got at the conference from Dan Dennett, Owen Flanagan, Terrence Deacon, and others, this is also the same bleak picture of the world he presented in his dismal The Atheist’s Guide to Reality, which I reviewed for The Philosophers’ Magazine. Here is a taste of my thinking at the time:


“As a former scientist and now philosopher, I have chastised some of my colleagues for their scientistic attitude. … Thanks to [Rosenberg], I can no longer be accused of fighting a straw man. Rosenberg’s attempt is valiant and will give people much to think about. Except, of course, that according to Rosenberg we cannot really think such things because scientism ‘says’ that chunks of matter cannot possibly produce insights about anything at all, on penalty of violating physicalism.”


Nevermind that such statements are obviously self-contradictory. What was I doing while reading Alex’s book if not thinking about what he wrote? And what was he doing while writing the book? These are all illusions, claims Alex, apparently using the word “illusion” in a novel and profound way that the rest of us are unaware of. I continued my review:


“Take Rosenberg’s denial of the existence of conscious decision-making. Consciousness for him is an epiphenomenon of the brain’s activity. … His major piece of evidence? Benjamin Libet’s experiments in cognitive science. … We are informed [that] ‘consciousness is probably too big a deal not to have been organized by natural selection to solve some design problem or other, perhaps several. Exactly what its functions are, what design problem it solves, neuroscience has not yet figured out.’”


Seriously? Let us set aside that Alex completely misinterprets the implications of Libet’s famous experiments, even contradicting Libet’s own interpretation. He admits that natural selection must have evolved consciousness – which depends on brain structures that are exceedingly metabolically costly – for some reason, but he can’t think of one. Hmm, let’s see, how about the ability to reflect on our actions, make deliberate decisions, plan things ahead? Oh right, those are all illusions. Naturally. Me again:


“For Rosenberg there is no free will, morality, meaning, aboutness and so on because, you see, ‘the physical facts fix all the facts.’ We are never told exactly what this slogan actually means. Well, I’m a big fan of physics, but last time I checked, it didn’t, for instance, ‘fix’ the fact that 2+2=4.”


Nor does physics fix anything at all in the rest of mathematics. And in logic. Continuing the review:

“Rosenberg thinks that economics, the social sciences (not to mention literature, the arts, and his own field of philosophy) are all ‘stories’ that may entertain us, but that should by no means be taken seriously. He doesn’t seem to realize that science – not to mention his very book – also tells stories … because that is the way human beings communicate knowledge and achieve understanding. Science is the right type of story if you want to know about cosmology, but not if you want to learn logic.”


Or history. Or art. I concluded:


“Rosenberg’s scientistic nihilism is analogous to radical skepticism about reality. … It’s thought provoking, there is no scientific evidence that can possibly rule in its favor or against it, and it is best promptly forgotten so that you can get back to thinking about the things that really matter to you.”


Alex, impervious to criticism (well, “he” is only a bunch of subatomic particles without will or aboutness, so – to be fair – how could he change his mind, especially given that the latter is an illusion?), has continued along the same vein in recent years. Just in the last few weeks I’ve read two more articles by him that finally prompted me to write this essay.


The first one, published in The Verge, is actually an interview conducted by Angela Chen, in which Alex “explains” how our addiction to stories keeps us from understanding history. The interview is about (but wait, nothing is about anything!) his book How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. First problem: whenever I hear the words “the neuroscience of…” I instinctively reach for my gun (fortunately, I’m a quasi-pacifist, and I don’t own guns). That’s because nowadays a lot of nonsense is written in the name of neuroscience, unfortunately.


The main thrust of Alex’s argument is that neuroscience undermines what is often referred to as our “theory of mind,” the ability to guess other people’s thoughts and motivations. Since historians deploy – without realizing it – a theory of mind whenever they talk about this or that historical figure’s motivations for acting one way or another, their theorizing is made hopelessly obsolete by the modern science of the brain.


Except that Alex is making an astounding mistake here, very similar to the one made, for instance, by fellow atheist Sam Harris in his The Moral Landscape (see my review here). He is confusing a mechanistic explanation of X for the explanation of X, apparently forgetting (or simply outright denying) that explanations – which are human constructs, let us not forget – can be given at different levels, and using different language, depending on how useful they are to the target recipients, i.e., other human beings.


Let me give you an analogous example to show just how bizarre Alex’s claim that neuroscience does away with historical explanations really is. Imagine we were interested in the “neural correlates,” as cognitive scientists call them, of mathematical problem solving. We can stick someone – even a mathematician – into an fMRI machine and find out which areas of her brain light up when she is involved in simple or complex mathematical thinking, from solving a basic equation to proving Fermat’s Last Theorem.


Now, we will surely find some such neural correlates. We have to, since everything we do, and certainly any kind of higher, conscious thinking, has to be done by way of engaging one part or another of our brains. Otherwise, it would be magic.


But now imagine that our neuroscientist completes his experiment, gets the mathematician out of the fMRI machine, and gingerly informs her that mathematicians are no longer needed, because neuroscience has discovered which areas of the brain they use to solve mathematical problems. Crazy, right? Well, it’s no different from Alex’s reasoning for getting rid of historians, or Harris’ “argument” (I’m using the word charitably) for concluding that science, and neuroscience (which just happens to be his own field) in particular, can now answer moral questions. Ethicists can go play golf.


A few weeks later, Alex did it again! This time in an article he penned himself for 3:AM Magazine, entitled “Is neuroscience a bigger threat than artificial intelligence?” Oh boy. It’s the same basic idea that he has been peddling since The Atheist’s Guide to Reality, though – as in The Verge article – this time it isn’t physics that “fixes all the facts,” it is neuroscience that answers all the questions.


After acknowledging the (alleged, and I think way overblown) threat posed by future advanced AI to humanity (you know, the Singularity, again, Terminator and that sort of things), Alex informs us that the real existential downfall of humanity comes from the research of four Nobel-winning neuroscientists: Eric Kandel, John O’Keefe, Edvard Moser, and May-Britt Moser. What have they done?


“Between them they have shown that the human brain doesn’t work the way conscious experience suggests at all. Instead it operates to deliver human achievements in the way IBM’s Watson does. Thoughts with meaning have no more role in the human brain than in artificial intelligence.”


By now you have surely guessed that this is, again, about the alleged failure of the theory of mind, and that, once again, Alex is simply confusing different levels of explanation, an elementary mistake that you would think a trained philosopher simply wouldn’t make.


The fascinating thing is that Alex actually acknowledges that there is quite a bit of evidence for the theory of mind:


“Several sources of evidence suggest that we have an innate mind-reading ability more powerful than other primates. It’s an ability to track other people’s actions that is triggered soon after birth. Child psychologists have established its operation in pre-linguistic toddlers, while primatologists have shown its absence in other primates even when they exceed infants in other forms of reasoning. Social psychologists have established deficiencies in its deployment among children on the Autism spectrum. fMRI and transcranial magnetic stimulation studies have localized a brain region that delivers this mind-reading ability. Evolutionary anthropology, game theory and experimental economics have established the indispensability of powerful mind reading for the cooperation and collaboration that resulted in Hominin genus’s rapid ascent of the African savanna’s food chain.”


None of this matters, because neuroscience has (allegedly) “revealed” to us that the theory of mind is “quite as much of a dead end as Ptolemaic astronomy.” Why? Because Kandel and colleagues have shown that if you look into the brain you won’t find beliefs, desires, or reasons, but only specific, dynamic neural pathways.


No kidding, Sherlock. That’s because what we call beliefs, desires and reasons are instantiated in the brain by way of specific neural pathways. The neurobiological level is more basic – but, crucially, no more true – than the psychological one. The two levels provide complementary, not competing, explanations of the same phenomenon. One explanation is more useful to biologists and neuroscientists, another one to psychologists, historians, and art critics, among others.


It’s like the much abused and misunderstood example of the chair in which you may be sitting at this particular time. Physics tells us that said chair is “really” just a collection of quarks, interacting in the way prescribed by the fundamental laws of nature. This is certainly the case, but it is by no means the whole picture. Your chair is also “solid” at the level of analysis pertinent to human beings who wish to sit down in order to read a blog post, not to mention those other human beings who designed and built the chair itself. The chair is most definitely not an illusion just because it can (usefully, depending on the context) be described in different ways. Explanatory complementarity, not competition.


A side note, as a biologist, on Kandel et al.’s indubitably scientifically fascinating work: it was done on rats, because the pertinent experiments are too invasive and unethical to be conducted on human beings. With his usual braggadocio, Alex informs us that this doesn’t matter at all:


“Of course you could argue that what Nobel Prize winning research shows about rats is irrelevant to humans. But you’d be flying in the face of clinical evidence about human deficits and disorders, anatomical and physiological identities between the structure of rat and human brains, and the detailed molecular biology of learning and information transmission in the neuronal circuitry of both us and Rattus rattus, the very reasons neuroscientists interested in human brains have invested so much time and effort in learning how rat brains work. And won Nobel Prizes for doing it.”


I got news for Alex: while, again, Kandel et al.’s research is most certainly important, enough to win the Nobel, translating things from rats to humans is definitely not that obvious or straightforward. It is simply false that rat and human brains have a large number of anatomical and physiological identities, as the perusal of any introductory book on mammalian anatomy will readily confirm. Heck, our brains are substantially different from those of higher primates like chimpanzees and bonobos, which is a major reason we need to be careful when we extrapolate from the latter (let alone rats) to humans. For instance, we have little to go by, in terms of comparative brain anatomy and physiology, to explain exquisite and crucially human traits like language (not just communication) and iterative cultural evolution. Take a look at this book by my colleague Kevin Laland to appreciate just how careful biologists (as distinct from some philosophers) are when it comes to interspecies comparisons.


Don’t get me wrong. Alex Rosenberg is a really smart guy, and his misguided writings are necessary in order to sharpen our thinking about all sorts of matters. After all, the British Royal Society awarded physicist Fred Hoyle (the author of the steady state theory in cosmology, which for a while rivaled the big bang theory) a medal for the highest number of wrong ideas proposed in a scientific career. This was not an example of British sarcasm, they meant it in all seriousness, as Hoyle’s theories have arguably played an important role in advancing cosmology. Perhaps we should establish a similar prize in philosophy. I have a couple of candidates in mind…

The problem with (some) philosophy and metaphysics of perception

Does the world exist independently of any given human mind? Most of us – including many philosophers – would answer yes, and even think the question itself more than a bit weird. But of course it is possible that we live, say, in a simulation imposed on us by external entities, à la The Matrix. Or perhaps we are all just instantiations of a single cosmic mind, which we could simply refer to as God.


This, however, is not what this essay is about. Let us assume for the sake of discussion that the world does exist independently of the human mind. The next, and somewhat more interesting, question is: how do we perceive such a world? In the philosophy and metaphysics of perception one position is referred to as naive realism, the notion that we can have, and usually do have, veridical perceptions about the world. What you see is what you get, so to speak.


I think this position is rather hopeless, and yet it was vigorously defended by a colleague of mine at a recent colloquium at City College. Umrao Sethi, Assistant Professor of Philosophy at Lehman College in New York, gave a talk entitled “Sensible Over-Determination.” In the following I wish to recount an interesting exchange Sethi and I have had during the q&a following her talk, because it puts two views not just of the philosophy of perception, but of philosophy itself, in stark contrast with each other. Please understand that I am not singling out Sethi, but simply using her particular talk to make a more general point about (some) contemporary philosophy.
First, though, here is the abstract of Sethi’s talk. It’s rather technical, so I will explain the relevant bits immediately below:


“I argue that sensible qualities are ‘ontologically flexible’ – they can be instantiated either in virtue of having material bearers or in virtue of being perceived by minds. Given that these qualities have disjunctive conditions on instantiation, a single instance of such a quality can be over-determined – that is, have its existence simultaneously guaranteed both in virtue of having a material bearer and in virtue of being perceived. I make use of this metaphysical insight to show that, contrary to received wisdom, naïve realism is compatible with the possibility of qualitatively matching hallucinations. Both hallucinations and ordinary perceptions acquaint us with instances of the same sensible qualities. While the instances in hallucination are mind-dependent, those in veridical perception are not. The latter are over-determined, and, thus, mind-independent – they can continue to exist unperceived, because, in addition to the minds that perceive them, their existence is guaranteed by the material objects that are their bearers.”


The two key terms here are “hallucinations” and “sensible qualities.” What Sethi was arguing, among other things, is that hallucinations do not present a problem for naive realists. But why would they? The problem goes all the way back to Descartes and his famous thought experiment based on radical doubt. Descartes realized that he could be mistaken about a lot of things, particularly his sensorial perceptions about the world. We all know, for instance, that we can have dreams or hallucinations during which, for a time at least, we cannot tell apart physical reality from what is being conjured up by our minds. If this is the case, Descartes thought, then we can never be absolutely sure of any aspect of our alleged knowledge of the world.


His solution to the problem, famously, was his cogito argument: I may be mistaken about the external world, and even about mathematical or logical truths (because of systematic biases in my reasoning). But there is no way I can be mistaken about the fact that I am a thinking being of some sort. Cogito, ergo sum: I think, therefore I am.


Descartes’ was the last great attempt in philosophy to establish certain foundations of knowledge. And it failed. Sure, I know that I think, but – contra Descartes – that’s pretty much all I will ever know with certainty. There is no way to rebuild knowledge of mathematics, logic, or, even less so, empirical reality from that meager starting point. Most philosophers since have therefore accepted the notion that our knowledge is provisional, and that we could, in principle, be mistaken about all sorts of things.


Not so naive realists, who cling to the hope that we get – most of the time – “veridical” (i.e., true) impressions of the external world via our senses. How so? One path for the naive realist is to reject the argument from hallucination, and show that we can always tell the difference between what our minds conjure and what we perceive about the world.


One way to do this is simply to deny that hallucinations ever feel exactly the same as impressions from the external world. I’ve never had a hallucination, but the literature on the subject, it seems to me, is pretty clear: at least some of the time, hallucinations (spontaneous, drug-induced, or especially those induced by sensory deprivation) are subjectively indistinguishable from the real thing. In other words, the problem can’t just be swept under the rug.


Sethi conceded this point at least for the sake of discussion, but insisted that we can nevertheless tell the difference. This is where sensible qualities come in. The quintessential one, the one she used the most during her talk, and the very same one that sparked our little debate during the q&a, is color.


Consider a red apple. You may hallucinate a red apple, and “perceive” redness associated with the apple, even though there is no actual apple out there. Or you may perceive redness in the apple because there is, in fact, a red apple that you are looking at. Sethi’s point is that we can tell the difference between these two cases because the second one is “overdetermined,” meaning that there are two reasons we have the first person experience of redness in the case of real objects: (i) because our mind is conjuring one (just like in the case of the hallucination), and (ii) because there is a red apple out there (unlike the case of the hallucination). It is this over-determination that implies mind-independence, according to Sethi.


The problem is – as I pointed out in the q&a – that “redness” is actually not out there (though the apple may be). We know quite a bit about the physiology of color perception and we understand that what is out there are the surface properties of the apple, as well as the characteristics of light at certain wavelengths that can be perceived by the human eye. “Color,” however, results from the specific interaction between those external physical properties and the specific, evolved, sensory apparatus of a human being. That apparatus includes of course our eyes, with their anatomical and cellular structure, the optic nerve that connects to the brain, and a number of brain systems devoted to the reception and internal representation of what we perceive. Color is not out there, and it is not mind-independent. Ever.


We know this from a variety of sources, including the fact that some human beings (like yours truly) are color-blind (partially, in my case), so that they perceive the exact same external object differently. We also know that other animals do not perceive colors at all (bulls, for instance: when they get mad at someone waving a red cloth in front of them they are reacting to the movement, not the redness of the cloth). Yet other animals perceive colors differently (bees, for example, see ultraviolet, which we cannot perceive).


So what modern science tells us is that the human mind creates a virtual simulation of external reality all the time, and that the veridicality of such a simulation depends not just on whatever is out there, but on the details of the human perceptual and cognitive systems. We live in a perpetual Matrix of our own creation, so to speak. That is why we cannot, in fact, always tell the difference between hallucinations and the real world, even if a real, mind-independent world does exist. Descartes was right, and naive realists are wrong.


Here is where things got interesting outside of the narrow scope of an academic talk in a philosophy department. When I pointed out the above to Sethi, her response was that she does not accept the physiological account! Science, she said, is itself based on metaphysical assumptions, and one is free to reject such assumptions. She evidently chose to reject them.


But hold on a second. Yes, science takes certain metaphysical assumptions on board without examination, like the very fact that reality is mind-independent. But I thought (naively, as it turns out!) that a contemporary philosopher is best served by taking very, very seriously whatever science tells us about whatever issue we are discussing philosophically. In this case, to ignore modern physiology and brain science seems hazardous to say the least.


Curious, I asked Sethi why she chose the anti-science path. In other words, what were her metaphysical commitments? Her answer floored me, but also gave me an interesting insight into my own profession. She said that if she went along with standard physiology she would have to agree that consciousness is a biological phenomenon rooted in the brain and how it interacts with the external world.


Again naively, I said, obviously, what else could it be? This is where she mentioned the so-called “hard problem” of consciousness, made famous by NYU philosopher of mind David Chalmers. Here is how the problem is summarized in the excellent entry in the Internet Encyclopedia of Philosophy:


“It is the problem of explaining why there is ‘something it is like’ for a subject in conscious experience … The usual methods of science involve explanation of functional, dynamical, and structural properties – explanation of what a thing does, how it changes over time, and how it is put together.  But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science.”


Except that I don’t think one can “meaningfully” ask that at all. I have made the argument that the so-called hard problem is in fact a category mistake. It confuses scientific explanation with first person experience. We may (possibly will) arrive at a complete scientific explanation of consciousness (we are nowhere near, at the moment, but we are making some progress). When we do, however, we will still be unable to feed that explanation into a human mind and have that mind experience, say, redness.


The reason for this is no mystery at all, and does not point to any alleged limitation of scientific explanations. It’s the same reason why we can have a complete understanding of, say, how to ride a bicycle, and yet immediately fall down as soon as we mount one for the first time. Understanding derives from something that philosophers call “knowledge-that,” while the skill required to properly ride a bike is an instance of “knowledge-how.” The two are distinct, the first being conceptual, the second one literally embodied in our biological beings. There is absolutely nothing missing from our understanding if we can’t also ride. Confusing the two is a category mistake analogous to being shown the buildings, faculty, students, and administrators of Oxford University and still asking, “yes, but where is the university?”


The reason I went into some detail about all of this is that I think Chalmers-style philosophizing is what gives philosophy a bad name. Let me be clear about this: I am on record arguing that philosophy is not the same thing as science, and that not all meaningful philosophical questions have scientific answers (e.g., how should we think of personal identity? or: is modus ponens a kind of valid deductive inference? or: what is a cause?). I am, in other words, not advocating scientism.


But I am advocating the notion that philosophers take science seriously and refrain from inventing questions just because they wish to ignore science and convince themselves that they can go “deeper.” The kind of analytic metaphysics done by Chalmers (and Sethi) does not serve philosophy well, I think, as it ignores or dismisses science at its own peril. Some philosophers have been pushing an alternative approach, which they call “scientific” metaphysics, though I think it would be best described as science-informed metaphysics.


Philosophy, as I have argued recently, does make progress, and one example is the acknowledgment of the failure of the rationalist program in epistemology, of which Descartes was the last, valiant defender, and which is, in a sense, the reference point of analytic metaphysics. Let us then count Descartes’ failure as an example of progress, and not push philosophy back four centuries.

They’ve done it again: another embarrassing moment for the skeptic movement

In a few days I will be in Las Vegas. No, it’s not what you may be thinking about. I’ll be the token skeptic at one of the largest conferences of skeptics: CSICon, courtesy of the same people who publish Skeptical Inquirer magazine, for which I wrote a column on the nature of science for a decade. I say “token skeptic” because I have been invited by the organizers to talk about scientism, the notion that sometimes science itself is adopted as an ideology, applied everywhere, even where it doesn’t belong or is not particularly useful (here is a video about this).


I have been both a member and a friendly internal critic of the skeptic community since the late ‘90s, and I have been reminded of the value of such a gadfly-like role very recently, with the publication of yet another “skeptical” hoax co-authored by philosopher Peter Boghossian and author James Lindsay, this time accompanied by Areo magazine’s Helen Pluckrose. The hoax purports to demonstrate once and for all that what the authors disdainfully refer to as “grievance studies” (i.e., black studies, race studies, women studies, gender studies, and allied fields) is a sham hopelessly marred by leftist ideological bias. The hoax doesn’t do any such thing, although those fields are, in fact, problematic. What the stunt accomplishes instead is to reveal the authors’ own ideological bias, as well as the poverty of critical thinking by major exponents of the self-professed skeptic community. But let’s proceed in order.


Boghossian and Lindsay made a first, awkward attempt at this last year, by submitting a single fake paper entitled “The Conceptual Penis as a Social Construct.” It was a disaster: the paper was, in fact, rejected by the first (very low ranking) journal they submitted it to, and only got published in an unranked, pay-per-publish journal later on. Here is my commentary on why Boghossian and Lindsay’s achievement was simply to shine a negative light on the skeptic movement, and here is a panel discussion about their failure at the North East Conference on Science and Skepticism later on in the year. That did not stop major exponents of the skeptic movement, from Michael Shermer to Steven Pinker, from Richard Dawkins to Sam Harris and Jerry Coyne, from praising Boghossian and Lindsay, which is why I maintain the episode was an embarrassment for the whole community.


The hoax, of course, was modeled after the famous one perpetrated by NYU physicist Alan Sokal at the expense of the (non peer reviewed) postmodernist journal Social Text, back in the ‘90s, at the height of the so-called science wars. Sokal, however, is far more cautious and reasonable than Boghossian & co., writing about his own stunt:


From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty.


In fact, Sokal himself published some good criticisms of the conceptual penis hoax.


Not having learned their lesson at all, Boghossian & co. engaged in a larger project of the same kind, this time sending out 21 fake papers to a number of journals, mostly in women and gender studies. Two thirds of the papers were rejected. Of the seven accepted papers, one was a collection of (bad) poetry, and thus really irrelevant to the objective at hand; two were simply boring and confusing, like a lot of academic papers; one was a self-referential piece on academic hoaxes that one independent commentator actually judged to be making “somewhat plausible arguments”; and three more included fake empirical evidence. As Daniel Engber says in Slate:


One can point to lots of silly-sounding published data from many other fields of study, including strictly scientific ones. Are those emblematic of ‘corruption’ too?


Indeed, there are several examples of this in the literature, like a 2013 hoax that saw a scientific paper about anti-cancer properties in a chemical extracted from a fictional lichen published in several hundred journals. Hundreds, not just half a dozen!


It’s very well worth reading the entirety of Engber’s commentary, which exposes several problematic aspects of the Boghossian et al.’s stunt. The major issues, as I see them, are the following:


1. Hoaxes are ethically problematic, and I honestly think Portland State University should start an academic investigation of the practices of Peter Boghossian. In the first place, I doubt the study (which was published in Areo magazine, not in a peer reviewed journal!) obtained the standard clearance required for research on human subjects. Second, the whole enterprise of academic publishing assumes that one is not faking things, particularly data. So tricking reviewers in that fashion at the very least breaches the ethical norms of any field of scholarship.


2. The authors make a big deal of the ideological slant of the fields they target, apparently entirely oblivious to their own ideological agenda, which explicitly targeted mostly women and gender studies. Both Boghossian and Lindsay have published a series of tweets (see Engber’s essay) that nakedly display their bias. Is the pot calling the kettle black?


3. While we can certainly agree that it is disturbing that academic journals publish any paper that is more or less obviously fake, this is not a good criticism of the target fields. You know what a good criticism would look like? It would take the form of a serious, in-depth analysis of arguments proposed by scholars in those fields. But Boghossian & co. actually proudly proclaimed, after their first hoax, that they have never read a paper in “X studies,” which means that – literally – they don’t know what they are talking about. Here is one example of how to do it.


4. What Boghossian et al. really want to convey is that “X studies” are intellectually bankrupt, unlike other academic disciplines, particularly scientific ones. But as the example of the anti-cancer hoax mentioned above, and several others, show, this is simply not the case. Corruption of academic culture, resulting either from ideological bias or from financial interests (pharmaceutical companies are well known to establish entire fake journals to push their products) is not limited to certain small corners of the humanities.


5. In a related fashion – and surprisingly given that Boghossian actually teaches critical thinking – while the first hoax fatally suffered from a sample size of n=1, the new one is plagued by the simple fact that it has no control! Without a similar systematic attempt being directed at journals in other fields (particularly scientific ones) we can conclude precious little about the specific state of “X studies.”


That said, do I think that the fields targeted by Boghossian & co. are problematic? Yes, as I’ve written before. Here the most useful commentary on the hoax has been published in the New York Times by William Egginton. As he puts it:


The problem is not that philosophers, historians or English professors are interested in, say, questions of how gender or racial identity or bias is expressed in culture or thought. Gender and racial identity are universally present and vitally important across all the areas that the humanities study and hence should be central concerns. The problem, rather, is that scholars who study these questions have been driven into sub-specializations that are not always seen as integral to larger fields or to the humanities as a whole. Sometimes they have been driven there by departments that are reluctant to accept them; sometimes they have been driven there by their own conviction that they alone have the standing to investigate these topics.


That strikes me as exactly right. “X studies” programs should be integrated within a university, either (ideally) in broad multidisciplinary programs, or within the most suitable departments, such as History, Philosophy, Sociology, and the like.


Egginton blames academic hyperspecialization for the current sorry state of affairs in these fields, as well as the “publish or perish” attitude that has plagued academia for decades now. But guess what? “X studies” are most definitely not the only ones to suffer from these problems. They are endemic to the whole of modern academia, including the natural sciences. Indeed, we should be far more worried about the influence of ideology and big money on scientific fields than on small areas of the humanities. After all, it is in the name of science that we spend billions annually, and it is from science that we expect miracles of medicine and technology.


As Engber writes in the Slate commentary, notwithstanding the dire warnings of Boghossian, Pinker, Harris, Dawkins and all the others:


Surprise, surprise: Civilization hasn’t yet collapsed. In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)


The new Boghossian-led hoax is another example of a badly executed, ideologically driven stunt that targets narrow fields with little impact while leaving alone the big elephants in the room. It is, in the end, yet another embarrassment for the skeptical community, as well as a reflection of the authors’ own biases and narrow-mindedness.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago, a single sentient species has actually developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story, and while they still sit at the borderland between science and science fiction, some of them will surely become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed to be desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that, other things being equal, a portion of the variation in IQ within a population is statistically correlated with genetic variation. Nothing more, and that ain’t even remotely close enough to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which would be far more thorny than Torres lets on.


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While surely there is a risk of one such attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us into this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing millions of people, thanks to science and technology. The culmination of that period was the only nuclear attack on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist; the US government, to be specific.
Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) to moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes it possible to kill at a distance by pushing a button. It is also what allows many of us to reconcile the otherwise irreconcilable fact that the top 10% of the human population lives by standards historically reserved for kings and queens, while the rest lives below or barely above poverty, subject to preventable disease, or is killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would suffer from less empathy if they had a higher IQ, he remembers that some real life examples of such actors, like the Unabomber Ted Kaczynski, actually do have high IQs, and yet they are still deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench in Pinker’s, Bostrom’s and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH, it’s not that I trust others a hell of a lot more) can come pretty darn close. But Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi, at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.


Bostrom’s calculations are pulled out of thin air, to be charitable. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more doubtful that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby nova explosion isn’t part of the mix), while the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.


Toward the end of the essay we simply reach Pindaric levels of imaginative flight:

There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth, our only planet, which we ourselves are currently destroying, and talking some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) remain unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones apt for use by large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our minds and center ourselves so that we can live better and more meaningful lives, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course, that’s always helpful.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But, unlike Sagan, he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently afflicting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great powers comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity give your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.


Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above-mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentary on comic book superheroes qualifies as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to bear on debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.


For another thing, “debunking” strongly conveys the impression that one thinks the people who believe in the notion to be debunked are simpletons barely worth considering. Perhaps some are, but I’ve met plenty of really smart creationists, for instance, a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as on display in the short Colbert video that got me started on this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized some 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, though that isn’t going to do anyone any good.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to establishing an emotional connection with your audience. This is something scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But that is the case only if the agent is unscrupulous and manipulative, not if he is acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, of whom Neil deGrasse Tyson most certainly is one. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?