
Vindicating science by bringing it down, part II

As we have seen in part I, scientists and early philosophers of science adopted an idealized conception of science, which focuses on the successful end result of scientific activity, and in which there is no place for any influence of the social, or indeed, for any of the actors involved in the scientific endeavor. All of that is swept under the carpet. But the fact that the eventual goal of science is to eliminate the social does not imply that social factors have no important role to play in the process. Science, after all, is nothing but the concerted effort of (sometimes not so) humble human brains, none of which was designed to unravel the mysteries of the world on its own.


In the past couple of decades, science has been brought down to earth again by sociologists, cognitive scientists, evolutionary psychologists, and historians. Unfortunately, the opposition between the rational and the social is still besetting the naturalization of science. The backlash against the traditional conception of science, epitomized by the logical positivists and their intellectual heirs, has swung the pendulum in the opposite direction. Still under the spell of the dichotomy between rational and social which we discussed in part I, many science naturalizers have assumed that, as they bring science down to earth, its pretensions will start to unravel.


The truth of the matter, however, is that all beliefs, the true and the false ones, or the scientific and the pseudoscientific ones, have a causal history, involving cognitive and social factors. If we want to understand how people come to believe stuff, even simple and obvious propositions (e.g., dolphins are mammals) are in need of an explanation. Likewise, if we want to understand how scientists have been able to unearth all sorts of true beliefs about the world, we need to understand what kinds of people scientists are, what kind of cognitive strategies they bring to bear on their research questions, what the social organization of science is, and how hypotheses are tested and evaluated within a scientific community.


The development of a cognitively and socially rich account of science has been delayed by the widespread misconception that such an account would compromise the epistemic standing of science. Because of our habit of pitting social and rational explanations against each other, we assume that the intrusion of sociology and psychology into the citadel of science will eat away at its foundations.


At the heart of this opposition between the social and the rational lies an individualist conception of reason, shared by both camps in the science wars. The notion of the social as a contaminant of the rational, to which even social constructivists seem to subscribe, is more indebted to logical positivism than the latter would like to admit. Radical sociologists during the science wars of the ‘90s were led astray by the very same intuition that made the logical positivists allergic to social explanations – only now they were welcoming the opposite conclusion. As philosopher David Hull put it:


“Because science did not possess the ideal characteristics that the ‘positivists’ insisted that it should, knowledge-claims made by scientists had no more warrant than those of magicians, faith healers, and politicians.”


Both camps are wrong. The simple opposition between rational and social-psychological explanations goes against the grain of naturalism. Scientific knowledge does not drop out of thin air: it is embodied in real human beings. If our best scientific theories in some way reflect the world out there, this must have come about through the usual perceptual capacities and cognitive operations, with available technological equipment, and in a complex network of social interactions. How else could it have come about?


The sociologists are right that science is a deeply social endeavor, and that all scientific knowledge is in this sense “socially constructed.” No single individual marooned on a desert island, no matter how brilliant, would be capable of finding out any of the significant truths about the universe that we currently possess. Though the history of science has known some solitary geniuses, working in relative isolation from their peers, even they were still engaged in a collective enterprise, in the sense that they were building on the work of numerous predecessors.


The realization that science is a deeply social enterprise, and that scientific consensus is reached through coalition forming and competition, should not surprise us. The question is what particular social organization is exemplified by science, and whether this is conducive to its epistemic aspirations. Scientists are human beings, warts and all. If scientists collectively succeed in finding out significant truths about the universe, while other endeavors have failed in this regard, this must have come about through the particular social dynamics of science.


Many scientists believe that being objective and impartial are the cardinal virtues of science, and that bias and prejudice make one unsuitable for scientific work. Although the culture of science rightly encourages these virtues, they are by no means necessary for the success of science. Indeed, a certain modicum of bias in this or that direction may actually facilitate the progress of science.


It is not a problem that an individual scientist is biased, or emotionally attached to a particular hypothesis. The social organization of science makes sure that these biases will be balanced by others tilting in different directions. A standard example of this is the reorientation of aspects of medical research as a result of feminist epistemological critiques: it is now increasingly acknowledged that, for example, we cannot conduct drug tests solely on a population of (mostly white, middle-aged) men and simply assume that the results can be extrapolated to other human biological populations. Again, Hull:


“The objectivity that matters so much in science is not primarily a characteristic of individual scientists but of scientific communities. Scientists rarely refute their own pet hypotheses, especially after they have appeared in print, but that is all right. Their fellow scientists will be happy to expose these hypotheses to severe testing.”


A desire for fame and success is often viewed as unworthy of a real scientist. The goal of science is truth for its own sake. Although such base motives may indeed compromise one’s scientific work, if left unchecked, there is no convincing reason why they would stand in the way of significant discoveries. Even spite, jealousy, and the desire to humiliate a rival can result in excellent scientific work, if the competing parties know that they have to abide by certain rules, and will be called out whenever they violate them.


In a good number of cases, social influences are not an impediment to the epistemic ambition of science, but rather a facilitator of scientific progress. Science harnesses some of the baser motives of human behavior in the service of truth, making sure that the interplay of scientists’ individual interests and biases mostly aligns with epistemic progress.


Darwin may have been right from the start about the fact of common ancestry, but his theory would not have carried the day as swiftly as it did without his indefatigable efforts to enlist allies to the cause and to engage and negotiate with his critics. All the parties in the dispute were trying to enlist nature as their ally, but Darwin of course had one big advantage: nature really was on his side all along. In the long run, therefore, as evidence accumulates and factions wax and wane, the social influences on science will be filtered out, and rightly so.

Vindicating science by bringing it down, part I

What is the role of the social in science? If one consults science textbooks, one will find that the social dimension of scientific knowledge is conspicuously absent. Science is supposed to reflect the way the world really is, independent of our petty human lives. It is, in the classical view, the epitome of a rational endeavor, free from social influences. Of course, science is carried out by human beings, but their individual backgrounds and social lives are simply taken to be irrelevant. What matters are the intellectual merits of a theory, not who conceived it. What matters is the evidence, not who gathered it. This stark contrast between the social and the rational can be found in philosophical accounts of science as well. Because social factors are rendered invisible in the end products of science, many philosophers have underestimated their constructive role in the acquisition of scientific knowledge.


In recent decades, sociologists and historians have tried to bring science back to earth, but many of them have unwittingly bought into the same simplistic opposition. Social influences on science have been relished by its cynical critics and resisted by its admirers, and for the same reason: the fear (or hope) that they would destroy the credentials of science. In a paper I co-wrote with my frequent partner in crime, Maarten Boudry (published in Perspectives in Science and Culture, edited by K. Rutten, S. Blancke, and R. Soetaert, Purdue University Press), we discuss the historical roots of this opposition, culminating in the sorry spectacle of the science wars. This post provides extensive excerpts from that paper; I hope you’ll enjoy it.


When do we feel the need to explain why someone believes something? Not all beliefs held by our fellow human beings appear to produce an epistemic itch. People believe that dolphins are mammals, that the earth orbits around the sun, and that World War II ended in 1945, but we rarely wonder how they arrived at such homely truths. Beliefs such as these are just obvious, and no sane person would dispute them. That said, who told you when WWII ended? Where did you acquire the belief that dolphins are mammals, or that the earth goes around the sun? Your sources for these convictions are hard to track down.


Memories of these facts are called semantic by psychologists, to distinguish them from episodic memories, related to things that happen to us. Episodic memories carry a tag with the time, place, and situation in which we acquired them. Not so for semantic memories, likely because doing so would be a waste of brain resources.


Take the belief that coal is black. If we asked you what your reasons are for believing that, you would probably be puzzled. The first answer that comes to mind is: “Why, because it is black, of course!” It doesn’t matter how you came to know that. You could have learned it in any number of ways. Anyone in doubt about the color of coal can quickly retrieve the answer through any number of sources.


Because the truth of such beliefs is obvious, we rarely question how other people acquired them, or how they can justify them. It seems as if such beliefs just drop out of thin air, without much in the way of a causal history.


That said, how do we account for other kinds of beliefs (as held by others, of course)? Beliefs that are false, quirky, idiosyncratic, or plainly irrational produce an epistemic itch. We want to explain how people end up embracing them. Who told him such nonsense? Did he fall for one of those conspiracy theories circulating on the internet?


We resort to special explanations only when something goes wrong. True beliefs that are part of common knowledge are taken at face value, but false and foolish beliefs cry out for an explanation. This is where social and cultural explanations come in. Such explanations, however, are not invoked when we account for true and justified beliefs. Only when rationality breaks down, it seems, is a space opened up for psychological explanations to fill. We seem to think that there is an association between the irrational and the social, but not between the rational and the social.


In the classical view, science is the epitome of reason. It is objective and impartial. It is ruthless in its indifference to what we fear or fancy. When it comes to the content of science, nature has the final say in the matter. Social, political, and ideological influences on science are anathema. When writing science textbooks, and for many other purposes, the social influences on the development of scientific theories can be safely ignored, just as with many of our mundane beliefs about the world. Sure, there is a story to be told about how scientists pooled their efforts to acquire this or that piece of knowledge, who published it first, who convinced whom, and so on. But the details of this story make no difference: an alternative history of science would ultimately have led to the same result.


As a result, especially in the natural sciences, students are simply taught scientific theories as if they had descended from some Platonic heaven. The vagaries of scientific history, the false starts, wrong turns, and dead ends, the protracted controversies between rival views, the forerunners and pioneers of scientific ideas – all of this is rendered invisible.


For a long time, philosophers of science also treated science in splendid isolation from the social world. Hans Reichenbach, one of the major proponents of logical positivism, taught us to strictly separate the context of discovery from the context of justification. The first deals with the historical conception of a scientific hypothesis, and is of little interest to philosophers trying to understand the logic of science. Philosophers of science should be solely concerned with how a scientific hypothesis, once it appears on the scene, relates to observations, whether it is internally consistent, whether it is falsifiable, and so on.


(next: bringing science down to earth)

Why Alex Rosenberg is wrong about just about everything

Philosophy is my second academic career. My first one was in science, evolutionary biology, to be specific. Depending on how you look at it, this makes me either unusually competent in two normally widely distinct areas of academic scholarship, or barely making the passing grade in both. Be that as it may, I have made it a personal hobby to observe my new profession from the outside, as much as that is possible, sort of like an anthropologist looking into a different yet sufficiently familiar culture.


One of the things I’ve noticed is that philosophers are unusually critical of their own field, with a number of prominent ones, both now and historically, actually arguing that it should be dismantled, usually in favor of science (or linguistics). I will not get into that debate here, as I’ve covered it in detail before.


Another thing I’ve noticed is the high frequency of colleagues who are fascinating for being very smart, well regarded in the field, and yet – in my admittedly non-humble opinion – completely wrong. Perhaps the quintessential example is David Chalmers, he of “philosophical zombies,” “hard problem of consciousness,” “singularity,” “mind uploading,” “panpsychism,” and similar inane notions. But this post isn’t about David.


It’s about Alex Rosenberg. Alex is on the faculty at the prestigious Duke University in North Carolina, and someone I think should get a medal (together with Chalmers, of course) for the highest number of wrongheaded papers in a philosophical career. I met him a few years ago during a two-day conference on “Moving naturalism forward,” organized by cosmologist Sean Carroll. The conference was fun, but Alex kept trying to convince us of a notion that he called “happy nihilism,” according to which the universe is devoid of meaning (of course it is, meaning is a human construct), free will doesn’t exist (of course it doesn’t, if one uses the term in the contra-causal sense), and yet, somehow, we can still decide to take all of this on board and be happy.


Setting aside the devastating criticism Alex got at the conference from Dan Dennett, Owen Flanagan, Terrence Deacon, and others, this is also the same bleak picture of the world he presented in his dismal The Atheist’s Guide to Reality, which I reviewed for The Philosophers’ Magazine. Here is a taste of my thinking at the time:


“As a former scientist and now philosopher, I have chastised some of my colleagues for their scientistic attitude. … Thanks to [Rosenberg], I can no longer be accused of fighting a straw man. Rosenberg’s attempt is valiant and will give people much to think about. Except, of course, that according to Rosenberg we cannot really think such things because scientism ‘says’ that chunks of matter cannot possibly produce insights about anything at all, on penalty of violating physicalism.”


Never mind that such statements are obviously self-contradictory. What was I doing while reading Alex’s book if not thinking about what he wrote? And what was he doing while writing the book? These are all illusions, claims Alex, apparently using the word “illusion” in a novel and profound way that the rest of us are unaware of. I continued my review:


“Take Rosenberg’s denial of the existence of conscious decision-making. Consciousness for him is an epiphenomenon of the brain’s activity. … His major piece of evidence? Benjamin Libet’s experiments in cognitive science. … We are informed [that] ‘consciousness is probably too big a deal not to have been organized by natural selection to solve some design problem or other, perhaps several. Exactly what its functions are, what design problem it solves, neuroscience has not yet figured out.’”


Seriously? Let us set aside that Alex completely misinterprets the implications of Libet’s famous experiments, even contradicting Libet’s own interpretation. He admits that natural selection must have evolved consciousness – which depends on brain structures that are exceedingly metabolically costly – for some reason, but he can’t think of one. Hmm, let’s see, how about the ability to reflect on our actions, make deliberate decisions, plan things ahead? Oh right, those are all illusions. Naturally. Me again:


“For Rosenberg there is no free will, morality, meaning, aboutness and so on because, you see, ‘the physical facts fix all the facts.’ We are never told exactly what this slogan actually means. Well, I’m a big fan of physics, but last time I checked, it didn’t, for instance, ‘fix’ the fact that 2+2=4.”


Nor does physics fix anything at all in the rest of mathematics. Or in logic. Continuing the review:

“Rosenberg thinks that economics, the social sciences (not to mention literature, the arts, and his own field of philosophy) are all ‘stories’ that may entertain us, but that should by no means be taken seriously. He doesn’t seem to realize that science – not to mention his very book – also tells stories … because that is the way human beings communicate knowledge and achieve understanding. Science is the right type of story if you want to know about cosmology, but not if you want to learn logic.”


Or history. Or art. I concluded:


“Rosenberg’s scientistic nihilism is analogous to radical skepticism about reality. … It’s thought provoking, there is no scientific evidence that can possibly rule in its favor or against it, and it is best promptly forgotten so that you can get back to thinking about the things that really matter to you.”


Alex, impervious to criticism (well, “he” is only a bunch of subatomic particles without will or aboutness, so – to be fair – how could he change his mind, especially given that the latter is an illusion?), has continued in the same vein in recent years. Just in the last few weeks I’ve read two more articles by him that finally prompted me to write this essay.


The first one, published in The Verge, is actually an interview conducted by Angela Chen, in which Alex “explains” how our addiction to stories keeps us from understanding history. The interview is about (but wait, nothing is about anything!) his book How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. First problem: whenever I hear the words “the neuroscience of…” I instinctively reach for my gun (fortunately, I’m a quasi-pacifist, and I don’t own guns). That’s because nowadays a lot of nonsense is written in the name of neuroscience, unfortunately.


The main thrust of Alex’s argument is that neuroscience undermines what is often referred to as our “theory of mind,” the ability to guess other people’s thoughts and motivations. Since historians deploy – without realizing it – a theory of mind whenever they talk about this or that historical figure’s motivations for acting one way or another, their theorizing is made hopelessly obsolete by the modern science of the brain.


Except that Alex is making an astounding mistake here, very similar to the one made, for instance, by fellow atheist Sam Harris in his The Moral Landscape (see my review here). He is mistaking a mechanistic explanation of X for the explanation of X, apparently forgetting (or simply outright denying) that explanations – which are human constructs, let us not forget – can be given at different levels, and using different language, depending on how useful they are to the target recipients, i.e., other human beings.


Let me give you an analogous example to show just how bizarre Alex’s claim that neuroscience does away with historical explanations really is. Imagine we were interested in the “neural correlates,” as cognitive scientists call them, of mathematical problem solving. We can stick someone – even a mathematician – into an fMRI machine and find out which areas of her brain light up when she is engaged in simple or complex mathematical thinking, from solving a basic equation to proving Fermat’s Last Theorem.


Now, we will surely find some such neural correlates. We have to, since everything we do, and certainly any kind of higher, conscious thinking, has to be done by way of engaging one part or another of our brains. Otherwise, it would be magic.


But now imagine that our neuroscientist completes his experiment, gets the mathematician out of the fMRI machine, and gingerly informs her that mathematicians are no longer needed, because neuroscience has discovered which areas of the brain they use to solve mathematical problems. Crazy, right? Well, it’s no different from Alex’s reasoning for getting rid of historians, or Harris’ “argument” (I’m using the word charitably) for concluding that science, and neuroscience (which just happens to be his own field) in particular, can now answer moral questions. Ethicists can go play golf.


A few weeks later, Alex did it again! This time in an article he penned himself for 3:AM Magazine, entitled “Is neuroscience a bigger threat than artificial intelligence?” Oh boy. It’s the same basic idea that he has been peddling since The Atheist’s Guide to Reality, though – as in The Verge article – this time it isn’t physics that “fixes all the facts,” it is neuroscience that answers all the questions.


After acknowledging the (alleged, and I think way overblown) threat posed by future advanced AI to humanity (you know, the Singularity, again, Terminator and that sort of thing), Alex informs us that the real existential threat to humanity comes from the research of four Nobel-winning neuroscientists: Eric Kandel, John O’Keefe, Edvard Moser, and May-Britt Moser. What have they done?


“Between them they have shown that the human brain doesn’t work the way conscious experience suggests at all. Instead it operates to deliver human achievements in the way IBM’s Watson does. Thoughts with meaning have no more role in the human brain than in artificial intelligence.”


By now you have surely guessed that this is, again, about the alleged failure of the theory of mind, and that, once again, Alex is simply confusing different levels of explanation, an elementary mistake that you would think a trained philosopher simply wouldn’t make.


The fascinating thing is that Alex actually acknowledges that there is quite a bit of evidence for the theory of mind:


“Several sources of evidence suggest that we have an innate mind-reading ability more powerful than other primates. It’s an ability to track other people’s actions that is triggered soon after birth. Child psychologists have established its operation in pre-linguistic toddlers, while primatologists have shown its absence in other primates even when they exceed infants in other forms of reasoning. Social psychologists have established deficiencies in its deployment among children on the Autism spectrum. fMRI and transcranial magnetic stimulation studies have localized a brain region that delivers this mind-reading ability. Evolutionary anthropology, game theory and experimental economics have established the indispensability of powerful mind reading for the cooperation and collaboration that resulted in Hominin genus’s rapid ascent of the African savanna’s food chain.”


None of this matters, because neuroscience has (allegedly) “revealed” to us that the theory of mind is “quite as much of a dead end as Ptolemaic astronomy.” Why? Because Kandel and colleagues have shown that if you look into the brain you won’t find beliefs, desires, or reasons, but only specific, dynamic neural pathways.


No kidding, Sherlock. That’s because what we call beliefs, desires and reasons are instantiated in the brain by way of specific neural pathways. The neurobiological level is more basic – but, crucially, no more true – than the psychological one. The two levels provide complementary, not competing, explanations of the same phenomenon. One explanation is more useful to biologists and neuroscientists, another one to psychologists, historians, and art critics, among others.


It’s like the much abused and misunderstood example of the chair in which you may be sitting at this particular time. Physics tells us that said chair is “really” just a collection of quarks, interacting in the way prescribed by the fundamental laws of nature. This is certainly the case, but by a long shot not the whole picture. Your chair is also “solid” at the level of analysis pertinent to human beings who wish to sit down in order to read a blog post, not to mention those other human beings who designed and built the chair itself. The chair is most definitely not an illusion, just because it can be (usefully, depending on the context) described in different ways. Explanatory complementarity, not competition.


A side note, as a biologist, on Kandel et al.’s indubitably fascinating scientific work: it was done on rats, because the pertinent experiments are too invasive and unethical to be conducted on human beings. With his usual braggadocio, Alex informs us that this doesn’t matter at all:


“Of course you could argue that what Nobel Prize winning research shows about rats is irrelevant to humans. But you’d be flying in the face of clinical evidence about human deficits and disorders, anatomical and physiological identities between the structure of rat and human brains, and the detailed molecular biology of learning and information transmission in the neuronal circuitry of both us and Rattus rattus, the very reasons neuroscientists interested in human brains have invested so much time and effort in learning how rat brains work. And won Nobel Prizes for doing it.”


I’ve got news for Alex: while, again, Kandel et al.’s research is most certainly important, enough to win the Nobel, translating things from rats to humans is definitely not that obvious or straightforward. It is simply false that rat and human brains have a large number of anatomical and physiological identities, as the perusal of any introductory book on mammalian anatomy will readily confirm. Heck, our brains are substantially different from those of higher primates like chimpanzees and bonobos, which is a major reason we need to be careful when we extrapolate from the latter (let alone rats) to humans. For instance, we have little to go by, in terms of comparative brain anatomy and physiology, to explain exquisite and crucially human traits like language (not just communication) and iterative cultural evolution. Take a look at this book by my colleague Kevin Laland to appreciate just how careful biologists (as distinct from some philosophers) are when it comes to interspecies comparisons.


Don’t get me wrong. Alex Rosenberg is a really smart guy, and his misguided writings are necessary in order to sharpen our thinking about all sorts of matters. After all, the British Royal Society awarded physicist Fred Hoyle (the author of the steady state theory in cosmology, which for a while rivaled the big bang theory) a medal for the highest number of wrong ideas proposed in a scientific career. This was not an example of British sarcasm; they meant it in all seriousness, as Hoyle’s theories have arguably played an important role in advancing cosmology. Perhaps we should establish a similar prize in philosophy. I have a couple of candidates in mind…

The problem with (some) philosophy and metaphysics of perception

Does the world exist independently of any given human mind? Most of us – including many philosophers – would answer yes, and even think the question itself more than a bit weird. But of course it is possible that we live, say, in a simulation imposed on us by external entities, a la The Matrix. Or perhaps we are all just instantiations of a single cosmic mind, which we could simply refer to as God.


This, however, is not what this essay is about. Let us assume for the sake of discussion that the world does exist independently of the human mind. The next, and somewhat more interesting, question is: how do we perceive such a world? In the philosophy and metaphysics of perception one position is referred to as naive realism, the notion that we can have, and usually do have, veridical perceptions about the world. What you see is what you get, so to speak.


I think this position is rather hopeless, and yet it was vigorously defended by a colleague of mine at a recent colloquium at City College. Umrao Sethi, Assistant Professor of Philosophy at Lehman College in New York, gave a talk entitled “Sensible Over-Determination.” In the following I wish to recount an interesting exchange Sethi and I had during the q&a following her talk, because it puts two views not just of the philosophy of perception, but of philosophy itself, in stark contrast with each other. Please understand that I am not singling out Sethi, but simply using her particular talk to make a more general point about (some) contemporary philosophy.
First, though, here is the abstract of Sethi’s talk. It’s rather technical, so I will explain the relevant bits immediately below:


“I argue that sensible qualities are ‘ontologically flexible’ – they can be instantiated either in virtue of having material bearers or in virtue of being perceived by minds. Given that these qualities have disjunctive conditions on instantiation, a single instance of such a quality can be over-determined – that is, have its existence simultaneously guaranteed both in virtue of having a material bearer and in virtue of being perceived. I make use of this metaphysical insight to show that, contrary to received wisdom, naïve realism is compatible with the possibility of qualitatively matching hallucinations. Both hallucinations and ordinary perceptions acquaint us with instances of the same sensible qualities. While the instances in hallucination are mind-dependent, those in veridical perception are not. The latter are over-determined, and, thus, mind-independent – they can continue to exist unperceived, because, in addition to the minds that perceive them, their existence is guaranteed by the material objects that are their bearers.”


The two key terms here are “hallucinations” and “sensible qualities.” What Sethi was arguing, among other things, is that hallucinations do not present a problem for naive realists. But why would they? The problem goes all the way back to Descartes and his famous thought experiment based on radical doubt. Descartes realized that he could be mistaken about a lot of things, particularly his sensorial perceptions about the world. We all know, for instance, that we can have dreams or hallucinations during which, for a time at least, we cannot tell apart physical reality from what is being conjured up by our minds. If this is the case, Descartes thought, then we can never be absolutely sure of any aspect of our alleged knowledge of the world.


His solution to the problem, famously, was his cogito argument: I may be mistaken about the external world, and even about mathematical or logical truths (because of systematic biases in my reasoning). But there is no way I can be mistaken about the fact that I am a thinking being of some sort. Cogito, ergo sum: I think, therefore I am.


Descartes’ was the last great attempt in philosophy to establish certain foundations for knowledge. And it failed. Sure, I know that I think, but – contra Descartes – that’s pretty much all I will ever know with certainty. There is no way to rebuild knowledge of mathematics, of logic, and even less so of empirical reality, starting from that meager foundation. Most philosophers since have therefore accepted the notion that our knowledge is provisional, and that we could, in principle, be mistaken about all sorts of things.


Not so naive realists, who cling to the hope that we get – most of the time – “veridical” (i.e., true) impressions of the external world via our senses. How so? One path for the naive realist is to reject the argument from hallucination, and show that we can always tell the difference between what our minds conjure and what we perceive about the world.


One way to do this is simply to deny that hallucinations ever feel exactly the same as impressions from the external world. I’ve never had a hallucination, but the literature on the subject, it seems to me, is pretty clear: at least some of the time, hallucinations (spontaneous, drug-induced, or especially if induced by sensory deprivation) are subjectively indistinguishable from the real thing. In other words, the problem can’t just be swept under the rug.


Sethi conceded this point at least for the sake of discussion, but insisted that we can nevertheless tell the difference. This is where sensible qualities come in. The quintessential one, the one she used the most during her talk, and the very same one that sparked our little debate during the q&a, is color.


Consider a red apple. You may hallucinate a red apple, and “perceive” redness associated with the apple, even though there is no actual apple out there. Or you may perceive redness in the apple because there is, in fact, a red apple that you are looking at. Sethi’s point is that we can tell the difference between these two cases because the second one is “overdetermined,” meaning that there are two reasons we have the first person experience of redness in the case of real objects: (i) because our mind is conjuring one (just like in the case of the hallucination), and (ii) because there is a red apple out there (unlike the case of the hallucination). It is this over-determination that implies mind-independence, according to Sethi.


The problem is – as I pointed out in the q&a – that “redness” is actually not out there (though the apple may be). We know quite a bit about the physiology of color perception and we understand that what is out there are the surface properties of the apple, as well as the characteristics of light at certain wavelengths that can be perceived by the human eye. “Color,” however, results from the specific interaction between those external physical properties and the specific, evolved, sensorial apparatus of a human being. That apparatus includes of course our eyes, with their anatomical and cellular structure, the optic nerve that connects to the brain, and a number of brain systems devoted to the reception and internal representation of what we perceive. Color is not out there, and it is not mind-independent. Ever.


We know this from a variety of sources, including the fact that some human beings (like yours truly) are color-blind (partially, in my case), so that they perceive the exact same external object differently. We also know that other animals perceive a much narrower range of colors (bulls, for instance, cannot distinguish red: when they get mad at someone waving a red cloth in front of them they are reacting to the movement, not the redness of the cloth). Yet other animals perceive colors differently (bees, for example, see ultraviolet, which we cannot perceive).


So what modern science tells us is that the human mind creates a virtual simulation of external reality all the time, and that the veridicality of such a simulation depends not just on whatever is out there, but on the details of the human perceptual and cognitive systems. We live in a perpetual Matrix of our own creation, so to speak. That is why we cannot, in fact, always tell the difference between hallucinations and the real world, even if a real, mind-independent world does exist. Descartes was right, and naive realists are wrong.


Here is where things got interesting outside of the narrow scope of an academic talk in a philosophy department. When I pointed out the above to Sethi, her response was that she does not accept the physiological account! Science, she said, is itself based on metaphysical assumptions, and one is free to reject such assumptions. She evidently chose the latter course.


But hold on a second. Yes, science takes certain metaphysical assumptions on board without examination, like the very assumption that reality is mind-independent. But I thought (naively, as it turns out!) that a contemporary philosopher is best served by taking very, very seriously whatever science tells us about whatever issue we are discussing philosophically. In this case, to ignore modern physiology and brain science seems hazardous, to say the least.


Curious, I asked Sethi why she chose the anti-science path. In other words, what were her metaphysical commitments? Her answer floored me, but also gave me an interesting insight into my own profession. She said that if she went along with standard physiology she would have to agree that consciousness is a biological phenomenon rooted in the brain and how it interacts with the external world.


Again naively, I said, obviously, what else could it be? This is where she mentioned the so-called “hard problem” of consciousness, made famous by NYU philosopher of mind David Chalmers. Here is how the problem is summarized in the excellent entry in the Internet Encyclopedia of Philosophy:


“It is the problem of explaining why there is ‘something it is like’ for a subject in conscious experience … The usual methods of science involve explanation of functional, dynamical, and structural properties – explanation of what a thing does, how it changes over time, and how it is put together.  But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science.”


Except that I don’t think one can “meaningfully” ask that at all. I have made the argument that the so-called hard problem is in fact a category mistake. It confuses scientific explanation with first person experience. We may (possibly will) arrive at a complete scientific explanation of consciousness (we are nowhere near, at the moment, but we are making some progress). When we do, however, we will still be unable to feed that explanation into a human mind and have that mind experience, say, redness.


The reason for this is no mystery at all, and does not point to any alleged limitation of scientific explanations. It’s the same reason why we can have a complete understanding of, say, how to ride a bicycle, and yet immediately fall down as soon as we mount one for the first time. Understanding derives from something that philosophers call “knowledge-that,” while the skill required to properly ride a bike is an instance of “knowledge-how.” The two are distinct, the first being conceptual, the second one literally embodied in our biological beings. There is absolutely nothing missing from our understanding if we can’t also ride. Confusing the two is a category mistake analogous to being shown the buildings, faculty, students, and administration of Oxford University and still asking, “yes, but where is the university?”


The reason I went into some detail into all of this is because I think that Chalmers-style philosophizing is what gives philosophy a bad name. Let me be clear about this: I am on record arguing that philosophy is not the same thing as science, and that not all meaningful philosophical questions have scientific answers (e.g., how should we think of personal identity? or: is modus ponens a kind of valid deductive inference? or: what is a cause?). I am, in other words, not advocating scientism.


But I am advocating the notion that philosophers take science seriously and refrain from inventing questions just because they wish to ignore science and convince themselves that they can go “deeper.” The kind of analytic metaphysics done by Chalmers (and Sethi) does not serve philosophy well, I think, as it ignores or dismisses science at its own peril. Some philosophers have been pushing an alternative approach, which they call “scientific” metaphysics, though I think it would be best described as science-informed metaphysics.


Philosophy, as I have argued recently, does make progress, and one example is the acknowledgment of the failure of the rationalist program in epistemology, of which Descartes was the last, valiant defender, and which is in a sense the reference point of analytic metaphysics. Let us then count Descartes’ failure as an example of progress, and not push philosophy back four centuries.

They’ve done it again: another embarrassing moment for the skeptic movement

In a few days I will be in Las Vegas. No, it’s not what you may be thinking about. I’ll be the token skeptic at one of the largest conferences of skeptics: CSICon, courtesy of the same people who publish Skeptical Inquirer magazine, for which I wrote a column on the nature of science for a decade. I say “token skeptic” because I have been invited by the organizers to talk about scientism, the notion that sometimes science itself is adopted as an ideology, applied even where it doesn’t belong or isn’t particularly useful (here is a video about this).


I have been both a member and a friendly internal critic of the skeptic community since the late ‘90s, and I have been reminded of the value of such a gadfly-like role very recently, with the publication of yet another “skeptical” hoax co-authored by philosopher Peter Boghossian and author James Lindsay, this time accompanied by Areo magazine’s Helen Pluckrose. The hoax purports to demonstrate once and for all that what the authors disdainfully refer to as “grievance studies” (i.e., black studies, race studies, women studies, gender studies, and allied fields) is a sham hopelessly marred by leftist ideological bias. The hoax doesn’t do any such thing, although those fields are, in fact, problematic. What the stunt accomplishes instead is to reveal the authors’ own ideological bias, as well as the poverty of critical thinking by major exponents of the self-professed skeptic community. But let’s proceed in order.


Boghossian and Lindsay made a first, awkward attempt at this last year, by submitting a single fake paper entitled “The Conceptual Penis as a Social Construct.” It was a disaster: the paper was, in fact, rejected by the first (very low ranking) journal they submitted it to, and only got published in an unranked, pay-per-publish journal later on. Here is my commentary on why Boghossian and Lindsay’s achievement was simply to shine a negative light on the skeptic movement, and here is a panel discussion about their failure at the North East Conference on Science and Skepticism later on in the year. That did not stop major exponents of the skeptic movement, from Michael Shermer to Steven Pinker, from Richard Dawkins to Sam Harris and Jerry Coyne, from praising Boghossian and Lindsay, which is why I maintain the episode was an embarrassment for the whole community.


The hoax, of course, was modeled after the famous one perpetrated by NYU physicist Alan Sokal at the expense of the (non peer reviewed) postmodernist journal Social Text, back in the ‘90s, at the height of the so-called science wars. Sokal, however, is far more cautious and reasonable than Boghossian & co., writing about his own stunt:


From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty.


In fact, Sokal himself published some good criticisms of the conceptual penis hoax.


Not having learned their lesson at all, Boghossian & co. engaged in a larger project of the same kind, this time sending out 21 fake papers to a number of journals, mostly in women and gender studies. Two thirds of the papers were rejected. Of the seven accepted papers, one was a collection of (bad) poetry, and thus really irrelevant to the objective at hand; two were simply boring and confusing, like a lot of academic papers; one was a self-referential piece on academic hoaxes that one independent commentator actually judged to be making “somewhat plausible arguments”; and three more included fake empirical evidence. As Daniel Engber says in Slate:


One can point to lots of silly-sounding published data from many other fields of study, including strictly scientific ones. Are those emblematic of ‘corruption’ too?


Indeed, there are several examples of this in the literature, like a 2013 hoax that saw a scientific paper about anti-cancer properties in a chemical extracted from a fictional lichen published in several hundred journals. Hundreds, not just half a dozen!


It’s very well worth reading the entirety of Engber’s commentary, which exposes several problematic aspects of Boghossian et al.’s stunt. The major issues, as I see them, are the following:


1. Hoaxes are ethically problematic, and I honestly think Portland State University should start an academic investigation of the practices of Peter Boghossian. In the first place, I doubt the study (which was published in Areo magazine, not in a peer reviewed journal!) obtained the standard clearance required for research on human subjects. Second, the whole enterprise of academic publishing assumes that one is not faking things, particularly data. So tricking reviewers in that fashion at the very least breaches the ethical norms of any field of scholarship.


2. The authors make a big deal of the ideological slant of the fields they target, apparently entirely oblivious to their own ideological agenda, which explicitly targeted mostly women and gender studies. Both Boghossian and Lindsay have published a series of tweets (see Engber’s essay) that nakedly display their bias. Is the pot calling the kettle black?


3. While we can certainly agree that it is disturbing that academic journals publish papers that are more or less obviously fake, this is not a good criticism of the target fields. You know what a good criticism would look like? It would take the form of a serious, in-depth analysis of arguments proposed by scholars in those fields. But Boghossian & co. actually proudly proclaimed, after their first hoax, that they have never read a paper in “X studies,” which means that – literally – they don’t know what they are talking about. Here is one example of how to do it.


4. What Boghossian et al. really want to convey is that “X studies” are intellectually bankrupt, unlike other academic disciplines, particularly scientific ones. But as the example of the anti-cancer hoax mentioned above, and several others, show, this is simply not the case. Corruption of academic culture, resulting either from ideological bias or from financial interests (pharmaceutical companies are well known to establish entire fake journals to push their products) is not limited to certain small corners of the humanities.


5. In a related fashion – and surprisingly given that Boghossian actually teaches critical thinking – while the first hoax fatally suffered from a sample size of n=1, the new one is plagued by the simple fact that it has no control! Without a similar systematic attempt being directed at journals in other fields (particularly scientific ones) we can conclude precious little about the specific state of “X studies.”


That said, do I think that the fields targeted by Boghossian & co. are problematic? Yes, as I’ve written before. Here the most useful commentary on the hoax has been published in the New York Times by William Egginton. As he puts it:


The problem is not that philosophers, historians or English professors are interested in, say, questions of how gender or racial identity or bias is expressed in culture or thought. Gender and racial identity are universally present and vitally important across all the areas that the humanities study and hence should be central concerns. The problem, rather, is that scholars who study these questions have been driven into sub-specializations that are not always seen as integral to larger fields or to the humanities as a whole. Sometimes they have been driven there by departments that are reluctant to accept them; sometimes they have been driven there by their own conviction that they alone have the standing to investigate these topics.


That strikes me as exactly right. “X studies” programs should be integrated within a university, either (ideally) in broad multidisciplinary programs, or within the most suitable departments, such as History, Philosophy, Sociology, and the like.


Egginton blames academic hyperspecialization for the current sorry state of affairs in these fields, as well as the “publish or perish” attitude that has plagued academia for decades now. But guess what? “X studies” are most definitely not the only ones to suffer from these problems. They are endemic to the whole of modern academia, including the natural sciences. Indeed, we should be far more worried about the influence of ideology and big money on scientific fields than on small areas of the humanities. After all, it is in the name of science that we spend billions annually, and it is from science that we expect miracles of medicine and technology.


As Engber writes in the Slate commentary, notwithstanding the dire warnings of Boghossian, Pinker, Harris, Dawkins and all the others:


Surprise, surprise: Civilization hasn’t yet collapsed. In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)


The new Boghossian-led hoax is another example of a badly executed, ideologically driven stunt that targets narrow fields with little impact while leaving the big elephants in the room alone. It is, in the end, yet another embarrassment for the skeptical community, as well as a reflection of the authors’ own biases and narrow-mindedness.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago, a single, sentient species has actually developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story, and while they are still at the borderlands between science and science fiction, some of them will surely become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed to be desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that there is – other things being equal – a statistical correlation between intelligence and genetic makeup. Nothing more, and that ain’t even remotely close enough to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which would be far more thorny than Torres lets on.
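A brief aside may help unpack what “heritable” does and does not mean here. What follows is the textbook quantitative-genetics definition, not anything drawn from Torres, Bostrom, or Shulman:

$$h^2 = \frac{V_A}{V_P}$$

Narrow-sense heritability, h², is simply the fraction of a population’s total phenotypic variance (V_P) that is attributable to additive genetic variance (V_A). It is a statistical property of a particular population in a particular environment, not of any individual, and by itself it identifies no genes and specifies no mechanism – which is why a heritability estimate for IQ falls well short of the “genetic basis of intelligence” that selecting embryos for intelligence would require.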


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While surely there is a risk of one such attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us into this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing tens of millions of people, thanks to science and technology. The culmination of that period was the only use of nuclear weapons on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist – the US government, to be specific.
Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) to moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes possible killing at a distance by pushing a button, or that allows many of us to reconcile ourselves to the otherwise intolerable fact that the top 10% of the human population lives by standards historically reserved for kings and queens, while the rest is below or barely above poverty, subject to preventable disease, or killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would suffer from less empathy if they had a higher IQ, he remembers that some real life examples of such actors, like the Unabomber Ted Kaczynski, actually do have high IQs, and yet they are still deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench in Pinker’s, Bostrom’s and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH, it’s not that I trust others a hell of a lot more) can come pretty darn close. But Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi, at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.


Bostrom’s calculations are based on thin air, to be charitable. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more dubious that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby supernova isn’t part of the mix), but the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.
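

For what it’s worth, the arithmetic behind the quoted figure is trivial, and spelling it out (my reconstruction of the implicit model, not anything stated in the excerpt) shows how much work the hidden assumptions are doing:

\[ 10{,}000{,}000 \times 0.01 = 100{,}000 \]

that is, a 1 percent gain spread across ten million scientists is counted as the equivalent of 100,000 additional scientists. The figure follows only if scientific output scales linearly with individual “all-round cognitive performance” and sums additively across researchers – which is precisely the model of the brain, and of science, that I am questioning.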


Toward the end of the essay we reach truly Pindaric flights of imagination:
There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth, our only planet, which we are the ones currently destroying, and talk some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) have remained unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones fit for use by large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our minds down and center ourselves so that we can live a better and more meaningful life, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course, that’s always helpful.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But – unlike Sagan – he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently affecting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great power comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity gives your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.
Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by the Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentaries on comic book superheroes qualify as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.


For another thing, “debunking” strongly conveys the impression that one thinks that the people who believe in the notion to be debunked are simpletons barely worth consideration. Perhaps some are, but I’ve met plenty of really smart creationists, for instance – a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as on display in the short Colbert video that got me started with this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, though that isn’t going to do anyone any good.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to the establishment of an emotional connection with your audience. This is something that scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But that is the case only if the agent is unscrupulous and manipulative, not if he’s acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, which Neil deGrasse Tyson most certainly is. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?

Darwinism in the modern era: more on the evolution of evolutionary theory – part II

The many conceptual and empirical advances in evolutionary biology during the second half of the twentieth century that I have briefly sketched in part I of this essay naturally led to a broader theoretical turmoil. More and more people felt that the Modern Synthesis (MS) was increasingly becoming too restrictive a view of evolution to keep playing the role of biology’s “standard model.” This group included Carl Schlichting and myself, Mary Jane West-Eberhard (2003), Eva Jablonka, and others. But arguably none made a more concerted, if partial, effort than Stephen Jay Gould in his magnum opus, The Structure of Evolutionary Theory, published in 2002.


The Structure comprises two parts, the first tracing the history of evolutionary ideas, both pre- and post-Darwin, and the second presenting Gould’s view of contemporary theoretical debates within the field. While the constructive part of the book focuses too much on paleontology and multilevel selection, Gould correctly identified three conceptual pillars of Darwinism that got imported wholesale into the Modern Synthesis:

1. Agency: the locus of action of natural selection. For Darwin, this was the individual organism, while within the MS the focus expanded to include the gene, thus increasing the number of levels at which agency operates. Gould advocated further expansion, to include multiple levels of selection, from the gene to the individual to kin groups to species. This suggestion is perfectly in line with that of other authors advocating an Extended Evolutionary Synthesis (EES).


2. Efficacy: the causal power of natural selection relative to other evolutionary mechanisms. According to Darwin, natural selection is the chief mechanism of evolutionary change, and certainly the only one capable of producing adaptation. The MS formally described—by means of population genetic theory—four additional mechanisms: mutation, recombination, migration, and genetic drift. Gould adds a positive role for developmental constraints to the picture, and advocates of the EES further expand on this theme, including concepts such as those of evolvability (i.e., change over time of evolutionary mechanisms themselves), facilitated variation (from developmental biology), and niche construction (from ecology), among others.


3. Scope: the degree to which natural selection can be extrapolated from micro- to macro-evolutionary outcomes. As we saw last time, this was controversial early on, with the MS settling for the same basic picture proposed by Darwin: so-called macro-evolutionary processes are simply micro-evolutionary ones writ large. Gould, of course, questions this, on the basis of the already discussed theory of punctuated equilibria. Proponents of the EES also doubt the received view, suggesting that species selection and group-level ecological characteristics may partially, though not entirely, decouple micro- from macro-evolution.


If Gould’s general take is right, then, evolutionary theory has changed over time and the process can best be tracked conceptually by keeping tabs on changes in the agency, efficacy, and scope of natural selection within the theory. This, incidentally, makes natural selection the fundamental idea in biological evolution, and rightly so. No other concept, not even that of common descent, has had such a complex and convoluted history within the field. Moreover, what the EES is attempting to do can also be understood within Gould’s framework.


Now, as we have seen so far, the latter part of the twentieth century and the beginning of the twenty-first century have seen a renewed debate about the status of contemporary evolutionary theory, with a number of calls for an expansion of the Modern Synthesis into an Extended Evolutionary Synthesis. But what does the latter look like, at the current state of the discussion?


I provided an early sketch of it in a paper published in Evolution back in 2007 (available to Socratic level subscribers from my archives), and an updated and expanded version of that sketch has been put out by Laland and collaborators in 2015. My early analysis began by noting that philosopher Karl Popper famously interpreted the MS as a theory of genes, lacking a comparable theory of forms (i.e., phenotypes). The field got started, however, as a theory of forms in Darwin’s days, with genetics taking on a fundamental role only after the rediscovery of Mendel’s work at the turn of the twentieth century. Consequently, I suggested, a major goal that an EES aims for is an improvement and unification of our theories of genes and of forms. This, it seems to me, may best be achieved through an organic grafting of novel concepts onto the foundational structure of the MS, particularly evolvability, phenotypic plasticity (i.e., the ability of a single genotype to produce different phenotypes in response to environmental variation), epigenetic inheritance, complexity theory (from mathematics), and the theory of evolution in high-dimensional adaptive landscapes (from population genetics).


Laland et al.’s paper from 2015 is the most focused and systematic attempt to articulate the EES, explicitly aiming at clearing away inconsistencies in previous works. They begin with a comparison of core assumptions of the MS versus the EES. To give you an idea of what they are getting at, here are the entries for inheritance:


Genetic inheritance (MS): Genes constitute the only general inheritance system. Acquired characters are not inherited.


Inclusive inheritance (EES): Inheritance extends beyond genes to encompass (transgenerational) epigenetic inheritance, physiological inheritance, ecological inheritance, social (behavioural) transmission and cultural inheritance. Acquired characters can play evolutionary roles by biasing phenotypic variants subject to selection, modifying environments and contributing to heritability.


They then run through a series of alternative interpretations of important evolutionary phenomena according to the two frameworks. For instance, in the case of developmental plasticity:


MS: conceptualized as a genetically specified feature of individuals that can evolve under selection and drift. Focus is on the conditions that promote adaptive evolution of plastic versus non-plastic phenotypes. The primary evolutionary role of plasticity is to adjust phenotypes adaptively to variable environments. Plastic responses regarded as pre-filtered by past selection.


EES: considers reducing plasticity to a genetic feature to be explanatorily insufficient. Retains an interest in adaptive evolution of plasticity, but also focuses on how plasticity contributes to the origin of functional variation under genetic or environmental change, and how the mechanisms of plasticity limit or enhance evolvability, and initiate evolutionary responses. Many plastic responses viewed as reliant on open-ended (e.g., exploratory) developmental processes, and hence capable of introducing phenotypic novelty.


Moreover, Laland et al. provide readers with a comparison of different predictions originating from the competing frameworks. For instance, in the case of the relationship between genetic and phenotypic change:


MS: genetic change causes, and logically precedes, phenotypic change, in adaptive evolution.


EES: phenotypic accommodation (a non-genetic process) can precede, rather than follow, genetic change, in adaptive evolution.


Laland et al. also present a graphical outline of the structure of the Extended Evolutionary Synthesis, as they see it. It is instructive to comment on a number of features of their model. Phenotypic evolution—the target of explanation of the entire framework, just as it was for Darwin—is assumed to be affected by three classes of processes: those that generate novel variation, those that bias selection, and those that modify the frequency of heritable variation.


Beginning with the first class, these processes include classical ones like mutation, recombination, gene expression, and developmental regulatory processes, but also EES-specific ones like environmental induction (of developmental processes), niche construction, phenotypic accommodation, and facilitated variation. The second class (processes that bias selection) includes only EES-related entries: developmental bias and niche construction. The third class (processes that affect heritable variation) contains only classical ones (mutation pressure, selection, drift, and gene flow), but these are in turn affected by the previous class.


The resulting picture is one of a complete and, it seems to me, highly coherent meshing of the MS and EES perspectives, in which the latter adds to, but does not really replace, any of the previously recognized mechanisms. Which brings me to the next question I wish to address concerning the most recent developments of the now more than 150-year-old Darwinian tradition: is the proposed shift from the MS to the EES akin to a Kuhnian paradigm shift?


One of the most controversial aspects of the discussion surrounding the MS versus EES debate is the extent to which the new framework is claimed to be distinct from the old one. At one extreme, there are scientists who simply reject the idea that the EES presents much that is new, claiming that whatever new concepts are being advanced were in fact already part of the MS, either implicitly or explicitly. At the opposite extreme, some supporters of the EES have been making statements to the effect that the new framework somehow amounts to a rejection of fundamental aspects of Darwinism, akin to what philosopher Thomas Kuhn famously termed a “paradigm shift” within the discipline, thus aligning themselves with a tradition that can be fairly characterized as anti-Darwinian. My own position has always been that the truth lies somewhere in the middle (in this case!): the EES is significantly different from the MS, and yet the change does not reflect any kind of scientific revolution within modern biology, but rather more of the same process that has led us from the original Darwinism to neo-Darwinism to the MS itself.


Kuhn famously argued—on the basis, crucially, of examples drawn exclusively from physics—that science goes through an alternation of two phases: during “normal” or “puzzle solving” science, practitioners are focused on addressing specific issues from within a given theoretical framework and set of methods (the “paradigm”), which itself is not the target of empirical testing or conceptual revision. From time to time, however, a sufficient number of “anomalies,” or unresolved puzzles, accumulate and precipitate a crisis within the field. At that point scientists look for a new paradigm, better suited to take into account the hitherto unresolved issues. If they find it, the new framework is quickly adopted and deployed in turn to guide a new phase of normal science.


Kuhn suggested a number of approaches to tell whether a paradigm shift has occurred (or, in our case, is in the process of occurring). These include five criteria for theory comparison, as well as three classes of potential incommensurability between theories. Let’s begin by examining the five criteria: (1) accuracy, (2) consistency (internal and with other theories), (3) explanatory scope, (4) simplicity, and (5) fruitfulness of the accompanying research program. Here is how the MS and EES compare, in my mind, according to the Kuhnian criteria:


Accuracy, MS: building on the original Darwinism, it has produced quantitative accounts of the change over time of the genetic makeup of natural populations.


Accuracy, EES: incorporates the same methods and results of both the original Darwinism and the MS, adding the explanation of developmental and other self-organizing biological phenomena.


Consistency, MS: as internally consistent as any major scientific theory, features explicit external links to genetics, molecular biology, and ecology.


Consistency, EES: same degree of internal and external consistency as the MS, with the addition of external links to developmental biology, genomics, and complexity theory, among others.


Scope, MS: new facts about the biological world that are explained have been consistently uncovered for the past several decades.


Scope, EES: further expands the scope of the MS by explicitly including questions about the origin of evolutionary novelties, the generation of biological form, and the problem of genotype–phenotype mapping.


Simplicity, MS: uses a limited number of mechanisms (natural selection, genetic drift, mutation, migration, assortative mating) to account for evolutionary change over time.


Simplicity, EES: makes use of all the mechanisms of the MS, adding a number of others such as epigenetic inheritance, evolvability, facilitated (i.e., self-emergent) variation, etc.


Fruitfulness, MS: has a history of more than 70 years of vigorous research programs, building on the previous fruits of the original Darwinism.


Fruitfulness, EES: builds on the ongoing research program of the MS but has also already led to empirical (e.g., emergent properties of gene networks and of cell assemblages) and conceptual (e.g., evolvability, phenotypic plasticity) discoveries, though of course it is very much a work in progress as of the moment of this writing.


Even this brief survey ought to make it clear that the transition from the MS to the EES is not a paradigm shift, but rather an organic expansion. Then there is the second test proposed by Kuhn to consider, a test in a sense more stringent: that of incommensurability. If two theories are incommensurable in even one of the three classes, a good argument can be made that a paradigm shift is occurring. The classes in question are methodological, observational, and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. The EES takes on board the same puzzles, and the same set of approaches, of the MS, but it also adds new puzzles (such as the appearance of so-called evolutionary novelties, like eyes, feathers, spines, and so forth), which were largely untouched, or dealt with only superficially, by the MS. It further adds new approaches, like interpretations of evolutionary changes in terms of niche construction, developmental plasticity, or epigenetic inheritance.


Observational incommensurability is tightly linked to the idea that observations are theory dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. For instance, in pre-relativity physics there was a (supposed) fact of the matter that some kind of substance, referred to as ether, had to be present in space in order for light to travel through it. After the famous Michelson–Morley experiment demonstrating that there was no such thing as ether, the relevant fact became the constancy of the speed of light and therefore the relativity of frames of reference. Nothing like that seems to be happening in evolutionary biology at the moment: the very same facts that have been catalogued and explained by the MS enter into the empirical corpus of the EES, to be further expanded with new facts that come to the forefront because of the additional conceptual advancements.


Semantic incommensurability has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being that of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity. Again, I do not discern any analogous shift in the terminology used by proponents of the MS versus EES. Key biological concepts, such as species, genes, phenotypes, niche, and so forth, retain similar and perfectly commensurable meanings, even though our understanding of their referents becomes increasingly sharp.
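

To make Kuhn’s example concrete (the gloss is mine, not Kuhn’s wording): in Newtonian mechanics mass is a separately conserved quantity, whereas in special relativity rest mass and energy are related by

\[ E = mc^2 \]

so that “mass” no longer picks out quite the same concept in the two frameworks.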


It seems, therefore, that Darwinism after the Modern Synthesis has proceeded along similar lines to those followed by Darwinism before the MS: a continuous expansion of both empirical knowledge and conceptual understanding, an expansion that is likely to continue for the remainder of the current century and beyond.


This discussion is in part an opportunity to call for a bit of house cleaning, so to speak, on the part of evolutionary biologists and philosophers of science. For instance, it is truly astounding that in France the Modern Synthesis, and in particular population genetics, was not included in standardized university curricula, or addressed within main research programs until the 1970s. Against the Darwinian picture that was developing abroad, French life scientists supported various forms of Lamarckism throughout the twentieth century, and some of that attitude still lingers. There is no good scientific reason for that, and it is hard not to pin such an attitude on sheer nationalism and the cultural worship of Lamarck. Needless to say, that sort of thing has no place in a mature science. The French are not the only culprits here, and the fact that there are “German,” “Russian,” and other “traditions” within evolutionary biology is more than a little bizarre.


It’s also somewhat surprising that behavioral biologists are still clinging to simplistic notions from sociobiology and evolutionary psychology, which have long since been debunked. It’s not the basic idea that behaviors, and especially human behaviors, evolve by natural selection and other means that is problematic. The problem, rather, lies with some of the specific claims made, and methods used, by evolutionary psychologists.


It is also both surprising and problematic that some researchers are still pursuing non-“mechanistic” or non-“physicalist” research programs, whatever that means. Indeed, a major point of the EES is to help bring the focus back on the organism and even the ecosystem, and yet—as I just argued above—this does not require a wholly alternative synthesis at all.


Over time, Darwinism has advanced its own agenda by incorporating a variety of themes proposed by its critics, including “saltationism” (punctuated equilibrium) and “Lamarckism” (epigenetic inheritance, phenotypic plasticity, and niche construction). This is fine, so long as we keep in mind that the terms within scare quotes above are to be understood in a modern, radically updated sense, and not along the lines of what biologists were thinking decades or even centuries ago. It’s this inherent flexibility of Darwinism that has allowed people with views as divergent as Stephen Jay Gould and Richard Dawkins to (rightly) claim the Darwinian mantle.


This ability to incorporate critical ideas is neither just a rhetorical move nor somehow indicative of serious problems inherent in the Darwinian approach. In the end, the various Darwinian traditions in evolutionary biology are best understood as a wide-ranging family of conceptual and research approaches, always in dialectic dialogue with each other, always in a constructive tension that transcends the agendas and (sometimes strong) personalities of the many individual scientists who recognize themselves as intellectual descendants of Charles Darwin. More than a century and a half later, evolutionary theory keeps evolving.

Darwinism in the modern era: more on the evolution of evolutionary theory – part I

Scientific theories are always provisional accounts of how the world works, intrinsically incomplete, and expected to be replaced by better accounts as science progresses. The theory of evolution, colloquially referred to as “Darwinism,” is, of course, no exception. It began in 1858 with joint papers presented to the Linnean Society by Charles Darwin and Alfred Russel Wallace and was formalized shortly thereafter in On the Origin of Species. The original theory featured two conceptual pillars: the idea of common descent (which was accepted by a number of scholars even before Darwin), and that of natural selection as the chief mechanism of evolution, and the only one capable of generating adaptation.


The first bit of tinkering took place shortly thereafter, when Wallace himself, together with August Weismann, proposed to drop any reference to Lamarckian theories of heredity because of the newly proposed notion of the separation between sexual and somatic cellular lines, thus generating what is properly known as neo-Darwinism. After the theory underwent a temporary crisis – a period sometimes referred to as the “eclipse” of Darwinism – as a result of increasing skepticism from paleontologists and developmental biologists, we enter the two phases of the so-called Modern Synthesis, the biological equivalent of the Standard Model in physics: the first phase consisted in the reconciliation between Mendelism (i.e., genetics) and Darwinism (i.e., the theory of natural selection), leading to the birth of population genetics; the second phase consisted in an expansion of the theory to include fields like natural history, population biology, paleontology, and botany.


What happened to “Darwinism” after 1950? The Modern Synthesis (MS) reigned as the dominant paradigm in the field, rather unchallenged until the late 1980s and early 1990s, at which point a number of authors, coming from a variety of disciplines, began to question not so much the foundations but the accepted structure of the MS. By the very late twentieth and early twenty-first century, calls to replace the MS with an Extended Evolutionary Synthesis (EES) had begun to grow loud, and to be countered by equally loud voices raised in defense of the MS. How did this happen, and what does it mean for the current status and future of evolutionary theory? To understand this we need to step back for a moment and take a broad view of conceptual developments in the biological sciences during the second half of the twentieth century.


The second half of the twentieth century has been an incredibly exciting time for biology, a period that has put the discipline on the map at least at the same level of interest as physics, the alleged queen of sciences, and arguably even more so. Let me remind you of some of the major developments that have made this possible, because they all—directly or indirectly—eventually fed into the current discussion over the MS versus the EES as dominant conceptual frameworks in evolutionary biology.


A major breakthrough in one of the foundational fields of the Modern Synthesis, population genetics, came with the invention of a technique called gel electrophoresis, which for the first time made it possible to directly assess protein and gene frequencies in large samples drawn from natural populations. While research on electrophoresis began as early as the 1930s, it was the breakthrough work of Richard Lewontin and John Hubby in 1966 that set population genetics on fire. The unexpected discovery was, as the authors put it, that “there is a considerable amount of genic variation segregating in all of the populations studied …[it is not] clear what balance of forces is responsible for the genetic variation observed, but [it is] clear the kind and amount of variation at the genic level that we need to explain.” This new problem posed by a much larger degree of genetic variation than expected in natural populations eventually led to a revolution in population genetics, and also directly to the formulation of the influential neutral theory of molecular evolution, first proposed in 1968 by Motoo Kimura.


The neutral theory was a landmark conceptual development because for the first time since Darwin it challenged the primacy of natural selection as an agent of evolutionary change. To be sure, Kimura and colleagues didn’t think that phenotypic evolution (i.e., the evolution of complex traits, like eyes, hearts, etc.) occurred in a largely neutral fashion, but if it turned out that much of what goes on at the molecular level is independent of selective processes, then the obvious question becomes how largely neutral molecular variation can give rise to non-neutral phenotypic outcomes. Eventually, the debate about the neutral theory—which raged on intensely for a number of years—was settled with a sensible and empirically consistent compromise: a lot of molecular variation is “near-neutral,” which means that the role of stochastic processes such as genetic drift at the molecular level is significantly higher than might have been expected on the basis of a face-value reading of the tenets of the Modern Synthesis.
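

To see why the theory was such a challenge to selection-centered thinking, it helps to recall the standard back-of-the-envelope result for strictly neutral alleles (a textbook derivation, not a quote from Kimura): in a diploid population of size N with neutral mutation rate μ per generation, about 2Nμ new neutral mutations arise each generation, and each has a probability of 1/(2N) of eventually drifting to fixation, so the long-run substitution rate is

\[ k = 2N\mu \times \frac{1}{2N} = \mu \]

independent of population size and of natural selection altogether – which is also why neutral molecular evolution behaves, to a first approximation, like a clock.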


What could possibly connect the near-neutral molecular level with the obviously functional and therefore likely selected phenotypic level? The obvious answer was: development. The only problem was that developmental biology had famously been left out of the Modern Synthesis. It looked like something was seriously amiss with modern evolutionary theory.


Things began to change as an offshoot of yet another revolution in biology: the rapid advances made in molecular biology after the discovery of the structure of DNA in 1953. While molecular biology kept accelerating its pace independently of organismal biology for several decades—until their confluence in the era of evolutionary genomics—in the late 1970s the existence of homeotic genes regulating embryonic patterns of development in Drosophila was discovered. It soon turned out that this and similar classes of regulatory genes are both widespread and evolutionarily conserved (i.e., they don’t change much over time), so that they are one of the major keys to the understanding of the complex interplay among genotype, development, and phenotype.


This new approach eventually flourished into a new field, known as evolutionary developmental biology, or evo-devo for short, and one of its major contributions so far has been a marked shift of emphasis in the study of morphology and development, from the sort of classical population genetic studies focused on structural genes to an emphasis on regulatory genes and their potential to help us build a credible theory of the origin of evolutionary novelties (i.e., new structures like wings or flowers). As Prud’homme and colleagues put it in 2007:


Because most animals share a conserved repertoire of body-building and -patterning genes, morphological diversity appears to evolve primarily through changes in the deployment of these genes during development. … Morphological evolution relies predominantly on changes in the architecture of gene regulatory networks and in particular on functional changes within [individual] regulatory elements. … Regulatory evolution: (i) uses available genetic components in the form of preexisting and active transcription factors and regulatory elements to generate novelty; (ii) minimizes the penalty to overall fitness by introducing discrete changes in gene expression; and (iii) allows interactions to arise among any transcription factor and [regulatory genes].


The picture that emerges from this and many other studies is not incompatible with the simple mathematical models that were incorporated into the Modern Synthesis, but it does present us with a much more complex and nuanced understanding of genetic, developmental, and phenotypic evolution, so much so that it is little wonder that people have been increasingly referring to the current, very much in flux, version of evolutionary theory as the Extended Synthesis.


I have already mentioned the molecular biology revolution initiated in the 1950s, which eventually led to the genomic revolution. Both these radical developments initially affected evolutionary biology only indirectly, by providing increasingly powerful new analytical tools, such as gel electrophoresis, and later on gene sequencing. But inevitably genomics itself became an evolutionary science, once technical developments made it possible to sequence entire genomes more quickly and cheaply, and molecular biologists fully internalized, as Theodosius Dobzhansky famously put it, that nothing in biology makes sense except in the light of evolution. The structure and function, as well as the sheer diversity, of genomes are themselves not understandable except through evolutionary lenses, so that genomics and evolutionary biology currently represent a rare example of synergism between scientific disciplines: the first provides the tools for the second to advance, while the second allows for a theoretical understanding of the data that the first accumulates at such a heady pace.


While of course other disciplines within biology have made progress during the second part of the twentieth century—ecology, for instance—the next bit of this panoramic view I wish to briefly comment on concerns yet another area of inquiry that had played only a secondary role during the Modern Synthesis: paleontology. The field had always been a thorn in the side of Darwinism, since many paleontologists early on had rejected the Darwinian insight, proposing instead the idea that macro-evolutionary change was qualitatively distinct from the sort of micro-evolution that Darwin famously modeled on the basis of plant and animal breeding (and of course, notoriously, creationists have always made a big deal of the distinction between micro- and macro-evolution, often without understanding it). Indeed, it was this very rejection, together with the apparent incompatibility of Mendelism and Darwinism, that led to the above mentioned period of “eclipse” of the Darwinian theory at the turn of the twentieth century.


Paleontology’s early alternative to Darwinism took the shape of orthogenetic theory (according to which organisms change in the same direction over millions of years), which in turn was essentially a scaled-up version of Lamarckism, since it postulated an inner vital force responsible for long-term evolutionary trends, which many paleontologists saw as otherwise inexplicable within the Darwinian framework. It was George Gaylord Simpson’s magisterial role within the Modern Synthesis that cleared away any remnants of orthogenesis from paleontology, doing for that field what Fisher, Haldane and Sewall Wright had done for Mendelian genetics: he convincingly argued that the sort of so-called “micro”-evolutionary processes accounted for by Darwinism could be extrapolated to geological timescales, thus yielding the appearance of macro-evolutionary changes of a qualitatively different nature. In reality, Simpson argued, the latter are simply a scaled-up version of the former.


Simpson, however, was arguably too successful, essentially making paleontology a second-rate handmaiden to population genetics while overlooking the potential for its original contributions—theoretical as well as empirical—to the overall structure of evolutionary theory. Eventually, Simpson’s “conservatism,” so to speak, led to a backlash: Niles Eldredge and Stephen Jay Gould, the enfants terribles of modern paleontology, published in 1972 a landmark paper proposing the theory of punctuated equilibria, according to which evolution, when seen at the macroscopic scale, works by fits and starts: long periods of stasis during which not much appears to be happening in a given lineage, interrupted by sudden “bursts” of phenotypic change. The theory was immediately misunderstood by many population geneticists, who thought that Eldredge and Gould were attempting to revive an old notion known as “hopeful monsters,” i.e., of instantaneous evolutionary change resulting from genome-wide restructuring.


To be fair, at some point Gould’s own anti-establishment rhetoric, and the fact that creationists often mentioned him in their support, contributed to the confusion. But in fact, the sort of punctuations that Eldredge and Gould saw in the fossil record take place over tens of thousands of generations, thus leaving plenty of time for standard Darwinian processes to do their work. As they pointed out later on in the debate, the real novel issue is that of prolonged stasis, over millions of years, not the allegedly (but not really) “instantaneous” change. A major class of explanation proposed especially by Gould for this observed stasis had to do with developmental processes and constraints, which nicely connects the new paleontology with the emerging field of evo-devo mentioned above, making both of them into pillars of the ensuing Extended Synthesis in evolutionary biology.


(next time: the Stephen Jay Gould conceptual revolution and the birth of the Extended Evolutionary Synthesis)

The impossible conundrum: science as a (perennial?) candle in the dark

(left: Carl Sagan; right: Richard Lewontin)


When I was a kid I wanted to be an astronomer. One of my role models was Carl Sagan, the charming original host of the television series Cosmos and author of countless books on astronomy and the nature of science. Later on I decided that biology was really my calling, and my entire career was the result of reading a single, incredibly powerful paper: The analysis of variance and the analysis of causes, by Richard Lewontin. I never had the pleasure of meeting Sagan, but I did have an hour-long chat with Lewontin when I was a graduate student at the University of Connecticut and he was visiting our lab. It was one of the highlights of my life.


Both Sagan and Lewontin had far more impact on me than just their science. Sagan made me sensitive to the importance of communicating with a broader public, to share the wonders of the scientific worldview, as well as to fight the irrationality of pseudoscience. Lewontin made me sensitive to the ideological underpinnings of science and even science popularizing, and therefore, ironically, somewhat skeptical of Sagan’s own approach.


Recently, one of my readers suggested that I take a fresh look at a classic within this context: Lewontin’s review of one of Sagan’s best known books, and one that has influenced me for two decades: The Demon-Haunted World, subtitled Science as a Candle in the Dark. The review, entitled Billions and Billions of Demons (a playful, perhaps somewhat sarcastic, take on Sagan’s famous tagline about a universe with billions and billions of stars) is well worth pondering again today.


Lewontin opens with a recounting of when he met Sagan for the first time, on the occasion of a public debate about creationism vs evolution in Little Rock, Arkansas, in 1964. The experience was formative for both, but they came away from it with radically different messages:


“Sagan and I drew different conclusions from our experience. For me the confrontation between creationism and the science of evolution was an example of historical, regional, and class differences in culture that could only be understood in the context of American social history. For Carl it was a struggle between ignorance and knowledge.”


I can sympathize. When, in 1997, I first debated a creationist, Duane Gish of the Institute for Creation Research (no kidding), I was squarely looking at things through Sagan’s filter: obviously creation “science” is no such thing; obviously evolutionary theory is solid science; and obviously anyone disagreeing with these two propositions is a hillbilly ignoramus. More than two decades after that debate I think that position was incredibly naive, and I find myself far closer to Lewontin’s, though not entirely on board just yet.


As Lewontin aptly puts it:


“The primary problem is not to provide the public with the knowledge of how far it is to the nearest star and what genes are made of, for that vast project is, in its entirety, hopeless. Rather, the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth. The reason that people do not have a correct view of nature is not that they are ignorant of this or that fact about the material world, but that they look to the wrong sources in their attempt to understand.”


In other words, and contra Sagan, it isn’t a question of educating people about facts, it’s a question of convincing them to trust the better authority. Think of it this way. You probably “know” that atomic nuclei are made of quarks, right? But do you? Really? Unless you are a physicist, or at any rate someone whose grasp of physics is far better than average, you don’t actually know how science arrived at this basic fact about the structure of the world. Instead, you are simply repeating a statement that you read in a book or heard from a prominent physicist, or your college physics professor. You don’t know. You trust.


That’s why rejection of evolution in favor of creationism — while wrong (I actually know this, I’m a biologist) — is not irrational. It simply means that many people in the United States would rather trust their preachers, who they think speak on behalf of God, than Profs. Sagan, Lewontin, or Pigliucci. That’s why Lewontin, correctly, says that if we want to understand why creationism is such an issue in the US of A but not in pretty much any other Western country (and, again, is very much an issue in a lot of Islamic countries), we don’t need to look at the quality of science education. We need to look at the specific cultural history of the United States vs that of European countries.


Sagan did not get it. Here is Lewontin again:


“The only explanation that [Sagan] offers for the dogged resistance of the masses to the obvious virtues of the scientific way of knowing is that ‘through indifference, inattention, incompetence, or fear of skepticism, we discourage children from science.’ He does not tell us how he used the scientific method to discover the ‘embedded’ human proclivity for science, or the cause of its frustration. Perhaps we ought to add to the menu of Saganic demonology, just after spoon-bending, ten-second seat-of-the-pants explanations of social realities.”


You hear similar ex cathedra pronouncements from the contemporary heirs of Sagan’s approach, for instance Neil deGrasse Tyson (who has taken over the helm of the new Cosmos series). Their analysis of the hows and whys of widespread beliefs in parapsychology, UFOs, astrology and so forth is just as unempirical and “seat-of-the-pants” as Sagan’s. One would expect better from people who loudly insist on the absolute necessity of systematic empirical data before making any pronouncement.


Lewontin then proceeds with chastising another common Sagan-Tyson-et-al argument in defense of science: that it “delivers the goods.” Well, yes, sometimes. At times, though, those “goods” are anything but (atomic weapons, biological weapons, Facebook), and in other cases there is no delivery at all (the “war on cancer,” or the over-hyped promises of the human genome project). Meanwhile billions and billions — of dollars — are spent at taxpayers’ expense. Referring to the repeated promises of scientists to deliver cures for diseases if they were only given money to sequence the genes associated with them, followed by inevitable failure since a DNA sequence by itself doesn’t provide a cure for anything, Lewontin writes:


“Scientists apparently do not realize that the repeated promises of benefits yet to come, with no likelihood that those promises will be fulfilled, can only produce a widespread cynicism about the claims for the scientific method. Sagan, trying to explain the success of Carlos, a telepathic charlatan, muses on ‘how little it takes to tamper with our beliefs, how readily we are led, how easy it is to fool the public when people are lonely and starved for something to believe in.’
Not to mention when they are sick and dying.”


Ouch, but on the mark. And there is more where that came from:


“Sagan’s suggestion that only demonologists engage in ‘special pleading, often to rescue a proposition in deep rhetorical trouble,’ is certainly not one that accords with my reading of the scientific literature. … As to assertions without adequate evidence, the literature of science is filled with them, especially the literature of popular science writing.”


I must say that my own experience as a scientist first, and now as a philosopher of science, is far more in sync with Lewontin’s cynicism than with Sagan’s optimism.


And here is another gem from the review:


“When, at the time of the moon landing, a woman in rural Texas was interviewed about the event, she very sensibly refused to believe that the television pictures she had seen had come all the way from the moon, on the grounds that with her antenna she couldn’t even get Dallas. What seems absurd depends on one’s prejudice. Carl Sagan accepts, as I do, the duality of light, which is at the same time wave and particle, but he thinks that the consubstantiality of Father, Son, and Holy Ghost puts the mystery of the Holy Trinity ‘in deep trouble.’ Two’s company, but three’s a crowd.”


Just in case your blood is boiling and you begin to think Lewontin to be a postmodern deconstructionist, think again (and try to breathe deeply). He is an atheist, and he certainly does believe that we landed on the moon. His point is about cautioning scientists and science popularizers against dismissing others on the grounds that their views are “obviously” irrational. Rationality is a great tool, but its deployment depends on one’s axioms or, as Lewontin puts it, one’s prejudices.


Here is where I partially, but only partially, part company with Lewontin:


“We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism.”


Well, yes, sort of. I would say that materialism itself is a philosophical position that many have arrived at because it is the one that makes the most sense of the world as we understand it. But wait, isn’t our understanding of the world based on the assumption of materialism? In a sense, but I think it is a mistake to see one as definitely preceding the other. Materialism and science co-evolved for centuries, and for much of that time many prominent scientists were definitely not materialists, or at least not thoroughgoing materialists — from Newton to Alfred Wallace (the co-discoverer of natural selection). But the more the metaphysical leanings of natural philosophers (as scientists were once called) approached full-fledged materialism, the more their science became successful at explaining and manipulating the world. This is, in a sense, a beautiful, centuries-long example of why one’s metaphysics should never be far from one’s epistemology (as it is, by contrast, with religion). The problem is that it’s really hard to imagine how to trigger that same sort of shift in a general public that hardly thinks either philosophically or scientifically. And no, more courses along the lines of Biology or Physics 101 ain’t gonna do it.


Lewontin, again, is far more perceptive than Sagan:


“The struggle for possession of public consciousness between material and mystical explanations of the world is one aspect of the history of the confrontation between elite culture and popular culture. … Evolution, for example, was not part of the regular biology curriculum when I was a student in 1946 in the New York City high schools, nor was it discussed in school textbooks. In consequence there was no organized creationist movement. Then, in the late 1950s, a national project was begun to bring school science curricula up to date. … The elite culture was now extending its domination by attacking the control that families had maintained over the ideological formation of their children. The result was a fundamentalist revolt, the invention of ‘Creation Science,’ and successful popular pressure on local school boards and state textbook purchasing agencies to revise subversive curricula and boycott blasphemous textbooks.”


Lewontin is absolutely right here. But the problem is, and he would be the first one to admit it, that there is no solution in sight. Are we supposed not to teach one of the most important scientific theories of all time because teaching it is going to be taken as yet another affront perpetrated on the working class by the moneyed elite? I doubt it. But the only other path I can see just ain’t gonna happen: establish a society where there is no such thing as the moneyed elite, where everyone has access to free education, and where consequently a lot of the cultural and economic factors that Lewontin correctly pinpoints will be erased or at least greatly diminished. I’m not holding my breath, are you?


The review concludes with a quote from the Gorgias, one of Plato’s dialogues (which Sagan would have appreciated, though I’m pretty confident that a lot of contemporary science popularizers have no idea why anyone would quote a philosopher who’s been dead more than two millennia. After all, isn’t philosophy useless?). Gorgias, a sophist, and Socrates are debating the relative virtues of rhetoric and technical expertise in public life. We are meant, of course, to sympathize with Socrates, but see if you can appreciate Gorgias’ point, in light of the preceding discussion:


Gorgias: “I mean [by the art of rhetoric] the ability to convince by means of speech a jury in a court of justice, members of the Council in their Chamber, voters at a meeting of the Assembly, and any other gathering of citizens, whatever it may be.”


Socrates: “When the citizens hold a meeting to appoint medical officers or shipbuilders or any other professional class of person, surely it won’t be the orator who advises them then. Obviously in every such election the choice ought to fall on the most expert.”


Obviously it ought, but equally obviously it doesn’t. And that, two and a half millennia later, is still the problem, and the reason why we are in the mess we are in.