No, science does not provide all the answers to the big questions

From time to time a famous scientist allows himself (in my experience it’s always a man) to write nonchalantly about something of which he demonstrably has only a superficial grasp: philosophy. The list of offenders is a long one, and it includes Lawrence Krauss, Neil deGrasse Tyson, and Stephen Hawking, among several others. (Fortunately, there are also exceptions, scientists who value a constructive intercourse with the humanities, like Sean Carroll.) The latest entry in this dubious pantheon is Peter Atkins, who recently published a sloppy essay in the otherwise excellent Aeon magazine entitled “Why it’s only science that can answer all the big questions.” Oh boy.


Atkins begins by telling us that there are two fundamental kinds of “big questions”:


“One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. … Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame. … The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. … These are all real big questions and, in my view, are open to scientific elucidation.”


None of this is news, of course. David Hume — one of my favorite philosophers — made essentially the same argument back in the 18th century, in his case rejecting what he saw as the waste of time associated with the Scholastic metaphysics that had prevailed throughout the Middle Ages:


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” (An Enquiry Concerning Human Understanding)


With all due respect to Hume, it’s a good thing people didn’t follow his advice, or we would have lost his very own Enquiry Concerning Human Understanding, since that book doesn’t contain any abstract reasoning concerning quantity or number, nor does it contain any experimental reasoning concerning matter of fact. And yet, it is — justly — considered to be one of the most important works of modern philosophy.


Atkins apparently realizes that he may come across as a bit too facile, since he acknowledges that he is defining the big questions precisely as those that science can answer, then turning around to “demonstrate” that science is the only discipline equipped to answer such questions. As he drily puts it when considering the obvious charge of circularity: “that might be so.” Which doesn’t stop him from proceeding as if it were not so.


Atkins tells us that science is getting ready to tackle what he considers the next three big questions: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious?


I have no doubt, as a scientist, that those are, indeed, scientific questions. I’m slightly more skeptical, as a philosopher, that science will actually be able to come up with answers. Fundamental physics, after more than a century of uninterrupted success, seems to have entered a period of navel gazing where speculation (admittedly mathematically informed speculation) is poised to replace empirical evidence. So we shall see if and when we’ll actually get a “theory of everything,” and whether that theory will in fact be able to tell us how the universe began from “nothing” (there is some doubt that it will).


Regarding the second question, the origin of life, theories have been piling up for several centuries now, and yet we don’t seem to be particularly close to a resolution just yet. I’m certainly not arguing that it isn’t possible, but it’s a very, very difficult problem, for the simple reason that a lot of the historical traces have been lost. No geological strata survive from the time when the primordial earth was home to the first living organisms, meaning that researchers on the origin of life are like detectives who already know the smoking gun isn’t going to be found. At best, they’ll have to rely on circumstantial evidence. Even should we be able to produce life artificially in the laboratory, that would not solve the problem, since it wouldn’t mean that life on our planet actually followed anything like that particular causal path.


As for consciousness, I remain convinced that the problem is indeed biological in nature, and that therefore developmental, evolutionary, and neuro-biology are the disciplines best positioned to find a solution. But at the moment nobody seems to have much of a clue, and common talk of the brain being a computer is finally beginning to be understood as the shaky and very likely misleading analogy that it is.


So, yes, if any of those three big questions are going to be answered, the answer will be a scientific one. But what about other questions that are arguably just as big (or, for most of us, even bigger)? Here Atkins shifts into full scientistic mode:


“I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics.”


Please notice a number of interesting and revealing things about this sentence. First, Atkins is making the time-honored argument from personal incredulity: “I see no reason why…” Which, of course, is not an argument at all, but an elementary logical fallacy. Second, he is seriously hedging his bets when he immediately qualifies his initial statement: “or at least illuminate…” Ah, well, but philosophers since the Pre-Socratics have understood that empirical evidence (i.e., “science”) can illuminate philosophical questions. However, that’s a far more modest claim than the notion that science can actually answer those questions. Third, Atkins can’t help but deliver a contemptuous dig at the “semi-sciences.” This attitude, common among physicists, reflects a naive understanding of the philosophy of science, according to which physics is the (self-professed) “queen” of the sciences, and every other field will achieve full scientific status only when it finally evolves into something that looks like physics. But an increasingly common view in philosophy is that there actually is a fundamental disunity of science, that “science” is only a loosely defined family resemblance term, reflecting the fact that each science has its own goals, methods, and internal standards, and that there is no universal yardstick to be appealed to in order to make comparative judgments of quality.


Going back to philosophy, the question of “how should I live?” admits of a large number of reasonable (and a lot of unreasonable!) answers, given the very same facts about the universe and human nature. It isn’t so much a question to be answered, as one to be explored and clarified. Indeed, this is arguably what most fundamentally distinguishes science from philosophy.


One of my recent morning meditations is pertinent here. It begins with a quote by the Stoic philosopher Epictetus, who says in Discourses II, 11.13:


“Here you have philosophy’s starting point: we find that people cannot agree among themselves, and we go in search of the source of their disagreement.”


As I argue in the podcast episode, there are two broad sources of disagreement among human beings: factual and conceptual. If you and I disagree about, say, the number of moons orbiting the planet Saturn, one of us is wrong, possibly both. There is a matter of fact about the issue, and we can find out the answer by asking an astronomer. Or more simply by doing a web search. If disagreement remains after that, then one of us is more than a bit obtuse.


The second kind of disagreement concerns how to think about facts, actions, and values. Here the facts are relevant, but insufficient to settle the dispute. Let’s say we have different opinions about the permissibility of assisted suicide. Certain empirical facts are going to be pertinent to the discussion, like information about how the procedure is going to be implemented, what safeguards there may be to avoid abuses, and so forth. But even if we agree on the facts, we may still disagree on the crucial issue: is assisted suicide morally permissible?


That’s the difference between science and philosophy, and why Epictetus says that philosophy begins with the search for why people disagree on things. Notoriously, philosophy does not necessarily settle such disagreements. The joke in philosophy departments is that our profession’s slogan is: “Philosophy: we have all the questions!” But what philosophy does, by means of careful analysis and reasoned argument, is to help us clarify why, exactly, we disagree. That is of huge help to people of good will who wish to honestly pursue discussions in search of better ways to conduct their lives. Atkins may want to take notice.


Is philosophy a profession? (Yes, it’s a serious question)

You would think that the question in the title of this essay is one of those silly questions that only a philosopher would pose. And yet, a few months ago I unwittingly caused a Twitterstorm when I suggested that philosophy is, indeed, a profession, and that it comes with credentials (in the form of an awarded PhD, job titles and so forth) and even (gasp!) expertise.


I will start by presenting my arguments for why philosophy is indeed a profession that marks a certain kind of expertise; then we’ll talk about why this matters; and finally we’ll explore why, I think, so many people got positively upset at the mere suggestion that there can be professional philosophers, and even more so that they deserve a bit of respect when they talk about their own subject matter. I will also address some common objections to the idea of professional philosophy, as they were put to me during said Twitterstorm.


Is philosophy a profession?


Modern philosophy, meaning — approximately — philosophy as it has been practiced since the 20th century, is a profession in the same sense that, say, psychology or dentistry are professions. If you want to become a psychologist, or a dentist, you go to specialized schools, you take specific courses, you demonstrate your ability as a practitioner, and you get awarded a certificate that says that yup, you are indeed a psychologist, dentist, or philosopher. You then look for a job in your chosen profession, and if you are capable and lucky you land one. You then practice said profession, drawing a salary or other form of income. And eventually you cease practicing in order to enjoy a more or less well deserved retirement.


Typically, in order to become a professional philosopher one needs an undergraduate degree in that field (in the United States, four years) and a PhD from an accredited university (4-6 years on average, but it can be more). The PhD requires taking advanced courses (in my case, for instance, on Plato, ethics, Descartes, Kant, and a number of others), and the writing of a dissertation that must be of publication quality and advance the field by way of proposing original ideas (here is mine). After this, a young philosopher may find temporary employment as a postdoctoral associate, or as a lecturer, and eventually, maybe, land a tenure track position (the whole institution of tenure has been under relentless attack by conservative political forces, but that’s another discussion). If you do get such a position, you then have six years to prove to your colleagues that you are worth retaining and being promoted from assistant to associate professor, a promotion that comes with some benefits (beginning with tenure itself) and usually a very modest increase in salary. If you are good, a number of years later (usually around five) you get another promotion, to full professor, which comes with few additional benefits (except that now you can serve on more university committees!) and with an equally modest increase in salary.


What I have just described, of course, is the academic path. It used to be pretty much the only game in town, but now the American Philosophical Association has a whole booklet on career paths beyond academia, if you are so inclined. Nevertheless, the academy is still where you will find most professional philosophers these days.


So, since becoming a philosopher requires studying and getting a degree, and is often associated with belonging to a professional society and getting a regular salary from an employer (usually a university), it seems pretty obvious that philosophy is, indeed, a profession as succinctly defined by Merriam-Webster: a type of job that requires special education, training, or skill.


Why does this matter?


Why did I bother engaging in the above elucidation of the obvious? Because ever since I switched my own career from that of a scientist (evolutionary biology) to that of a philosopher, I noticed an incredible amount of hostility and dismissal toward philosophy, including — unbelievably — by some philosophers!


I think it is important to correct public misperceptions of philosophy in particular, and of the humanities in general, not because these disciplines are difficult to practice and therefore deserving of respect, but because they are vital to the functioning of an open society. Far too often these days we hear administrators and politicians (usually, but not only, conservatives) saying that a college degree should prepare students to find well paying jobs. That is simply not the case. That definition applies to trade schools, not universities. Yes, of course you want to find a well paying job, especially given the insane amount of money you will have to shell out for the privilege of a higher education in the increasingly unequal United States of America (and elsewhere). But the point of a liberal arts education (as it used to be called before “liberal” somehow became a dirty word) is first and foremost to help create mature adults and responsible citizens. You know, the sort of people who can think for themselves about what to do with their lives, instead of being brainwashed by corporate ads. Or the sort of people who believe that voting is both a right and a privilege, and who exercise such right/privilege by doing their homework on different candidates, instead of falling for blatant propaganda and conspiracy theories. That, and not to create an obedient army of drones for the corporate world and an increasingly illiberal government, is what education is for. No wonder so many in power have tried so hard to undermine that mission.


And make no mistake about it, that mission requires a substantial involvement in the humanities, not just the STEM fields. Everyone these days claims to be teaching “critical thinking,” but trust me, you ain’t gonna learn that in a biology class, or in chemistry, or in engineering. You will learn all sorts of interesting things in those classes, some of which may even be useful for getting you a job. But you won’t acquire the sort of ability at critical analysis and writing that philosophy will give you. You will also not be able to familiarize yourself with art, literature and music, some of the main reasons why human life is so interesting and varied. And you will not learn about the stupid things we have repeatedly done in the course of history — which is just as well from the point of view of politicians who prefer to keep selling you propaganda according to which you live (of course!) in the best nation that has ever blessed planet earth, handpicked by God himself to be a shining light for the rest of the world. You see, if you read Plato and Shakespeare and Haruki Murakami, or learn about the American bombing of Dresden at the end of WWII, or appreciate just how and why inequality, racism, and sexism are still pervasive in the 21st century, you might start questioning what the hell is going on and how to change it. As one of my favorite comedians, George Carlin, once put it: “it’s called the American dream because you must be asleep to believe it.” Philosophy, and the rest of the humanities, are a major way for you to wake up.


Why do people have a problem?


Once more, I would not have thought that any of the above was controversial. But it was! I got a surprising amount of pushback on social media. Okay, fine, it’s social media, where one gets pushback and worse for saying the most mundane things. But still. Studying those responses, it seems to me they fall into the following broad categories:


(i) People who believe that I’m telling them that only professional philosophers can think. What? No, and if you believe that’s the implicature of the above position, you may benefit from taking a philosophy class or two! Snarky comments aside (sorry, this sort of exercise is exhausting!), of course philosophers aren’t the only people who can think, or even think well. Nor does thinking require a license or accreditation of any sort. But the job description of the philosopher is not “thinker,” but rather thinker of a particular kind, using particular tools, applying them to particular subject matters. Similarly, a psychotherapist, say, isn’t just someone who talks to you about your problems. Your friend can do that over a beer at the local pub. But your friend is not professionally trained, is not aware of psychological theories of human behavior, and is not familiar with psychotherapeutic techniques. That’s why so many people pay professional therapists to talk about their problems, instead of (or on top of) having a beer with their friends.


That is why it is bizarre that when someone disagrees with me on Twitter or Facebook they often say something along the lines of “you should be aware of logical fallacies,” or “you should study philosophy of science” (actual phrases, and please notice that I teach a course on — among other things — logical fallacies, have written technical papers on the topic, and my specialty is, you guessed it, philosophy of science). This isn’t to say that a professional is always right and an amateur always wrong. Sometimes your intuitions about what’s wrong with your car may trump those of your mechanic. But, as a general rule, it is far more likely that the expert got it right and that you have a superficial or incomplete understanding of the matter. There is no shame in this, of course. We can’t all be experts on everything.


(ii) Which brings me to the second cause of irritation among some commenters: a good number of people seem not to recognize that philosophy is a field of expertise. On the one hand, this is understandable, but on the other hand it is downright bizarre. It’s understandable because philosophy is, indeed, a rather peculiar field, even within the academy. While biologists study the living world, physicists study the fundamentals of matter and energy, psychologists study human behavior, and historians study human history, what do philosophers study, exactly? The answer is: everything.


Which doesn’t mean they are experts on everything. Here is how it works. First off, the very comparison between philosophy and, say, biology, is misleading. “Philosophy,” if anything, is comparable to “science,” not to a sub-discipline of science. Second, philosophers are interested in broad vistas and the connections among fields, hence the various “philosophies of” (mind, biology, physics, social science, language, history, and so forth). This doesn’t make it easier, but more difficult, to be a philosopher. Take my own case: I am a philosopher of science, and in particular a philosopher of evolutionary biology. This means that I need to be very familiar with not one, but two areas of scholarship: evolutionary biology and philosophy of science. I need to understand both the biology and the epistemology, for instance, in order to apply a philosophical lens to the science and ask questions like: what is the logic and structure of a particular scientific theory? How do unstated assumptions and unrecognized biases interfere with scientific research? What exactly is the relationship between a scientific theory and the evidence that is invoked to back it up (i.e., what’s the “epistemic warrant” of the theory)?


Surely this sort of work requires expertise. Equally surely, someone without background in both science and philosophy of science is unlikely to just waltz in and come up with a novel idea that will stun the pros. It’s possible, of course, but very, very unlikely.


(iii) A third group of responses threw back at me the apparent incongruity that I have spent years encouraging people to practice philosophy (Stoicism, specifically) in their everyday life, and yet I’m now telling them that they don’t understand it. But there is a big difference between philosophy as an academic field of scholarship and philosophy understood as a daily practice in life. The first one is the province of professionals, the second one can (and, I think, should) be accessible to anyone willing to spend a modicum of time reading about it.


Again, the difference that I’m drawing here should not be surprising, as it finds lots of parallels. Everyone should exercise to maintain good health. That doesn’t mean everyone suddenly is a professional trainer or athlete. Anyone is capable of driving a car. But we are not a planet of car mechanics. Every Christian is able to read the Gospels, but few are theologians of the level of Thomas Aquinas. And so on, the examples are endless.


So, no, there is no contradiction at all between the notion that philosophy is a specialized academic profession requiring a lot of training and the idea that anyone can read up enough about Stoicism, or Buddhism, or any other philosophical or religious practice and incorporate it into their lives.


Possible objections


Finally, let me do some pre-emptive addressing of likely criticisms (another useful habit that I picked up as a professional philosopher!):


(1) But dentists (say) produce something, what do philosophers produce?


The outcome of the profession of dentistry is that your teeth will be in better order and more healthy than they would have been had you not gone to the dentist. The outcome of the profession of philosophy is twofold: (a) our students develop a better sense for complex ideas and how to evaluate them; and (b) we publish papers and books that contain new insights into the problems we are interested in. (The latter is, of course, what every scholar does, both in the humanities and in the sciences.)


(2) But Socrates did not have a PhD!

 

True. Neither did Darwin. Or Galileo. But today it’s really, really hard to become a professional biologist or physicist without proper, standardized, and rigorous training, usually certified by the award of a PhD. Philosophy has changed in exactly the same way all other fields of inquiry have, and for similar reasons (increased specialization, consequent division of labor, institutionalization, etc.).


(3) But someone can make novel contributions to philosophy even without a degree.

 

Yes. Just like someone can make a novel contribution to biology, or physics, and so forth. Such cases exist, but they are rare. Indeed, they are increasingly hard to find, across fields, precisely because both humanistic and scientific knowledge are getting more and more sophisticated and specialized, thus requiring extensive professional training.


(4) But plenty of professional philosophers don’t make interesting contributions to the field.

 

True. And the same goes for plenty of professional biologists (believe me, I’ve seen it) and, I assume, professional physicists, mathematicians, and so forth. Even so, your average philosopher (or biologist, or physicist) will still have a far more sophisticated command of her field than someone who has never studied it systematically.


(5) But there are serious problems with academia.

 

Indeed there are. This is something often pointed out, among others, by my friend Nigel Warburton. That said, Nigel himself has a PhD in philosophy and was an academic before going freelance. And for his spectacularly successful podcast, Philosophy Bites, he tends to interview… you guessed it! Professional philosophers! (Including yours truly.) Because they have knowledge of their field, and interesting things to say about it.


The bottom line


So, can we please get over this strange combination of defensiveness and disdain, and admit that philosophy is — among other things — a serious profession carried out by people with expertise? As I argued above, there is far more at stake here than a petty turf war or wounded egos. Taking philosophy (and the humanities) seriously may be what ultimately will save us from the forces of obscurantism and tyranny.

Biology’s last paradigm shift and the evolution of evolutionary theory – part II

Last time we saw how evolutionary theory has evolved over the past century and a half, why so many contemporary biologists are calling for what they refer to as the Extended Evolutionary Synthesis (see here and here), and how Darwin, building on David Hume, definitively rebutted the intelligent design argument advanced by William Paley. All as part of a discussion of a paper I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.” (full text here) In this second part we are going to look at whether the transition between natural theology and Darwinism constituted a paradigm shift, according to criteria laid out by philosopher of science Thomas Kuhn. As I mentioned last time, in the paper I also apply the same analysis to what happened after Darwinism, to successively more recent incarnations of evolutionary theory, but I will not discuss that section here.


According to Kuhn, change in science comprises two distinct and alternating phases: during “normal science” scientists use the dominant theoretical and methodological tools within a field of inquiry to solve “puzzles”, i.e. problems arising within a particular theory. However, from time to time the number of such problems that cannot be resolved within the adopted framework (“anomalies”) becomes large enough to trigger a crisis, which is then resolved if a new “paradigm” is arrived at to replace the old framework and provide new guidance for further puzzle-solving normal science.


Notoriously, one of the problems with the Kuhnian approach is that Kuhn did not define exactly what he meant by paradigm, which means that it is not entirely clear what may constitute a paradigm shift. For the purposes of my argument, I will use the commonly accepted interpretation of paradigms as encompassing the “disciplinary matrix,” which means not just the dominant theory or theories within a given field, but also the accompanying methodologies, training strategies for the next generation of scientists, and – no less important – the pertinent metaphysical and epistemological assumptions.


Kuhn suggested five criteria for comparing competing paradigms and for theory choice: 1) Accuracy; 2) Consistency, both internal and with other theories; 3) Scope, in terms of how widely the explanatory reach of a theory extends; 4) Simplicity; and 5) Fruitfulness, in terms of further research. Roughly speaking, then, the comparison between the two paradigms of natural theology and Darwinism is striking. Let’s go through it criterion by criterion.


Accuracy


Natural theology: all explanations are ad hoc, since God’s will is inscrutable.


Darwinism: it can explain some surprising facts about the biological world, like the complexities of the flower structure in some orchid species, or the intricacies of the life cycles of some parasites.


Consistency


Natural theology: internally inconsistent, given its commitment to an all-powerful, all-good God (the problem of natural evil).


Darwinism: as internally consistent as any major scientific theory; external links to other sciences, particularly Darwin’s prediction that the age of the earth had to be greater than what was commonly thought by geologists and physicists of the time (turns out, he was right).


Scope


Natural theology: allegedly all-encompassing, but supernatural “explanations” are epistemologically empty. That is, to say “God did it” sounds like an explanation, but it really doesn’t explain anything.


Darwinism: new facts about the biological world that are explained by the theory have been consistently uncovered for more than one and a half centuries.


Simplicity


Natural theology: deceptively simple, if one neglects the obvious question of the origin and makeup of the Creator.


Darwinism: in its original form invokes a small number of mechanisms to explain biological history and complexity; more recent versions invoke more mechanisms, but still a relatively limited number.


Fruitfulness


Natural theology: did not lead to any research program or discovery.


Darwinism: has maintained a vigorous research program for more than one and a half centuries.


According to the above summary, then, the Darwinian paradigm is definitely preferable to Paley’s natural theology – not surprisingly. More interestingly for our purposes here, these are all clear signs of a paradigm shift, the only one that has ever occurred in evolutionary biology, as I argue in the rest of the original paper.


Kuhn’s theory of paradigm shifts famously included another controversial notion: incommensurability, the idea that crucial concepts within a given paradigm are simply not comparable to what superficially appear to be equivalent concepts within another paradigm. Kuhn identified three distinct types of incommensurability: methodological, observational and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. Obviously, natural theology and Darwinism are methodologically incommensurable: while they both rely on observation and comparative analyses, their goals are entirely different. For Paley, the focus is on the intricate complexity of living organisms, constantly interpreted as an obvious indication of the will and omnipotence of the Creator. Darwin, instead, pays particular attention to precisely those biological phenomena that are troubling to the notion of intelligent design, as in this famous passage:


“I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of Caterpillars.” (letter collected by Francis Darwin 1887).


More broadly, the sort of “puzzles,” to use Kuhn’s terminology, that Darwinists began to pay attention to concern the historical relationships between different species of organisms (something that is defined out of existence within the natural theological paradigm, since species are specially created), as well as the kind of ecological settings that bring about different adaptations (again, a problem ruled out within natural theology, where adaptations are the direct result of an intelligent act).


Observational incommensurability is tightly linked to the idea that observations are theory-dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. This is perhaps one of the most controversial of Kuhn’s notions, famously illustrated with images from Gestalt psychology, where the same pattern of lines on paper can be interpreted in dramatically different fashions (e.g., a vase or two faces, an old or a young woman, a rabbit or a duck, etc.).


The problem, of course, is that if we take the Gestalt metaphor seriously, we are led to the position that there is no true or even better way to interpret the data, which in turn leads to the constructivist temptation: any theory is just as good as any other, and there really is no way to measure progress in science. Kuhn strongly disavowed such an extreme interpretation of his ideas, and the notion of theory-dependence of observations is now commonly accepted in philosophy of science and embedded in textbook treatments of the subject.


Be that as it may, it is hard to imagine examples of observational incommensurability between natural theology and Darwinism, in part no doubt because no sophisticated way of gathering data was accessible – beyond direct observation and rudimentary experiments – to proponents of the two paradigms.


Finally we get to semantic incommensurability. This has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being the concept of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity.


For the purposes of our discussion, one could make the argument that a similar situation holds for the shifting concept of species between natural theology and Darwinism. Both paradigms do refer to “species,” but the meaning of the term is entirely different. For Paley, species were fixed entities set in place by the action of the Creator – in that sense not far from Newton’s own conception of the physical world, and particularly of the laws governing it. For Darwin, however, species are ever changing entities with no sharp boundaries, which are altered by evolutionary processes in a continuous, gradualistic fashion.


All in all, then, it appears that whether we use the first set of Kuhnian criteria or the various notions of incommensurability, there are very strong reasons to conclude that the shift between natural theology and Darwinism was, in fact, a paradigm shift. It was also, in a very important sense, a shift from a proto-scientific to a scientific view of biology: Darwin and Wallace abandoned any reference to supernatural forces, thus literally establishing a whole new field of science, which keeps, ahem, evolving even today.

Biology’s last paradigm shift and the evolution of evolutionary theory – part I

I find myself devoting more and more time to public outreach and what is increasingly referred to as public philosophy. But, as you might know, I am also an academic and publish scholarship in philosophy of science. Which is why from time to time I like to combine the two and do a post that highlights for the general public some of my own technical publications. (A series of these, covering the range 2010-2016, can be found here.) The present essay refers to a paper that I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.”


The theory of evolution, which provides the conceptual framework for all modern research in organismal biology and informs research in molecular biology, has gone through several stages of expansion and refinement. Darwin and Wallace proposed the original idea back in 1858, centering on the twin concepts of natural selection and common descent. Shortly thereafter, Wallace and August Weismann worked toward the complete elimination of any Lamarckian vestiges from the theory, leaning in particular on Weismann’s concept of the separation of soma and germ lines, and resulting in what is sometimes referred to as “neo-Darwinism.”


The theory then experienced a period of “eclipse” at the turn of the 20th century. Many biologists accepted the idea of common descent but either rejected or greatly de-emphasized the importance of natural selection as an evolutionary mechanism. The situation was exacerbated by the rediscovery of Mendel’s work, which pointed to an apparent incompatibility between discrete inheritance particles (“genes”) and the sort of continuous quantitative variation necessary for natural selection to produce gradual evolutionary change.


Famously, the crisis was overcome with the reconciliation of Mendelian and statistical genetics made possible by Ronald Fisher, J.B.S. Haldane and Sewall Wright in the 1930s, and that later on culminated in the Modern Synthesis (henceforth, MS) of the 1940s, to which several additional authors made important contributions, including but not limited to Theodosius Dobzhansky, Julian Huxley, Ernst Mayr, and George G. Simpson. The MS is still the version of the theory largely accepted by the scientific community, and it is what graduate students in the discipline are trained on.


More recently, several authors have pushed for an Extended Synthesis (henceforth, ES) in evolutionary biology, initially from a variety of individual perspectives largely rooted in particular fields of inquiry, such as Evo-Devo or phenotypic plasticity, and now with a more concerted effort aimed explicitly at the formalization of a broader conceptual framework for evolutionary biology (see here).


The ES is very much a work in progress, but the idea is to accomplish a number of goals that have so far proven somewhat elusive: first and foremost, to finally bring developmental biology – famously left out of the MS – into the fold. Second, to provide a coherent way to reconcile the “holistic” tendencies of organismal biological research with the decidedly more reductionist approach of molecular biology and its most recent products, the various “-omics” (genomics, proteomics, metabolomics, etc.). Third, to incorporate as primary players a number of biological phenomena and processes that had been either discarded or minimized within the context of the MS, e.g., phenotypic plasticity, genetic accommodation, epigenetic inheritance, etc. Fourth, to expand the standard theoretical toolkit of the MS – which is primarily grounded in population and quantitative genetic theory – to include elements from computational biology and complexity theory. Fifth, to incorporate in evolutionary theory new concepts that have emerged from theoretical research during the past several years, chief among them the triad constituted by evolvability (i.e., the possibility of the evolution of evolutionary mechanisms), modularity (of different components of an organism’s phenotype) and robustness (i.e., the degree of resilience of developmental mechanisms).


In the paper I address the question of whether any of the above amounts to something akin to Thomas Kuhn’s famous paradigm shifts, i.e. whether evolutionary biology has ever undergone anything like what Kuhn describes as a moment of revolutionary science. I argue that it has not, and that it will not, even if the ES succeeds in establishing itself. Rather, I think the only time in the history of biology when such a transition occurred was during the 19th century, when Darwin’s original theory replaced the dominant “paradigm” of the day, Paley-style natural theology. In the following I will discuss that particular paradigm shift, together with the general Kuhnian notion of alternation between revolutionary and “normal” science. People interested in the same analysis applied to what happened to evolutionary theory after the onset of Darwinism are referred to the full paper.


William Paley is responsible for the most articulate defense of the idea that living organisms are the result of a special creation by supernatural forces. In his Natural Theology (1802) he famously introduced the metaphor of a watch and its watchmaker:


“In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there: I might possibly answer, that for any thing I know to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch, as well as for the stone? Why is it not as admissible in the second case as in the first? For this reason, and for no other, viz., that when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose (…) This mechanism being observed (…) the inference, we think, is inevitable, that the watch must have had a maker; that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.” (1802, p. 5).


Paley argued by analogy from the watch/watchmaker inference to the complex living organism/supernatural intelligent designer inference, despite the fact that such inference had been dealt a devastating philosophical blow by Hume, writing several decades before Paley, in 1779. Contrary to what has been assumed for a long time, Paley appeared to have been familiar with Hume, though he was obviously unconvinced by the latter’s arguments.


Darwin was initially persuaded by Paley’s reasoning, but eventually of course provided the decisive counterpoint that was missing in Hume: an alternative mechanism (natural selection) to generate both biological complexity and the pattern of common descent that was denied by Paley’s natural theology. It behooves us to briefly examine Darwin’s answer, so that we may then proceed to compare Paley’s and Darwin’s “paradigms” in view of Kuhn’s ideas to see whether we can sensibly talk of a paradigm shift occurring at the very onset of evolutionary biology as an independent discipline.


Although Paley is mentioned by name only once in The Origin (on p. 201 of the first edition in the chapter on “Difficulties on Theory”), Darwin mounts a concerted and sustained attack on natural theology in chapters VI and XIII of his magnum opus. Here are some relevant quotes to establish the case. First Darwin explicitly contrasts the type of “explanation” provided by natural theologians with a naturalistic explanation typical of the new science:


“He who believes in separate and innumerable acts of creation will say, that in these cases [of organisms’ behavior that have changed in response to a new environment, without – yet – an accompanying change in the structure of the proper organs] it has pleased the Creator to cause a being of one type to take the place of one of another type; but this seems to me only restating the fact in dignified language. He who believes in the struggle for existence and in the principle of natural selection, will acknowledge that every organic being is constantly endeavouring to increase in numbers; and that if any one being vary ever so little, either in habits or structure, and thus gain an advantage over some other inhabitant of the country, it will seize on the place of that inhabitant, however different it may be from its own place.” (1859, p. 186)


Then he proceeds to directly criticize Paley’s use of analogies to draw a parallel between the inference to human design and the inference to supernatural design:


“It is scarcely possible to avoid comparing the eye to a telescope. We know that this instrument has been perfected by the long-continued efforts of the highest human intellects; and we naturally infer that the eye has been formed by a somewhat analogous process. But may not this inference be presumptuous? Have we any right to assume that the Creator works by intellectual powers like those of man?” (p. 188)


Immediately afterwards, he goes so far as to lay out the criteria for the falsification of his hypothesis, in sharp contrast of course with the natural theologian’s ideas, which cannot be falsified:


“If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. … Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each. No organ will be formed, as Paley has remarked, for the purpose of causing pain or for doing an injury to its possessor. If a fair balance be struck between the good and evil caused by each part, each will be found on the whole advantageous.” (pp. 189, 201)


To summarize, then, the idea of intelligent design – which had been around at least since Plato’s Timaeus – had been severely crippled on philosophical grounds by Hume in the 18th century. Still, Paley was able to mount a spirited and detailed defense of it at the onset of the 19th century, while Darwin provided the final blow to it (except of course for the modern resurgence of creationism, which is not an intellectually meaningful movement) for the first time on scientific grounds during the second half of the 19th century. It is on the latter transition that I wish to focus in part II, from the perspective of Kuhn’s paradigm shifts.


(next: the Paley-Darwin transition as a paradigm shift)

Is exposing medical pseudoscience unethical?

You would think the answer to the title question is obvious: no, of course it isn’t unethical to expose medical pseudoscience (or any kind of pseudoscience). How can anyone think so? And yet, there are some decent reasons to doubt the certainty of such a conclusion. At least that’s the take of a paper that so far as I know has not yet been published, but is available at the PhilSci (philosophy of science) Archive at the University of Pittsburgh.


The paper is entitled “Exposing medical pseudoscience may be unethical,” and is authored by Ehud Lamm. Here is Lamm’s argument, as he presents it:


(1) Many ill health situations, in particular those involving multi- and comorbidity, do not have very effective treatment options.


(2) Alternative medicine, consisting of various pseudo-scientific enterprises, provides repose to some of those suffering from such conditions.


(3) A significant part (if not all) of this benefit can be attributed to the placebo effect.


(4) In current, “Western,” societies, placebos are most effective when they have the symbolic currency of science.


(5) Ergo, exposing these pseudo-sciences will hurt people for whom there are only limited health care alternatives.


This is a reasonably constructed argument, so if one wishes to reject it — as I do — one needs to attack one or more of the premises, or to uncover and then reject some additional, hidden premises. I will follow both strategies here, though I wish to thank Lamm at the outset for having forced me to think more carefully about something I had hitherto taken for granted.


I am going to agree right away with premise (3): most (indeed, very likely, all) of the benefits of alternative medical treatments are the result of placebo effects. But I have varying degrees of issues with premises (1), (2), and (4). And I also think Lamm is not explicitly listing a few other very relevant premises here, though he mentions one of them in passing in the paper, as we shall see, only to discard it far too quickly.


Premise (1): this trades on an ambiguity surrounding the word “treatment.” Yes, modern medicine does not provide effective treatment against certain conditions, for instance a number of types of terminal cancer, or several types of dementia. But alternative medicine does not provide treatments either, according to Lamm’s own premise (3), only palliatives. Which means that the proper comparison here ought to be only between standard palliative care and the “alternative” kind. Palliative care works on the basis of a combination of actual biomedical effects (e.g., painkillers) and placebo. Other things being equal, then, it is more effective for a doctor to use her prestige (premise 4) to push tested palliative options rather than pseudoscientific ones.


Premise (2): for the reasons just seen, even if alternative medicine does provide some respite to patients by way of a placebo effect, standard palliative care is in any case preferable, because it also works by placebo, but additionally actually delivers (at least in some cases) a real biomedical effect. Again, remember that we are talking about alleviating pain and similar things, not actual cures, which are not achievable via placebo, and Lamm agrees that placebo is pretty much the only mechanism through which pseudo-medicine “works.”


Premise (4): the symbolic currency of science in the Western world seems much less real than Lamm assumes. Witness the widespread antivax movement, or — outside of medicine — so-called “skepticism” of climate change, or of evolutionary theory. If anything, science is suffering a credibility crisis of late, and inducing medical doctors to deploy pseudoscience is hardly going to improve such credibility.


If the reader agrees with even one of my criticisms of Lamm’s premises, that is enough to reject the argument. But there is more.


For instance, Lamm at one point says: “I am putting to one side deontological and consequentialist arguments against the use of placebos in general, and assume that in some situations relying on a placebo effect is legitimate.” That’s a bit too quick, and a rather big thing to put aside (and the bit after the comma somewhat begs the question)! A deontologist, for instance, might argue that it is a violation of the categorical imperative for a doctor to systematically lie to his patients, because that violates the implicit trust between the two (the very same trust on which premise 4 is built, incidentally), and because the doctors themselves would probably rather not be lied to when it is their turn to be patients.


On consequentialist grounds, one could argue that there are long term negative societal consequences engendered by lying to patients and by pushing pseudoscientific notions. Because, again, patients might eventually lose confidence in their doctors and begin to doubt any sort of medical advice; also because people will be more likely to embrace pseudoscience under other circumstances, for instance when there are, in fact, perfectly good evidence-based treatment options available; and moreover because we would be condoning a multibillion dollar industry based on what is essentially a fraud.


Furthermore, there is a third big ethical framework out there: virtue ethics. It is hardly a good character trait to engage in systematic deception of others, even for their own good (a premise, this last one, that I have already rejected). Virtue ethics does not follow universal rules, so lying can be acceptable under certain circumstances. But the focus is on the character of the moral agent, and repeated lying is going to be deleterious to that character, since character is shaped precisely by repeating the same actions over and over, or by systematically entertaining and agreeing with the same thoughts.


Lamm sees another concern with exposing pseudo-medicine: “the movement toward Evidence Based Medicine (EBM), to the extent that it is successful, may lead people to be less open to treatments and assistance that are not backed by science, such as talk therapies, meeting with alternative medicine practitioners, support from informal social networks, help from clergy, and so on.”


This is an interesting point, but as articulated it carelessly lumps together a number of things that should be considered separately. For instance, homeopathy is clearly and definitely a pseudoscience, so it should be counted as a success if people do not waste their money (and hopes) on very expensive sugar pills. Talk therapies, however, are not necessarily pseudoscientific. We have good evidence that some forms of psychotherapy work (e.g., cognitive behavioral therapy), and there are distinctions to be drawn about what “working” means here. If a patient simply needs to talk to someone, but is under no illusion that this will actually cure him, I don’t see what the problem is, or why such talk therapy should be counted as pseudo-medicine in the first place. Perhaps it won’t work better than getting a drink with your friend, but if chatting is the only thing one needs in order to feel better, it “works.” Much hinges, of course, on what the precise claim of the therapist is going to be. The very same considerations apply to the use of informal social networks, or to help from the clergy. These things are simply not in the same category as homeopathy and other kinds of pseudo-medicine.


And there is more to consider, like the general ideas that truth and honesty are intrinsically valuable, and should be compromised only under very unusual circumstances. Certainly both a deontologist and a virtue ethicist would reason this way, though a consequentialist might disagree (there goes yet another reason to reject consequentialism, in my view).


Lamm does consider some objections to his argument in the second part of the paper, and rejects them with a number of counterarguments. I have already covered part of what he says there in the previous points, but let me add a few pertinent comments.


He writes: “we should strive for alternative channels of social support to be available and determine whether non-science backed alternatives can provide patients with social and economic benefits that they need, such as health insurance coverage, help with the education system, and so on.” Well yes, as acknowledged before, but again this trades on an ambiguity of the term “support.” There is no problem in supporting people via talk therapy, social networks, religious counseling, and so forth. But these are not types of alternative medicine, and so long as the people providing the support are not lying, then there is no ethical issue. (Of course if you are an atheist you will think that all priests are lying by definition, but this is not the case from the point of view of a believer who has already adopted that particular religion’s framework. Admittedly though, that’s a tough case, best explored on its own.)


Lamm also maintains that some pseudoscientific “treatments” may be cheap and effective (again, via placebo) and therefore preferable whenever there are no established medical options for the condition. But we have seen above that at a very minimum modern medicine can provide palliatives, and let’s remember that many alternative medical practices are anything but cheap.


The author acknowledges a big potential downside of his approach: “if the pseudoscience providing the placebo effect causes patients to ignore science backed treatment we have to consider the tradeoff to decide whether belief in the efficacy of pseudoscience is beneficial or harmful.” Indeed, and this may often be the case. Not to mention that so long as the patient is also seeking regular medical treatment then, again, there is no additional benefit of the pseudo-medical one, which will cost money not just to the patient, but also, increasingly (and unfortunately) to the health insurance or government-backed apparatus that will pay for the so-called treatment.


Finally, Lamm argues: “It may be mistakenly suggested that the argument presented here is a justification for fraud (assuming the pseudoscience in question involves fraud). However, if anything, the argument discussed here only applies to exposing fraud, which in some cases may do more harm than good.” I don’t see how this would work: if a medical practitioner knows that a given remedy is pseudoscientific and recommends it nonetheless, he is participating in a fraud, even in cases in which he is not directly pocketing money for it (someone else is, and besides, he is being paid to take care of the patient). So, refraining from exposing fraud is, it seems to me, condoning and abetting fraud no matter how one puts it. This may be a case of a philosophical distinction without a difference.


Again, I recognize that Lamm’s paper is important because too often people like myself, who have pursued a career of exposing pseudoscience and defending science, make leaps of reasoning from the factual to the ethical, automatically assuming certain value judgments without further argument. But it seems to me that, once the arguments have been examined, there is still a strong preponderance of reasons to expose pseudo-medicine whenever possible.

Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on par with scientific truths, despite much nonsense that has been written about it in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.
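To make the contrast a bit more vivid, here is a schematic rendering of the two accounts; the notation is mine, purely for illustration, and not part of any standard treatment:

\[
\textbf{Correspondence:}\quad \text{“}S\text{” is true} \iff S \qquad \text{(e.g., “Saturn has rings” is true iff Saturn has rings)}
\]

\[
\textbf{Coherence:}\quad T \text{ is true} \iff A \vdash T \qquad \text{(where } A \text{ is a consistent set of axioms)}
\]

The first schema (reminiscent of Tarski’s T-schema) ties truth to how the world is; the second ties it to derivability within a formal system.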


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths don’t follow either a correspondence or a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:
“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do) then you are left with the task of explaining how come so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his University of Canberra colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call “truth-apt” (i.e., it looks like it’s referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seems to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” One theologian, Alvin Plantinga, actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser’s paper. They consider evolutionary error theory within a broader framework, the continuum between “folk” (i.e., everyday) and scientific understanding of things. Let’s clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn’t doing any such thing: in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn’t matter whether it is the sun that rises or sets, or the earth that rotates around its axis.


This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and morally wrong if it doesn’t.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to articulate precisely, and usually to expand, what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those practices as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong, and reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the third of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.


This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically. For instance, that individuals flourish if they are accorded some rights, like the right to health care, or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, and so reasoning built on them is valid in some cases and invalid in others.


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot correctly, in my mind, argued, then the structure of that reasoning is a matter of logic (coherence) while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.
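For those who like things spelled out, here is a minimal formalization of the xenophobia example above; the symbols G and X are mine, purely for illustration:

\[
\underbrace{G}_{\text{we aim at a peaceful world}} ,\qquad \underbrace{X \rightarrow \neg G}_{\text{xenophobia precludes such a world}} \;\;\therefore\;\; X \text{ is to be rejected (“xenophobia is immoral”)}
\]

The second premise is answerable to correspondence (it is true or false depending on empirical facts about human beings), while the step from premises to conclusion is answerable to coherence (it is valid or invalid as a matter of logic).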


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical, imperatives. They wish for universal, mind-independent moral truths. Too bad, there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It’s not going to happen, because it isn’t a workable conception of morality for creatures like us. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

Does the universe suffer from multiple personality disorder?

I am the sort of rare philosopher who is somewhat skeptical of metaphysics. For instance, I recently wrote that I don’t think there is such a thing as metaphysical necessity or impossibility, because those two categories are exhaustively covered by physics and logic: something is either physically or logically impossible / necessary. But if there is one thing that really makes my bullshit detector go up to red alert when it comes to metaphysical claims, it is the increasingly popular idea of panpsychism.


There are a number of versions of it (we will encounter a couple below), but essentially the notion is that consciousness is not — as biologists and neuroscientists would understand it — a highly evolved trait present only in human beings and, to a lesser extent, in other species with sufficiently complex brains. On the contrary, panpsychists think that it is an elemental property of the universe, like mass, or the spin of a particle, and is therefore present everywhere.


Needless to say, there is not a shred of empirical evidence that panpsychism is a correct description of the world, and the notion, in modern metaphysics, is tightly linked to the solution of an entirely made up (in my opinion) problem in philosophy of mind: some philosophers, like my New York University colleague David Chalmers, just can’t imagine how a mass of meat, electrical signals and chemicals (i.e., your brain) can possibly produce the first-person experience we all commonly have when we see red (literally, as in the color), or experience sexual pleasure, or think and feel anything at all.


Chalmers calls this the “hard problem” of consciousness, and I have argued that there is no such thing. Consciousness has not yet been understood by science, but there is no reason in principle why it couldn’t be. It’s “hard” only in the sense that it requires a hell of a lot of imaginative empirical work.


The fact is that people like Chalmers find themselves in a pickle. Since they maintain that consciousness is a problem irreducible to the methods of science, they have to postulate some sort of dualism, i.e., a radical, qualitative separation between what regular matter does and what thinking matter does. The most famous dualist was Descartes, who thought that there are actually two different kinds of substance: res extensa and res cogitans (this is called substance dualism).


This sort of dualism has gone, thankfully, out of fashion in philosophy, only to be replaced by a milder (but I think equally untenable, in the form in which it is usually presented) type, known as property dualism. Property dualists like Chalmers argue that when matter (the same matter, not two different kinds as hypothesized by Descartes) is organized in a certain complex manner, then consciousness somehow emerges. This is problematic because nobody seems to have a clue about what emergence means in this case, or how to cash it out as an actual explanation of consciousness. It’s sophisticated hand waving, but hand waving nonetheless.


Enter panpsychism. Chalmers and others have figured out that this very old notion (it is found in a number of cultures across the globe, for instance in Stoic philosophy in the West) can be couched in modern philo-scientific jargon and made to do the work of solving the hard problem. Indeed, for a panpsychist, in a sense, the hard problem dissolves into a non-problem, because consciousness does not have to emerge from certain organizational patterns of matter, since it is a foundational property of matter itself. It’s consciousness all the way down, so to speak.


The idea is elegant and appealing. And I assure you that, as a modern Stoic practitioner, I would love for it to be true! But it is ad hoc, meaning that the only reason to believe it, so far, is that it solves an artificial problem created by philosophers of mind themselves. There is no empirical evidence or independent theoretical support (say, from biology, or fundamental physics) for us to believe it.


Which is why we now turn to a recent essay by Bernardo Kastrup, Adam Crabtree, and Edward Kelly, entitled “Could multiple personality disorder explain life, the universe and everything?” Yeah, you read the title right (and did you notice the reference to the brilliant Hitchhiker’s Guide to the Galaxy?). The article, published over at the Scientific American blogs, is a lay summary of a technical paper that I honestly can’t believe passed peer review: “The Universe in Consciousness,” which appeared in the Journal of Consciousness Studies (David Chalmers is on the advisory editorial board, though that doesn’t mean there is any direct connection between him and the paper).


Before I get to the crazy part of Kastrup and colleagues’ article, let me talk about the bits where I agree with the authors. First off, multiple personality disorder, or as it is now known, dissociative identity disorder (DID), is real, and these authors are credentialed experts in that field. I am, therefore, not questioning what they say about the disorder itself.


People affected by DID switch between a number of alternative personalities, each characterized by its own distinctive behavior. Recent research has shown that there are clear neural correlates to each “alter.” For instance, a woman with DID exhibited some alters that claimed to be blind, even though there was nothing wrong with her optic nerve or any other part of her visual system. Using EEGs, researchers were able to confirm that the component of the woman’s brain activity normally associated with sight was, in fact, not present whenever one of her blind alters was in control. By contrast, when a sighted alter reasserted control, the usual brain activity returned. So the phenomenon is real, with a clear and demonstrable biological underpinning.


I also agree with Kastrup and colleagues’ criticism of standard versions of panpsychism. Specifically, they call the sort of panpsychism I described above “constitutive panpsychism” and write:


“Constitutive panpsychism has a critical problem of its own: there is arguably no coherent, non-magical way in which lower-level subjective points of view — such as those of subatomic particles or neurons in the brain, if they have these points of view — could combine to form higher-level subjective points of view, such as yours and ours. This is called the combination problem and it appears just as insoluble as the hard problem of consciousness.”


Yup, exactly.


Kastrup and colleagues then move to something called “cosmopsychism,” which is the idea that consciousness is indeed spread throughout the universe, but it isn’t particulate (i.e., present in bits and pieces in particles, molecules, rocks, neurons, and so forth) but rather one whole thing. This is really the old-fashioned philosophical notion of idealism: there is only one, universal, consciousness.


But cosmopsychism also is no slam dunk:


“You don’t need to be a philosopher to realize the obvious problem with this idea: people have private, separate fields of experience. We can’t normally read your thoughts and, presumably, neither can you read ours. Moreover, we are not normally aware of what’s going on across the universe and, presumably, neither are you. So, for idealism to be tenable, one must explain — at least in principle — how one universal consciousness gives rise to multiple, private but concurrently conscious centers of cognition, each with a distinct personality and sense of identity.”


I think you know where this is going, right? Let us have Kastrup and colleagues tell us explicitly:
“We know empirically from DID that consciousness can give rise to many operationally distinct centers of concurrent experience, each with its own personality and sense of identity. Therefore, if something analogous to DID happens at a universal level, the one universal consciousness could, as a result, give rise to many alters with private inner lives like yours and ours. As such, we may all be alters — dissociated personalities — of universal consciousness. Moreover, there is something dissociative processes look like in the brain of a patient with DID. So, if some form of universal-level DID happens, the alters of universal consciousness must also have an extrinsic appearance. We posit that this appearance is life itself: metabolizing organisms are simply what universal-level dissociative processes look like.”


Holy crap. So we are now positing that the entire universe “suffers” from a multiple personality disorder because we need to solve a non-problem that we created ourselves out of stubbornly postulating that there is something special and quasi-magical about consciousness. And of course, all of this without either a modicum of empirical evidence or any serious theoretical reason (again, from either biology or fundamental physics) to back it up!


(Moreover, if the universe were suffering from DID and I were one of the alters, shouldn’t I perceive myself as a coherent entity looking out to the whole universe, just like human DID patients see the world around them from a standpoint of unitary consciousness, no matter which alter is in control?)


No, my friends. I think it far more reasonable to take consciousness at face value. It’s a biological process (like photosynthesis, say) that evolved in certain groups of the phylum Animalia (but not in plants, fungi, bacteria, and the like) with a sufficiently complex brain. We do not know how it works in detail, though we are beginning to map its neural correlates. We also don’t know why consciousness was favored by natural selection (we infer that it must have been because the necessary brain structures are metabolically very costly), though there are hypotheses out there (it may have to do with our ability to create mental representations).


Let me be clear about one thing here: panpsychism, property dualism, and even substance dualism aren’t crazy ideas. They are not logically inconsistent or anything like that. But they are not consistent with everything we know from the natural sciences at this point. And if I have to choose between that knowledge and made-up notions like the ones we have considered here, I’m whipping out my Occam’s razor and mercilessly slashing through the whole shebang. At this point in time, the razor will surely cut down panpsychism. In the future, we’ll see, fate permitting.