Category Archives: Philosophy of Science

No, science does not provide all the answers to the big questions

From time to time a famous scientist allows himself (in my experience it’s always a man) to write nonchalantly about something of which he demonstrably has only a superficial grasp: philosophy. The list of offenders is a long one, and it includes Lawrence Krauss, Neil deGrasse Tyson, and Stephen Hawking, among several others. (Fortunately, there are also exceptions, scientists who value a constructive intercourse with the humanities, like Sean Carroll.) The latest entry in this dubious pantheon is Peter Atkins, who recently published a sloppy essay in the otherwise excellent Aeon magazine entitled “Why it’s only science that can answer all the big questions.” Oh boy.


Atkins begins by telling us that there are two fundamental kinds of “big questions”:


“One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. … Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame. … The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. … These are all real big questions and, in my view, are open to scientific elucidation.”


This is hardly news, of course. David Hume — one of my favorite philosophers — made essentially the same argument back in the 18th century, in his case rejecting what he saw as the waste of time associated with the Scholastic metaphysics that had prevailed throughout the Middle Ages:


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” (An Enquiry Concerning Human Understanding)


With all due respect to Hume, it’s a good thing people didn’t follow his advice, or we would have lost his very own Enquiry Concerning Human Understanding, since that book doesn’t contain any abstract reasoning concerning quantity or number, nor does it contain any experimental reasoning concerning matter of fact. And yet, it is — justly — considered to be one of the most important works of modern philosophy.


Atkins apparently realizes that he may come across as a bit too facile, since he acknowledges that he is defining the big questions precisely as those that science can answer, then turning around to “demonstrate” that science is the only discipline equipped to answer such questions. As he drily puts it when considering the obvious charge of circularity: “that might be so.” Which doesn’t stop him from proceeding as if it were not so.


Atkins tells us that science is getting ready to tackle what he considers the next three big questions: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious?


I have no doubt, as a scientist, that those are, indeed, scientific questions. I’m slightly more skeptical, as a philosopher, that science will actually be able to come up with answers. Fundamental physics, after more than a century of uninterrupted success, seems to have entered a period of navel gazing where speculation (admittedly mathematically informed speculation) is poised to replace empirical evidence. So we shall see if and when we’ll actually get a “theory of everything,” and whether that theory will in fact be able to tell us how the universe began from “nothing” (there is some doubt that it will).


Regarding the second question, the origin of life, theories have been piling up for several centuries now, and yet we don’t seem to be particularly close to a resolution. I’m certainly not arguing that a resolution isn’t possible, but it’s a very, very difficult problem, for the simple reason that a lot of the historical traces have been lost. No geological strata survive from the time when the primordial earth was home to the first living organisms, meaning that researchers on the origin of life are like detectives who already know the smoking gun isn’t going to be found. At best, they’ll have to rely on circumstantial evidence. Even if we were able to produce life artificially in the laboratory, that would not solve the problem, since it wouldn’t mean that life on our planet actually followed anything like that particular causal path.


As for consciousness, I remain convinced that the problem is indeed biological in nature, and that therefore developmental, evolutionary, and neuro-biology are the disciplines best positioned to find a solution. But at the moment nobody seems to have much of a clue, and common talk of the brain being a computer is finally beginning to be understood as the shaky and very likely misleading analogy that it is.


So, yes, if any of those three big questions are going to be answered, the answer will be a scientific one. But what about other questions that are arguably just as big (or, for most of us, even bigger)? Here Atkins shifts into full scientistic mode:


“I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics.”


Please notice a number of interesting and revealing things about this sentence. First, Atkins is making the time-honored argument from personal incredulity: “I see no reason why…” Which, of course, is not an argument at all, but an elementary logical fallacy. Second, he is seriously hedging his bets when he immediately qualifies his initial statement: “or at least illuminate…” Ah, well, but philosophers since the Pre-Socratics have understood that empirical evidence (i.e., “science”) can illuminate philosophical questions. However, that’s a far more modest claim than the notion that science can actually answer those questions. Third, Atkins can’t help himself but deliver a contemptuous dig at the “semi-sciences.” This attitude, common among physicists, reflects a naive understanding of the philosophy of science, according to which physics is the (self-professed) “queen” of the sciences, and every other field will achieve full scientific status only when it finally evolves into something that looks like physics. But an increasingly common view in philosophy is that there actually is a fundamental disunity of science, that “science” is only a loosely defined family resemblance term, reflecting the fact that each science has its own goals, methods, and internal standards, and that there is no universal yardstick to be appealed to in order to make comparative judgments of quality.


Going back to philosophy, the question of “how should I live?” admits of a large number of reasonable (and a lot of unreasonable!) answers, given the very same facts about the universe and human nature. It isn’t so much a question to be answered, as to be explored and clarified. Indeed, this is arguably what most fundamentally distinguishes science from philosophy.
One of my recent morning meditations is pertinent here. It begins with a quote by the Stoic philosopher Epictetus, who says in Discourses II, 11.13:


“Here you have philosophy’s starting point: we find that people cannot agree among themselves, and we go in search of the source of their disagreement.”


As I argue in the podcast episode, there are two broad sources of disagreement among human beings: factual and conceptual. If you and I disagree about, say, the number of moons orbiting around the planet Saturn, one of us is wrong, possibly both. There is a matter of fact about the issue, and we can find out the answer by asking an astronomer. Or more simply by doing a web search. If disagreement remains after that, then one of us is more than a bit obtuse.


The second kind of disagreement concerns how to think about facts, actions, and values. Here the facts are relevant, but insufficient to settle the dispute. Let’s say we have different opinions about the permissibility of assisted suicide. Certain empirical facts are going to be pertinent to the discussion, like information about how the procedure is going to be implemented, what safeguards there may be to avoid abuses, and so forth. But even if we agree on the facts, we may still disagree on the crucial issue: is assisted suicide morally permissible?


That’s the difference between science and philosophy, and why Epictetus says that philosophy begins with the search for why people disagree on things. Notoriously, philosophy does not necessarily settle such disagreements. The joke in philosophy departments is that our profession’s slogan is: “Philosophy: we have all the questions!” But what philosophy does, by means of careful analysis and reasoned argument, is to help us clarify why, exactly, we disagree. That is of huge help to people of good will who wish to honestly pursue discussions in search of better ways to conduct their lives. Atkins may want to take notice.

Biology’s last paradigm shift and the evolution of evolutionary theory – part II

Last time we saw how evolutionary theory has evolved over the past century and a half, why so many contemporary biologists are calling for what they refer to as the Extended Evolutionary Synthesis (see here and here), and how Darwin, building on David Hume, definitively rebutted the intelligent design argument advanced by William Paley. All as part of a discussion of a paper I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.” (full text here) In this second part we are going to look at whether the transition from natural theology to Darwinism constituted a paradigm shift, according to criteria laid out by philosopher of science Thomas Kuhn. As I mentioned last time, in the paper I also apply the same analysis to what happened after Darwinism, to more and more recent incarnations of evolutionary theory, but I will not discuss that section here.


According to Kuhn, change in science consists of two distinct and alternating phases: during “normal science” scientists use the dominant theoretical and methodological tools within a field of inquiry to solve “puzzles,” i.e., problems arising within a particular theory. From time to time, however, the number of problems that cannot be resolved within the adopted framework (“anomalies”) becomes large enough to trigger a crisis, which is resolved when a new “paradigm” replaces the old framework and provides fresh guidance for further normal, puzzle-solving science.


One of the problems with the Kuhnian approach is that Kuhn never defined exactly what he meant by “paradigm,” which means that it is not entirely clear what may constitute a paradigm shift. For the purposes of my argument, I will use the commonly accepted interpretation of paradigms as encompassing the “disciplinary matrix”: not just the dominant theory or theories within a given field, but also the accompanying methodologies, the training strategies for the next generation of scientists, and – no less important – the pertinent metaphysical and epistemological assumptions.


Kuhn suggested five criteria for comparing competing paradigms and for theory choice: 1) Accuracy; 2) Consistency, both internal and with other theories; 3) Scope, in terms of how widely the explanatory reach of a theory extends; 4) Simplicity; and 5) Fruitfulness, in terms of further research. Roughly speaking, then, the comparison between the two paradigms of natural theology and Darwinism is striking. Let’s go through it criterion by criterion.


Accuracy


Natural theology: all explanations are ad hoc, since God’s will is inscrutable.


Darwinism: it can explain some surprising facts about the biological world, like the complexities of the flower structure in some orchid species, or the intricacies of the life cycles of some parasites.


Consistency


Natural theology: internally inconsistent with the idea of an all-powerful, all-good God (the problem of natural evil).


Darwinism: as internally consistent as any major scientific theory; external links to other sciences, particularly Darwin’s prediction that the age of the earth had to be greater than was commonly thought by geologists and physicists of the time (it turns out he was right).


Scope


Natural theology: allegedly all-encompassing, but supernatural “explanations” are epistemologically empty. That is, to say “God did it” sounds like an explanation, but it really doesn’t explain anything.


Darwinism: new facts about the biological world that are explained by the theory have been consistently uncovered for more than one and a half centuries.


Simplicity


Natural theology: deceptively simple, if one neglects the obvious question of the origin and makeup of the Creator.


Darwinism: in its original form invokes a small number of mechanisms to explain biological history and complexity; more recent versions invoke more mechanisms, but still a relatively limited number.


Fruitfulness


Natural theology: did not lead to any research program or discovery.


Darwinism: has maintained a vigorous research program for more than one and a half centuries.
According to the above summary, then, the Darwinian paradigm is definitely preferable to Paley’s natural theology – not surprisingly. More interestingly for our purposes here, these are all clear signs of a paradigm shift, the only one ever to have occurred in evolutionary biology, as I argue in the rest of the original paper.


Kuhn’s theory of paradigm shifts famously included another controversial notion: incommensurability, the idea that crucial concepts within a given paradigm are simply not comparable to what superficially appear to be equivalent concepts within another paradigm. Kuhn identified three distinct types of incommensurability: methodological, observational and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. Obviously, natural theology and Darwinism are methodologically incommensurable: while they both rely on observation and comparative analyses, their goals are entirely different. For Paley, the focus is on the intricate complexity of living organisms, constantly interpreted as an obvious indication of the will and omnipotence of the Creator. Darwin, instead, pays particular attention to precisely those biological phenomena that are troubling to the notion of intelligent design, as in this famous passage:


“I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of Caterpillars.” (letter collected by Francis Darwin 1887).


More broadly, the sort of “puzzles,” to use Kuhn’s terminology, that Darwinists began to pay attention to concern the historical relationships between different species of organisms (something that is defined out of existence within the natural theological paradigm, since species are specially created), as well as the kind of ecological settings that bring about different adaptations (again, a problem ruled out within natural theology, where adaptations are the direct result of an intelligent act).


Observational incommensurability is tightly linked to the idea that observations are theory-dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. This is perhaps one of the most controversial of Kuhn’s notions, famously illustrated with images from Gestalt psychology, where the same pattern of lines on paper can be interpreted in dramatically different fashions (e.g., a vase or two faces, an old or a young woman, a rabbit or a duck, etc.).


The problem, of course, is that if we take the Gestalt metaphor seriously, we are led to the position that there is no true or even better way to interpret the data, which in turn leads to the constructivist temptation: any theory is just as good as any other, and there really is no way to measure progress in science. Kuhn strongly disavowed such an extreme interpretation of his ideas, and the notion of theory-dependence of observations is now commonly accepted in philosophy of science and embedded in textbook treatments of the subject.


Be that as it may, it is hard to imagine examples of observational incommensurability between natural theology and Darwinism, in part no doubt because no sophisticated way of gathering data was accessible – beyond direct observation and rudimentary experiments – to proponents of the two paradigms.


Finally we get to semantic incommensurability. This has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being the concept of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity.


For the purposes of our discussion, one could make the argument that a similar situation holds for the shifting concept of species between natural theology and Darwinism. Both paradigms do refer to “species,” but the meaning of the term is entirely different. For Paley, species were fixed entities set in place by the action of the Creator – in that sense not far from Newton’s own conception of the physical world, and particularly of the laws governing it. For Darwin, however, species are ever-changing entities with no sharp boundaries, altered by evolutionary processes in a continuous, gradualistic fashion.


All in all, then, it appears that whether we use the first set of Kuhnian criteria or the various notions of incommensurability, there are very strong reasons to conclude that the shift between natural theology and Darwinism was, in fact, a paradigm shift. It was also, in a very important sense, a shift from a proto-scientific to a scientific view of biology: Darwin and Wallace abandoned any reference to supernatural forces, thus literally establishing a whole new field of science, which keeps, ahem, evolving even today.

Biology’s last paradigm shift and the evolution of evolutionary theory – part I

I find myself devoting more and more time to public outreach and what is increasingly referred to as public philosophy. But, as you might know, I am also an academic, and I publish scholarship in philosophy of science. Which is why from time to time I like to combine the two and write a post that highlights for the general public some of my own technical publications. (A series of these, covering the range 2010-2016, can be found here.) The present essay refers to a paper that I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.”


The theory of evolution, which provides the conceptual framework for all modern research in organismal biology and informs research in molecular biology, has gone through several stages of expansion and refinement. Darwin and Wallace proposed the original idea back in 1858, centering on the twin concepts of natural selection and common descent. Shortly thereafter, Wallace and August Weismann worked toward the complete elimination of any Lamarckian vestiges from the theory, leaning in particular on Weismann’s concept of the separation of soma and germ lines, and resulting in what is sometimes referred to as “neo-Darwinism.”


The theory then experienced a period of “eclipse” at the turn of the 20th century. Many biologists accepted the idea of common descent but either rejected or greatly de-emphasized the importance of natural selection as an evolutionary mechanism. The situation was exacerbated by the rediscovery of Mendel’s work, which pointed to an apparent incompatibility between discrete inheritance particles (“genes”) and the sort of continuous quantitative variation necessary for natural selection to produce gradual evolutionary change.


Famously, the crisis was overcome with the reconciliation of Mendelian and statistical genetics made possible by Ronald Fisher, J.B.S. Haldane and Sewall Wright in the 1930s, and that later on culminated in the Modern Synthesis (henceforth, MS) of the 1940s, to which several additional authors made important contributions, including but not limited to Theodosius Dobzhansky, Julian Huxley, Ernst Mayr, and George G. Simpson. The MS is still the version of the theory largely accepted by the scientific community, and it is what graduate students in the discipline are trained on.


More recently, several authors have pushed for an Extended Synthesis (henceforth, ES) in evolutionary biology, initially from a variety of individual perspectives largely rooted in particular fields of inquiry, such as Evo-Devo or phenotypic plasticity, and now with a more concerted effort aimed explicitly at the formalization of a broader conceptual framework for evolutionary biology (see here).


The ES is very much a work in progress, but the idea is to accomplish a number of goals that have so far proven somewhat elusive: first and foremost, to finally bring developmental biology – famously left out of the MS – into the fold. Second, to provide a coherent way to reconcile the “holistic” tendencies of organismal biological research with the decidedly more reductionist approach of molecular biology and its most recent products, the various “-omics” (genomics, proteomics, metabolomics, etc.). Third, to incorporate as primary players a number of biological phenomena and processes that had been either discarded or minimized within the context of the MS, e.g., phenotypic plasticity, genetic accommodation, epigenetic inheritance, etc. Fourth, to expand the standard theoretical toolkit of the MS – which is primarily grounded in population and quantitative genetic theory – to include elements from computational biology and complexity theory. Fifth, to incorporate in evolutionary theory new concepts that have emerged from theoretical research during the past several years, chief among them the triad constituted by evolvability (i.e., the possibility of the evolution of evolutionary mechanisms), modularity (of different components of an organism’s phenotype) and robustness (i.e., the degree of resilience of developmental mechanisms).


In the paper I address the question of whether any of the above amounts to something akin to Thomas Kuhn’s famous paradigm shifts, i.e. whether evolutionary biology has ever undergone anything like what Kuhn describes as a moment of revolutionary science. I argue that it has not, and that it will not, even if the ES succeeds in establishing itself. Rather, I think the only time in the history of biology when such a transition has occurred was during the 19th century, when Darwin’s original theory replaced the dominant “paradigm” of the day, Paley-style natural theology. In the following I will discuss that particular paradigm shift, together with the general Kuhnian notion of alternation between revolutionary and “normal” science. People interested in the same analysis applied to what happened to evolutionary theory after the onset of Darwinism are referred to the full paper.


William Paley is responsible for the most articulate defense of the idea that living organisms are the result of a special creation by supernatural forces. In his Natural Theology (1802) he famously introduced the metaphor of a watch and its watchmaker:


“In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there: I might possibly answer, that for any thing I know to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch, as well as for the stone? Why is it not as admissible in the second case as in the first? For this reason, and for no other, viz., that when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose (…) This mechanism being observed (…) the inference, we think, is inevitable, that the watch must have had a maker; that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.” (1802, p. 5).


Paley argued by analogy from the watch/watchmaker inference to the complex living organism/supernatural intelligent designer inference, despite the fact that such an inference had been dealt a devastating philosophical blow by Hume, writing several decades before Paley, in 1779. Contrary to what was assumed for a long time, Paley appears to have been familiar with Hume, though he was obviously unconvinced by the latter’s arguments.


Darwin was initially persuaded by Paley’s reasoning, but eventually of course provided the decisive counterpoint that was missing in Hume: an alternative mechanism (natural selection) to generate both biological complexity and the pattern of common descent that was denied by Paley’s natural theology. It behooves us to briefly examine Darwin’s answer, so that we may then proceed to compare Paley’s and Darwin’s “paradigms” in view of Kuhn’s ideas to see whether we can sensibly talk of a paradigm shift occurring at the very onset of evolutionary biology as an independent discipline.


Although Paley is mentioned by name only once in The Origin (on p. 201 of the first edition in the chapter on “Difficulties on Theory”), Darwin mounts a concerted and sustained attack on natural theology in chapters VI and XIII of his magnum opus. Here are some relevant quotes to establish the case. First Darwin explicitly contrasts the type of “explanation” provided by natural theologians with a naturalistic explanation typical of the new science:


“He who believes in separate and innumerable acts of creation will say, that in these cases [of organisms’ behavior that have changed in response to a new environment, without – yet – an accompanying change in the structure of the proper organs] it has pleased the Creator to cause a being of one type to take the place of one of another type; but this seems to me only restating the fact in dignified language. He who believes in the struggle for existence and in the principle of natural selection, will acknowledge that every organic being is constantly endeavouring to increase in numbers; and that if any one being vary ever so little, either in habits or structure, and thus gain an advantage over some other inhabitant of the country, it will seize on the place of that inhabitant, however different it may be from its own place.” (1859, p. 186)


Then he proceeds to directly criticize Paley’s use of analogies to draw a parallel between the inference to human design and the inference to supernatural design:


“It is scarcely possible to avoid comparing the eye to a telescope. We know that this instrument has been perfected by the long-continued efforts of the highest human intellects; and we naturally infer that the eye has been formed by a somewhat analogous process. But may not this inference be presumptuous? Have we any right to assume that the Creator works by intellectual powers like those of man?” (p. 188)


Immediately afterwards, he goes so far as to lay out the criteria for the falsification of his hypothesis, in sharp contrast, of course, with the natural theologian’s ideas, which cannot be falsified:


“If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. … Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each. No organ will be formed, as Paley has remarked, for the purpose of causing pain or for doing an injury to its possessor. If a fair balance be struck between the good and evil caused by each part, each will be found on the whole advantageous.” (pp. 189, 201)


To summarize, then: the idea of intelligent design – which had been around at least since Plato’s Timaeus – had been severely crippled on philosophical grounds by Hume in the 18th century. Still, Paley was able to mount a spirited and detailed defense of it at the beginning of the 19th century, and it was Darwin who, during the second half of that century, finally dealt it a decisive blow on scientific grounds (setting aside, of course, the modern resurgence of creationism, which is not an intellectually meaningful movement). It is on the latter transition that I wish to focus in part II, from the perspective of Kuhn’s paradigm shifts.


(next: the Paley-Darwin transition as a paradigm shift)

Is exposing medical pseudoscience unethical?

You would think the answer to the title question is obvious: no, of course it isn’t unethical to expose medical pseudoscience (or any kind of pseudoscience). How can anyone think so? And yet, there are some decent reasons to doubt the certainty of such a conclusion. At least that’s the take of a paper that so far as I know has not yet been published, but is available at the PhilSci (philosophy of science) Archive at the University of Pittsburgh.


The paper is entitled “Exposing medical pseudoscience may be unethical,” and is authored by Ehud Lamm. Here is Lamm’s argument, as he presents it:


(1) Many ill health situations, in particular those involving multi- and comorbidity do not have very effective treatment options.


(2) Alternative medicine, consisting of various pseudo-scientific enterprises, provides repose to some of those suffering from such conditions.


(3) A significant part (if not all) of this benefit can be attributed to the placebo effect.


(4) In current, “Western,” societies, placebos are most effective when they have the symbolic currency of science.


(5) Ergo, exposing these pseudo-sciences will hurt people for whom there are only limited health care alternatives.


This is a reasonably constructed argument, so if one wishes to reject it — as I do — one needs to attack one or more of the premises, or to uncover and then reject some additional, hidden premises. I will follow both strategies here, though I wish to thank Lamm at the outset for having forced me to think more carefully about something I had hitherto taken for granted.


I am going to agree right away with premise (3): most (indeed, very likely, all) of the benefits of alternative medical treatments are the result of placebo effects. But I have varying degrees of issues with premises (1), (2), and (4). And I also think Lamm is not explicitly listing a few other very relevant premises here, though he mentions one of them in passing in the paper, as we shall see, only to discard it far too quickly.


Premise (1): this trades on an ambiguity surrounding the word “treatment.” Yes, modern medicine does not provide effective treatment against certain conditions, for instance a number of types of terminal cancer, or several types of dementia. But alternative medicine does not provide treatments either, according to Lamm’s own premise (3), only palliatives. Which means that the proper comparison here ought to be between standard palliative care and its “alternative” counterpart. Palliative care works on the basis of a combination of actual biomedical effects (e.g., painkillers) and placebo. Other things being equal, then, it is more effective for a doctor to use her prestige (premise 4) to push tested palliative options rather than pseudoscientific ones.


Premise (2): for the reasons just seen, even if alternative medicine does provide some respite to patients by way of a placebo effect, standard palliative care is in any case preferable, because it also works by placebo, but additionally actually delivers (at least in some cases) a real biomedical effect. Again, remember that we are talking about alleviating pain and similar things, not actual cures, which are not achievable via placebo, and Lamm agrees that placebo is pretty much the only mechanism through which pseudo-medicine “works.”


Premise (4): the symbolic currency of science in the Western world seems much less real than Lamm assumes. Witness the widespread antivax movement, or — outside of medicine — so-called “skepticism” of climate change, or of evolutionary theory. If anything, science is suffering a credibility crisis of late, and inducing medical doctors to deploy pseudoscience is hardly going to improve such credibility.


If the reader agrees with even one of my criticisms of Lamm’s premises, that is enough to reject the argument. But there is more.


For instance, Lamm at one point says: “I am putting to one side deontological and consequentialist arguments against the use of placebos in general, and assume that in some situations relying on a placebo effect is legitimate.” That’s a bit too quick, and a rather big thing to put aside (and the bit after the comma somewhat begs the question)! A deontologist, for instance, might argue that it is a violation of the categorical imperative for a doctor to systematically lie to his patients, because that violates the implicit trust between the two (the very same trust on which premise 4 is built, incidentally), and because the doctors themselves would probably rather not be lied to when it is their turn to be patients.


On consequentialist grounds, one could argue that there are long-term negative societal consequences engendered by lying to patients and by pushing pseudoscientific notions. Because, again, patients might eventually lose confidence in their doctors and begin to doubt any sort of medical advice; also because people will be more likely to embrace pseudoscience under other circumstances, for instance when there are, in fact, perfectly good evidence-based treatment options available; and moreover because we would be condoning a multibillion dollar industry based on what is essentially a fraud.


Furthermore, there is a third big ethical framework out there: virtue ethics. It is hardly a good character trait to engage in systematic deception of others, even for their own good (a premise, this last one, that I have already rejected). Virtue ethics does not follow universal rules, so lying can be acceptable under certain circumstances. But the focus is on the character of the moral agent, and repeated lying is going to be deleterious to that character, since character is shaped precisely by repeating the same actions over and over, or by systematically entertaining and agreeing with the same thoughts.


Lamm sees another concern with exposing pseudo-medicine: “the movement toward Evidence Based Medicine (EBM), to the extent that it is successful, may lead people to be less open to treatments and assistance that are not backed by science, such as talk therapies, meeting with alternative medicine practitioners, support from informal social networks, help from clergy, and so on.”


This is an interesting point, but the way it is articulated carelessly lumps together a number of things that should be considered separately. For instance, homeopathy is clearly and definitely a pseudoscience, so it should be counted as a success if people do not waste their money (and hopes) on very expensive sugar pills. Talk therapies, however, are not necessarily pseudoscientific. We have good evidence that some forms of psychotherapy work (e.g., cognitive behavioral therapy), and there are distinctions to be drawn about what “working” means here. If a patient simply needs to talk to someone, but is under no illusion that this will actually cure him, I don’t see what the problem is, or why such talk therapy should be counted as pseudo-medicine in the first place. Perhaps it won’t work better than getting a drink with your friend, but if chatting is the only thing one needs in order to feel better, it “works.” Much hinges, of course, on what the precise claim of the therapist is going to be. The very same considerations apply to the use of informal social networks, or to help from the clergy. These things are simply not in the same category as homeopathy and other kinds of pseudo-medicine.


And there is more to consider, like the general ideas that truth and honesty are intrinsically valuable, and should be compromised only under very unusual circumstances. Certainly both a deontologist and a virtue ethicist would reason this way, though a consequentialist might disagree (there goes yet another reason to reject consequentialism, in my view).


Lamm does consider some objections to his argument in the second part of the paper, and rejects them with a number of counterarguments. I have already covered part of what he says there in the previous points, but let me add a few pertinent comments.


He writes: “we should strive for alternative channels of social support to be available and determine whether non-science backed alternatives can provide patients with social and economic benefits that they need, such as health insurance coverage, help with the education system, and so on.” Well yes, as acknowledged before, but again this trades on an ambiguity of the term “support.” There is no problem in supporting people via talk therapy, social networks, religious counseling, and so forth. But these are not types of alternative medicine, and so long as the people providing the support are not lying, then there is no ethical issue. (Of course if you are an atheist you will think that all priests are lying by definition, but this is not the case from the point of view of a believer who has already adopted that particular religion’s framework. Admittedly though, that’s a tough case, best explored on its own.)


Lamm also maintains that some pseudoscientific “treatments” may be cheap and effective (again, via placebo) and therefore preferable whenever there are no established medical options for the condition. But we have seen above that at a very minimum modern medicine can provide palliatives, and let’s remember that many alternative medical practices are anything but cheap.


The author acknowledges a big potential downside of his approach: “if the pseudoscience providing the placebo effect causes patients to ignore science backed treatment we have to consider the tradeoff to decide whether belief in the efficacy of pseudoscience is beneficial or harmful.” Indeed, and this may often be the case. Not to mention that so long as the patient is also seeking regular medical treatment then, again, there is no additional benefit from the pseudo-medical one, which will cost money not just to the patient, but also, increasingly (and unfortunately), to the health insurance or government-backed apparatus that will pay for the so-called treatment.


Finally, Lamm argues: “It may be mistakenly suggested that the argument presented here is a justification for fraud (assuming the pseudoscience in question involves fraud). However, if anything, the argument discussed here only applies to exposing fraud, which in some cases may do more harm than good.” I don’t see how this would work: if a medical practitioner knows that a given remedy is pseudoscientific and recommends it nonetheless, he is participating in a fraud, even in cases in which he is not directly pocketing money for it (someone else is, and besides, he is being paid to take care of the patient). So refraining from exposing fraud is, it seems to me, condoning and abetting fraud, no matter how one puts it. This may be a case of a philosophical distinction without a difference.


Again, I recognize that Lamm’s paper is important because too often people like myself, who have pursued a career of exposing pseudoscience and defending science, make leaps of reasoning from the factual to the ethical, automatically assuming certain value judgments without further argument. But it seems to me that, once the arguments have been examined, there is still a strong preponderance of reasons to expose pseudo-medicine whenever possible.

Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on par with scientific truths, despite much nonsense that has been written about it in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths don’t follow either a correspondence or a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:
“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do), then you are left with the task of explaining why so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his Australian National University colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call “truth-apt” (i.e., it looks like it’s referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seems to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” The philosopher Alvin Plantinga actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser’s paper. They consider evolutionary error theory within a broader framework, the continuum between “folk” (i.e., everyday) and scientific understanding of things. Let’s clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn’t doing any such thing, and in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn’t matter whether it is the sun that rises or sets, or the earth that rotates around its axis.


This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and morally wrong if it doesn’t.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to precisely articulate and usually expand what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those notions as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong, and of reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the third of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.


This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically. For instance that individuals flourish if they are accorded some rights, like the right to health care, or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, and so reasoning built on their bases is valid in some cases and invalid in others.
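The logical skeleton of such a conditional imperative can be made explicit. The following is my own schematic rendering, not Foot's notation; the symbols are purely illustrative:

```latex
% Hypothetical imperative as a two-premise inference:
%   an empirical claim plus an adopted goal yield a normative conclusion.
\begin{align*}
  \text{P1 (empirical, correspondence):}\quad & X \rightarrow \neg P
    && \text{(xenophobia undermines a peaceful world)}\\
  \text{P2 (adopted goal):}\quad & P
    && \text{(we wish to build a peaceful world)}\\
  \text{C (normative, coherence):}\quad & \therefore\ \text{avoid } X
    && \text{(xenophobia is immoral, by modus tollens)}
\end{align*}
```

The point of the sketch is that P1 is checkable against the world, while the step from P1 and P2 to C is a matter of logical validity alone; reject P2 and the conclusion no longer follows, which is exactly what makes the imperative hypothetical rather than categorical.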


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot, to my mind, correctly argued, then the structure of that reasoning is a matter of logic (coherence), while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical, imperatives. They wish for universal, mind-independent moral truths. Too bad: there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It’s not going to happen, because it isn’t a good concept of morality for humanity. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

Biological landscapes, surfaces, and morphospaces: what are they good for?

ammonite

Metaphors are rampant both in everyday language and in science, and while they are inevitable, readers of this blog also know by now that I’m rather skeptical of their widespread use, both in professional publications and, especially, when addressing the general public. (See here, here, here, and here.) One such problematic metaphor is that of so-called adaptive landscapes, or surfaces, in evolutionary biology, something on which I did a fair amount of research when I was running a laboratory of ecology and evolutionary biology.

My criticism of the way the landscape metaphor has sometimes warped biologists’ thinking is detailed in a chapter published back in 2012 as part of a very interesting collection entitled The Adaptive Landscape in Evolutionary Biology, edited by Erik Svensson and Ryan Calsbeek for Oxford University Press. As often happens, mine was the lone contribution from the token skeptic…

Few metaphors in biology are more enduring than that of adaptive landscapes, originally proposed by Sewall Wright in 1932 as a way to present visually, to an audience of typically non-mathematically savvy biologists, his ideas about the relative roles of natural selection and genetic drift in the course of evolution. The metaphor was born troubled, not least because Wright presented, in his original paper, different diagrams that simply cannot refer to the same concept and are therefore hard to reconcile with each other. For instance, in some usages the landscape’s non-fitness axes represent combinations of individual genotypes, while in other usages the points on the diagram represent gene or genotypic frequencies, and so are actually populations, not individuals.

typical (hypothetical) fitness landscape

Things got even more confusing after the landscape metaphor began to play an extended role within the Modern Synthesis in evolutionary biology and was appropriated by G.G. Simpson to further his project of reconciling macro- and micro-evolution, i.e. to reduce paleontology to population genetics. This time the non-fitness axes of the landscape were phenotypic traits, not genetic measures at all. How one would then translate from one landscape to the other (i.e., from genes to morphologies) remains entirely unaddressed in the literature, except for vague gestures toward an ill-defined and very rarely calculated “genotype-phenotype mapping function.”

These are serious issues, if we wish to use the landscape metaphor as a unified key to an integrated treatment of genotypic and phenotypic evolution (as well as of micro- and macro-evolution). Without such unification evolutionary biology would be left in the awkward position of having two separate theories, one about genetic change, the other about phenotypic change, and no conceptual bridge to connect them.

To try to clarify things a bit, I went through the available literature and arrived at a typology of four different kinds of “landscapes” routinely used by biologists:

Fitness landscapes. These are the sort of entities originally introduced by Wright. The non-fitness dimensions are measures of genotypic diversity. The points on the landscape are typically population means, and the mathematical approach is rooted in population genetics. (see figure above)

Adaptive Landscapes. These are the not-so-straightforward “generalizations” of fitness landscapes introduced by Simpson, where the non-fitness dimensions now are phenotypic traits. The points on the landscape are populations speciating in response to ecological pressures, or even above-species-level lineages (i.e., this is about macro-evolution). There is — with very special exceptions discussed in my paper — no known way to move from fitness to adaptive landscapes or vice versa, even though this move is usually assumed by authors.

Fitness surfaces. These were introduced by Russell Lande and Steve Arnold back in the ’80s to quantify the study of natural selection. Here phenotypic traits are plotted against a surrogate measure of fitness, and the landscapes are statistical estimates used in quantitative genetic modeling. The points on the landscape can be either individuals within a population or population means, in both cases belonging to a single species (i.e., this is about micro-evolution).

Morphospaces. These were first articulated by paleontologist David Raup in the mid-’60s, and differ dramatically from the other types for two reasons: (a) they do not have a fitness axis; and (b) their dimensions, while representing phenotypic (“morphological”) traits, are generated via a priori geometrical or mathematical models, i.e. they are not the result of observational measurements. They typically refer to across species (macro-evolutionary) differences, though they can be used for within-species work as well.

The first thing to note is that there are few actual biological examples of fitness landscapes (Wright-style) or Adaptive Landscapes (Simpson-style) available, while there is a good number of well understood examples of morphospaces (Raup-style) and particularly of fitness surfaces (Lande–Arnold style). These differences are highly significant for my discussion of the metaphor. The paper summarizes examples — both conceptual and empirical — of each type of landscape and the complex, often barely sketched out, relationships among the different types.

When it comes to asking what the metaphor of landscapes in biology is for, we need to distinguish between the visual metaphor, which is necessarily low-dimensional, and the general idea that evolution takes place in some sort of hyper-dimensional space. Remember that Wright introduced the metaphor because his advisor suggested that a biological audience at a conference would be more receptive toward diagrams than toward a series of equations. But of course the diagrams are simply not necessary for the equations to do their work. More to the point, subsequent research by my former University of Tennessee colleague Sergey Gavrilets and his collaborators has shown in a rather dramatic fashion that the original (mathematical) models were far too simple and that the accompanying visual metaphor is therefore not just incomplete, but highly misleading. It turns out that hyper-dimensional dynamics are very much qualitatively different from the low-dimensional ones originally considered by Wright.

In a very important sense Wright’s metaphor of fitness landscapes was meant to have purely heuristic value, to aid biologists to think in general terms about how evolution takes place, not to actually provide a rigorous analysis of, or predictions about, the evolutionary process (it was left to the math to do that work). Seen from this perspective, fitness landscapes have been problematic for decades, generating research aimed at solving problems, like the so-called peak shift problem (how do populations stuck on a local fitness peak “shift” to a higher one?), that do not actually exist as formulated, since high-dimensional landscapes don’t have “peaks” at all: their topology is radically different.

There are problems also with the Lande-Arnold type landscapes (discussed in the paper), but here I want to shift to some good news: the actual usefulness of the fourth type of landscape: Raup-style morphospaces. One of the best examples was produced by Raup himself, with crucial follow-up by one of his graduate students, John Chamberlain. It is a study of potential ammonoid forms that puts the actual (i.e., not just heuristic) usefulness of morphospaces in stark contrast with the cases of fitness and adaptive landscapes. Ammonoids, of course, were beautiful shelled marine invertebrates that existed in a bewildering variety of forms for a good chunk of Earth’s biological history, and eventually went extinct 65 million years ago, together with the dinosaurs. This is going to be a bit technical, but stick with me, it will be worth it.

Raup explored a mathematical-geometrical space of ammonoid forms defined by two variables: W, the rate of expansion of the whorl of the shell; and D, the distance between the aperture of the shell and the coiling axis. Raup arrived at two simple equations that can be used to generate pretty much any shell morphology that could potentially count as “ammonoid-like,” including shells that — as far as we know — have never actually evolved in any ammonoid lineage. Raup then moved from theory to empirical data by plotting the frequency distribution of 405 actual ammonoid species in W/D space and immediately discovered two interesting things: first, the distribution had an obvious peak around 0.3 < D < 0.4 and W near 2. Remember that this kind of peak is not a direct measure of fitness or adaptation, it is simply a reflection of the frequency of occurrence of certain forms rather than others. Second, the entire distribution of ammonoid forms was bounded by the W = 1/D hyperbola, meaning that few if any species crossed that boundary in the morphospace. The reason for this was immediately obvious: the 1/D line represents the limit in morphospace where whorls still overlap with one another. This means that for some reason very few ammonites ever evolved shells in which the whorls did not touch or overlap.
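Raup’s construction is, at heart, a logarithmic spiral. As a minimal sketch — my own illustrative code, covering only the W and D parameters of Raup’s full model and ignoring whorl shape and translation — the spiral and the W = 1/D overlap boundary can be expressed in a few lines of Python:

```python
import math

def outer_margin(W, n_points=200, n_whorls=3, r0=1.0):
    """Outer margin of a planispiral shell in Raup's model: a logarithmic
    spiral whose radius grows by a factor W per full whorl,
    r(theta) = r0 * W ** (theta / (2 * pi))."""
    pts = []
    for i in range(n_points):
        theta = 2 * math.pi * n_whorls * i / (n_points - 1)
        r = r0 * W ** (theta / (2 * math.pi))
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def whorls_overlap(W, D):
    """Raup's W = 1/D boundary. The inner margin of the aperture sits at
    D times the outer-margin radius; after one whorl the outer margin has
    grown by W, so successive whorls touch or overlap exactly when
    W * D <= 1, i.e., when the form lies below the W = 1/D hyperbola."""
    return W * D <= 1

# The modal region Raup found (W near 2, 0.3 < D < 0.4) lies inside
# the overlap region, below the hyperbola:
print(whorls_overlap(2.0, 0.35))  # True  (2 * 0.35 = 0.7 <= 1)
print(whorls_overlap(3.0, 0.5))   # False (open-coiled: whorls separate)
```

Plotting species counts over a grid of (W, D) pairs, with `whorls_overlap` marking the hyperbola, reproduces the qualitative shape of Raup’s frequency morphospace described below.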

one-peak ammonoid morphospace

Raup’s initial findings were intriguing, but they were lacking a sustained functional analysis that would account for the actual distribution of forms in W/D space. Why one peak, and why located around those particular coordinates? Here is where things become interesting and the morphospace metaphor delivers much more than just heuristic value. John Chamberlain, a student of Raup, carried out experimental work to estimate the drag coefficient of the different types of ammonoid shells. His first result clarified why most actual species of ammonoids are found below the W=1/D hyperbola: shells with whorl overlap have a significantly lower drag coefficient, resulting in more efficiently swimming animals.

However, Chamberlain also found something more intriguing: the experimental data suggested that there should be two regions of the W/D morphospace corresponding to shells with maximum swimming efficiency, while Raup’s original frequency morphospace detected only one peak. It seemed that for some reason natural selection had found one peak, but not the other. Four decades had to pass after Raup’s paper before the mystery of the second peak was cleared up: the addition of 597 new species of ammonoids to the original database showed that the second peak had indeed been occupied: a rather spectacular case of confirmed prediction in evolutionary biology, not exactly a common occurrence, particularly in paleontology.

two-peak ammonoid morphospace, with representative shell forms

So, is the landscape metaphor in biology useful? It depends. The original versions, those introduced by Sewall Wright to make his math accessible to his colleagues, have been highly influential for decades, and yet have arguably channeled both empirical and theoretical research in unproductive directions, inventing problems (like the peak shift one) that arguably do not exist, at least not as formulated. The Lande-Arnold landscapes, which I have not discussed in this post, but do treat in the paper, have a mixed record. They have been heuristically useful for biologists interested in quantifying natural selection in the field, but have also arguably brought about a degree of tunnel vision in both the theoretical and empirical study of that most important concept in modern evolutionary theory. Morphospaces, by contrast, have a very good record of being useful in terms of generating insight into the evolution of animal (and plant) form, and yet they are actually the least commonly deployed version of the landscape idea in the technical literature. And because population genetics, with its mathematical approach, is considered more sophisticated than paleontology, things are unlikely to change in the near future. Unfortunately.

Book Club: Darwin’s Unfinished Symphony, 9, the arts

“The logic of cultural evolution is identical to that of biological evolution, even if the details differ. New ideas, behaviors, or products are devised through diverse creative processes; these differ in their attractiveness, appeal, or utility, and as a result are differentially adopted, with newfangled variants superseding the obsolete,” says Kevin Laland at the beginning of the last chapter of his book, Darwin’s Unfinished Symphony: How Culture Made the Human Mind (p. 292). It is, therefore, with a brief commentary on this chapter, focusing on the arts, that I will end my series on Kevin’s fascinating view of the young field of cultural evolution.

That introductory gambit actually illustrates where Laland’s and my views begin to diverge, though perhaps not as sharply as each of our perspectives differs from standard evolutionary psychology. I see cultural evolution as linked to its biological counterpart in two ways: first, because it originated from it; and second, because there is a broad analogy between the two. But I fall far short of Kevin’s strong statement that the two are “identical” in logic. They are not, in my mind, fundamentally because biological evolution is propelled by the teleonomic process of natural selection. Cultural evolution, by contrast, is moved by the teleological process of human cognition. The two are not the same, and I maintain that no currently available theory of cultural evolution satisfactorily accounts for either the difference or the relationship between the two. (I hasten to say, which should not be necessary, that I see nothing magical or “mysterian” about this. At all. It is simply an open scientific question, like many others.)

The cultural evolution of art is, obviously, a huge topic, which would require a book of its own. So Laland takes a reasonable approach, focusing on aspects of the evolution of a particular art form: dance. As we shall see, he has lots of interesting things to say, but not much that would surprise a historian of dance, and definitely not much that originates specifically from a biological evolutionary perspective.

Before getting to dancing, Kevin briefly discusses another art form, acting, making the case that it crucially (though not solely, of course) depends on imitation, which, as he has argued previously, is an important evolved skill in the human lineage. Since dancing also fully deploys our ability to imitate others, and given that presumably neither acting nor dancing was a direct target of natural selection, he can then conclude that both art forms are in fact byproducts of natural selection for the capacity to imitate.

“Imitation is no trivial matter. Few other animals are capable of motor imitation, and even those that do exhibit this form of learning cannot imitate with anything like the accuracy and precision of our species.” (p. 295)

Our ancestors at some point became able to solve what Laland calls the correspondence problem: imagine, for instance, that you are trying to learn how to use chopsticks. This is done by imitation, which requires translating the visual cues obtained by watching someone using chopsticks into the motor control that our own muscles have to exercise in order for us to be able to do the same. The sensory experiences involved in watching and doing are utterly different, and yet somehow our brain has to be capable of solving this correspondence problem.

Recent research has shown that human beings solve the correspondence problem by using neural networks similar to the so-called mirror neurons discovered in other primates. Kevin suggests that it is plausible that the mirror neuron or equivalent network has been selected precisely to facilitate imitation, that this particular skill has been much more refined by natural selection in humans, and that one of its most astounding and least recognized byproducts is our ability to do and appreciate art — not just movies and dancing, but also painting, sculpture, theater, music, and even computer gaming.

Kevin doesn’t think much of the alleged ability of other animals to produce art, and I think he is right:

“The motor control that allows humans to produce artistic works and performances spontaneously is a capability that no other animal shares. … The claim that chimpanzees [for instance] are artists, in any meaningful sense, is greeted with skepticism by animal behaviorists and art scholars alike.” (p. 299)

He also thoroughly debunks the idea that elephants in Thailand can paint, referring instead to evidence that the animals have been well trained to respond to subtle cues provided by their handlers, through the simple device of tugging at the elephant’s ears.

What about dancing? Here again the suggestion has been made that some animals do it, though as Laland points out, much of the answer depends on how one defines dancing, and what counts as instances of the art form. Regardless, and more importantly, he highlights the fact that the only good candidates for dancing animals are, not surprisingly, those species that are most capable of imitation. (The same considerations apply to singing animals, by the way.)

“The most transparent connection between dance and imitation … will be readily apparent to just about anyone who has ever taken or observed a dance lesson; that is, dance sequences are typically learned through imitation. … It is no coincidence that dance rehearsal studios around the world almost always have large mirrors along one wall. These allow the learner to flit rapidly between observing the movements of the instructor or choreographer and observing their own performance.” (p. 307-308)

The other thing that makes for a good dancer is the ability to learn a long sequence of actions, and Kevin has shown before in the book that this type of learning is very difficult in a non-social setting, because it pretty much requires teachers. So the evolution of teaching, which he has discussed previously as a crucial component of early cultural evolution in the human lineage, is also a prerequisite for the wonderful byproduct of our biology that we call dance.

Much of the remainder of the chapter concerns itself with the history of dancing, and it is there, I think, that the limits of insights from biological evolution are most painfully clear. Laland asks whether dance could be said to have evolved in any “rigorous” sense of the term, by which he means to ask whether dance as a “system” possesses the characteristics that any evolving system has to possess: variation, differential fitness, and inheritance. But it should be obvious that while the evolution of dance does display all three, we have essentially no account whatsoever of the second element, differential fitness. This deficiency, I argue, at the moment makes cultural evolution into a tautological theory of the kind that Karl Popper (mistakenly) thought the theory of biological evolution was. While Darwin and his successors solved that problem in the biological case, neither evolutionary psychologists nor the more sophisticated approach advocated by Kevin and colleagues has been able to solve it in the case of cultural evolution.

Kevin presents readers with a number of examples showing that there is much variation among the world’s dances, and that this variation is culturally inherited via imitation (though, crucially, the equivalent of biological “mutation” and “recombination” result from conscious or unconscious human decision making, which follows, and indeed also shapes, human aesthetic judgments).

We therefore learn about European sword dances, which apparently first appeared in ancient Greece and were brought to Britain by invading Danes and Vikings. The waltz is Kevin’s favorite example of what he calls cultural fitness. And yet, here the limits of his approach are stark, in his own words:

“Relative to other dances in the late eighteenth century, the waltz could be said to possess high ‘cultural fitness,’ which really means little more than it was unusually appealing and as a result increased readily in frequency.” (p. 311)

Right. And that, right there, is the problem. Strip the fancy wording and we are left with: “waltz (at that particular time, in that particular culture) had high fitness because it had high fitness.” That’s the sort of vicious circularity that rightly annoyed Popper. You don’t find it in evolutionary biology because a separate discipline comes to the rescue: functional ecology. It is the latter that allows us to make predictions about which organismal traits are going to be adaptive in one environment or another, given the organism’s anatomy, physiology, and ecology (and given the laws of physics and chemistry). We don’t just say that natural selection favors the fit, and then immediately turn around and define the fit as those that are favored by natural selection. But that’s pretty much what cultural evolutionary theory does, at the moment, and it shares this limitation with other approaches, such as evolutionary psychology and memetics, though for different reasons that are specific to each approach.

To be fair, Kevin does attempt to sketch an elementary functional ecology of dance. For instance we are told that waltz was attractive in late 18th century Europe, in part because of the “dance’s intoxicating swirling, and the dangerously intimate contact between male and female were a major draw.”

Okay, but presumably swirling and close male-female contact have always been intoxicating. So why late 18th century Europe? Moreover, I don’t know much about the history of dance as an academic field of study, but I doubt anything Laland says in this chapter will come as a surprise to historians of dance — and I mean everything, from the genealogical patterns of evolution by imitation to the “mutations” introduced by different cultures at different times, to ad hoc explanations (which may even be true) like the intoxicating effect of a particular dance. In other words, invoking Darwin here does little or no explanatory work.

I don’t have a better alternative. I chose Kevin’s book precisely because I think it is one of the best in the field of cultural evolution, reflecting the incredible vigor and ingenuity of Kevin as a principal investigator, not to mention the many collaborators he gives due credit throughout the book. It’s all tantalizing and very, very interesting. But it falls far short of a comprehensive theory of cultural evolution. It is good to learn about the importance of social learning, of teaching, and of imitation throughout the history of hominins. It is fascinating to think that such biological history has a lot to do with the subsequent shaping of cultural evolution. But we are still nowhere near giving a decent scientific account of sword dancing, waltz, flamenco, polka, jitterbug, or rock’n’roll. Not to mention Michelangelo, Picasso, and de Kooning; or Mozart, Beethoven and Tchaikovsky; or Homer, Dante, and Shakespeare. And so on and so forth, encompassing the bewildering variety of manifestations of what we call culture.

_____

And now for something completely different: our next book will be Early Socratic Dialogues, edited by Trevor J. Saunders, Penguin 2005. I figured that this is a blog called Footnotes to Plato, and yet we have hardly talked about Plato. So, here we go…

Book Club: Darwin’s Unfinished Symphony, 8, foundations of cooperation

Think about the complexities involved in allowing you to do something that nowadays is fairly normal: getting on a plane and flying to another city, across an ocean. It’s not just the sophisticated machinery, ground transportation, the airports, and so forth. It’s the people. Accomplishing such a feat requires the coordinated cooperation of a large number of people who don’t know each other, and don’t know you or why you wish to get on that plane in the first place. This observation sets the stage for the next to the last chapter of Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind, which we have been discussing for a while now.

The first point Kevin makes in this chapter (n. 11 in the book) is that conventional evolutionary explanations, such as kin selection and other gene-based accounts, are insufficient to account for the degree and sophistication of cooperative activities that have characterized human civilization ever since the agricultural revolution. A fully formed theory of cultural evolution is needed, to draw the outlines of which, of course, is Kevin’s goal. Obviously, the idea is not that cultural evolution is independent from its biological counterpart, but rather that it is a novel mode of evolutionary change that resulted from the particular path of biological evolution that hominins happen to have taken.

Two of the factors that make large-scale human cooperation possible are the ability to teach others, and language, which Laland has already argued itself evolved to facilitate teaching. A third factor was the origin of social norms. These specify how individuals are expected to behave within a group, including how to treat individuals who violate norms. Crucially, norms also make it possible for people to identify with a particular group, as abiding by its norms carries privileges for in-group members.

Moreover, humans are pretty much the only animals capable of trading goods (there are a few alleged cases in other primates, but they are disputed), and certainly the only ones that arrived at that convenient abstraction we call money. This level of sophistication requires language, and it is both facilitated and made necessary by the existence of division of labor, something that evolved to a high degree of sophistication, again, after the agricultural revolution, which made possible the existence of large and stable groups of humans.

All of this coordination is beneficial thanks to the advantage provided to individuals by indirect reciprocity: I do something for you, you do something for someone else, and at some point down the line another person that has been benefiting from in-group membership does something for me. Like allowing me to safely cross the Atlantic to get from New York to Rome. Repeated bouts of indirect reciprocity require gossip, so that people have a sense of who they can trust and who to stay away from. Needless to say, gossiping, and hence the building and destroying of social reputations, is not possible, again, without language.
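The logic of reputation-mediated indirect reciprocity can be made concrete with a minimal simulation in the spirit of Nowak and Sigmund's well-known "image scoring" model (this is my own toy sketch, not anything from Laland's book; all parameter values are arbitrary). A few agents are unconditional defectors; everyone else helps only partners whose public reputation, spread by "gossip," is in good standing.

```python
import random

def simulate_image_scoring(n_agents=50, n_defectors=10, rounds=4000, seed=42):
    """Toy indirect-reciprocity model. Each round a random donor meets a
    random recipient. Cooperators pay a cost to help recipients whose
    public image score is non-negative; helping raises the donor's own
    score, refusing (or defecting) lowers it. Scores stand in for the
    reputations built and destroyed by gossip."""
    rng = random.Random(seed)
    scores = [0] * n_agents        # public reputations, known to all
    payoffs = [0.0] * n_agents
    cost, benefit = 1.0, 3.0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        is_defector = donor < n_defectors
        if not is_defector and scores[recipient] >= 0:
            payoffs[donor] -= cost
            payoffs[recipient] += benefit
            scores[donor] = min(scores[donor] + 1, 5)
        else:
            scores[donor] = max(scores[donor] - 1, -5)
    mean_defector = sum(payoffs[:n_defectors]) / n_defectors
    mean_cooperator = sum(payoffs[n_defectors:]) / (n_agents - n_defectors)
    return mean_defector, mean_cooperator
```

The point of the sketch is the one made in the paragraph above: defectors collect a few free benefits early on, but once their reputations turn negative they are shunned, while cooperators keep receiving help from strangers they have never directly benefited.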

Language, in turn, also evolves, quickly generating local dialects. Dialects then rapidly become a mark of local membership, a quick heuristic to tell apart in- from out-group members. They increase within-group cooperation, and likely across-group conflict, which sets the stage for group selection at the cultural level:

“Cultural processes generate plenty of variation among human groups for natural selection to act upon. Extensive data now demonstrate that the differences between human societies result far more from cultural rather than genetic variation. … Symbolic group marker systems, such as rituals, dances, songs, languages, dress, and flags, make it considerably easier for cultures to maintain their identities and to resist imported cultural traits from immigrants, than it is for local gene pools to maintain their identity by resisting gene flow.” (p. 283)

This is something important to keep in mind, as it is intuitive to say that cultures change more rapidly than genes. While this is true if we are talking about mutations (which are, indeed, rare), it is not the case once we consider gene flow and genetic recombination, which happen far more frequently, as Kevin points out, than some types of cultural change.

Laland also remarks on the widespread existence of practices that synchronize the behavior of individuals, like group dancing, or military marches. These activities result in the simultaneous release of endorphins, which in turn promotes within-group bonding. The broader point is that humans evolved a psychology of group behavior that is entirely unknown in other animals, and that cannot be explained on the basis of standard genetic models of evolution. Pace the evolutionary psychologists, of course, for whom we have seen Laland has relatively little patience.

We are reaching the end of this series of posts on Darwin’s Unfinished Symphony. The next and last installment will focus on the cultural evolutionary origin and significance of art.

Why machine-information metaphors are bad for science education, part II: the search for new metaphors

While discussing some sections of a paper I wrote with Maarten Boudry, we have seen a number of reasons why using machine-information metaphors is bad for science education. As I pointed out before, the full paper also devotes quite a bit of space to arguing that those metaphors haven’t been particularly good in actual scientific research. One of the fascinating things to watch after I posted the first part of this commentary was the number of people who vehemently defended the “biological organisms are machines” take, both here on the blog and on my Twitter feed. It’s like here we are, in the second decade of the 21st century, and there are still a lot of Cartesians around, who have apparently never heard of David Hume. Oh well.

In the conclusion of this two-part series I am going to focus on the last section of my paper with Maarten, where we discuss the search for alternative metaphors, and in the end (spoiler alert!) suggest that the best thing to do at this point is just to describe things as they are, staying as clear as possible of metaphorical language. And when one really cannot avoid it, then use multiple metaphors and be very clear on the limits of their use. Let’s take a look.

In their classic work on metaphors, Lakoff and Johnson argue that the basic function of metaphorical concepts is to structure a new kind of experience in terms of a more familiar and delineated experience. In science as well as in everyday language, metaphors highlight particular aspects of whatever it is we are trying to grasp, but they will inevitably distort others. For example, the image of the “tree of life,” with new species branching off as budding twigs and extinct species as dead branches, is an instructive approximation of the relations of evolutionary descent. However, it can also foster misconceptions about “progress” in evolution, or lead to a simplistic conception of speciation events, or to a downplay of horizontal gene transfer and reticulate (i.e., by interspecies hybridization) speciation events. To give one more example, in physical chemistry the model of the atom as a miniature solar system, with electrons orbiting the nucleus as planets, though still having wide public appeal, is fundamentally inaccurate.

Of course, no metaphor will do its job perfectly, but it is crucial to realize, as Lakoff and Johnson have shown, that the widespread deployment of a particular metaphor can have a feedback effect on the way we perceive things, not just how we present them to others. In the examples discussed in my paper with Maarten, the lure of machine-information metaphors in the history of biology has invited scientists to think of genomes as “blueprints” for organisms, written in the four-letter alphabet of DNA and readable in a manner analogous to a computer code. But as we argue, the machine-information conception of living systems has led both the public and the scientific community astray.

In response to this problem, some scientists and science educators have proposed several alternative and improved metaphors to characterize the relationship between genotype and phenotype. Biologist Patrick Bateson, for instance, was probably the first to compare the DNA sequence of living organisms with a recipe for a cake. The idea of a genetic recipe has several advantages over the blueprint metaphor, the most important being that it takes into account pleiotropy (one gene affecting more than one trait) and epistasis (gene–gene interactions). As a consequence, the simple picture of a one-to-one (or close to) correspondence between particular genes and phenotypic traits is abandoned, which becomes clear when one considers that there is no way to locate particular ingredients in individual crumbs of a cake. Accordingly, there is no possibility of reverse-engineering the end product to recover the set of procedures (the “recipe”) that made the final product possible. This has important consequences not just for science education, but for research agendas, as the idea of “reverse engineering” is commonly invoked everywhere from genomic studies to the understanding of the brain.
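Why pleiotropy and epistasis block reverse engineering can be shown with a toy genotype-to-phenotype map (the loci, traits, and numerical values below are purely illustrative inventions of mine, not anything from the paper): once one locus affects two traits and another locus acts only in combination with a third, the map becomes many-to-one, and the genotype cannot be read back off the phenotype.

```python
from itertools import product

def phenotype(genotype):
    """Toy map over three biallelic loci (A, B, C), each 0 or 1.
    Returns a (size, pigment) phenotype."""
    a, b, c = genotype
    size = 1 + a + b        # loci A and B both contribute to size
    pigment = c if a else 0  # C is expressed only when A is present: epistasis
    # note that locus A affects both traits: pleiotropy
    return (size, pigment)

# Group all eight genotypes by the phenotype they produce:
genotypes_by_phenotype = {}
for g in product((0, 1), repeat=3):
    genotypes_by_phenotype.setdefault(phenotype(g), []).append(g)
# Phenotype (2, 0) is produced by three distinct genotypes, so the map
# cannot be inverted: the "end product" underdetermines the "procedure."
```

Even in this three-locus caricature, knowing the phenotype (2, 0) leaves three genotypes on the table; in a real genome, with thousands of interacting loci plus epigenetic and environmental inputs, the underdetermination is vastly worse.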

Of course, if carried too far, the recipe metaphor can in turn be quite misleading. To get the desired result, a cook has to lump together different ingredients in the correct proportions and follow a set of instructions for handling the dough and preparing the oven. But actual developmental encoding in living organisms is an enormously more complex and very different sort of procedure, which is also highly dependent on epigenetic factors and unpredictable vagaries of the external environment. The expression of specific genes in the course of development in no way resembles the way a cook handles the ingredients of a recipe. Living organisms are also highly differentiated into a number of functional parts or components (cell types, tissues, etc.), in contrast with the homogeneous cake that comes out of the oven. Moreover, the genome is not written in anything like a “language,” as in the case of a recipe, and it certainly does not contain a description of the desired end product in any meaningful sense of the word “description.”

Condit and colleagues have discussed the recipe metaphor as an alternative to talk of blueprints, pointing out that it was adopted “with surprising swiftness” by science popularizers and the media in the 1990s. However, they also remark that, as a new “master metaphor” to capture the relationship between genotype and phenotype, the image of a recipe for a cake has little to recommend it either. For example, evoking recipes can invite people to think of the genome as a step-by-step manual that describes “how to make a human,” in that sense falling into the same trap as the idea of a blueprint.

That being said, if contrasted with the blueprint metaphor, the recipe metaphor conveys the point about lack of one-to-one correspondence between genes and phenotypes very well, and hence it highlights an important fact about development and what biologists call the Genotype => Phenotype map. If the recipe metaphor is used within this restricted context, for example in explicit contrast with the characteristics of a blueprint, it is immediately clear what the salient points of connection with living systems are, and people are less likely to be misled by stretching the metaphor beyond usefulness. If the recipe metaphor is presented as a wholesale replacement for the blueprint, however, it is bound to mislead people no less than its rival.

The same point applies to other interesting metaphors that have been proposed in this context, for example Lewis Wolpert’s comparison of early embryonic development with the Japanese art of origami. The analogy highlights the circuitous step-by-step development of the early embryo, but of course in a piece of origami art the structure is imposed top-down by an intelligent agent, whereas the functional differentiation in the embryo is regulated bottom-up by a complex interaction between genes and environment. Moreover, a piece of origami simply folds to yield the final product, which in a very real sense is already there from the beginning. This is definitely not the way embryos develop, with their ability to respond to local and external environmental fluctuations.

The general problem that we have been discussing seems to us to be not just that one kind of metaphor or another is woefully inadequate to conceptualize biological organisms and their evolution. It is that it simply does not seem to be possible to come up with a metaphor that is cogent and appropriate beyond a very limited conceptual space. Although some of the alternatives are more accurate than the blueprint metaphor (in some respects), Maarten and I certainly have not found one that we would recommend as a replacement. Should we therefore try to avoid the use of metaphors in biological teaching and research altogether? Or do we simply expect too much from metaphors in science and education?

Analogical and metaphorical thinking is widespread among human beings, although of course different cultures and historical moments inspire people to use different metaphors. After all, a metaphor is an attempt to make sense of novel concepts by pairing them with known ideas to increase our overall understanding. Metaphorical thinking is therefore part of our language, and language is inextricably connected to our thinking, but to put it as Wittgenstein did: “It is, in most cases, impossible to show an exact point where an analogy starts to mislead us.” Yet a great part of doing philosophy consists precisely in clarifying our language in an attempt to advance our thinking. To quote Wittgenstein again: “Philosophy is a battle against the bewitchment of our intelligence by means of our language.” To complicate matters further, there is emerging empirical evidence that the human brain processes metaphors in a specific fashion: research on Alzheimer’s patients, for instance (see ref. in the paper), found that impairment of the brain’s “executive” function, associated with the prefrontal cortex, leads to poor understanding of novel metaphors (while, interestingly, comprehension of familiar metaphors is unaffected). Metaphorical thinking seems to be a biologically entrenched functional mode of our brains, and may therefore be hard to avoid altogether.

Both science and philosophy have made ample use of metaphorical and analogical thinking, sometimes with spectacularly positive results, at other times more questionably so. Nonetheless, it seems that nowhere is metaphorical thinking so entrenched — and so potentially misleading — as in biology. Given the maturity of biology as a science, and considering that it deals with objects whose nature is not as alien to our daily experience as, say, those of quantum physics, Maarten and I do not actually see any good reason for clinging onto outdated metaphors in biological education and research for characterizing living organisms, their genomes and their means of development. Taking into account the fact that machine-information metaphors have been grist to the mill of ID creationism, fostering design intuitions and other misconceptions about living systems, we think it is time to dispense with them altogether. Still, we are not so naive as to expect that this advice will be followed by scientists and science educators any time soon, precisely because the machine/information metaphor is so entrenched in biology education. What to do then? We propose two approaches, one for science educators, the other for practicing scientists.

In science education, talk of metaphorical thinking can be turned into a teaching moment. Students (and the public at large) would actually greatly benefit from explanations that contrast different metaphors with the express goal of highlighting the limitations intrinsic in metaphors and analogies. So, for instance, science educators and writers could talk about the human genome by introducing the blueprint metaphor, only to immediately point out why it does not capture much of what genomes and organisms are about; they could then proceed to familiarize their students and readers with alternative metaphors, say the recipe one, focusing on differences with the original metaphor while of course not neglecting to point out the (different) deficiencies of the new approach as well. The goal of this process would be to foster a cautious attitude about metaphorical thinking, as well as to develop a broader understanding of how unlike commonsense modern science really is. On the latter point, it is interesting to note, for instance, that a popular refrain among evolution or global warming deniers is that “simple commonsense” shows that the scientists are wrong, a position that ignores the proper weight of technical expertise in favor of a folk understanding of nature. It is therefore crucial that the public appreciates the limitations of common sense thinking about science.

There is an analogous teaching moment that can be brought to bear when research scientists engage in unbridled metaphorical thinking: we could refer to this as a philosophy appreciation moment. Scientists are notoriously insensitive to, or even downright dismissive of, considerations arising from the history and philosophy of their discipline, and often for good practical reasons: modern science is a highly specialized activity, where there is barely enough time to keep up with the overwhelming literature in one’s own narrow field of research, and certainly not enough incentive to indulge in historical readings or philosophical speculation. Nonetheless, historians and philosophers of science can easily show the pitfalls of metaphorical thinking (by using well-documented historical examples) and even get across to their colleagues some basic notions of philosophy (by analyzing the effects of particular metaphors on the development of specific lines of scientific inquiry). None of this will quickly amount to overcoming C.P. Snow’s infamous divide between “the two cultures,” but it may bring about better understanding and appreciation of philosophy by scientists, and perhaps even help science see new horizons that have been hitherto obscured by a superficially illuminating metaphor.

Why machine-information metaphors are bad for science education, part I: biological machines and intelligent design

bacterial flagellum, as often represented in biology education

Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. Predictably, modern proponents of Intelligent Design so-called theory, the latest incarnation of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as “irreducible complexity” and on flawed analogies between living cells and mechanical factories.

In reality, the living organism = machine analogy was already criticized by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do.

My friend and collaborator Maarten Boudry and I wrote an article several years ago in which we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. We argued that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.

We have discussed this topic twice in the last couple of years, once on the occasion of another paper with Maarten, on why machine metaphors in biology are misleading; more recently because of a paper I wrote about genes as blueprints; the current entry completes the trilogy, so to speak. In part I, here, I will present what Maarten and I had to say about the origin of machine-information metaphors in biology, as well as its questionable use in science education. In part II, next week, I’ll talk about the search for new and better metaphors in science and science education. Interested readers are referred to the original paper for references, as well as for a discussion of the misuse of machine-information metaphors in actual biological research (i.e., not just for educational purposes).

When delving into unknown territory, scientists have often naturally relied on their experiences in more familiar domains to make sense of what they encounter. In the early days of the scientific revolution, mechanical metaphors proved to be a powerful instrument to get a grip on new discoveries about the living world and the universe at large, and we can trace back the emergence of machine metaphors at least to the Middle Ages, when new achievements of technology had a profound cultural influence and captured the collective imagination. Against this background of technological innovation, it is not surprising that the pioneers of anatomy and physiology relied on the metaphor of the animal body as a complicated piece of machinery to make sense of their discoveries. The mechanical language provided a richness of meaning and allowed them to structure the new phenomena in terms of familiar experiences. For example, the image of the human heart as a pump with intricate mechanical components played an important role in William Harvey’s discoveries about blood circulation.

In the course of the 17th century, a new philosophy of nature became prominent that developed a conception of the universe in purely mechanical terms. According to this mechanical philosophy, which was developed by thinkers like René Descartes, Pierre Gassendi and Robert Boyle, the phenomena of nature can be understood purely in terms of mechanical interactions of inert matter. This mechanization of nature proved an important driving force behind the Scientific Revolution, and at the end of the 17th century culminated in Newton’s theory of motion. Newton’s description of planetary orbits following the fixed laws of gravity conveyed an image of a clockwork universe set in motion by an intelligent First Cause. In fact, that was exactly how Newton conceived the universe and its relation to the Creator. For Newton and many of his contemporaries, the importance of the mechanical conception of nature was greater than the mere term ‘metaphor’ would suggest, as the development of mechanistic philosophy was itself largely inspired by religious motivations; indeed, the very employment of machine metaphors invited theological speculation.

In the second half of the 17th century, the mechanical pictures of living organisms and of the cosmos at large converged into an intellectual tradition where theology and science were intimately intertwined: natural theology. The most famous representative of this tradition was William Paley, whose work Natural Theology, or Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature (1802) made a deep impression on the young Charles Darwin. As the title of the book makes clear, Paley and the natural theologians conceived of Nature as a complicated machinery of intricate wheels within wheels, in which every organism has its proper place and is adapted to its environment. According to Paley, the contrivance and usefulness of parts exhibited by living organisms attests to the intelligence and providence of a benevolent Creator. This so-called ‘design argument’ already had a long intellectual pedigree, dating back to Plato, Cicero and Thomas Aquinas, but its most famous formulation is found in the first chapter of Natural Theology, in which Paley famously relies on the analogy between living organisms and a pocket watch to support his design inference.

While Darwin was the one who dealt the most decisive blow to the design argument by suggesting a natural explanation for adaptive complexity in the living world, many philosophers would agree that David Hume foreshadowed its demise, by exposing several problems with the central analogy. In his Dialogues Concerning Natural Religion (1779), which actually predates Paley’s magnum opus by more than two decades, we find a discussion of the design argument among Philo, the skeptical character who voices Hume’s ideas, Demea, the orthodox religious believer, and Cleanthes, the advocate of natural theology.

After Cleanthes has set out the design argument in terms foreshadowing Paley’s analogy of the watch, Philo objects that it is dangerous to derive conclusions about the whole of the universe on the basis of a spurious analogy with one of its parts. Given that our experience with design is limited to human artifacts only, we have to proceed with great caution, and it would be presumptuous to take so minute and select a principle as the human mind as the model for the origin of the whole universe. Hume realized that, at least in some cases, appearances of intelligent design can be deceptive.

In contemplating that ‘‘many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out’’, Hume even comes close to Darwin’s crucial insight about the power of natural selection. Although Hume does not deny that we can discern similarities between nature and human artifacts, he warns us that the analogy is also defective in several respects. And if the effects are not sufficiently similar, conclusions about similar causes are premature. To illustrate this, Philo proposes another possible cosmogony on the basis of the analogy between the world and an animal:

“A continual circulation of matter in [the universe] produces no disorder; a continual waste in every part is incessantly repaired: The closest sympathy is perceived throughout the entire system: And each part or member, in performing its proper offices, operates both to its own preservation and to that of the whole. The world, therefore, I infer, is an animal.” (Hume 1779, p. 39)

In The Origin of Species, Charles Darwin (1859) finally proposed a natural explanation for the phenomenon that inspired Paley but failed to convince Hume. Although the design argument is still of interest to philosophers and historians of science, it has been widely discarded in the scientific community. However, the analogy on which Paley based his inference seems to be alive and well, not only in the minds of creationists and ID proponents, but also in the writings of science popularizers and educators. Many scientists have actually argued that Paley at least offered an incisive formulation of the problem, as there is indeed a hard-to-shake intuition of contrivance and intelligent design in nature. As one of the most ardent defenders and popularizers of evolutionary theory, Richard Dawkins, put it, ‘‘Biology is the study of complicated things that give the appearance of having been designed for a purpose.’’ Adaptive complexity, then, is still regarded as something that requires a special explanation.

In textbooks, science educators have presented the comparison of living organisms and man-made machines not just as a superficial analogy, but have carried it out to a considerable level of detail. For example, the cell has been described as a miniature factory, complete with assembly lines, messengers, transport vehicles, etc. Consider the following quote from Bruce Alberts, molecular biologist, and former president of the National Academy of Sciences:

“The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. Given the ubiquity of protein machines in biology, we should be seriously attempting a comparative analysis of all of the known machines, with the aim of classifying them into types and deriving some general principles for future analyses. Some of the methodologies that have been derived by the engineers who analyze the machines of our common experience are likely to be relevant.” (Alberts 1998, p. 291)

Creationists and their modern heirs of the Intelligent Design movement have been eager to exploit mechanical metaphors for their own purposes. For example, Bruce Alberts’ description of the living cell as a factory has been approvingly quoted by both Michael Behe and William Dembski, two leading figures in the ID movement. For ID proponents, of course, these are not metaphors at all, but literal descriptions of the living world, harking back to Newton’s conception of the Universe as a clock-like device made by the Creator. The very fact that scientists rely on mechanical analogies to make sense of living systems, while disclaiming any literal interpretation, strengthens creationists in their misconception that scientists are ‘‘blinded’’ by a naturalistic prejudice. In the creationist textbook Of Pandas and People, which has been proposed by ID advocates as an alternative to standard biology textbooks in high school, we read that ‘‘Intelligent design […] locates the origin of new organisms in an immaterial cause: in a blueprint, a plan, a pattern, devised by an intelligent agent’’ (Davis et al. 1993, p. 14).

The analogy between living organisms and man-made machines has proven a persuasive rhetorical tool of the ID movement. In fact, for all the technical lingo and mathematical “demonstrations,” in much of their public presentations it is clear that ID theorists actually expect the analogies to do the argumentative work for them. In Darwin’s Black Box, Behe takes Alberts’ machine analogy to its extreme, describing the living cell as a complicated factory containing cargo-delivery systems, scanner machines, transportation systems and a library full of blueprints. Here is a typical instance of Behe’s reasoning:

“In the main area [cytoplasm] are many machines and machine parts; nuts, bolts, and wires float freely about. In this section reside many copies of what are called master machines [ribosomes], whose job it is to make other machines. They do this by reading the punch holes in a blueprint [DNA], grabbing nuts, bolts, and other parts that are floating by, and mechanically assembling the machine piece by piece.” (Behe 2006, pp. 104–5)

Behe’s favorite model of biochemical systems is a mechanical mousetrap, the familiar variant consisting of a wooden platform, a metal hammer, a spring, etc. According to Behe, if any one of these components is missing, the mousetrap is no longer able to catch mice. He has termed this interlocking of parts ‘‘irreducible complexity’’ and thinks it characterizes typical biochemical systems. In other words, the mousetrap is to Behe what the well-designed pocket watch was to Paley. But whereas Paley can be excused on the grounds of the state of scientific knowledge in the 18th century, for Behe the situation is a little different. Modern biochemistry, nota bene Behe’s own discipline, has revealed that biochemical systems are not like mechanical artifacts at all. Moreover, even biological systems that are irreducibly complex under Behe’s definition pose no problem for evolution by natural selection, as has been shown in detail by people like cell biologist Ken Miller.

ID proponents have buttressed their analogies between living systems and mechanical contraptions with a lot of visual rhetoric as well. The flagellum of the bacterium E. coli, the hallmark of the ID movement, has been represented as a full-fledged outboard rotary motor, with a stator, drive shaft, fuel supply, etc. It features on the cover of Dembski’s book No Free Lunch, and has been used numerous times in presentations and online articles. The idea seems to be that if it looks designed, it has to be designed. But as Mark Perakh has documented in a paper published in 2008, ID supporters invariably use idealized and heavily stylized representations of the flagellum, in order to make it look more like a man-made contraption. Another striking example of this visual rhetoric is a video by Discovery Institute president Stephen C. Meyer, which presents a computer-simulated — and again heavily stylized — journey inside the cell, and describes the biochemical processes in terms of ‘‘digital characters in a machine code,’’ ‘‘information-recognition devices,’’ and ‘‘mechanical assembly lines.’’ Meyer commented that evolutionists will now have a hard time dissuading the public from the impression that ‘‘the evidence for design literally unfolds before them.’’

Of course, the mere observation that creationists have seized on machine metaphors in biology does not suffice to demonstrate that these metaphors do not make scientific sense. However, the fact that they tend to do so systematically, using full-length quotes from respectable scientists, should make us wary of the possible dangers of misleading metaphors. If the rhetoric of the ID movement is demonstrably based on these mechanical analogies, it can be instructive to reexamine their scientific merits. In the paper, Maarten and I argue that the machine-information analogy has indeed influenced the way scientists themselves think about biological structure, function, and evolution. By analyzing the consequences of and reactions to this analogy in actual biological research, we show that its scientific merits are very weak, and that its place in modern biology has become questionable. What then? Stay tuned for part II, on the search for new and better metaphors…