Category Archives: Massimo’s Technical Stuff

Biology’s last paradigm shift and the evolution of evolutionary theory – part I

I find myself devoting more and more time to public outreach and what is increasingly referred to as public philosophy. But, as you might know, I am also an academic and publish scholarship in philosophy of science. Which is why from time to time I like to combine the two and do a post that highlights for the general public some of my own technical publications. (A series of these, covering the range 2010-2016, can be found here.) The present essay refers to a paper that I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.”

The theory of evolution, which provides the conceptual framework for all modern research in organismal biology and informs research in molecular biology, has gone through several stages of expansion and refinement. Darwin and Wallace proposed the original idea back in 1858, centering on the twin concepts of natural selection and common descent. Shortly thereafter, Wallace and August Weismann worked toward the complete elimination of any Lamarckian vestiges from the theory, leaning in particular on Weismann’s concept of the separation of soma and germ lines, and resulting in what is sometimes referred to as “neo-Darwinism.”

The theory then experienced a period of “eclipse” at the turn of the 20th century. Many biologists accepted the idea of common descent but either rejected or greatly de-emphasized the importance of natural selection as an evolutionary mechanism. The situation was exacerbated by the rediscovery of Mendel’s work, which pointed to an apparent incompatibility between discrete inheritance particles (“genes”) and the sort of continuous quantitative variation necessary for natural selection to produce gradual evolutionary change.

Famously, the crisis was overcome with the reconciliation of Mendelian and statistical genetics made possible by Ronald Fisher, J.B.S. Haldane and Sewall Wright in the 1930s, and that later on culminated in the Modern Synthesis (henceforth, MS) of the 1940s, to which several additional authors made important contributions, including but not limited to Theodosius Dobzhansky, Julian Huxley, Ernst Mayr, and George G. Simpson. The MS is still the version of the theory largely accepted by the scientific community, and it is what graduate students in the discipline are trained on.

More recently, several authors have pushed for an Extended Synthesis (henceforth, ES) in evolutionary biology, initially from a variety of individual perspectives largely rooted in particular fields of inquiry, such as Evo-Devo or phenotypic plasticity, and now with a more concerted effort aimed explicitly at the formalization of a broader conceptual framework for evolutionary biology (see here).

The ES is very much a work in progress, but the idea is to accomplish a number of goals that have so far proven somewhat elusive: first and foremost, to finally bring developmental biology – famously left out of the MS – into the fold. Second, to provide a coherent way to reconcile the “holistic” tendencies of organismal biological research with the decidedly more reductionist approach of molecular biology and its most recent products, the various “-omics” (genomics, proteomics, metabolomics, etc.). Third, to incorporate as primary players a number of biological phenomena and processes that had been either discarded or minimized within the context of the MS, e.g., phenotypic plasticity, genetic accommodation, epigenetic inheritance, etc. Fourth, to expand the standard theoretical toolkit of the MS – which is primarily grounded in population and quantitative genetic theory – to include elements from computational biology and complexity theory. Fifth, to incorporate into evolutionary theory new concepts that have emerged from theoretical research during the past several years, chief among them the triad constituted by evolvability (i.e., the possibility of the evolution of evolutionary mechanisms), modularity (of different components of an organism’s phenotype) and robustness (i.e., the degree of resilience of developmental mechanisms).

In the paper I address the question of whether any of the above amounts to something akin to Thomas Kuhn’s famous paradigm shifts, i.e. whether evolutionary biology has ever undergone anything like what Kuhn describes as a moment of revolutionary science. I argue that it has not, and that it will not, even if the ES succeeds in establishing itself. Rather, I think the only time in the history of biology when such a transition has occurred was during the 19th century, when Darwin’s original theory replaced the dominant “paradigm” of the day, Paley-style natural theology. In the following I will discuss that particular paradigm shift, together with the general Kuhnian notion of alternation between revolutionary and “normal” science. People interested in the same analysis applied to what happened to evolutionary theory after the onset of Darwinism are referred to the full paper.

William Paley is responsible for the most articulate defense of the idea that living organisms are the result of a special creation by supernatural forces. In his Natural Theology (1802) he famously introduced the metaphor of a watch and its watchmaker:

“In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there: I might possibly answer, that for any thing I know to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch, as well as for the stone? Why is it not as admissible in the second case as in the first? For this reason, and for no other, viz., that when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose (…) This mechanism being observed (…) the inference, we think, is inevitable, that the watch must have had a maker; that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.” (1802, p. 5).

Paley argued by analogy from the watch/watchmaker inference to the complex living organism/supernatural intelligent designer inference, despite the fact that such inference had been dealt a devastating philosophical blow by Hume, writing several decades before Paley, in 1779. Contrary to what has been assumed for a long time, Paley appeared to have been familiar with Hume, though he was obviously unconvinced by the latter’s arguments.

Darwin was initially persuaded by Paley’s reasoning, but eventually of course provided the decisive counterpoint that was missing in Hume: an alternative mechanism (natural selection) to generate both biological complexity and the pattern of common descent that was denied by Paley’s natural theology. It behooves us to briefly examine Darwin’s answer, so that we may then proceed to compare Paley’s and Darwin’s “paradigms” in view of Kuhn’s ideas to see whether we can sensibly talk of a paradigm shift occurring at the very onset of evolutionary biology as an independent discipline.

Although Paley is mentioned by name only once in The Origin (on p. 201 of the first edition in the chapter on “Difficulties on Theory”), Darwin mounts a concerted and sustained attack on natural theology in chapters VI and XIII of his magnum opus. Here are some relevant quotes to establish the case. First Darwin explicitly contrasts the type of “explanation” provided by natural theologians with a naturalistic explanation typical of the new science:

“He who believes in separate and innumerable acts of creation will say, that in these cases [of organisms’ behavior that have changed in response to a new environment, without – yet – an accompanying change in the structure of the proper organs] it has pleased the Creator to cause a being of one type to take the place of one of another type; but this seems to me only restating the fact in dignified language. He who believes in the struggle for existence and in the principle of natural selection, will acknowledge that every organic being is constantly endeavouring to increase in numbers; and that if any one being vary ever so little, either in habits or structure, and thus gain an advantage over some other inhabitant of the country, it will seize on the place of that inhabitant, however different it may be from its own place.” (1859, p. 186)

Then he proceeds to directly criticize Paley’s use of analogies to draw a parallel between the inference to human design and the inference to supernatural design:

“It is scarcely possible to avoid comparing the eye to a telescope. We know that this instrument has been perfected by the long-continued efforts of the highest human intellects; and we naturally infer that the eye has been formed by a somewhat analogous process. But may not this inference be presumptuous? Have we any right to assume that the Creator works by intellectual powers like those of man?” (p. 188)

Immediately afterwards, he goes so far as laying out the criteria for the falsification of his hypothesis, in sharp contrast of course with the natural theologian’s ideas, which cannot be falsified:

“If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. … Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each. No organ will be formed, as Paley has remarked, for the purpose of causing pain or for doing an injury to its possessor. If a fair balance be struck between the good and evil caused by each part, each will be found on the whole advantageous.” (pp. 189, 201)

To summarize, then, the idea of intelligent design – which had been around at least since Plato’s Timaeus – had been severely crippled on philosophical grounds by Hume in the 18th century. Still, Paley was able to mount a spirited and detailed defense of it at the onset of the 19th century, while Darwin provided the final blow to it (except of course for the modern resurgence of creationism, which is not an intellectually meaningful movement) for the first time on scientific grounds during the second part of the 19th century. It is on the latter transition that I wish to focus in part II, from the perspective of Kuhn’s paradigm shifts.

(next: the Paley-Darwin transition as a paradigm shift)

Biological landscapes, surfaces, and morphospaces: what are they good for?


Metaphors are rampant in both everyday language and in science, and while they are inevitable, readers of this blog also know by now that I’m rather skeptical of their widespread use, both in professional publications and, especially, when addressing the general public. (See here, here, here, and here.) One such problematic metaphor is that of so-called adaptive landscapes, or surfaces, in evolutionary biology, something on which I did a fair amount of research when I was running a laboratory of ecology and evolutionary biology.

My detailed criticism of the way the landscape metaphor has sometimes warped biologists’ thinking is detailed in a chapter that was published back in 2012 as part of a very interesting collection entitled The Adaptive Landscape in Evolutionary Biology, edited by Erik Svensson and Ryan Calsbeek for Oxford University Press. As often happens, mine was the lone contribution from the token skeptic…

Few metaphors in biology are more enduring than the idea of adaptive landscapes, originally proposed by Sewall Wright in 1932 as a way to visually present to an audience of typically non-mathematically savvy biologists his ideas about the relative role of natural selection and genetic drift in the course of evolution. The metaphor was born troubled, not least because Wright presented different diagrams in his original paper that simply cannot refer to the same concept and are therefore hard to reconcile with each other. For instance, in some usages the landscape’s non-fitness axes represent combinations of individual genotypes, while in other usages the points on the diagram represent gene or genotypic frequencies, and so are actually populations, not individuals.

typical (hypothetical) fitness landscape

Things got even more confusing after the landscape metaphor began to play an extended role within the Modern Synthesis in evolutionary biology and was appropriated by G.G. Simpson to further his project of reconciling macro- and micro-evolution, i.e. to reduce paleontology to population genetics. This time the non-fitness axes of the landscape were phenotypic traits, not genetic measures at all. How one would then translate from one landscape to another (i.e., genes to morphologies) is entirely unaddressed in the literature, except for vague gestures toward an ill-defined and very rarely calculated “genotype-phenotype mapping function.”

These are serious issues, if we wish to use the landscape metaphor as a unified key to an integrated treatment of genotypic and phenotypic evolution (as well as of micro- and macro-evolution). Without such unification evolutionary biology would be left in the awkward position of having two separate theories, one about genetic change, the other about phenotypic change, and no conceptual bridge to connect them.

To try to clarify things a bit, I went through the available literature and arrived at a typology of four different kinds of “landscapes” routinely used by biologists:

Fitness landscapes. These are the sort of entities originally introduced by Wright. The non-fitness dimensions are measures of genotypic diversity. The points on the landscape are typically population means, and the mathematical approach is rooted in population genetics. (see figure above)

Adaptive Landscapes. These are the non-straightforward “generalizations” of fitness landscapes introduced by Simpson, where the non-fitness dimensions now are phenotypic traits. The points on the landscape are populations speciating in response to ecological pressures or even above-species level lineages (i.e., this is about macro-evolution). There is — with very special exceptions discussed in my paper — no known way to move from fitness to adaptive landscapes or vice versa, even though this is usually assumed by authors.

Fitness surfaces. These were introduced by Russell Lande and Steve Arnold back in the ’80s to quantify the study of natural selection. Here phenotypic traits are plotted against a surrogate measure of fitness, and the landscapes are statistical estimates used in quantitative genetic modeling. The points on the landscape can be either individuals within a population or population means, in both cases belonging to a single species (i.e. this is about micro-evolution).

Morphospaces. These were first articulated by paleontologist David Raup in the mid-’60s, and differ dramatically from the other types for two reasons: (a) they do not have a fitness axis; and (b) their dimensions, while representing phenotypic (“morphological”) traits, are generated via a priori geometrical or mathematical models, i.e. they are not the result of observational measurements. They typically refer to across species (macro-evolutionary) differences, though they can be used for within-species work as well.
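To make the third type in this typology concrete, here is a toy sketch of how a Lande–Arnold style fitness surface is estimated: relative fitness is regressed on a phenotypic trait, with the linear coefficient (the directional gradient, beta) and the quadratic coefficient (gamma) describing the local shape of the surface. The simulated data, the optimum at z = 0.5, the noise level, and all names are my own illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one standardized phenotypic trait in a population.
n = 1000
z = rng.normal(0.0, 1.0, n)

# Hypothetical fitness function: stabilizing selection around an
# optimum at z = 0.5, plus measurement noise (illustrative values).
W = np.exp(-0.5 * (z - 0.5) ** 2) + rng.normal(0.0, 0.05, n)
W = np.clip(W, 0.0, None)
w = W / W.mean()  # relative fitness

# Quadratic regression of relative fitness on the trait:
#   w ~ intercept + beta * z + (gamma / 2) * z**2
# beta is the directional selection gradient; gamma is the quadratic
# (stabilizing/disruptive) gradient describing the surface's curvature.
X = np.column_stack([np.ones(n), z, 0.5 * z ** 2])
(intercept, beta, gamma), *_ = np.linalg.lstsq(X, w, rcond=None)

print(f"beta  ~ {beta:+.2f}")   # positive: the optimum lies above the trait mean
print(f"gamma ~ {gamma:+.2f}")  # negative: stabilizing selection
```

The statistical surface estimated this way is exactly what makes Lande–Arnold landscapes empirically tractable in a way that Wright's and Simpson's versions are not: every quantity in the regression is measurable on individuals within a single population.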

The first thing to note is that there are few actual biological examples of fitness landscapes (Wright-style) or Adaptive Landscapes (Simpson-style) available, while there is a good number of well understood examples of morphospaces (Raup-style) and particularly of fitness surfaces (Lande–Arnold style). These differences are highly significant for my discussion of the metaphor. The paper summarizes examples — both conceptual and empirical — of each type of landscape and the complex, often barely sketched out, relationships among the different types.

When it comes to asking what the metaphor of landscapes in biology is for, we need to distinguish between the visual metaphor, which is necessarily low-dimensional, and the general idea that evolution takes place in some sort of hyper-dimensional space. Remember that Wright introduced the metaphor because his advisor suggested that a biological audience at a conference would be more receptive toward diagrams than toward a series of equations. But of course the diagrams are simply not necessary for the equations to do their work. More to the point, subsequent research by my former University of Tennessee colleague Sergey Gavrilets and his collaborators has shown in a rather dramatic fashion that the original (mathematical) models were far too simple and that the accompanying visual metaphor is therefore not just incomplete, but highly misleading. It turns out that hyper-dimensional dynamics are very much qualitatively different from the low-dimensional ones originally considered by Wright.

In a very important sense Wright’s metaphor of fitness landscapes was meant to have purely heuristic value, to aid biologists to think in general terms about how evolution takes place, not to actually provide a rigorous analysis of, or predictions about, the evolutionary process (it was left to the math to do that work). Seen from this perspective, fitness landscapes have been problematic for decades, generating research aimed at solving problems — like the so-called peak shift one (how do populations stuck on a local fitness peak “shift” to a higher one?) — that do not actually exist as formulated, since high-dimensional landscapes don’t have “peaks” at all: their topology is radically different.

There are problems also with the Lande-Arnold type landscapes (discussed in the paper), but here I want to shift to some good news: the actual usefulness of the fourth type of landscape: Raup-style morphospaces. One of the best examples was produced by Raup himself, with crucial follow-up by one of his graduate students, John Chamberlain. It is a study of potential ammonoid forms that puts the actual (i.e., not just heuristic) usefulness of morphospaces in stark contrast with the cases of fitness and adaptive landscapes. Ammonoids, of course, were beautiful shelled marine invertebrates that existed in a bewildering variety of forms for a good chunk of Earth’s biological history, and eventually went extinct 65 million years ago, together with the dinosaurs. This is going to be a bit technical, but stick with me, it will be worth it.

Raup explored a mathematical-geometrical space of ammonoid forms defined by two variables: W, the rate of expansion of the whorl of the shell; and D, the distance between the aperture of the shell and the coiling axis. Raup arrived at two simple equations that can be used to generate pretty much any shell morphology that could potentially count as “ammonoid-like,” including shells that — as far as we know — have never actually evolved in any ammonoid lineage. Raup then moved from theory to empirical data by plotting the frequency distribution of 405 actual ammonoid species in W/D space and immediately discovered two interesting things: first, the distribution had an obvious peak around 0.3 < D < 0.4 and W near 2. Remember that this kind of peak is not a direct measure of fitness or adaptation, it is simply a reflection of the frequency of occurrence of certain forms rather than others. Second, the entire distribution of ammonoid forms was bounded by the W = 1/D hyperbola, meaning that few if any species crossed that boundary on the morphospace. The reason for this was immediately obvious: the 1/D line represents the limit in morphospace where whorls still overlap with one another. This means that for some reason very few ammonoids ever evolved shells in which the whorls did not touch or overlap.
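Raup's actual model is richer than this (it includes, among other things, a translation parameter for helically coiled shells), but the core geometry of the W/D plane, including where the W = 1/D boundary comes from, can be sketched in a few lines of code; the function names and the unit initial radius are my own illustrative choices, not Raup's.

```python
def whorl_radii(W, D, revolutions=1, r0=1.0):
    """Outer and inner margins of the whorl after a given number of
    revolutions: the outer margin grows by a factor W per revolution,
    and the inner margin sits at a fraction D of the outer radius,
    measured from the coiling axis (Raup's parameterization)."""
    outer = r0 * W ** revolutions
    inner = D * outer
    return outer, inner

def whorls_overlap(W, D):
    """Successive whorls touch or overlap when the inner margin of the
    new whorl falls inside the outer margin of the previous one:
    D * (W * r) <= r, which simplifies to W * D <= 1 -- i.e., the
    W = 1/D hyperbola bounding the occupied region of the morphospace."""
    return W * D <= 1.0

# A shell from Raup's frequency peak (W near 2, 0.3 < D < 0.4) lies
# below the hyperbola: its whorls overlap.
print(whorls_overlap(2.0, 0.35))  # True
# An open-coiled form well above the hyperbola: whorls detached.
print(whorls_overlap(4.0, 0.4))   # False
```

The hyperbola, in other words, is not an empirical discovery but a geometric consequence of the model's two parameters, which is precisely what makes the empirical clustering of real species below it informative.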

one-peak ammonoid morphospace

Raup’s initial findings were intriguing, but they were lacking a sustained functional analysis that would account for the actual distribution of forms in W/D space. Why one peak, and why located around those particular coordinates? Here is where things become interesting and the morphospace metaphor delivers much more than just heuristic value. John Chamberlain, a student of Raup, carried out experimental work to estimate the drag coefficient of the different types of ammonoid shells. His first result clarified why most actual species of ammonoids are found below the W=1/D hyperbola: shells with whorl overlap have a significantly lower drag coefficient, resulting in more efficiently swimming animals.

However, Chamberlain also found something more intriguing: the experimental data suggested that there should be two regions of the W/D morphospace corresponding to shells with maximum swimming efficiency, while Raup’s original frequency morphospace detected only one peak. It seemed that for some reason natural selection found one peak, but not the other. Four decades had to pass after Raup’s paper for the mystery of the second peak to be cleared up: the addition of 597 new species of ammonoids to the original database showed that the second peak had indeed also been occupied — a rather spectacular case of confirmed prediction in evolutionary biology, not exactly a common occurrence, particularly in paleontology.

two-peak ammonoid morphospace, with representative shell forms

So, is the landscape metaphor in biology useful? It depends. The original versions, those introduced by Sewall Wright to make his math accessible to his colleagues, have been highly influential for decades, and yet have arguably channeled both empirical and theoretical research in unproductive directions, inventing problems (like the peak shift one) that arguably do not exist, at least not as formulated. The Lande-Arnold landscapes, which I have not discussed in this post, but do treat in the paper, have a mixed record. They have been heuristically useful for biologists interested in quantifying natural selection in the field, but have also arguably brought about a degree of tunnel vision in both the theoretical and empirical study of that most important concept in modern evolutionary theory. Morphospaces, by contrast, have a very good record of being useful in terms of generating insight into the evolution of animal (and plant) form, and yet, they are actually the least commonly deployed version of the landscape idea in the technical literature. And because population genetics, with its mathematical approach, is considered more sophisticated than paleontology, things are unlikely to change in the near future. Unfortunately.

Why machine-information metaphors are bad for science education, part II: the search for new metaphors

metaphor vs simile

While discussing some sections of a paper I wrote with Maarten Boudry, we have seen a number of reasons why using machine-information metaphors is bad for science education. As I pointed out before, the full paper also devotes quite a bit of space to arguing that those metaphors haven’t been particularly good in actual scientific research. One of the fascinating things to watch after I posted the first part of this commentary was the number of people who vehemently defended the “biological organisms are machines” take, both here on the blog and on my Twitter feed. It’s like here we are, in the second decade of the 21st century, and there are still a lot of Cartesians around, who have apparently never heard of David Hume. Oh well.

In the conclusion of this two-part series I am going to focus on the last section of my paper with Maarten, where we discuss the search for alternative metaphors, and in the end (spoiler alert!) suggest that the best thing to do at this point is just to describe things as they are, staying as clear as possible of metaphorical language. And when one really cannot avoid it, then use multiple metaphors and be very clear on the limits of their use. Let’s take a look.

In their classic work on metaphors, Lakoff and Johnson argue that the basic function of metaphorical concepts is to structure a new kind of experience in terms of a more familiar and delineated experience. In science as well as in everyday language, metaphors highlight particular aspects of whatever it is we are trying to grasp, but they will inevitably distort others. For example, the image of the “tree of life,” with new species branching off as budding twigs and extinct species as dead branches, is an instructive approximation of the relations of evolutionary descent. However, it can also foster misconceptions about “progress” in evolution, or lead to a simplistic conception of speciation events, or to a downplay of horizontal gene transfer and reticulate (i.e., by interspecies hybridization) speciation events. To give one more example, in physical chemistry the model of the atom as a miniature solar system, with electrons orbiting the nucleus as planets, though still having wide public appeal, is fundamentally inaccurate.

Of course, no metaphor will do its job perfectly, but it is crucial to realize, as Lakoff and Johnson have shown, that the widespread deployment of a particular metaphor can have a feedback effect on the way we perceive things, not just how we present them to others. In the examples discussed in my paper with Maarten, the lure of machine-information metaphors in the history of biology has invited scientists to think of genomes as “blueprints” for organisms, written in the four-letter alphabet of DNA and readable in a manner analogous to a computer code. But as we argue, the machine-information conception of living systems has led both the public and the scientific community astray.

In response to this problem, some scientists and science educators have proposed several alternative and improved metaphors to characterize the relationship between genotype and phenotype. Biologist Patrick Bateson, for instance, was probably the first to compare the DNA sequence of living organisms with a recipe for a cake. The idea of a genetic recipe has several advantages over the blueprint metaphor, the most important being that it takes into account pleiotropy (one gene affecting more than one trait) and epistasis (gene–gene interactions). As a consequence, the simple picture of a one-to-one (or close to) correspondence between particular genes and phenotypic traits is abandoned, which becomes clear when one considers that there is no way to locate particular ingredients in individual crumbs of a cake. Accordingly, there is no possibility of reverse-engineering the end product to the set of procedures (the “recipe”) that made the final product possible. This has important consequences not just for science education, but for research agendas, as the idea of ‘‘reverse engineering’’ is commonly invoked everywhere from genomic studies to the understanding of the brain.

Of course, if carried too far, the recipe metaphor can in turn be quite misleading. To get the desired result, a cook has to lump together different ingredients in the correct proportions and follow a set of instructions for handling the dough and preparing the oven. But actual developmental encoding in living organisms is an enormously more complex and very different sort of procedure, which is also highly dependent on epigenetic factors and unpredictable vagaries of the external environment. The expression of specific genes in the course of development is nothing like the way a cook handles the ingredients of a recipe. Living organisms are also highly differentiated in a number of functional parts or components (cell types, tissues, etc.), in contrast with the homogeneous cake that comes out of the oven. Moreover, the genome is not written in anything like a ‘‘language,’’ as in the case of a recipe, and it certainly does not contain a description of the desired end product in any meaningful sense of the word ‘‘description.’’

Condit and colleagues have discussed the recipe metaphor as an alternative to talk of blueprints, pointing out that it was adopted ‘‘with surprising swiftness’’ by science popularizers and the media in the 1990s. However, they also remark that, as a new ‘‘master metaphor’’ to capture the relationship between genotype and phenotype, the image of a recipe for a cake has little to recommend it either. For example, evoking recipes can invite people to think of the genome as a step-by-step manual that describes ‘‘how to make a human,’’ in that sense falling into the same trap as the idea of a blueprint.

That being said, if contrasted with the blueprint metaphor, the recipe metaphor conveys the point about the lack of one-to-one correspondence between genes and phenotypes very well, and hence it highlights an important fact about development and what biologists call the Genotype => Phenotype map. If the recipe metaphor is used within this restricted context, for example in explicit contrast with the characteristics of a blueprint, it is immediately clear what the salient points of connection with living systems are, and people are less likely to be misled by stretching the metaphor beyond usefulness. If, however, the recipe metaphor is presented as a stand-alone replacement for the blueprint, it is bound to mislead people no less than its rival.

The same point applies to other interesting metaphors that have been proposed in this context, for example Lewis Wolpert’s comparison of early embryonic development with the Japanese art of origami. The analogy highlights the circuitous step-by-step development of the early embryo, but of course in a piece of origami art the structure is imposed top-down by an intelligent agent, whereas the functional differentiation in the embryo is regulated bottom-up by a complex interaction between genes and environment. Moreover, a piece of origami simply folds to yield the final product, which in a very real sense is already there from the beginning. This is definitely not the way embryos develop, with their ability to respond to local and external environmental fluctuations.

The general problem that we have been discussing seems to us to be not just that one kind of metaphor or another is woefully inadequate to conceptualize biological organisms and their evolution. It is that it simply does not seem to be possible to come up with a metaphor that is cogent and appropriate beyond a very limited conceptual space. Although some of the alternatives are more accurate than the blueprint metaphor (in some respects), Maarten and I certainly have not found one that we would recommend as a replacement. Should we therefore try to avoid the use of metaphors in biological teaching and research altogether? Or do we simply expect too much from metaphors in science and education?

Analogical and metaphorical thinking is widespread among human beings, although of course different cultures and historical moments inspire people to use different metaphors. After all, a metaphor is an attempt to make sense of novel concepts by pairing them with known ideas to increase our overall understanding. Metaphorical thinking is therefore part of our language, and language is inextricably connected to our thinking, but to put it as Wittgenstein did: ‘‘It is, in most cases, impossible to show an exact point where an analogy starts to mislead us.’’ Yet a great part of doing philosophy consists precisely in clarifying our language in an attempt to advance our thinking. To quote Wittgenstein again: ‘‘Philosophy is a battle against the bewitchment of our intelligence by means of our language.’’ To complicate matters further, there is emerging empirical evidence that the human brain processes metaphors in a specific fashion: research on Alzheimer’s patients, for instance (see ref. in the paper), found that impairment of the brain’s ‘‘executive’’ function, associated with the prefrontal cortex, leads to poor understanding of novel metaphors (while, interestingly, comprehension of familiar metaphors is unaffected). Metaphorical thinking seems to be a biologically entrenched functional mode of our brains, and may therefore be hard to avoid altogether.

Both science and philosophy have made ample use of metaphorical and analogical thinking, sometimes with spectacularly positive results, at other times more questionably so. Nonetheless, it seems that nowhere is metaphorical thinking so entrenched — and so potentially misleading — as in biology. Given the maturity of biology as a science, and considering that it deals with objects whose nature is not as alien to our daily experience as, say, those of quantum physics, Maarten and I do not actually see any good reason for clinging to outdated metaphors in biological education and research for characterizing living organisms, their genomes and their means of development. Taking into account the fact that the machine-information metaphors have been grist to the mill of ID creationism, fostering design intuitions and other misconceptions about living systems, we think it is time to dispense with them altogether. Still, we are also not so naive as to expect that this advice will be followed by scientists and science educators any time soon, precisely because the machine/information metaphor is so entrenched in biology education. What to do then? We propose two approaches, one for science educators, the other for practicing scientists.

In science education, talk of metaphorical thinking can be turned into a teaching moment. Students (and the public at large) would actually greatly benefit from explanations that contrast different metaphors with the express goal of highlighting the limitations intrinsic to metaphors and analogies. So, for instance, science educators and writers could talk about the human genome by introducing the blueprint metaphor, only to immediately point out why it does not capture much of what genomes and organisms are about; they could then proceed to familiarize their students and readers with alternative metaphors, say the recipe one, focusing on differences from the original metaphor while of course not neglecting to point out the (different) deficiencies of the new approach as well. The goal of this process would be to foster a cautious attitude about metaphorical thinking, as well as to develop a broader understanding of how unlike common sense modern science really is. On the latter point, it is interesting to note, for instance, that a popular refrain among evolution or global warming deniers is that ‘‘simple common sense’’ shows that the scientists are wrong, a position that ignores the proper weight of technical expertise in favor of a folk understanding of nature. It is therefore crucial that the public appreciates the limitations of common sense thinking about science.

There is an analogous teaching moment that can be brought to bear when research scientists engage in unbridled metaphorical thinking: we could refer to this as a philosophy appreciation moment. Scientists are notoriously insensitive to, or even downright dismissive of, considerations arising from the history and philosophy of their discipline, and often for good practical reasons: modern science is a highly specialized activity, where there is barely enough time to keep up with the overwhelming literature in one’s own narrow field of research, and certainly not enough incentive to indulge in historical readings or philosophical speculation. Nonetheless, historians and philosophers of science can easily show the pitfalls of metaphorical thinking (by using well-documented historical examples) and even get across to their colleagues some basic notions of philosophy (by analyzing the effects of particular metaphors on the development of specific lines of scientific inquiry). None of this will quickly amount to overcoming C.P. Snow’s infamous divide between ‘‘the two cultures,’’ but it may bring about better understanding and appreciation of philosophy by scientists, and perhaps even help science see new horizons that have been hitherto obscured by a superficially illuminating metaphor.

Why machine-information metaphors are bad for science education, part I: biological machines and intelligent design


bacterial flagellum, as often represented in biology education

Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of ‘‘blueprints’’ for the construction of organisms. Likewise, cells are often characterized as ‘‘factories’’ and organisms themselves become analogous to machines. Predictably, modern proponents of the so-called theory of Intelligent Design, the latest incarnation of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as ‘‘irreducible complexity’’ and on flawed analogies between living cells and mechanical factories.

In reality, the living organism = machine analogy was already criticized by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do.

My friend and collaborator Maarten Boudry and I wrote an article several years ago in which we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. We argued that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.

We have discussed this topic twice in the last couple of years, once on the occasion of another paper with Maarten, on why machine metaphors in biology are misleading; more recently because of a paper I wrote about genes as blueprints; the current entry completes the trilogy, so to speak. In part I, here, I will present what Maarten and I had to say about the origin of machine-information metaphors in biology, as well as its questionable use in science education. In part II, next week, I’ll talk about the search for new and better metaphors in science and science education. Interested readers are referred to the original paper for references, as well as for a discussion of the misuse of machine-information metaphors in actual biological research (i.e., not just for educational purposes).

When delving into unknown territory, scientists have often naturally relied on their experiences in more familiar domains to make sense of what they encounter. In the early days of the scientific revolution, mechanical metaphors proved to be a powerful instrument to get a grip on new discoveries about the living world and the universe at large, and we can trace back the emergence of machine metaphors at least to the Middle Ages, when new achievements of technology had a profound cultural influence and captured the collective imagination. Against this background of technological innovation, it is not surprising that the pioneers of anatomy and physiology relied on the metaphor of the animal body as a complicated piece of machinery to make sense of their discoveries. The mechanical language provided a richness of meaning and allowed them to structure the new phenomena in terms of familiar experiences. For example, the image of the human heart as a pump with intricate mechanical components played an important role in William Harvey’s discoveries about blood circulation.

In the course of the 17th century, a new philosophy of nature became prominent that developed a conception of the universe in purely mechanical terms. According to this mechanical philosophy, which was developed by thinkers like René Descartes, Pierre Gassendi and Robert Boyle, the phenomena of nature can be understood purely in terms of mechanical interactions of inert matter. This mechanization of nature proved an important driving force behind the Scientific Revolution, and at the end of the 17th century culminated in Newton’s theory of motion. Newton’s description of planetary orbits following the fixed laws of gravity conveyed an image of a clockwork universe set in motion by an intelligent First Cause. In fact, that was exactly how Newton conceived the universe and its relation to the Creator. For Newton and many of his contemporaries, the importance of the mechanical conception of nature was greater than the mere term ‘metaphor’ would suggest, as the development of mechanistic philosophy was itself largely inspired by religious motivations; indeed, the very employment of machine metaphors invited theological speculation.

In the second part of the 17th century, the mechanical pictures of living organisms and of the cosmos at large converged into an intellectual tradition where theology and science were intimately intertwined: natural theology. The most famous representative of this tradition was William Paley, whose work Natural Theology; or, Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature (1802) made a deep impression on the young Charles Darwin. As the title of the book makes clear, Paley and the natural theologians conceived of Nature as a complicated machinery of intricate wheels within wheels, in which every organism has its proper place and is adapted to its environment. According to Paley, the contrivance and usefulness of parts exhibited by living organisms attests to the intelligence and providence of a benevolent Creator. This so-called ‘design argument’ already had a long intellectual pedigree, dating back to Plato, Cicero and Thomas Aquinas, but its most famous formulation is found in the first chapter of Natural Theology, in which Paley famously relies on the analogy between living organisms and a pocket watch to support his design inference.

While Darwin was the one who gave the most decisive blow to the design argument by suggesting a natural explanation for adaptive complexity in the living world, many philosophers would agree that David Hume foreshadowed its demise, by exposing several problems with the central analogy. In his Dialogues Concerning Natural Religion (1779), which actually predates Paley’s magnum opus by more than 50 years, we find a discussion of the design argument among Philo, the skeptical character that voices Hume’s ideas, Demea, the orthodox religious believer, and Cleanthes, the advocate of natural theology.

After Cleanthes has set out the design argument in terms foreshadowing Paley’s analogy of the watch, Philo objects that it is dangerous to derive conclusions about the whole of the universe on the basis of a spurious analogy with one of its parts. Given that our experience with design is limited to human artifacts only, we have to proceed with great caution, and it would be presumptuous to take so minute and select a principle as the human mind as the model for the origin of the whole universe. Hume realized that, at least in some cases, appearances of intelligent design can be deceptive.

In contemplating that ‘‘many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out’’, Hume even comes close to Darwin’s crucial insight about the power of natural selection. Although Hume does not deny that we can discern similarities between nature and human artifacts, he warns us that the analogy is also defective in several respects. And if the effects are not sufficiently similar, conclusions about similar causes are premature. To illustrate this, Philo proposes another possible cosmogony on the basis of the analogy between the world and an animal:

“A continual circulation of matter in [the universe] produces no disorder; a continual waste in every part is incessantly repaired: The closest sympathy is perceived throughout the entire system: And each part or member, in performing its proper offices, operates both to its own preservation and to that of the whole. The world, therefore, I infer, is an animal.” (Hume 1779, p. 39)

In The Origin of Species, Charles Darwin (1859) finally proposed a natural explanation for the phenomenon that inspired Paley but failed to convince Hume. Although the design argument is still of interest to philosophers and historians of science, it has been widely discarded in the scientific community. However, the analogy on which Paley based his inference seems to be alive and well, not only in the minds of creationists and ID proponents, but also in the writings of science popularizers and educators. Many scientists have actually argued that Paley at least offered an incisive formulation of the problem as there is indeed a hard-to-shake intuition of contrivance and intelligent design in nature. As one of the most ardent defenders and popularizers of evolutionary theory, Richard Dawkins, put it, ‘‘Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Adaptive complexity, then, is still regarded as something that requires a special explanation.

In textbooks, science educators have presented the comparison of living organisms and man-made machines not just as a superficial analogy, but have carried it out to a considerable level of detail. For example, the cell has been described as a miniature factory, complete with assembly lines, messengers, transport vehicles, etc. Consider the following quote from Bruce Alberts, molecular biologist, and former president of the National Academy of Sciences:

“The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. Given the ubiquity of protein machines in biology, we should be seriously attempting a comparative analysis of all of the known machines, with the aim of classifying them into types and deriving some general principles for future analyses. Some of the methodologies that have been derived by the engineers who analyze the machines of our common experience are likely to be relevant.” (Alberts 1998, p. 291)

Creationists and their modern heirs of the Intelligent Design movement have been eager to exploit mechanical metaphors for their own purposes. For example, Bruce Alberts’ description of the living cell as a factory has been approvingly quoted by both Michael Behe and William Dembski, two leading figures in the ID movement. For ID proponents, of course, these are not metaphors at all, but literal descriptions of the living world, harking back to Newton’s conception of the Universe as a clock-like device made by the Creator. The very fact that scientists rely on mechanical analogies to make sense of living systems, while disclaiming any literal interpretation, strengthens creationists in their misconception that scientists are ‘‘blinded’’ by a naturalistic prejudice. In the creationist textbook Of Pandas and People, which has been proposed by ID advocates as an alternative to standard biology textbooks in high school, we read that ‘‘Intelligent design […] locates the origin of new organisms in an immaterial cause: in a blueprint, a plan, a pattern, devised by an intelligent agent’’ (Davis et al. 1993, p. 14).

The analogy between living organisms and man-made machines has proven a persuasive rhetorical tool of the ID movement. In fact, for all the technical lingo and mathematical “demonstrations,” in much of their public presentations it is clear that ID theorists actually expect the analogies to do the argumentative work for them. In Darwin’s Black Box, Behe takes Alberts’ machine analogy to its extreme, describing the living cell as a complicated factory containing cargo-delivery systems, scanner machines, transportation systems and a library full of blueprints. Here is a typical instance of Behe’s reasoning:

“In the main area [cytoplasm] are many machines and machine parts; nuts, bolts, and wires float freely about. In this section reside many copies of what are called master machines [ribosomes], whose job it is to make other machines. They do this by reading the punch holes in a blueprint [DNA], grabbing nuts, bolts, and other parts that are floating by, and mechanically assembling the machine piece by piece.” (Behe 2006, pp. 104–5)

Behe’s favorite model of biochemical systems is a mechanical mousetrap, the familiar variant consisting of a wooden platform, a metal hammer, a spring, etc. According to Behe, if any one of these components is missing, the mousetrap is no longer able to catch mice. He has termed this interlocking of parts ‘‘irreducible complexity’’ and thinks it characterizes typical biochemical systems. In other words, the mousetrap is to Behe what the well-designed pocket watch was for Paley. But whereas Paley can be excused on the grounds of the state of scientific knowledge in the 18th century, for Behe the situation is a little different. Modern biochemistry, nota bene Behe’s own discipline, has revealed that biochemical systems are not like mechanical artifacts at all. Moreover, even biological systems that are irreducibly complex under Behe’s definition pose no problem for evolution by natural selection, as has been shown in detail by people like cell biologist Ken Miller.

ID proponents have buttressed their analogies between living systems and mechanical contraptions with a lot of visual rhetoric as well. The flagellum of the bacterium E. coli, the hallmark of the ID movement, has been represented as a full-fledged outboard rotary motor, with a stator, drive shaft, fuel supply, etc. It features on the cover of Dembski’s book No Free Lunch, and has been used numerous times in presentations and online articles. The idea seems to be that if it looks designed, it has to be designed. But as Mark Perakh documented in a paper published in 2008, ID supporters invariably use idealized and heavily stylized representations of the flagellum, in order to make it resemble a man-made contraption more closely. Another striking example of this visual rhetoric is a video by Discovery Institute president Stephen C. Meyer, which presents a computer-simulated — and again heavily stylized — journey inside the cell, and describes the biochemical processes in terms of ‘‘digital characters in a machine code,’’ ‘‘information-recognition devices,’’ and ‘‘mechanical assembly lines.’’ Meyer commented that evolutionists will have a hard time now dissuading the public from the notion that ‘‘the evidence for design literally unfolds before them.’’

Of course, the mere observation that creationists have seized on machine metaphors in biology does not suffice to demonstrate that these metaphors do not make scientific sense. However, the fact that they tend to do so systematically, using full-length quotes from respectable scientists, should make us wary of the possible dangers of misleading metaphors. If the rhetoric of the ID movement is demonstrably based on these mechanical analogies, it can be instructive to reexamine their scientific merits. In the paper, Maarten and I argue that the machine-information analogy has indeed influenced the way scientists themselves think about biological structure, function, and evolution. By analyzing the consequences of and reactions to this analogy in actual biological research, we show that its scientific merits are very weak, and that its place in modern biology has become questionable. What then? Stay tuned for part II, on the search for new and better metaphors…

Genotype-phenotype mapping and the genes as blueprint metaphor

One of my occasional series here at Footnotes to Plato is devoted to accessible (or so I hope) versions of some of my technical papers. The idea is to provide a taste of academic scholarship in philosophy of science, but in a form that can be read by more than a few dozen colleagues who specialize in the same exact area. So far, I’ve covered all suitable entries from 2013 to 2017. I don’t plan to go back to the very beginning of my career (for a good chunk of which I was publishing as an experimental biologist anyway), but only to 2010, the year after I moved to philosophy full time. By my reckoning, that means four papers left: on genotype-environment interactions and the “genes as blueprints” metaphor (2010), on the machine-information metaphor in biological research and education (2011), on the metaphor of adaptive landscapes (2012), and on paradigm shifts in evolutionary biology (also 2012). Yeah, that was my “metaphors” period. This entry is dedicated to the first paper mentioned, and over the next few months I’ll cover the remaining three.

The underlying question being considered here is: what is the relationship between genotypes and phenotypes? This question has marked the evolution of evolutionary theory ever since the rediscovery of Mendel’s work at the beginning of the twentieth century, which immediately generated an apparent conflict with the Darwinian view of gradual evolution. Famously, the answer proposed by the architects of the Modern Synthesis (i.e., the “standard model” in evolutionary biology from the 1940s on) is that genes determine phenotypes, as in the oft-cited metaphors of a “genetic blueprint” or a “genetic program.” This sort of answer bypasses the process of development, which is treated as an incidental black box with no direct causal relevance to the evolutionary process. Given this conceptual framework, it is no wonder that developmental biology was famously left out of the Modern Synthesis, and that it has (partially) re-emerged only recently within the so-called “evo-devo” approach.

In the paper, published in the Philosophical Transactions of the Royal Society (2010), I re-examine the question of the relationship between genotype and phenotype by going back to Alberch’s concept of a genotype-phenotype (G>P) “map” and examine what recent research tells us about actual G>P maps. It turns out that computational and empirical studies of three classes of systems (RNA folding, protein function and software development — though in this post I will talk only about the first and last cases) have yielded important generalizations about the problem, as well as novel insight into the evolutionary process more broadly. One of the consequences of these lines of research is that the blueprint metaphor is untenable and in fact positively misleading, and should be replaced by the concept of developmental encoding.

Back in 1991, Alberch introduced a metaphor differing from the standard blueprint view of genetic encoding, talking about a G>P “mapping function,” defined by a given parameter space and at least potentially amenable to mathematical description.

Alberch derived four general conclusions from his conceptualization of the G>P map: (i) the map is (much) more complex than a one-to-one relation between genotype and phenotype, which means that the same phenotype may be obtained from different combinations of genetic informational resources; (ii) the area in parameter space where a particular phenotype exists gives an indication of how stable that phenotype is likely to be; (iii) the parameter space is marked by “transformational boundaries,” i.e. areas where a small change in one or more developmental parameters will cause the transition from one phenotypic state to another; and (iv) the phenotypic stability of a given population will depend on which area of the parameter space it occupies, and in particular whether it is close to a transformational boundary or not.
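Alberch’s four points can be made concrete with a deliberately minimal toy model (my own illustrative sketch, not from Alberch’s paper; the parameter, threshold value, and phenotype labels are all hypothetical): a single developmental parameter maps to a discrete phenotype through a threshold that plays the role of a transformational boundary.

```python
# Toy G>P map in the spirit of Alberch (hypothetical numbers and labels).
# A continuous developmental parameter (say, a rate of cell division)
# maps many-to-one onto a discrete phenotype; the threshold at 0.5
# acts as a "transformational boundary" in parameter space.

def phenotype(cell_division_rate):
    """Many parameter values yield the same phenotype (conclusion i);
    crossing the boundary flips the phenotypic state (conclusion iii)."""
    return "five digits" if cell_division_rate >= 0.5 else "four digits"

# A population far from the boundary is phenotypically stable...
print([phenotype(r) for r in (0.8, 0.85, 0.9)])  # all "five digits"

# ...while one near the boundary flips under the same small perturbation,
# e.g. an experimentally administered inhibitor of cell division:
print(phenotype(0.52), "->", phenotype(0.52 - 0.05))  # five digits -> four digits
```

The size of the region of parameter space occupied by a phenotype (conclusion ii) and a population’s distance from the boundary (conclusion iv) then jointly determine how stable that phenotype is in practice.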

Alberch’s famous example of a phenotypic transition amenable to description in terms of his parameter space and mapping function was the evolution of the number of digits in amphibians. In particular, he showed that salamanders tend to lose their fifth toe every time digit reduction evolves, while anurans tend to lose their first digit. The difference between the two groups can be recreated experimentally by administration of an inhibitor of cell division, a result that Alberch interpreted as telling us that anurans and salamanders occupy different areas of the parameter space, and in particular lie near different transformational boundaries. Every time the transition happens within one of the two groups it occurs by the same developmental means, but when the two groups are compared the transitions happen by different developmental routes.


A salamander of the genus Ambystoma, a classic model system for evo-devo studies

A good starting point to tackle the G>P mapping problem is to start simple, and the simplest place to start is the growing literature on RNA folding. RNA folding is relatively well understood at a chemical-physical level, with increasingly sophisticated computer models capable of predicting the three-dimensional folding of a linear sequence of nucleotides based on thermodynamic considerations. Moreover, it is relatively straightforward to verify such predictions experimentally for a subset of simulated folding patterns, and researchers can even carry out competition experiments among RNA molecules for a given catalytic function.
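To give a flavor of how such computational predictions work, here is the simplest member of that family of algorithms: the classic Nussinov dynamic program, which folds a sequence by maximizing the number of nested base pairs. (This is a textbook simplification of my choosing; the sophisticated models mentioned above refine the same recursive scheme with experimentally measured thermodynamic energies.)

```python
def can_pair(a, b):
    # Watson-Crick pairs plus the G-U "wobble" pair
    return {a, b} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def max_pairs(seq, min_loop=3):
    """Nussinov-style dynamic program: maximum number of nested base
    pairs, forbidding hairpin loops shorter than min_loop nucleotides."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best for seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case 1: j left unpaired
            for k in range(i, j - min_loop):  # case 2: j pairs with some k
                if can_pair(seq[k], seq[j]):
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCCC"))  # 3: the three-pair stem of a simple hairpin
```

The full thermodynamic treatment replaces “count the pairs” with “minimize the free energy,” but the genotype-in, structure-out logic is the same.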

As far as the G>P problem is concerned, the step from genotype to phenotype is in this case as short as it can possibly be in a biological system, and indeed probably reflects, to some extent, the ancestral situation in the RNA world hypothesized within the context of the origin of life problem. RNA folding is therefore both an extremely suitable system with which to begin examining G>P mapping and one that may yield important clues to how mapping functions historically got started and became more complex and indirect. A crucial advantage of RNA folding studies of G>P mapping is that the fitness function of the molecules is not arbitrarily assumed to follow a particular statistical distribution, but can be studied empirically. In other words, the connections between genotype and phenotype on the one hand, and between phenotype and fitness on the other, are explicit, relatively simple, and biologically meaningful.

Several important generalizations have emerged from studies of RNA folding, generalizations that are crucial to our understanding of phenotypic evolution beyond the relatively simple framework offered by the Modern Synthesis. Consider, for instance, the study of mutational networks, i.e. of the structure of the genotypic landscape in terms of the one-mutation steps surrounding a given focal genotype. The idea goes back to Kauffman & Levin’s work on genotypic landscapes in 1987. The problem to be tackled is how evolution can explore phenotypic landscapes by moving across a corresponding genotypic landscape in a non-saltatory manner, as required by standard Darwinian theory. The solution requires an understanding of the connection between the genotypic and phenotypic landscapes, and in the case of RNA folding one can actually explore computationally the totality of both landscapes for a given short sequence length, or statistically sample the properties of landscapes defined by longer sequences.

For instance, the space of all 30-nucleotide-long binary RNA molecules comprises about one billion (2^30) unique sequences, a bewildering genotypic space. This space, however, corresponds to only 220,000 unique folding shapes in the Guanine/Uracil nucleotide landscape and a mere 1,000 shapes in the Adenine/Uracil landscape, the two situations that have been studied extensively. This is a spectacular example of what biologists call “degeneracy” (i.e., redundancy of sequence coding), which in turn is a fundamental concept underlying the neutral theory of molecular evolution — according to which most (but, crucially, not all) mutations are selectively neutral at the molecular level.
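This many-to-one collapse is easy to reproduce on a toy scale. The sketch below is my own illustration, not one of the published studies: the “phenotype” here is a crude score obtained by folding the sequence back on itself at its midpoint and counting G-U pairs, rather than a real folding model, but the exhaustive enumeration of a small two-letter genotype space is the same in spirit.

```python
from itertools import product
from collections import Counter

def hairpin_phenotype(seq):
    """Crude toy 'phenotype': fold the sequence back on itself at the
    midpoint and count complementary (G-U wobble) pairs in the stem."""
    half = len(seq) // 2
    stem = zip(seq[:half], reversed(seq[half:]))
    return sum(1 for a, b in stem if {a, b} == {"G", "U"})

# Exhaustively enumerate the genotype space of all length-12 G/U sequences
genotypes = ["".join(s) for s in product("GU", repeat=12)]
phenotypes = Counter(hairpin_phenotype(g) for g in genotypes)

print(len(genotypes), "genotypes ->", len(phenotypes), "phenotypes")
print(phenotypes.most_common(3))  # a few abundant "shapes" dominate
```

Even this cartoon shows the qualitative pattern: 4,096 genotypes collapse onto just 7 phenotypes, with a few abundant ones (1,280 genotypes share the most common score) and rare ones at the extremes.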

Genotypes on these landscapes are connected by mutational networks whose properties can then be explored. An interesting result is that the distribution of phenotypes on RNA mutational networks follows regular patterns, characterized by a few abundant RNA shapes and a large number of rare ones. The structure of the landscape is such that evolution can explore most or all of the common structures via one-step mutations that preserve structure while moving the population along a neutral path of constant fitness, until it bumps into a novel phenotype with higher fitness. Interestingly, most genotypes turn out to be located within a few mutational steps of most of the common phenotypes in the landscape, making it predictable that such phenotypes will in fact be found by natural selection in a relatively short period of time. However, the connectivity on the landscape is always asymmetrical, which means that which particular phenotypes are reached more easily from a given starting genotype is a matter of historical contingency.

Research on the general properties of RNA folding evolution has shown that the G>P function is such that small movements in genotypic space do not necessarily correspond to small movements in phenotypic space, a rather flagrant contradiction of one of the standard assumptions of the Modern Synthesis. In particular, if we consider a genotype G with a given phenotype P, it is likely that G is connected to a one-step neighbor associated with a phenotype that is not structurally similar to P. This brings us to a rather surprising general behavior that emerges from studies of RNA folding (as well as of protein function, micro-organisms and simulated systems — as discussed in the rest of the full paper): a true “punctuated equilibrium” pattern of evolution that does not result from the usual suspects in terms of underlying causes.

Punctuated equilibrium, of course, was one of the early challenges to the Modern Synthesis brought about by palaeontologists Eldredge & Gould back in 1972. The standard explanation for the fossil record pattern of stasis punctuated by occasional rapid shifts in phenotype is that of stabilizing selection. Simulations of RNA folding evolution display the same general pattern that one sees in the fossil record, obviously at a much smaller temporal scale. The mechanism, however, has nothing to do with “stabilizing selection” (a rather vague concept in itself, really just a way to describe a statistical pattern of constant mean and reduced variance). Rather, the punctuated evolution results from the fact that the population divides itself into smaller chunks, each of which explores a portion of the largely neutral genotypic landscape. From time to time, a population encounters a new phenotypic optimum and “jumps” on it quickly. Stasis, in this context, is then not the result of selection for a constant phenotype, but rather of the largely neutral structure of the landscape, which allows populations to wander around until they find a new functional phenotype and jump into a nearby neutral network, only to resume their evolutionary wanderings.
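The dynamic is simple enough to capture in a few lines of simulation. The sketch below is a minimal illustration under made-up assumptions (a bitstring “genome” and a fitness that counts only completed four-bit “motifs”), not the RNA folding simulations reported in the literature: because incomplete motifs contribute nothing, most single mutations are neutral, and fitness stays flat for long stretches until one mutation happens to complete a motif.

```python
import random

random.seed(1)  # reproducible run
L, N_MOTIFS = 4, 10
TARGET = "1" * L

def fitness(genome):
    """Only fully completed motifs count, so the landscape consists of
    large neutral networks connected by rare fitness-increasing steps."""
    return sum(genome[i * L:(i + 1) * L] == TARGET for i in range(N_MOTIFS))

genome = "0" * (L * N_MOTIFS)
current = fitness(genome)
jumps = []  # steps at which a punctuation event occurred
for step in range(20000):
    pos = random.randrange(len(genome))
    flipped = "1" if genome[pos] == "0" else "0"
    mutant = genome[:pos] + flipped + genome[pos + 1:]
    f = fitness(mutant)
    if f >= current:            # accept neutral and beneficial mutations
        if f > current:
            jumps.append(step)  # fitness jumps by exactly one motif
        genome, current = mutant, f

print("final fitness:", current, "out of", N_MOTIFS)
print("first punctuation events at steps:", jumps[:5])
```

A plot of fitness against time for runs like this shows long flat plateaus separated by abrupt unit jumps: stasis and punctuation, with no stabilizing selection anywhere in the model.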

RNA-like systems can also be a model for the evolution of ecological communities, thereby beginning to forge a still surprisingly lacking direct link between ecology and evolutionary biology. For instance, Takeuchi & Hogeweg, in 2008, showed that a population of replicators originally made of just one genotype evolves into a complex system characterized by four functionally distinct groups of genotypes, which the authors call “species.” Interestingly, the model also evolved “parasites” which not only were able to coexist with catalytic molecules, but in turn were themselves catalysts for the evolution of further complexity in the system. While Takeuchi & Hogeweg’s definition of species in this context may appear artificial, the groups of genotypes they identified are in fact both ecologically functionally distinct and genealogically related to each other, and a functional-genealogical concept is certainly one of the viable contenders as a definition of biological species.

The examples drawn from research on RNA folding (as well as those not discussed here, on protein sequence space) help bring to the forefront a major limitation of the Modern Synthesis: the almost utter disregard for developmental biology.

Notoriously, that field was essentially left out of the synthesis of the 1940s that gave us the current structure of evolutionary theory. Part of the reason is that it has never been conceptually clear what exactly the role of development in evolution is. Mayr, a founding father of the Modern Synthesis, famously made a distinction — harking back to Aristotle — between proximate and ultimate causes in biology, with the genetic bases of phenotypes counting as proximate causes and the evolutionary processes that brought those phenotypes about counting as ultimate causes (see this post for an in-depth discussion). Even if one accepts Mayr’s framework, however, it is not clear whether development should be considered a proximate or an ultimate cause.

The onset of evo-devo and calls for an Extended Synthesis in biology (see this previous post) have reopened that question. The answer is emerging from research on the structure of G>P maps, and in particular from a parallel literature in computational science that attempts to exploit the characteristics of biological development to produce a new generation of “evolvable hardware.” The picture that is forming out of these efforts is that development is a necessary link between proximate and ultimate causality, and that in a sense the G>P map is whatever specific type of “developmental encoding” (as opposed to the classic genetic encoding) a given species of organism uses to produce environmentally apt phenotypes. Developmental encoding refers to situations where information encodes not a detailed description of the full system (as in the blueprint metaphor), but rather the local steps necessary to build the system through a developmental process.

Several authors have pointed out the limitations of both direct genetic encoding of “information” and of the blueprint metaphor that results from it. Ciliberti and collaborators, in a 2007 paper, have for instance referred to human-engineered systems as being characterized by “brittleness,” i.e. the unfortunate property that if one component ceases functioning properly, there is a high probability that the whole system will unravel. This is most clearly not what happens with biological organisms, which means that the oft-made analogy (ironically, by both some biologists and proposers of intelligent design creationism) between living organisms and “machines” or “programs” is profoundly misleading. Along similar lines, Stanley, also in 2007, reiterated that the amount of direct genetic information present in, say, the human genome (now estimated to be around 30,000 protein-coding genes) is orders of magnitude below what would be necessary to actually specify the spatial location, functionality and connectivity among the billions of cells (and trillions of connections) that make up a human brain. The answer must lie in the local deployment of information that is possible through developmental processes, where the “instructions” can be used in a way that is sensitive (and therefore capable of adjusting) to both the internal and external environments.

According to Hartmann and colleagues (in another 2007 paper), artificial development is increasingly being used to solve computational problems outside of biology by direct analogy with biological systems. The results indicate that replacing direct genetic encoding with indirect developmental encoding dramatically reduces the search space for evolutionary algorithms. Moreover, the resulting systems are less complex and yet more robust (“fault-tolerant” in engineering jargon) than those obtained by evolving standard genetic algorithms. Another way to put the point is that direct genetic encoding is limited by the fact that the length of the genetic string grows proportionally to the complexity of the phenotype, thereby quickly encountering severe limitations in search space. With developmental encoding, instead, the evolving system can take advantage of a small number of genetic instructions mapping to a large number of phenotypic outcomes, because those outcomes are determined by the (local) interactions among parts of the system and by interactions of the system with the environment.

Simulations comparing the evolution of standard genetic systems of information encoding with systems based on developmental encoding clearly show that genetic systems reach a maximum level of fitness for low levels of complexity; at higher levels of complexity developmental encoding “scales” much better, with developmental systems being capable of achieving high fitness more quickly and efficiently. Moreover, developmental encoding leads to the artificial evolution of systems that are both significantly more robust to internal disruptions and significantly more flexible in response to external environmental conditions than standard genetic systems. This is an interesting situation whereby a research area parallel to evolutionary biology, computational science, draws inspiration from the actual structure of biological systems and ends up providing a theoretical underpinning for why, in fact, those biological systems are structured the way they are.

In conclusion, the conceptual and mathematical foundations of evolutionary theory are evolving from a simple beginning as bean-bag genetics, Mayr’s derogatory term for population genetics theory, to a sophisticated patchwork that draws from population genetics, quantitative genetics, bioinformatics and computational science. Medawar & Medawar, in 1983, famously said that “genetics proposes, epigenetics disposes,” where epigenetics here means the whole of developmental processes, a way to highlight that evolutionary theory finally needs a good conceptual understanding of development, and not just of genetics. As I have argued in the paper referenced here, such a broadened theoretical framework cannot come from population genetics alone, but benefits from the input of computational research both on simple biological examples of G>P maps, such as those underlying RNA folding and protein function, and from consideration of broader issues such as the properties of large neutral networks in genotypic space (see full paper) and of developmental versus genetic-encoding systems.

On the different ways of doing theory in biology

‘Theoretical biology’ is a surprisingly heterogeneous field, partly because it encompasses ‘‘doing theory’’ across disciplines as diverse as molecular biology, systematics, ecology, and evolutionary biology. Moreover, it is done in a stunning variety of different ways, using anything from formal analytical models to computer simulations, from graphic representations to verbal arguments. A few years ago I co-organized a workshop on this topic at the Konrad Lorenz Institute for theoretical biology in Vienna, and then edited a special issue of the journal Biological Theory collecting all the contributions.

In my paper I surveyed a number of aspects of what it means to do theoretical biology, and how they compare with the allegedly much more restricted sense of theory in the physical sciences. I also tackled a somewhat recent trend toward the presentation of all-encompassing theories in the biological sciences, from general theories of ecology to an attempt to provide a conceptual framework for the entire set of biological disciplines. I also discussed the roles played by philosophers of science in criticizing and shaping biological theorizing. The full paper is available for download here (free), and the edited volume can be found here (articles beyond paywall). Let me, however, summarize my main points to facilitate a general discussion.

First, I discussed the issue of alleged laws in biology. If there is anything that characterizes physics as a science it is its unending quest for universal laws, from Newton’s mechanics to the current (and highly controversial) string theory. This is the case despite the fact that influential philosophers of science like van Fraassen and Giere maintain that laws play a marginal and mostly didactic role, even in physics. Regardless, it is not surprising that discussions of general laws in biology are a recurrent staple of the literature and—interestingly—one that provides a good example of positive interactions between theoretically inclined biologists and philosophers of science.

In a number of cases authors draw a direct parallel between physical laws and proposed biological equivalents. For instance, M. Elgin argues that the ‘‘epistemic functions of a priori biological laws in biology are the same as those of empirical laws in physics.’’ Elgin begins by acknowledging the (almost) universal agreement among philosophers who subscribe to the concept of laws that these must be both universal and empirical in nature, though he hastens to say that these conditions are necessary but not sufficient to distinguish laws from ‘‘accidental’’ generalizations. He then discusses Elliot Sober’s proposal that the Hardy–Weinberg principle in population genetics is an example of a biological law, even though it is universal but not empirical.

There are several problems with this proposal, chiefly the fact that Hardy–Weinberg cannot meaningfully be thought of as a ‘‘zero force law’’ analogous to, say, the law of inertia (as Elgin suggests), as well as the above-mentioned lack of empirical content. Jonathan Kaplan and I, back in 2006, discussed in detail why the various evolutionary mechanisms that can cause a population to deviate from Hardy–Weinberg equilibrium are not conceptually equivalent, and should not be thought of as ‘‘forces’’ characterized by intensity and direction. Moreover, it simply seems strange to suggest that a scientific law can have no empirical content and instead be true a priori (as Hardy–Weinberg surely is, mathematically speaking). This risks setting philosophy of science on the slippery slope of counting logical and mathematical principles themselves as ‘‘laws,’’ a usage that clearly does not accord with scientific practice at all. Apparently, however, this point is not at all clear in the minds of some biologists, since it is possible to find statements like the following: ‘‘The global-optimum model is not so much a predictor of nature as a definition of nature. It must be true that a perfectly adapted organism leaves the most possible offspring!’’ (in a paper by Nonacs and Dill, published in 1993). Or: ‘‘The existence of a global-optimum point is a ‘deep axiom’: a tautology that guarantees logical consistency at the core of a theory’’ (by Stearns and Schmid-Hempel, in 1987). This is surely one area where more communication between theoretically minded biologists and philosophers of science would be welcome.
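The a priori character of Hardy–Weinberg is easy to exhibit: given any allele frequency p, the genotype frequencies p², 2pq, and q² sum to one and return exactly the same allele frequency, as a matter of algebra rather than observation. A quick sketch using exact rational arithmetic (the particular values of p below are arbitrary):

```python
from fractions import Fraction

def hw_genotypes(p):
    # Genotype frequencies after one round of random mating: p^2, 2pq, q^2.
    q = 1 - p
    return p * p, 2 * p * q, q * q

def allele_freq(AA, Aa, aa):
    # Frequency of allele A recovered from the genotype frequencies.
    return AA + Fraction(1, 2) * Aa

for p in (Fraction(1, 100), Fraction(3, 10), Fraction(1, 2), Fraction(9, 10)):
    AA, Aa, aa = hw_genotypes(p)
    # Both identities hold exactly, for every p: the "law" is a theorem
    # of algebra, not an empirical regularity about populations.
    assert AA + Aa + aa == 1
    assert allele_freq(AA, Aa, aa) == p

print("Hardy-Weinberg identities hold exactly for all p")
```

That the identities hold for every conceivable p, with no population ever observed, is exactly why it is odd to call Hardy–Weinberg an empirical law.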

Besides appeals to Hardy–Weinberg as an example of biological laws, the next most popular area of discussion concerning this topic is perhaps the possible existence of laws in ecology. For instance, G.M. Mikkelson makes a case for moving ecology from an idiographic (historical) mode of explanation to a nomothetic (law-based) one. He maintains that—contra what he perceives as the practice among ecologists—ecological patterns should be interpreted as law-like generalizations, because functional kinds (such as ‘‘predators’’) and structural kinds (like the various community types) correlate better than taxa (historical kinds) with fundamental ecological patterns and processes. As Mikkelson puts it, ‘‘Imagine being dropped at a random spot on the land surface of the Earth. Which would allow you to predict the density of plant species around you—that is, the number of species per 10,000 square kilometers—most precisely: knowing the climate, or knowing the landmass on which you stand? Answer: climate wins, hands down.’’ Well yes, but it is debatable whether such predictions are the result of ‘‘laws’’ in any way like those that physicists are after, and it is telling that Mikkelson is in fact cautious enough to talk about ‘‘law-like generalizations.’’

Interestingly, the issue of, shall we say, “physics envy” shows up explicitly in yet another author’s treatment of the issue of laws in ecology, D.R. Lockwood. In this case it is an ecologist who takes on the matter, and comes down rather negatively on the possibility of laws in his discipline. Lockwood discusses two frequent suggestions as examples of ecological laws: Malthusian growth and the logistic equation. He quickly finds them inadequate to the task, as they do not support counterfactuals, are not temporally universal, and in fact repeatedly fail empirical tests. In the end, Lockwood agrees with philosopher W.C. Wimsatt’s suggestion that ‘‘aggregative systems’’ (those typically studied by physics) do follow robust laws, while emergent systems (like those studied in biology) do not. This does not mean that biologists cannot generalize their empirical findings (within certain limits), and that such generalizations cannot be used to make reasonable predictions about the behavior of the systems of interest to them. And that, after all, is what actually matters.
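For readers unfamiliar with the two candidate ‘‘laws,’’ here they are in discrete-time toy form (the parameter values are arbitrary choices of mine): Malthusian growth, dN/dt = rN, grows without bound, while logistic growth, dN/dt = rN(1 − N/K), saturates at the carrying capacity K. Both are useful generalizations, but, as Lockwood argues, neither holds up as a universal, counterfactual-supporting law:

```python
# Malthusian (exponential) growth, dN/dt = rN, as a discrete map.
def malthus(N, r):
    return N * (1 + r)

# Logistic growth, dN/dt = rN(1 - N/K): growth slows as N approaches
# the carrying capacity K.
def logistic(N, r, K):
    return N + r * N * (1 - N / K)

N_exp, N_log = 10.0, 10.0
for _ in range(200):
    N_exp = malthus(N_exp, 0.1)
    N_log = logistic(N_log, 0.1, 1000.0)

# The exponential population explodes; the logistic one settles near K.
print(round(N_exp), round(N_log))
```

Real populations, of course, routinely violate both trajectories, which is Lockwood’s point about their failure to support counterfactuals or pass repeated empirical tests.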

If not laws, are there general theories in biology? Theodosius Dobzhansky famously said that ‘‘nothing in biology makes sense except in the light of evolution.’’ Adding that to Richard Dawkins’ quest for ‘‘universal Darwinism’’ and to Daniel Dennett’s contention that Darwinism is a ‘‘universal acid’’ of sorts that cuts across disciplines, extending the idea of Darwinian evolution well beyond biology itself, one would think that biologists settled on their version of a theory of everything long ago. One would be surprised. A perusal of the recent literature shows quite a bit of activity in this department, again largely on the side of ecologists. I will briefly comment on one such attempt, referring the interested reader to two more case studies discussed in the paper.

Stephen Hubbell’s unified neutral theory of biodiversity and biogeography attempts to do precisely what its name implies: to propose a combined theoretical framework for biodiversity (measured by species–abundance curves) and biogeography (measured by species–area curves), where the ‘‘neutrality’’ consists in assuming that the differences among species that belong to the same trophic level within a given ecological community do not matter for the dynamics of that community. Hubbell’s theory draws explicit parallels with the neutral theory of molecular evolution proposed by Motoo Kimura back in 1968, and with the above-mentioned Hardy–Weinberg equilibrium in population genetics.

The unified theory has generated a significant literature, including a number of critiques and empirical tests. It is important to realize a couple of things, however: first, the scope of the theory is crucially limited by the clause that it applies to species of similar trophic level within a given community, which makes it quite a bit narrower in scope than its name (and some of the discussion that has followed the publication of Hubbell’s book) might otherwise suggest. Moreover, the theory is notoriously difficult to test, because while it does make distinctive predictions when compared to, say, niche assembly theories (which are non-neutral), the predicted differences are very small, and easily lost in the noise characteristic of ecological data sets. This is not the place to get into an in-depth discussion of Hubbell’s theory, but I can hazard a prediction based on the similar history of the neutral theory of molecular evolution: in that case more than a decade of discussions led to the conclusion that a modified ‘‘quasi-neutral’’ theory was the best bet. Which basically means that stochastic as well as selective processes affect the outcome of evolution, just as it would be reasonable to expect.
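The core of Hubbell’s model is easy to sketch: all individuals are demographically identical, and diversity is maintained purely by the balance between random extinction (ecological drift) and the input of new species. The following minimal zero-sum drift simulation is my own toy rendition, with an arbitrary community size and speciation rate, intended only to show the mechanism rather than to test the theory:

```python
import random

random.seed(2)
J, NU, STEPS = 200, 0.02, 20000  # community size, speciation rate, events

# Hubbell-style zero-sum drift: every death is immediately compensated
# by a birth, and all individuals are demographically equivalent.
community = [0] * J  # start as a single species, labeled 0
next_label = 1
for _ in range(STEPS):
    dead = random.randrange(J)
    if random.random() < NU:
        community[dead] = next_label  # "speciation": a new species enters
        next_label += 1
    else:
        community[dead] = community[random.randrange(J)]  # random replacement

# Drift alone erodes diversity; speciation replenishes it, yielding a
# statistical equilibrium in species richness and abundance.
print("species richness:", len(set(community)))
```

The species–abundance distributions such simulations settle into are what the theory matches against field data, and the predicted differences from non-neutral models are, as noted above, quite subtle.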

My more general point in the paper was that even a cursory look at the literature allows one to distinguish four modalities for theoretical biology (though similar distinctions can also be found in, say, physics, especially if one considers the entire discipline, and not just specific subsets like particle physics). I refer to these as analytical modeling, statistical modeling, computer modeling, and conceptual analysis.

The classic example of analytical approaches in theoretical biology is represented by much of the body of work that makes up population genetics theory, beginning again with the Hardy–Weinberg principle and arriving at more recent advances such as coalescent theory. The basic approach here is to use mathematical formalism to arrive at analytical (i.e., precise, non-statistical) solutions of sets of equations describing the behavior of idealized populations of organisms.

The second general type of approach to biological theorizing is statistical in nature, beginning with Ronald Fisher’s famous ‘‘fundamental’’ theorem of natural selection, which was proposed as explicitly equivalent to one of the most solid pieces of theory in classical physics, the second law of thermodynamics. Fisher laid the foundations for statistical genetics, which—when reconciled with the apparently discrepant Mendelian genetics—resulted in the Modern Synthesis of the 1940s, basically the still current standard model in evolutionary theory (but see this).

The third way of doing theoretical biology is based on computer modeling, and it is in a sense a continuation of a long-standing trend in the field: when things get too complicated even for a quantitative (i.e., statistical) genetic approach (let alone for a population genetic, analytical one), researchers move toward computationally intensive simulations of biological populations. There are many examples of this, some of which are continuous with the population-quantitative genetic type of issues just discussed, some having to do with broader questions concerning the evolution of evolutionary mechanisms (evolvability), and some concerning the relationship between structural biology and evolutionary dynamics.

The fourth and last modality of biological theorizing is based on the articulation of verbal-conceptual models, and obviously comes closest to what philosophers of biology themselves engage in when they analyze the concepts deployed by working biologists. Verbal-conceptual models in science have the reputation of being second-rate when compared to ‘rigorous’ mathematical modeling, even though of course both the original work by Darwin and much of the work done during the Modern Synthesis (except for the part that was explicitly population-genetic) fall into this category. Indeed, there seems to be a resurgence of this approach as a necessary complement to increasingly ‘‘experimental’’ mathematical treatments like the ones discussed above. Verbal-conceptual models include a broad category of biological theorizing that is particularly popular in molecular biology and biochemistry, where many papers present the results of complex experiments on the structure of genetic networks or biochemical pathways in the form of conceptual diagrams that are meant both to summarize the current status of knowledge and to provide food for thought for developing new hypotheses and subsequent empirical tests.

My conclusions at the end of the full paper: the term ‘‘speculation’’ has a rather bad reputation in science, often associated with the much-dreaded accusation hurled at philosophers that they engage in ‘‘armchair theorizing.’’ But of course all theory is armchair speculation, and unless one thinks of mathematics in a special Platonic fashion, mathematical approaches are simply continuous with, and complementary to, all the other ways of doing theory in science.

Which brings me to the role of philosophy of science in all of this. I think that philosophy of science itself is characterized by different modalities, some of which have little to do with helping scientists and reflect instead on the logic of scientific theories, the epistemology underlying scientific claims, and so on. Indeed, philosophy of science itself is continuous with the history of science, since it would be difficult to attempt generalizations about the nature of science while focusing only on currently ongoing (and therefore far from being settled) scientific research.

To begin with, then, classic philosophy of science is concerned with the study of the logic of scientific discovery, as exemplified by the well-known names (even among scientists!) of Popper, Kuhn, and—to a lesser extent—Feyerabend and Lakatos (and, of course, a number of contemporary scholars, too many to mention). This type of philosophy of science is, arguably, of very little direct relevance to scientists themselves (except insofar as they are curious about how outsiders see and analyze their own activity). It is perhaps this sort of philosophizing that has brought a number of physicists (e.g., Steven Weinberg, Stephen Hawking, and Lawrence Krauss) to claim that ‘‘philosophy is dead’’ on the grounds that, of late, it has not managed to solve any scientific problem with which physics is concerned. In so arguing, these scientists are committing an elementary category mistake prompted by a combination of intellectual hubris and a surprising amount of ignorance.

Philosophy of science, however, also functions in modalities that are (or ought to be) of more direct interest to practicing scientists themselves—whether the latter realize it or not. One such modality is the always necessary (even if liable to annoy the scientists being targeted) external criticism of socially relevant scientific claims (e.g., concerning race, gender, or the validity and application of certain types of medical research). I hesitate to use the label ‘‘science criticism’’ for this activity—even though it is arguably the most appropriate one available—because the term has been possibly irreparably tainted by much post-modern-inspired nonsense at the height of the so-called ‘‘science wars’’ of the 1990s. Regardless of what we end up calling it, it is the sort of philosophical inquiry that actually has practical implications, analogous to the better known ones usually associated with, say, business ethics, medical ethics, and bioethics, and one that should develop into an earnest dialogue between philosophers and scientists about the social implications of science itself.

The third and final modality for philosophy of science stands in an even closer symbiotic relationship with science, one that seems to be welcomed by scientists themselves. Indeed, recent years have seen an increasing number of philosophers of physics, biology, and other disciplines publishing conceptual papers on a large variety of topics that are hard to distinguish from theoretical physics, biology, etc. This is, I think, a most welcome development, and a small (but, hopefully, growing) number of scientists have started to collaborate with philosophers and/or to publish in philosophical journals, as the case of the debates about laws in biology discussed above exemplifies. As I pointed out elsewhere, this is along the lines of what Hasok Chang called ‘‘the continuation of science by other means’’:

Complementary science [based on history and philosophy of science] is critical but not prescriptive in relation to specialist science. … Complementary science identifies scientific questions that are excluded by specialist science. … The primary aim of complementary science is not to tell specialist science what to do, but to do what specialist science is presently unable to do. It is a shadow discipline, whose boundaries change exactly so as to encompass whatever gets excluded in specialist science. (pp. 249–250)

From this perspective, then, philosophy of biology represents a fifth type of theoretical biology, albeit one that is practiced from the outside looking into the core discipline. Because of that, it is uniquely positioned, I think, to perceive the threads connecting the other four modalities, as well as the advantages and limitations of each. The idea, of course, is not to make philosophers the ultimate arbiters in theoretical biology (or in anything else, for that matter). Rather, it is a recognition that it does take some distance from the nitty-gritty of the specialized literature to be able to perceive the broad picture that is necessary for the advancement of broadly construed theoretical biology. Accordingly, it is not by chance that when biologists themselves step back to contemplate a more inclusive level of analysis they begin to sound like philosophers. Perhaps, then, ongoing cross-fertilization—like the one fostered by that special issue of Biological Theory—will bring less distrust and more fruitful collaboration between the two disciplines.

Between holism and reductionism: a philosophical primer on emergence

A few years ago I was asked by the editor of the Biological Journal of the Linnean Society to write a short paper introducing biologists to the philosophical literature on emergence, given recurrent, and sometimes confused, talk of “emergence” in biology. The result was “Between holism and reductionism,” published in 2014, the full version of which you can download here.

‘Emergence’ is a controversial concept with a convoluted history, in both science and philosophy. It is therefore not surprising that it has been misused and vilified, as well as more often than not misunderstood. Typically, the idea of emergence is brought up by researchers who are — for one reason or another — unhappy with an ultra-reductionist scientific program, preferring instead some kind of holism or interactionism in the way they approach their research questions (think of the always current debates on gene–environment interactions). Just as predictably, biologists who are embedded in reductionist programs dismiss emergence and similar ideas as obscurantist, leading only to never-ending and fruitless debates.

Given its convoluted and controversial history, it may pay to get a bit clearer about where the concept of emergence came from and how it evolved qua concept. Perhaps the first thing to appreciate is that emergentism is usually put forth as a third way between mechanism-reductionism and vitalism-holism. Emergentists — particularly in biology — reject quasi-mystical appeals to vital forces, but at the same time argue for the appearance of genuinely novel phenomena at various levels of complexity of matter. Of course, much hinges on what ‘novel’ means in this context, as well as on the idea that there are meaningful ‘levels’ of complexity in nature.

One of the earliest articulations of the concept of emergence in biology is due, perhaps surprisingly, to John Stuart Mill, who in 1843 wrote “To whatever degree we might imagine our knowledge of the properties of the several ingredients of a living body to be extended and perfected, it is certain that no mere summing up of the separate actions of those elements will ever amount to the action of the living body itself.” The idea being expressed here is familiar in both biology and statistics nowadays: we know of plenty of examples, from chemistry and biology, where the whole — as the saying goes — is more than (or, to be more accurate, different from) the sum of its parts.

Mill himself contrasted an example from physics and one from chemistry to make his point. In Newtonian physics (which was, of course, the dominant paradigm at the time), mechanical forces can be combined in an additive fashion by means of vector analysis. So, for instance, the total force being applied to a given object from two different sources is simply the sum of the individual forces thus applied. In chemistry, however, the result of a chemical reaction is pretty much never a simple sum of the reactants: we can combine, say, an acid and a base to obtain a salt (and water). The properties of the salt are in no meaningful sense simply the sum of the properties of the pre-reaction acid and base.

Arguably the next big step in the development of emergentism in philosophy of science was due to C.D. Broad (circa 1925), who framed the issue in terms of the independence (or lack thereof) of the “special” sciences (i.e., of anything that is not fundamental physics). This is very much still a topic of discussion today (see this treatment of the issue at Footnotes), usually pitting physicists on the side of a strong sense of the unity of science (and its potential theoretical reduction to physics) vs (some) philosophers and non-physical scientists who defend the notion of a fundamental disunity, or at least heterogeneity, of science.

Broad aligned himself with the disunity camp, on the basis that he thought there are some emergent “trans-ordinal” laws that are compatible with, and yet irreducible to, more fundamental principles of physics. However, he did not think these laws presented any particular reason for the mystically oriented to rejoice: “There is nothing, so far as I can see, mysterious or unscientific about a trans-ordinal law or about the notion of ultimate characteristics of a given order. A transordinal law is as good a law as any other; and, once it has been discovered, it can be used like any other to suggest experiments, to make predictions, and to give us practical control over external objects.”

A modern example of the disunity camp is provided by Nancy Cartwright’s work, for instance in her analysis of the concept of natural laws. Cartwright sees laws as empirical generalizations with more or less broad (but, crucially, not universal) domains of application. Indeed, she maintains that the empirical evidence itself strongly suggests that laws cannot be both universal and true: the only circumstances when we can verify a law of nature (say, Newtonian mechanics) to a precise extent is when we create artificial worlds characterized by highly controlled conditions. In the real world, by contrast, laws only apply given a more or less large number of ceteris paribus conditions. And this holds even for physics, where usually scientists take the very idea of a law of nature to be uncontroversial. In biology things are even worse, given plenty of debates — among both philosophers and biologists — about whether we can sensibly talk about laws (as opposed, again, to mere empirical generalizations) in the first place.

In the paper I then move to make explicit a distinction that will be familiar to readers of Footnotes, but that needed to be brought to the fore in the context of the paper for the Biological Journal: we can think of emergent properties from either an ontological or an epistemological stance, although the two are not mutually exclusive.

Ontology, of course, has to do with the nature of things, while epistemology has to do with how we (think we) come to have knowledge of the nature of things. Ontological claims are inherently metaphysical, while epistemological claims are not (they can be purely pragmatic, or derived from principles of logic). To complicate things further, several philosophers (though by no means all!) from the mid-20th century on began to agree that metaphysical statements ought to be evaluated in terms of our epistemic access to the world, meaning that what we can know empirically should constrain how we think metaphysically (in biology, see this recent project, involving my colleague and friend Alan Love).

In terms of emergence and reductionism, my sense of the literature is that most philosophers nowadays are in agreement with most scientists: they reject ontological emergence and accept ontological reductionism. What this means is that the standard metaphysical position is that there are no true (or “strong”) emergent phenomena, only phenomena that cannot currently (or even ever) be described or understood in terms of fundamental physics, and yet are, in fact, only complex manifestations of the microscopic world as understood by fundamental physics.

A simple way to make sense of this idea is to deploy the concept of supervenience: in philosophy, a property A supervenes on another one, B, just in case A cannot change unless there is a change in B. For instance, if the total amount of money in my pocket is $20, that total cannot change unless the collection of coins and/or notes in said pocket changes in value (merely exchanging a dollar bill for four 25-cent coins alters the composition, but leaves the total untouched). Analogously, higher-order phenomena in physics or biology supervene on micro-physical phenomena just in case the only way to change the former is to change the latter.
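The asymmetry of the relation is easy to make concrete with a toy illustration of my own devising (not from the paper): the same total can be realized by different collections of coins and notes, but the total cannot differ unless the collections do.

```python
from collections import Counter

# Two ways of realizing the same $20: the total supervenes on the
# composition of coins/notes, but is multiply realizable by it.
pocket_a = [10, 10]        # two ten-dollar bills
pocket_b = [5, 5, 5, 5]    # four five-dollar bills

assert sum(pocket_a) == sum(pocket_b) == 20    # same supervenient property
assert Counter(pocket_a) != Counter(pocket_b)  # different subvenient base

# The converse is what supervenience requires: if the totals differ,
# the compositions must differ too.
pocket_c = [10, 5, 1]
assert sum(pocket_c) != sum(pocket_a)
assert Counter(pocket_c) != Counter(pocket_a)
```

The first pair of assertions captures multiple realizability; the second pair captures the "no change in A without a change in B" direction that defines supervenience.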

What about epistemological emergence, then? T. O’Connor helpfully describes two types of the latter, which he labels predictive and irreducible-pattern. Predictive emergence is the idea that in practice it is not possible to predict the features of a complex system in terms of its constituent parts, even if one were to know all the laws governing the behavior of said parts. Irreducible-pattern emergentists, instead, maintain that the problem is conceptual in nature, i.e., that the lower-level laws simply do not provide us with the tools to deal with higher-level phenomena.

As O’Connor acknowledges, the distinction between predictive and irreducible-pattern views of epistemic emergence is not sharp, but it does draw attention to the fact that emergent phenomena present both pragmatic and conceptual issues for the practicing scientist and aspiring reductionist. It is not just, for instance, that it would be too computationally cumbersome to develop a quantum mechanical theory of economics (the predictive issue), it is that one would not know where to start with the task of deploying the tools of quantum mechanics (indeterminacy principle, non-locality, etc.) to somehow account for the phenomena studied by economists (relation between supply and demand, boom-and-bust cycles, etc.). It turns out, then, that one does not need to be an ontological emergentist in order to firmly reject a “greedy” reductionist program in biology or the social sciences.

The next section of the paper discusses in some detail two examples of weak emergence in biology: so-called N-K networks, introduced by Stuart Kauffman to account for non-linear, non-additive gene-gene interactions; and my preferred field of study, genotype-environment interactions. I direct interested readers to the full paper for the details.
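To give a flavor of the first example, here is a minimal sketch of an NK-style fitness function (my own toy construction, not code from the paper): each of N genes contributes a random fitness component that depends on its own state and on those of K neighbors, so flipping a single gene perturbs several contributions at once — the epistasis that makes the resulting fitness landscape rugged and non-additive.

```python
import random

def nk_fitness(N, K, seed=0):
    """Build a random NK-style fitness function: each of N genes contributes
    a component that depends on its own state plus K neighboring genes."""
    rng = random.Random(seed)
    # One lazily filled lookup table per gene, keyed by its (K+1)-bit context.
    tables = [{} for _ in range(N)]

    def fitness(genotype):
        total = 0.0
        for i in range(N):
            # The gene's context: itself and its K right-hand neighbors (wrapping).
            context = tuple(genotype[(i + j) % N] for j in range(K + 1))
            if context not in tables[i]:
                tables[i][context] = rng.random()
            total += tables[i][context]
        return total / N  # mean contribution, always between 0 and 1

    return fitness

f = nk_fitness(N=8, K=2)
g1 = (0,) * 8
g2 = (1,) + (0,) * 7
# With K > 0, flipping one gene changes the contexts of K + 1 genes at once.
print(f(g1), f(g2))
```

With K = 0 the contributions are independent and the landscape is smooth; as K grows toward N − 1, the interactions dominate and the landscape becomes maximally rugged.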

In the end, a good number of scientists are understandably wary of the notion of emergence, for the simple reason that it sounds a bit too mystical and woolly. Of course, if emergence turns out to be an ontological reality, then these scientists would simply be mistaken and would have to accept a new metaphysics. However, even if emergence is only an epistemic phenomenon, there are good reasons to take it seriously, for instance because it points toward current methodological or theoretical deficiencies that make straightforward reductionist accounts unfeasible in practice, if not in principle.

In order for more scientists to take emergence seriously we need a coherent account of why we see emergent phenomena to begin with. One such account has been provided recently by Brian Johnson, and it is worth considering briefly. I am not suggesting that Johnson is necessarily correct, or that his explanation is the only one on the table. But it represents a good example of the contribution that philosophy of science (in this case, actually done by a scientist) can give to the way in which scientists themselves think of a given issue. Besides, Johnson may very well turn out to be exactly right.

Johnson’s basic idea is simple: (at least some kinds of) emergent properties are the result of a large number of interactions among parts of a complex system, all going on simultaneously in time and space. In order to grasp emergent outcomes, our brains would need to be able to think in parallel at the conscious level (parallel unconscious thinking does occur, but it yields an “intuitive,” not rational, grasp of phenomena). Since the human brain is not capable of parallel conscious processing of information, we are faced with the impossibility of reasoning our way through the mechanics of emergence.

How do we know that the human brain cannot do parallel processing consciously? There are several reasons to think so, but Johnson provides a simple little exercise in figure 1 of his paper (which is available in full here), and it is worth trying it out to see how difficult that sort of thinking actually is, and how ill-suited we are to carrying it out. (The exercise involves summing up numbers, first on a single row — which is easy to do — then on multiple rows, which becomes immediately overwhelming.)

Interestingly, Johnson’s example of an emergent property that is not mysterious, and yet that we cannot cognitively deal with, is the cellular automaton. Johnson’s figure 2 presents a standard cellular automaton, and he argues that we cannot predict the behavior of its cells because our brains cannot process in parallel the various simple rules that generate that behavior. There is no magic here: we designed the rules, and we can check — time instant by time instant — that the behavior of the automaton is, in fact, the result of the application of those rules. But we cannot help being baffled by the complex and elegant pattern “emerging” from the massively parallel deployment of the same rules. Analogously, there may be no mystery in, say, the emergence of the biological property of robustness from the interactions going on in genetic networks, or the emergence of phenotypes during development (save, of course, for the — remote, at the moment — possibility that some of these behaviors may be ontologically, not just epistemically, emergent).
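The point is easy to reproduce at home. The following sketch (my own, not Johnson’s figure 2; it uses the elementary Rule 110 automaton as an illustration) updates each cell from a trivially simple three-cell rule, yet the printed pattern is hard to anticipate without actually running the rule step by step:

```python
# A minimal elementary cellular automaton (Rule 110 as an example):
# each cell's next state depends only on itself and its two neighbors,
# yet the global pattern resists prediction by inspection.

def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        # Read the 3-cell neighborhood (wrapping at the edges).
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right
        # The rule number's binary digits encode the next state
        # for each of the 8 possible neighborhoods.
        out.append((rule >> idx) & 1)
    return out

cells = [0] * 31
cells[15] = 1  # a single "on" cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, rule=110)
```

Every individual update is a one-line table lookup that anyone can verify by hand; it is only the simultaneous application of the rule across all cells, row after row, that outruns conscious step-by-step reasoning.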

If Johnson is correct, then emergence is a necessary concept to deploy across scientific disciplines for eminently practical reasons, any time there is a mismatch between the degree of complexity and interactivity of the world we try to comprehend and the capacities of the brains with which we try to comprehend it. Nothing spooky or mysterious about it: just the natural result of the fact that brains that evolved to solve Pleistocene problems cannot compute in the way in which cellular automata, and countless other phenomena, would have to be computed in order to be deeply grasped.