
Why machine-information metaphors are bad for science education, part I: biological machines and intelligent design

[Figure: the bacterial flagellum, as often represented in biology education]

Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of ‘‘blueprints’’ for the construction of organisms. Likewise, cells are often characterized as ‘‘factories’’ and organisms themselves become analogous to machines. Predictably, modern proponents of the so-called theory of Intelligent Design, the latest incarnation of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as ‘‘irreducible complexity’’ and on flawed analogies between living cells and mechanical factories.

In reality, the living organism = machine analogy was already criticized by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do.

My friend and collaborator Maarten Boudry and I wrote an article several years ago in which we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. We argued that dropping the blueprint and similar metaphors would improve both the science of biology and its understanding by the general public.

We have discussed this topic twice in the last couple of years, once on the occasion of another paper with Maarten, on why machine metaphors in biology are misleading; more recently because of a paper I wrote about genes as blueprints; the current entry completes the trilogy, so to speak. In part I, here, I will present what Maarten and I had to say about the origin of machine-information metaphors in biology, as well as its questionable use in science education. In part II, next week, I’ll talk about the search for new and better metaphors in science and science education. Interested readers are referred to the original paper for references, as well as for a discussion of the misuse of machine-information metaphors in actual biological research (i.e., not just for educational purposes).

When delving into unknown territory, scientists have often naturally relied on their experiences in more familiar domains to make sense of what they encounter. In the early days of the scientific revolution, mechanical metaphors proved to be a powerful instrument to get a grip on new discoveries about the living world and the universe at large, and we can trace back the emergence of machine metaphors at least to the Middle Ages, when new achievements of technology had a profound cultural influence and captured the collective imagination. Against this background of technological innovation, it is not surprising that the pioneers of anatomy and physiology relied on the metaphor of the animal body as a complicated piece of machinery to make sense of their discoveries. The mechanical language provided a richness of meaning and allowed them to structure the new phenomena in terms of familiar experiences. For example, the image of the human heart as a pump with intricate mechanical components played an important role in William Harvey’s discoveries about blood circulation.

In the course of the 17th century, a new philosophy of nature became prominent that developed a conception of the universe in purely mechanical terms. According to this mechanical philosophy, which was developed by thinkers like René Descartes, Pierre Gassendi and Robert Boyle, the phenomena of nature can be understood purely in terms of mechanical interactions of inert matter. This mechanization of nature proved an important driving force behind the Scientific Revolution, and at the end of the 17th century culminated in Newton’s theory of motion. Newton’s description of planetary orbits following the fixed laws of gravity conveyed an image of a clockwork universe set in motion by an intelligent First Cause. In fact, that was exactly how Newton conceived the universe and its relation to the Creator. For Newton and many of his contemporaries, the importance of the mechanical conception of nature was greater than the mere term ‘metaphor’ would suggest, as the development of mechanistic philosophy was itself largely inspired by religious motivations; indeed, the very employment of machine metaphors invited theological speculation.

In the second part of the 17th century, the mechanical pictures of living organisms and of the cosmos at large converged into an intellectual tradition where theology and science were intimately intertwined: natural theology. The most famous representative of this tradition was William Paley, whose work Natural Theology, or Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature (1802) made a deep impression on the young Charles Darwin. As the title of the book makes clear, Paley and the natural theologians conceived of Nature as a complicated machinery of intricate wheels within wheels, in which every organism has its proper place and is adapted to its environment. According to Paley, the contrivance and usefulness of parts exhibited by living organisms attests to the intelligence and providence of a benevolent Creator. This so-called ‘design argument’ already had a long intellectual pedigree, dating back to Plato, Cicero and Thomas Aquinas, but its most famous formulation is found in the first chapter of Natural Theology, in which Paley famously relies on the analogy between living organisms and a pocket watch to support his design inference.

While Darwin was the one who gave the most decisive blow to the design argument by suggesting a natural explanation for adaptive complexity in the living world, many philosophers would agree that David Hume foreshadowed its demise, by exposing several problems with the central analogy. In his Dialogues Concerning Natural Religion (1779), which actually predates Paley’s magnum opus by more than 50 years, we find a discussion of the design argument among Philo, the skeptical character that voices Hume’s ideas, Demea, the orthodox religious believer, and Cleanthes, the advocate of natural theology.

After Cleanthes has set out the design argument in terms foreshadowing Paley’s analogy of the watch, Philo objects that it is dangerous to derive conclusions about the whole of the universe on the basis of a spurious analogy with one of its parts. Given that our experience with design is limited to human artifacts only, we have to proceed with great caution, and it would be presumptuous to take so minute and select a principle as the human mind as the model for the origin of the whole universe. Hume realized that, at least in some cases, appearances of intelligent design can be deceptive.

In contemplating that ‘‘many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out’’, Hume even comes close to Darwin’s crucial insight about the power of natural selection. Although Hume does not deny that we can discern similarities between nature and human artifacts, he warns us that the analogy is also defective in several respects. And if the effects are not sufficiently similar, conclusions about similar causes are premature. To illustrate this, Philo proposes another possible cosmogony on the basis of the analogy between the world and an animal:

“A continual circulation of matter in [the universe] produces no disorder; a continual waste in every part is incessantly repaired: The closest sympathy is perceived throughout the entire system: And each part or member, in performing its proper offices, operates both to its own preservation and to that of the whole. The world, therefore, I infer, is an animal.” (Hume 1779, p. 39)

In The Origin of Species, Charles Darwin (1859) finally proposed a natural explanation for the phenomenon that inspired Paley but failed to convince Hume. Although the design argument is still of interest to philosophers and historians of science, it has been widely discarded in the scientific community. However, the analogy on which Paley based his inference seems to be alive and well, not only in the minds of creationists and ID proponents, but also in the writings of science popularizers and educators. Many scientists have actually argued that Paley at least offered an incisive formulation of the problem as there is indeed a hard-to-shake intuition of contrivance and intelligent design in nature. As one of the most ardent defenders and popularizers of evolutionary theory, Richard Dawkins, put it, ‘‘Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Adaptive complexity, then, is still regarded as something that requires a special explanation.

In textbooks, science educators have presented the comparison of living organisms and man-made machines not just as a superficial analogy, but have carried it out to a considerable level of detail. For example, the cell has been described as a miniature factory, complete with assembly lines, messengers, transport vehicles, etc. Consider the following quote from Bruce Alberts, molecular biologist and former president of the National Academy of Sciences:

“The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. Given the ubiquity of protein machines in biology, we should be seriously attempting a comparative analysis of all of the known machines, with the aim of classifying them into types and deriving some general principles for future analyses. Some of the methodologies that have been derived by the engineers who analyze the machines of our common experience are likely to be relevant.” (Alberts 1998, p. 291)

Creationists and their modern heirs of the Intelligent Design movement have been eager to exploit mechanical metaphors for their own purposes. For example, Bruce Alberts’ description of the living cell as a factory has been approvingly quoted by both Michael Behe and William Dembski, two leading figures in the ID movement. For ID proponents, of course, these are not metaphors at all, but literal descriptions of the living world, harking back to Newton’s conception of the Universe as a clock-like device made by the Creator. The very fact that scientists rely on mechanical analogies to make sense of living systems, while disclaiming any literal interpretation, strengthens creationists in their misconception that scientists are “blinded” by a naturalistic prejudice. In the creationist textbook Of Pandas and People, which has been proposed by ID advocates as an alternative to standard biology textbooks in high school, we read that ‘‘Intelligent design […] locates the origin of new organisms in an immaterial cause: in a blueprint, a plan, a pattern, devised by an intelligent agent’’ (Davis et al. 1993, p. 14).

The analogy between living organisms and man-made machines has proven a persuasive rhetorical tool of the ID movement. In fact, for all the technical lingo and mathematical “demonstrations,” in much of their public presentations it is clear that ID theorists actually expect the analogies to do the argumentative work for them. In Darwin’s Black Box, Behe takes Alberts’ machine analogy to its extreme, describing the living cell as a complicated factory containing cargo-delivery systems, scanner machines, transportation systems and a library full of blueprints. Here is a typical instance of Behe’s reasoning:

“In the main area [cytoplasm] are many machines and machine parts; nuts, bolts, and wires float freely about. In this section reside many copies of what are called master machines [ribosomes], whose job it is to make other machines. They do this by reading the punch holes in a blueprint [DNA], grabbing nuts, bolts, and other parts that are floating by, and mechanically assembling the machine piece by piece.” (Behe 2006, pp. 104–5)

Behe’s favorite model of biochemical systems is a mechanical mousetrap, the familiar variant consisting of a wooden platform, a metal hammer, a spring etc. According to Behe, if any one of these components is missing, the mousetrap is no longer able to catch mice. He has termed this interlocking of parts ‘‘irreducible complexity’’ and thinks it characterizes typical biochemical systems. In other words, the mousetrap is to Behe what the well-designed pocket watch was for Paley. But whereas Paley can be excused on the grounds of the state of scientific knowledge in the 18th century, for Behe the situation is a little different. Modern biochemistry, nota bene Behe’s own discipline, has revealed that biochemical systems are not like mechanical artifacts at all. Moreover, even biological systems that are irreducibly complex under Behe’s definition pose no problem for evolution by natural selection, as has been shown in detail by people like cell biologist Ken Miller.

ID proponents have buttressed their analogies between living systems and mechanical contraptions with a lot of visual rhetoric as well. The flagellum of the bacterium E. coli, the hallmark of the ID movement, has been represented as a full-fledged outboard rotary motor, with a stator, drive shaft, fuel supply, etc. It features on the cover of Dembski’s book No Free Lunch, and has been used numerous times in presentations and online articles. The idea seems to be that if it looks designed, it has to be designed. But as Mark Perakh has documented in a paper published in 2008, ID supporters invariably use idealized and heavily stylized representations of the flagellum, in order to make it resemble a man-made contraption more closely. Another striking example of this visual rhetoric is a video by the Discovery Institute’s Stephen C. Meyer, which presents a computer-simulated — and again heavily stylized — journey inside the cell, and describes the biochemical processes in terms of ‘‘digital characters in a machine code,’’ ‘‘information-recognition devices,’’ and ‘‘mechanical assembly lines.’’ Meyer commented that evolutionists will now have a hard time dissuading the public, given that ‘‘the evidence for design literally unfolds before them.’’

Of course, the mere observation that creationists have seized on machine metaphors in biology does not suffice to demonstrate that these metaphors do not make scientific sense. However, the fact that they tend to do so systematically, using full-length quotes from respectable scientists, should make us wary of the possible dangers of misleading metaphors. If the rhetoric of the ID movement is demonstrably based on these mechanical analogies, it can be instructive to reexamine their scientific merits. In the paper, Maarten and I argue that the machine-information analogy has indeed influenced the way scientists themselves think about biological structure, function, and evolution. By analyzing the consequences of and reactions to this analogy in actual biological research, we show that its scientific merits are very weak, and that its place in modern biology has become questionable. What then? Stay tuned for part II, on the search for new and better metaphors…

Genotype-phenotype mapping and the genes as blueprint metaphor

One of my occasional series here at Footnotes to Plato is devoted to accessible (or so I hope) versions of some of my technical papers. The idea is to provide a taste of academic scholarship in philosophy of science, but in a form that can be read by more than a few dozen colleagues who specialize in the same exact area. So far, I’ve covered all suitable entries from 2013 to 2017. I don’t plan to go back to the very beginning of my career (for a good chunk of which I was publishing as an experimental biologist anyway), but only to 2010, the year after I moved to philosophy full time. By my reckoning, that means four papers left: on genotype-environment interactions and the “genes as blueprints” metaphor (2010), on the machine-information metaphor in biological research and education (2011), on the metaphor of adaptive landscapes (2012), and on paradigm shifts in evolutionary biology (also 2012). Yeah, that was my “metaphors” period. This entry is dedicated to the first paper mentioned, and over the next few months I’ll cover the remaining three.

The underlying question being considered here is: what is the relationship between genotypes and phenotypes? This question has marked the evolution of evolutionary theory ever since the rediscovery of Mendel’s work at the beginning of the twentieth century, which immediately generated an apparent conflict with the Darwinian view of gradual evolution. Famously, the answer proposed by the architects of the Modern Synthesis (i.e., the “standard model” in evolutionary biology from the 1940s on) is that genes determine phenotypes, as in the oft-cited metaphors of a “genetic blueprint” or a “genetic program.” This sort of answer bypasses the process of development, which is treated as an incidental black box with no direct causal relevance to the evolutionary process. Given this conceptual framework, it is no wonder that developmental biology was famously left out of the Modern Synthesis, and that it has (partially) re-emerged only recently within the so-called “evo-devo” approach.

In the paper, published in the Philosophical Transactions of the Royal Society (2010), I re-examine the question of the relationship between genotype and phenotype by going back to Alberch’s concept of a genotype-phenotype (G>P) “map” and examining what recent research tells us about actual G>P maps. It turns out that computational and empirical studies of three classes of systems (RNA folding, protein function and software development — though in this post I will talk only about the first and last cases) have yielded important generalizations about the problem, as well as novel insight into the evolutionary process more broadly. One of the consequences of these lines of research is that the blueprint metaphor is untenable and in fact positively misleading, and should be replaced by the concept of developmental encoding.

Back in 1991, Alberch introduced a metaphor differing from the standard blueprint view of genetic encoding, talking about a G>P “mapping function,” defined by a given parameter space and at least potentially amenable to mathematical description.

Alberch derived four general conclusions from his conceptualization of the G>P map: (i) the map is (much) more complex than a one-to-one relation between genotype and phenotype, which means that the same phenotype may be obtained from different combinations of genetic informational resources; (ii) the area in parameter space where a particular phenotype exists gives an indication of how stable that phenotype is likely to be; (iii) the parameter space is marked by “transformational boundaries,” i.e. areas where a small change in one or more developmental parameters will cause the transition from one phenotypic state to another; and (iv) the phenotypic stability of a given population will depend on which area of the parameter space it occupies, and in particular whether it is close to a transformational boundary or not.

Alberch’s famous example of a phenotypic transition that is amenable to be described according to his idea of parameter space and mapping function was the evolution of the number of digits in amphibians. In particular, he showed how salamanders tend to lose their fifth toe every time the digit reduction evolves, while anurans tend to lose their first digit. The difference between the two groups can be recreated experimentally by administration of an inhibitor of cell division, a result that Alberch interpreted as telling us that anurans and salamanders find themselves in different areas of the parameter space, and in particular that they are located near different transitional boundaries, so that every time the transition happens within one of the two groups it occurs by the same developmental means, but when the two groups are compared the transitions happen by different developmental routes.
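To make the idea of a mapping function and its transformational boundaries concrete, here is a toy sketch in Python (my own illustration, not Alberch’s actual model): a single continuous developmental parameter, which you can think of as a cell-proliferation rate, is mapped onto a discrete digit count through thresholds, so that lineages sitting in different regions of parameter space respond to the same small perturbation by crossing different boundaries.

```python
# Toy illustration (not Alberch's model): a G>P "mapping function" in which a
# continuous developmental parameter is converted into a discrete phenotype
# (digit number) by thresholds playing the role of transformational boundaries.

def digit_number(proliferation_rate, boundaries=(0.2, 0.4, 0.6, 0.8)):
    """Map a developmental parameter in [0, 1] to a digit count between 1 and 5."""
    return 1 + sum(proliferation_rate > b for b in boundaries)

# Two hypothetical lineages occupy different regions of parameter space, so the
# same small downward perturbation pushes them across different boundaries:
for label, rate in [("lineage A", 0.85), ("lineage B", 0.45)]:
    print(label, digit_number(rate), "->", digit_number(rate - 0.1))
```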

[Figure: a salamander of the genus Ambystoma, a classic model system for evo-devo studies]

A good starting point to tackle the G>P mapping problem is to start simple, and the simplest place to start is the growing literature on RNA folding. RNA folding is relatively well understood at a chemical-physical level, with increasingly sophisticated computer models capable of predicting the three-dimensional folding of a linear sequence of nucleotides based on thermodynamic considerations. Moreover, it is relatively straightforward to verify such predictions experimentally for a subset of simulated folding patterns, and researchers can even carry out competition experiments among RNA molecules for a given catalytic function.

As far as the G>P problem is concerned, the step from genotype to phenotype is in this case as short as possible in any biological system, and indeed it probably reflects, to some extent, the ancestral situation in the RNA world hypothesized within the context of the origin of life problem. RNA folding is therefore both an extremely suitable system to begin examining G>P mapping and one that may yield important clues to how mapping functions historically got started and became more complex and indirect. A crucial advantage of RNA folding studies of G>P mapping is that the fitness function of the molecules is not assumed arbitrarily to follow a particular statistical distribution, but can be studied empirically. In other words, the connections between genotype and phenotype on one hand and between phenotype and fitness on the other hand are explicit, relatively simple and biologically meaningful.

Several important generalizations have emerged from studies of RNA folding, generalizations that are crucial to our understanding of phenotypic evolution beyond the relatively simple framework offered by the Modern Synthesis. Consider, for instance, the study of mutational networks, i.e. of the structure of the genotypic landscape in terms of one-mutation steps surrounding a given focal genotype. The idea goes back to Kauffman & Levin’s work on genotypic landscapes, back in 1987. The problem to be tackled is how evolution explores phenotypic landscapes by moving across a corresponding genotypic landscape in a non-saltatory manner, in accordance with standard Darwinian theory. The solution requires an understanding of the connection between the genotypic and phenotypic landscapes, and in the case of RNA folding one can actually computationally explore the totality of both landscapes for a given short sequence length, or statistically sample the properties of landscapes defined by longer sequences.

For instance, all 30-nucleotide long binary RNA molecules produce about one billion unique sequences, a bewildering genotypic space. This space, however, corresponds to only 220,000 unique folding shapes in the Guanine/Uracil nucleotide landscape and a mere 1,000 shapes in the Adenine/Uracil landscape, the two situations that have been extensively studied. This is a spectacular example of what biologists call “degeneracy” (i.e., redundancy of sequence coding), which in turn is a fundamental concept underlying the neutral theory of molecular evolution — according to which most (but, crucially, not all) mutations are selectively neutral at the molecular level.
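To get a feel for this degeneracy, here is a minimal sketch, assuming the ViennaRNA package and its Python bindings (the RNA module) are installed: it samples random 30-nucleotide G/U sequences and counts how many distinct minimum-free-energy shapes they fold into.

```python
# Minimal sketch of genotype->phenotype degeneracy in RNA folding.
# Assumes the ViennaRNA Python bindings are available as the `RNA` module.
import random
import RNA

random.seed(0)
n_samples = 5000          # a tiny sample of the ~10^9 possible binary sequences
shapes = set()

for _ in range(n_samples):
    seq = "".join(random.choice("GU") for _ in range(30))
    structure, mfe = RNA.fold(seq)   # minimum-free-energy secondary structure
    shapes.add(structure)

print(f"{n_samples} sampled genotypes folded into {len(shapes)} distinct shapes")
```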

Genotypes on these landscapes are connected by mutational networks whose properties can then be explored. An interesting result is that the distribution of phenotypes on RNA mutational networks follows regular patterns, characterized by a few abundant RNA shapes and a large number of rare ones. The structure of the landscape is such that evolution can explore most or all of the common structures by one-step mutations that preserve structure while moving the population on a neutral path of constant fitness, until it bumps into a novel phenotype with higher fitness. Interestingly, most genotypes turn out to be located within a few mutational steps from most of the common phenotypes in the landscape, making it predictable that such phenotypes will in fact be found by natural selection in a relatively short period of time. However, the connectivity on the landscape is always asymmetrical, which means that which particular phenotypes will be reached more easily starting from a given genotype will be a matter of historical contingency.
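The mutational-network idea can be probed just as directly. The sketch below (again assuming the ViennaRNA bindings, and using an arbitrary focal sequence of my own choosing) asks how many of the one-mutation neighbors of a genotype preserve its folded shape, i.e., how many of its neighbors lie on the same neutral network.

```python
# Sketch: count the neutral one-mutation neighbors of a focal G/U genotype.
# Assumes the ViennaRNA Python bindings (`RNA` module); the focal sequence is
# arbitrary and chosen purely for illustration.
import RNA

focal = "GGGGUUUUGGGGUUUUGGGGUUUUGGGGUU"
focal_shape, _ = RNA.fold(focal)

neutral = 0
for i in range(len(focal)):
    # flip G<->U at position i
    mutant = focal[:i] + ("U" if focal[i] == "G" else "G") + focal[i + 1:]
    shape, _ = RNA.fold(mutant)
    if shape == focal_shape:
        neutral += 1

print(f"{neutral} of {len(focal)} one-step mutants preserve the focal shape")
```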

Research on the general properties of RNA folding evolution has shown that the G>P function is such that small movements in genotypic space do not necessarily correspond to small movements in phenotypic space, a rather flagrant contradiction of one of the standard assumptions of the Modern Synthesis. In particular, if we consider a genotype G with a given phenotype P, it is likely that G is connected to a one-step neighbor associated with a phenotype which is not structurally similar to P. This brings us to a rather surprising general behavior that emerges from studies of RNA folding (as well as of protein function, micro-organisms and simulated systems — as discussed in the rest of the full paper), a true “punctuated equilibrium” pattern of evolution that does not result from the usual suspects in terms of underlying causes.

Punctuated equilibrium, of course, was one of the early challenges to the Modern Synthesis brought about by palaeontologists Eldredge & Gould back in 1972. The standard explanation for the fossil record pattern of stasis punctuated by occasional rapid shifts in phenotype is that of stabilizing selection. Simulations of RNA folding evolution display the same general pattern that one sees in the fossil record, obviously at a much smaller temporal scale. The mechanism, however, has nothing to do with “stabilizing selection” (a rather vague concept in itself, really simply a way to describe a statistical pattern of constant mean and reduced variance). Rather, the punctuated evolution results from the fact that the population divides itself into smaller chunks, each of which explores a portion of the largely neutral genotypic landscape. From time to time, a population encounters a new phenotypic optimum and “jumps” on it quickly. Stasis, in this context, is then not the result of selection for a constant phenotype, but rather of the largely neutral structure of the landscape, which allows populations to wander around until they find a new functional phenotype and jump into a nearby neutral network, only to resume their evolutionary wanderings.
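A toy simulation, offered purely as an illustration rather than as anything from the original literature, captures the flavor of this dynamic: if fitness depends only on a coarse feature of the genotype, most mutations are neutral, and an evolving lineage shows long stretches of stasis separated by abrupt jumps.

```python
# Toy illustration of punctuated dynamics on a largely neutral landscape.
# Fitness depends only on a coarse "shape" (number of 1s, binned), so most
# accepted mutations are neutral and fitness changes only in occasional jumps.
import random

random.seed(1)
L = 40
genotype = [0] * L

def fitness(g):
    return sum(g) // 10     # coarse phenotype: everything within a bin is neutral

current = fitness(genotype)
trajectory = [current]
for _ in range(2000):
    mutant = genotype[:]
    mutant[random.randrange(L)] ^= 1        # flip one random site
    if fitness(mutant) >= current:          # accept neutral and beneficial moves
        genotype, current = mutant, fitness(mutant)
    trajectory.append(current)

jumps = [t for t in range(1, len(trajectory)) if trajectory[t] > trajectory[t - 1]]
print("fitness jumps occurred at steps:", jumps)   # long plateaus, rare jumps
```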

RNA-like systems can also be a model for the evolution of ecological communities, thereby beginning to forge a still surprisingly lacking direct link between ecology and evolutionary biology. For instance, Takeuchi & Hogeweg, in 2008, showed that a population of replicators originally made of just one genotype evolves into a complex system characterized by four functionally distinct groups of genotypes, which the authors call “species.” Interestingly, the model also evolved “parasites” which not only were able to coexist with catalytic molecules, but in turn were themselves catalysts for the evolution of further complexity in the system. While Takeuchi & Hogeweg’s definition of species in this context may appear artificial, the groups of genotypes they identified are in fact both ecologically (functionally) distinct and genealogically related to each other, and a functional-genealogical concept is certainly one of the viable contenders as a definition of biological species.

The examples drawn from research on RNA folding (as well as those not discussed here, on protein sequence space) help bring to the forefront a major limitation of the Modern Synthesis: the almost utter disregard for developmental biology.

Notoriously, that field was essentially left out of the synthesis of the 1940s that gave us the current structure of evolutionary theory. Part of the reason for this is that it has never been conceptually clear what exactly the role of development in evolution is. Mayr, a founding father of the Modern Synthesis, famously made a distinction — harking back to Aristotle — between proximate and ultimate causes in biology, with the genetic bases of phenotypes counting as proximate causes and the evolutionary processes that brought those phenotypes about considered as ultimate causes (see this post for an in-depth discussion). Even if one accepts Mayr’s framework, however, it is not clear whether development should be considered a proximate or an ultimate cause.

The onset of evo-devo and calls for an Extended Synthesis in biology (see this previous post) have reopened that question. The answer is emerging from research on the structure of G>P maps, and in particular from a parallel literature in computational science that attempts to exploit the characteristics of biological development to produce a new generation of “evolvable hardware.” The picture that is forming out of these efforts is that development is a necessary link between proximate and ultimate causality, and that in a sense the G>P map is whatever specific type of “developmental encoding” (as opposed to the classic genetic encoding) a given species of organism uses to produce environmentally apt phenotypes. Developmental encoding refers to situations where information encodes not a detailed description of the full system (as in the blueprint metaphor), but rather the local steps necessary to build the system through a developmental process.

Several authors have pointed out the limitations of both direct genetic encoding of “information” and of the blueprint metaphor that results from it. Ciliberti and collaborators, in a 2007 paper, have for instance referred to human-engineered systems as being characterized by “brittleness,” i.e. the unfortunate property that if one component ceases functioning properly, there is a high probability that the whole system will unravel. This is most clearly not what happens with biological organisms, which means that the oft-made analogy (ironically, by both some biologists and proposers of intelligent design creationism) between living organisms and “machines” or “programs” is profoundly misleading. Along similar lines, Stanley, also in 2007, reiterated that the amount of direct genetic information present in, say, the human genome (now estimated to be around 30,000 protein-coding genes) is orders of magnitude below what would be necessary to actually specify the spatial location, functionality and connectivity among the trillions of cells that make up a human brain. The answer must be in the local deployment of information that is possible through developmental processes, where the “instructions” can be used in a way that is sensitive (and therefore capable of adjusting) to both the internal and external environments.

According to Hartmann and colleagues (in another 2007 paper), artificial development is increasingly being used to solve computational problems outside of biology by direct analogy with biological systems. The results indicate that replacing direct genetic encoding with indirect developmental encoding dramatically reduces the search space for evolutionary algorithms. Moreover, the resulting systems are less complex and yet more robust (“fault-tolerant” in engineering jargon) than those obtained by evolving standard genetic algorithms. Another way to put the point is that direct genetic encoding is limited by the fact that the length of the genetic string grows proportionally to the complexity of the phenotype, thereby quickly encountering severe limitations in search space. With developmental encoding, instead, the evolving system can take advantage of a small number of genetic instructions mapping to a large number of phenotypic outcomes, because those outcomes are determined by the (local) interactions among parts of the system and by interactions of the system with the environment.
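The contrast can be made vivid with a minimal sketch (my own toy example, not any published model): a “developmental” genome consisting of a seed plus a couple of local rewrite rules specifies a phenotype far larger than the genome itself, whereas a direct encoding would have to spell out every cell.

```python
# Toy contrast between direct and developmental (generative) encoding.
# The "phenotype" is a string of cell states; the developmental genome is a
# seed plus local rewrite rules applied recursively, so a handful of "genes"
# specifies a structure orders of magnitude larger than the genome itself.

def develop(seed, rules, generations):
    """Expand a small genome (seed + rewrite rules) into a phenotype."""
    state = seed
    for _ in range(generations):
        state = "".join(rules.get(cell, cell) for cell in state)
    return state

rules = {"A": "AB", "B": "AA"}                 # two local rules
phenotype = develop("A", rules, generations=12)

genome_size = 1 + sum(len(k) + len(v) for k, v in rules.items())
print("developmental genome size:", genome_size)
print("phenotype size (a direct genome would need this many symbols):", len(phenotype))
```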

Simulations comparing the evolution of standard genetic systems of information encoding with systems based on developmental encoding clearly show that genetic systems reach a maximum level of fitness for low levels of complexity; at higher levels of complexity developmental encoding “scales” much better, with developmental systems being capable of achieving high fitness more quickly and efficiently. Moreover, developmental encoding leads to the artificial evolution of systems that are both significantly more robust to internal disruptions and significantly more flexible in response to external environmental conditions than standard genetic systems. This is an interesting situation whereby a research area parallel to evolutionary biology, computational science, draws inspiration from the actual structure of biological systems and ends up providing a theoretical underpinning for why, in fact, those biological systems are structured the way they are.

In conclusion, the conceptual and mathematical foundations of evolutionary theory are evolving from a simple beginning as bean-bag genetics, Mayr’s derogatory term for population genetics theory, to a sophisticated patchwork that draws from population genetics, quantitative genetics, bioinformatics and computational science. Medawar & Medawar, in 1983, famously said that “genetics proposes, epigenetics disposes,” where epigenetics here means the whole of developmental processes, a way to highlight that evolutionary theory finally needs a good conceptual understanding of development, and not just of genetics. As I have argued in the paper referenced here, such a broadened theoretical framework cannot come from population genetics alone, but benefits from the input of computational research both on simple biological examples of G>P maps, such as those underlying RNA folding and protein function, and from consideration of broader issues such as the properties of large neutral networks in genotypic space (see full paper) and of developmental versus genetic-encoding systems.

On the different ways of doing theory in biology

‘Theoretical biology’ is a surprisingly heterogeneous field, partly because it encompasses ‘‘doing theory’’ across disciplines as diverse as molecular biology, systematics, ecology, and evolutionary biology. Moreover, it is done in a stunning variety of different ways, using anything from formal analytical models to computer simulations, from graphic representations to verbal arguments. A few years ago I co-organized a workshop on this topic at the Konrad Lorenz Institute for theoretical biology in Vienna, and then published an edited volume of the journal Biological Theory collecting all contributions.

In my paper I surveyed a number of aspects of what it means to do theoretical biology, and how they compare with the allegedly much more restricted sense of theory in the physical sciences. I also tackled a somewhat recent trend toward the presentation of all-encompassing theories in the biological sciences, from general theories of ecology to an attempt to provide a conceptual framework for the entire set of biological disciplines. I also discussed the roles played by philosophers of science in criticizing and shaping biological theorizing. The full paper is available for download here (free), and the edited volume can be found here (articles behind paywall). Let me, however, summarize my main points to facilitate a general discussion.

First, I discussed the issue of alleged laws in biology. If there is anything that characterizes physics as a science it is its unending quest for universal laws, from Newton’s mechanics to the current (and highly controversial) string theory. This is the case despite the fact that influential philosophers of science like van Fraassen and Giere maintain that laws play a marginal and mostly didactic role, even in physics. Regardless, it is not surprising that discussions of general laws in biology are a recurrent staple of the literature and—interestingly—one that provides a good example of positive interactions between theoretically inclined biologists and philosophers of science.

In a number of cases authors draw a direct parallel between physical laws and proposed biological equivalents. For instance, M. Elgin argues that the ‘‘epistemic functions of a priori biological laws in biology are the same as those of empirical laws in physics.’’ Elgin begins by acknowledging the (almost) universal agreement among philosophers who subscribe to the concept of laws that these must be both universal and empirical in nature, though he hastens to say that these conditions are necessary but not sufficient to distinguish laws from ‘‘accidental’’ generalizations. He then discusses Elliot Sober’s proposal that the Hardy–Weinberg principle in population genetics is an example of a biological law, even though it is universal but not empirical.

There are several problems with this proposal, chiefly the fact that Hardy–Weinberg cannot meaningfully be thought of as a ‘‘zero force law’’ analogous to, say, the law of inertia (as Elgin suggests), as well as the above mentioned lack of empirical content. Jonathan Kaplan and I, back in 2006, discussed in detail why the various evolutionary mechanisms that can cause a population to deviate from Hardy–Weinberg equilibrium are not conceptually equivalent, and should not be thought of as ‘‘forces’’ characterized by intensity and direction. Moreover, it simply seems strange to suggest that a scientific law can have no empirical content and instead simply be true a priori (as Hardy–Weinberg surely is, mathematically speaking). This risks embarking philosophy of science down the slippery slope of considering logical and mathematical principles themselves as ‘‘laws,’’ a usage that clearly does not accord with scientific practice at all. Apparently, however, this point is not at all clear in the minds of some biologists, since it is possible to find statements like the following: ‘‘The global-optimum model is not so much a predictor of nature as a definition of nature. It must be true that a perfectly adapted organism leaves the most possible offspring!’’ (in a paper by Nonacs and Dill, published in 1993). Or: ‘‘The existence of a global-optimum point is a ‘deep axiom’: a tautology that guarantees logical consistency at the core of a theory’’ (by Stearns and Schmid-Hempel, in 1987). This is surely one area where more communication between theoretically minded biologists and philosophers of science would be welcome.
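To see why the Hardy–Weinberg principle is true a priori, all one needs is the algebra: the trivial check below shows that for any allele frequency p, one round of random mating yields genotype frequencies p², 2pq and q², and leaves p itself unchanged, with no empirical input whatsoever.

```python
# A trivial check that Hardy-Weinberg follows from the algebra alone: random
# mating at a single locus with allele frequencies p and q = 1 - p gives
# genotype frequencies p^2, 2pq, q^2 and does not change p.
for p in (0.1, 0.37, 0.5, 0.92):
    q = 1 - p
    AA, Aa, aa = p * p, 2 * p * q, q * q
    p_next = AA + 0.5 * Aa                    # frequency of allele A after mating
    assert abs(p_next - p) < 1e-12
    print(f"p = {p:.2f}: genotype frequencies {AA:.4f}, {Aa:.4f}, {aa:.4f}; p' = {p_next:.2f}")
```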

Besides appeals to Hardy–Weinberg as an example of biological laws, the next most popular area of discussion concerning this topic is perhaps the possible existence of laws in ecology. For instance, G.M. Mikkelson makes a case for moving ecology from an idiographic (historical) mode of explanation to a nomothetic (law-based) one. He maintains that—contrary to what he perceives as the practice among ecologists—generalizations should be interpreted in terms of law-like generalizations, because functional kinds (such as ‘‘predators’’) and structural kinds (like the various community types) correlate better than taxa (historical kinds) with fundamental ecological patterns and processes. As Mikkelson puts it, ‘‘Imagine being dropped at a random spot on the land surface of the Earth. Which would allow you to predict the density of plant species around you—that is, the number of species per 10,000 square kilometers—most precisely: knowing the climate, or knowing the landmass on which you stand? Answer: climate wins, hands down.’’ Well yes, but it is questionable whether such predictions are the result of ‘‘laws’’ in any way like those that physicists are after, and it is telling that Mikkelson is in fact cautious enough to talk about ‘‘law-like generalizations.’’

Interestingly, the issue of, shall we say, “physics envy” shows up explicitly in yet another author’s treatment of the issue of laws in ecology, D.R. Lockwood. In this case it is an ecologist who takes on the matter, and comes down rather negatively on the possibility of laws in his discipline. Lockwood discusses two frequent suggestions as examples of ecological laws: Malthusian growth and the logistic equation. He quickly finds them inadequate to the task, as they do not support counterfactuals, are not temporally universal, and in fact repeatedly fail empirical tests. In the end, Lockwood agrees with philosopher W.C. Wimsatt’s suggestion that ‘‘aggregative systems’’ (those typically studied by physics) do follow robust laws, while emergent systems (like those studied in biology) do not. This does not mean that biologists cannot generalize their empirical findings (within certain limits), and that such generalizations cannot be used to make reasonable predictions about the behavior of the systems of interest to them. And that, after all, is what actually matters.

If not laws, are there general theories in biology? Theodosius Dobzhansky famously said that ‘‘nothing in biology makes sense except in the light of evolution.’’ Adding that to Richard Dawkins’ quest for ‘‘universal Darwinism’’ and to Daniel Dennett’s contention that Darwinism is a ‘‘universal acid’’ of sorts that cuts across disciplines, extending the idea of Darwinian evolution well beyond biology itself, one would think that biologists have settled on their version of a theory of everything long ago. One would be surprised. A perusal of the recent literature shows quite a bit of activity in this department, again largely on the side of ecologists. I will briefly comment on one such attempt, referring the interested reader to two more case studies discussed in the paper.

Stephen Hubbell’s unified neutral theory of biodiversity and biogeography attempts to do precisely what its name implies: to propose a combined theoretical framework for biodiversity (measured by species–abundance curves) and biogeography (measured by species–area curves), where the ‘‘neutrality’’ consists in assuming that the differences among species that belong to the same trophic level within a given ecological community do not matter for the dynamics of that community. Hubbell’s theory draws from explicit parallels with the neutral theory of molecular evolution proposed by Motoo Kimura back in the late 1960s, and from the above mentioned Hardy–Weinberg equilibrium in population genetics.

The unified theory has generated a significant literature, including a number of critics and empirical tests. It is important to realize a couple of things, however: first, that the scope of the theory is crucially limited by the clause that it applies to species of similar trophic level within a given community, which makes it quite a bit narrower in scope than its name (and some of the discussion that has followed the publication of Hubbell’s book) might otherwise suggest. Moreover, the theory is notoriously difficult to test, because while it does make distinctive predictions when compared to, say, niche assembly theories (which are non-neutral), the predicted differences are very small, and easily lost in the noise characteristic of ecological data sets. This is not the place to get into an in-depth discussion of Hubbell’s theory, but I can hazard a prediction based on the similar history of the neutral theory of molecular evolution: in that case more than a decade of discussions led to the conclusion that a modified ‘‘quasi-neutral’’ theory was the best bet. Which basically means that stochastic as well as selective processes affect the outcome of evolution, just as it would be reasonable to expect.
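For readers who have not encountered the neutral theory’s basic mechanics, here is a toy zero-sum drift simulation in its spirit (a sketch only: the full theory also includes immigration from a metacommunity and speciation, both omitted here). All individuals are demographically identical, so species abundances wander, and species are lost, purely by drift.

```python
# Toy zero-sum ecological drift, in the spirit of Hubbell's neutral model
# (no immigration or speciation). At each step one random individual dies and
# is replaced by the offspring of another random individual.
import random
from collections import Counter

random.seed(2)
J = 200                                    # local community size
community = [i % 20 for i in range(J)]     # 20 species, initially equally common

for _ in range(20000):
    dead = random.randrange(J)
    parent = random.randrange(J)
    community[dead] = community[parent]

abundances = Counter(community)
print("species remaining after drift:", len(abundances))
print("rank-abundance distribution:", sorted(abundances.values(), reverse=True))
```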

My more general point in the paper was that even a cursory look at the literature allows one to distinguish four modalities for theoretical biology (though similar distinctions can also be found in, say, physics, especially if one considers the entire discipline, and not just specific subsets like particle physics). I refer to these as analytical modeling, statistical modeling, computer modeling, and conceptual analysis.

The classic example of analytical approaches in theoretical biology is represented by much of the body of works that makes up population genetics theory, beginning again with the Hardy–Weinberg principle and arriving at more recent advances such as coalescent theory. The basic approach here is to use mathematical formalism to arrive at analytical (i.e., precise, non-statistical) solutions of sets of equations describing the behavior of idealized populations of organisms.
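As a concrete example of this analytical style (a standard textbook result, included here purely as an illustration), consider the deterministic recursion for allele frequency change at a single diploid locus with two alleles under viability selection: the entire trajectory follows from the equations, with no simulation noise.

```python
# Standard single-locus, two-allele selection recursion (textbook population
# genetics), shown as an example of the analytical style of theorizing.

def next_p(p, w_AA, w_Aa, w_aa):
    """Allele frequency of A in the next generation under viability selection."""
    q = 1 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa   # mean fitness
    return (p * p * w_AA + p * q * w_Aa) / w_bar

# Directional selection favoring A, starting from a rare allele:
p = 0.01
for gen in range(201):
    if gen % 50 == 0:
        print(f"generation {gen}: p = {p:.4f}")
    p = next_p(p, w_AA=1.0, w_Aa=0.95, w_aa=0.90)
```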

The second general type of approach to biological theorizing is statistical in nature, beginning with Ronald Fisher’s famous ‘‘fundamental’’ theorem of natural selection, which was proposed as explicitly equivalent to one of the most solid pieces of theory in classical physics, the second principle of thermodynamics. Fisher laid the foundations for statistical genetics, which—when reconciled with the apparently discrepant Mendelian genetics—resulted in the Modern Synthesis of the 1940s, basically the still current standard model in evolutionary theory (but see this).
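Fisher’s theorem is commonly summarized in textbook form by the relation below (its precise interpretation remains a matter of debate among theoreticians, so this should be read as the usual shorthand rather than as Fisher’s own exact statement):

```latex
% Usual textbook shorthand for Fisher's fundamental theorem of natural selection:
% the (partial) change in mean fitness attributable to selection equals the
% additive genetic variance in fitness divided by the mean fitness.
\Delta \bar{w} \;=\; \frac{\operatorname{Var}_{A}(w)}{\bar{w}}
```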

The third way of doing theoretical biology is based on computer modeling, and it is in a sense a continuation of a long standing trend in the field: when things get too complicated even for a quantitative (i.e., statistical) genetic approach (let alone for a population genetic, analytical one), researchers move toward computationally intensive simulations of biological populations. There are many examples of this, some of which are continuous with the population-quantitative genetic type of issues just discussed, some having to do with broader questions concerning the evolution of evolutionary mechanisms (evolvability), and some concerning the relationship between structural biology and evolutionary dynamics.

The fourth and last modality of biological theorizing is based on the articulation of verbal-conceptual models, and obviously comes closest to what philosophers of biology themselves engage in when they analyze the concepts deployed by working biologists. Verbal-conceptual models in science have the reputation of being second-rate when compared to ‘rigorous’ mathematical modeling, even though of course both the original work by Darwin and much of the work done during the Modern Synthesis (except for the part that was explicitly population-genetic) fall into this category. Indeed, there seems to be a resurgence of this approach as a necessary complement to increasingly ‘‘experimental’’ mathematical treatments like the ones discussed above. Verbal-conceptual models include a broad category of biological theorizing that is particularly popular in molecular biology and biochemistry, where many papers present the results of complex experiments on the structure of genetic networks or biochemical pathways in the form of conceptual diagrams that are meant both to summarize the current status of knowledge and to provide food for thought for the development of new hypotheses and subsequent empirical tests.

My conclusions at the end of the full paper: the term ‘‘speculation’’ has a rather bad reputation in science, often associated with the much-dreaded accusation hurled at philosophers that they engage in ‘‘armchair theorizing.’’ But of course all theory is armchair speculation, and unless one thinks of mathematics in a special Platonic fashion, mathematical approaches are simply continuous with, and complementary to, all the other ways of doing theory in science.

Which brings me to the role of philosophy of science in all of this. I think that philosophy of science itself is characterized by different modalities, some of which have little to do with helping scientists and reflect instead on the logic of scientific theories, the epistemology underlying scientific claims, and so on. Indeed, philosophy of science itself is continuous with the history of science, since it would be difficult to attempt generalizations about the nature of science while focusing only on currently ongoing (and therefore far from being settled) scientific research.

To begin with, then, classic philosophy of science is concerned with the study of the logic of scientific discovery, as exemplified by the well-known names (even among scientists!) of Popper, Kuhn, and—to a lesser extent—Feyerabend and Lakatos (and, of course, a number of contemporary scholars, too many to mention). This type of philosophy of science is, arguably, of very little direct relevance to scientists themselves (except insofar as they are curious about how outsiders see and analyze their own activity). It is perhaps this sort of philosophizing that has brought a number of physicists (e.g., Steven Weinberg, Stephen Hawking, and Lawrence Krauss) to claim that ‘‘philosophy is dead’’ on the ground that, of late, it has not managed to solve any scientific problem with which physics is concerned. In so arguing, these scientists are committing an elementary category mistake prompted by a combination of intellectual hubris and a surprising amount of ignorance.

Philosophy of science, however, also functions in modalities that are (or ought to be) of more direct interest to practicing scientists themselves—whether the latter realize it or not. One such modality is represented by always necessary (if prone to annoy the targeted scientists) external criticism of socially relevant scientific claims (e.g., concerning race, gender, or the validity and application of certain types of medical research). I hesitate to use the label ‘‘science criticism’’ for this activity—even though it is arguably the most appropriate one available—because the term has been possibly irreparably tainted by much post-modern-inspired nonsense at the height of the so-called ‘‘science wars’’ of the 1990s. Regardless of what we end up calling it, it is the sort of philosophical inquiry that actually has practical implications, analogous to the better known ones usually associated with, say, business ethics, medical ethics, and bioethics, and one that should develop into an earnest dialogue between philosophers and scientists about the social implications of science itself.

The third and final modality for philosophy of science is in an even closer symbiotic relationship with science, one that seems to be welcomed by scientists themselves. Indeed, recent years have seen an increasing number of philosophers of physics, biology, and other disciplines who have been publishing conceptual papers on a large variety of topics that are hard to distinguish from theoretical physics, biology, etc. This is, I think, a much welcome development, and a small (but, hopefully, growing) number of scientists have started to collaborate with philosophers and/or to publish in philosophical journals, as the case of debates about laws in biology discussed above exemplifies. As I pointed out elsewhere, this is along the lines of what Hasok Chang called ‘‘the continuation of science by other means’’:

Complementary science [based on history and philosophy of science] is critical but not prescriptive in relation to specialist science. … Complementary science identifies scientific questions that are excluded by specialist science. … The primary aim of complementary science is not to tell specialist science what to do, but to do what specialist science is presently unable to do. It is a shadow discipline, whose boundaries change exactly so as to encompass whatever gets excluded in specialist science. (pp. 249–250)

From this perspective, then, philosophy of biology represents a fifth type of theoretical biology, albeit one that is practiced from the outside looking into the core discipline. Because of that, it is uniquely positioned, I think, to perceive the threads connecting the other four modalities, as well as the advantages and limitations of each. The idea, of course, is not to make philosophers the ultimate arbiters in theoretical biology (or in anything else, for that matter). Rather, it is a recognition that it does take some distance from the nitty gritty of the specialized literature to be able to perceive the broad picture that is necessary for the advancement of broadly construed theoretical biology. Accordingly, it is not by chance that when biologists themselves step back to contemplate a more inclusive level of analysis they begin to sound like philosophers. Perhaps, then, ongoing cross-fertilization—like the one fostered by that special issue of Biological Theory—will bring less distrust and more fruitful collaboration between the two disciplines.

Between holism and reductionism: a philosophical primer on emergence

A few years ago I was asked by the editor of the Biological Journal of the Linnean Society to write a short paper introducing biologists to the philosophical literature on emergence, given recurrent, and sometimes confused, talk of “emergence” in biology. The result was “Between holism and reductionism,” published in 2014, the full version of which you can download here.

‘Emergence’ is a controversial concept with a convoluted history, in both science and philosophy. It is therefore not surprising that it has been misused and vilified, as well as more often than not misunderstood. Typically, the idea of emergence is brought up by researchers who are — for one reason or another — unhappy with an ultra-reductionist scientific program, preferring instead some kind of holism or interactionism in the way they approach their research questions (think of the always current debates on gene–environment interactions). Just as surely, biologists who are embedded in reductionist programs are skeptical of emergence and similar ideas, regarding them as obscurantist and as leading only to never-ending and fruitless debates.

Given its convoluted and controversial history, it may pay to get a bit clearer about where the concept of emergence came from and how it evolved qua concept. Perhaps the first thing to appreciate is that emergentism is usually put forth as a third way between mechanism-reductionism and vitalism-holism. Emergentists — particularly in biology — reject quasi-mystical appeals to vital forces, but at the same time argue for the appearance of genuinely novel phenomena at various levels of complexity of matter. Of course, much hinges on what ‘novel’ means in this context, as well as on the idea that there are meaningful ‘levels’ of complexity in nature.

One of the earliest articulations of the concept of emergence in biology is due, perhaps surprisingly, to John Stuart Mill, who in 1843 wrote “To whatever degree we might imagine our knowledge of the properties of the several ingredients of a living body to be extended and perfected, it is certain that no mere summing up of the separate actions of those elements will ever amount to the action of the living body itself.” The idea being expressed here is familiar in both biology and statistics nowadays: we know of plenty of examples, from chemistry and biology, where the whole — as the saying goes — is more than (or, to be more accurate, different from) the sum of its parts.

Mill himself contrasted an example from physics and one from chemistry to make his point. In Newtonian physics (which was, of course, the dominant paradigm at the time), mechanical forces can be combined in an additive fashion by means of vector analysis. So, for instance, the total force being applied to a given object from two different sources is simply the sum of the individual forces thus applied. In chemistry, however, the result of a chemical reaction is pretty much never a simple sum of the reactants: we can combine, say, an acid and a base to obtain a salt (and water). The properties of the salt are in no meaningful sense simply the sum of the properties of the pre-reaction acid and base.

Arguably the next big step in the development of emergentism in philosophy of science was due to C.D. Broad (circa 1925), who framed the issue in terms of the independence (or lack thereof) of the “special” sciences (i.e., of anything that is not fundamental physics). This is very much still a topic of discussion today (see this treatment of the issue at Footnotes), usually pitting physicists on the side of a strong sense of the unity of science (and its potential theoretical reduction to physics) vs (some) philosophers and non-physical scientists who defend the notion of a fundamental disunity, or at least heterogeneity, of science.

Broad aligned himself with the disunity camp, on the basis that he thought there are some emergent “trans-ordinal” laws that are compatible with, and yet irreducible to, more fundamental principles of physics. However, he did not think these laws presented any particular reason for the mystically oriented to rejoice: “There is nothing, so far as I can see, mysterious or unscientific about a trans-ordinal law or about the notion of ultimate characteristics of a given order. A transordinal law is as good a law as any other; and, once it has been discovered, it can be used like any other to suggest experiments, to make predictions, and to give us practical control over external objects.”

A modern example of the disunity camp is provided by Nancy Cartwright’s work, for instance her analysis of the concept of natural laws. Cartwright sees laws as empirical generalizations with more or less broad (but, crucially, not universal) domains of application. Indeed, she maintains that the empirical evidence itself strongly suggests that laws cannot be both universal and true: the only circumstances in which we can verify a law of nature (say, Newtonian mechanics) with any precision are those in which we create artificial settings characterized by highly controlled conditions. In the real world, by contrast, laws only apply given a more or less large number of ceteris paribus conditions. And this holds even for physics, where scientists usually take the very idea of a law of nature to be uncontroversial. In biology things are even worse, given plenty of debates — among both philosophers and biologists — about whether we can sensibly talk about laws (as opposed, again, to mere empirical generalizations) in the first place.

In the paper I then move to make explicit a distinction that will be familiar to readers of Footnotes, but that needed to be brought to the fore in the context of the paper for the Biological Journal: we can think of emergent properties from either an ontological or an epistemological stance, although the two are not mutually exclusive.

Ontology, of course, has to do with the nature of things, while epistemology has to do with how we (think we) come to have knowledge of the nature of things. Ontological claims are inherently metaphysical, while epistemological claims are not (they can be purely pragmatic, or derived from principles of logic). To complicate things further, several philosophers (though by no means all!) from the mid-20th century on began to agree that metaphysical statements ought to be evaluated in terms of our epistemic access to the world, meaning that what we can know empirically should constrain how we think metaphysically (in biology, see this recent project, involving my colleague and friend Alan Love).

In terms of emergence and reductionism, my sense of the literature is that most philosophers nowadays are in agreement with most scientists: they reject ontological emergence and accept ontological reductionism. What this means is that the standard metaphysical position is that there are no true (or “strong”) emergent phenomena, only phenomena that cannot currently (or even ever) be described or understood in terms of fundamental physics, and yet are, in fact, only complex manifestations of the microscopic world as understood by fundamental physics.

A simple way to make sense of this idea is to deploy the concept of supervenience: in philosophy a property A is supervenient on another one, B, just in case A cannot change unless there is a change in B. For instance, if the total amount of money in my pocket is $20, this fact cannot change unless the collection of coins and/or notes that makes up that amount somehow changes in value inside said pocket; the converse does not hold, since I could, say, simply exchange a dollar bill for four 25-cent coins without altering the total. Analogously, higher-order phenomena in physics or biology supervene on micro-physical phenomena just in case the only way to change the former is to change the latter.
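
One standard way of putting the idea formally (a textbook-style paraphrase, not a quotation from my paper) is:

\[
A \text{ supervenes on } B \;\iff\; \forall x\,\forall y\,\bigl(B(x) = B(y) \rightarrow A(x) = A(y)\bigr),
\]

that is, there can be no difference in \(A\) without some difference in \(B\). Note the asymmetry: the base can change without the supervenient property changing (the dollar-bill-for-quarters swap), but not vice versa.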

What about epistemological emergence, then? T. O’Connor helpfully describes two types of the latter, which he labels predictive and irreducible-pattern. Predictive emergence is the idea that in practice it is not possible to predict the features of a complex system in terms of its constituent parts, even if one were to know all the laws governing the behavior of said parts. Irreducible-pattern emergentists, instead, maintain that the problem is conceptual in nature, i.e., that the lower-level laws simply do not provide us with the tools to deal with higher-level phenomena.

As O’Connor acknowledges, the distinction between predictive and irreducible-pattern views of epistemic emergence is not sharp, but it does draw attention to the fact that emergent phenomena present both pragmatic and conceptual issues for the practicing scientist and aspiring reductionist. It is not just, for instance, that it would be too computationally cumbersome to develop a quantum mechanical theory of economics (the predictive issue), it is that one would not know where to start with the task of deploying the tools of quantum mechanics (indeterminacy principle, non-locality, etc.) to somehow account for the phenomena studied by economists (relation between supply and demand, boom-and-bust cycles, etc.). It turns out, then, that one does not need to be an ontological emergentist in order to firmly reject a “greedy” reductionist program in biology or the social sciences.

The next section of the paper discusses in some detail two examples of weak emergence in biology: so-called N-K networks, introduced by Stuart Kauffman to account for non-linear, non-additive gene-gene interactions; and my preferred field of study, genotype-environment interactions. I direct interested readers to the full paper for the details.
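
For readers who would like a feel for why N-K networks are a textbook case of weak emergence, here is a minimal toy sketch in the spirit of Kauffman’s model; the parameter values (N = 5, K = 2) and the random lookup tables are illustrative choices of mine, not taken from the paper. Each of N loci contributes to fitness in a way that depends on its own allele and on the alleles at K other loci, so the landscape is not a sum of independent, per-locus effects.

```python
import itertools
import random

def nk_fitness_landscape(N=5, K=2, seed=0):
    """Toy NK-style landscape: each locus's fitness contribution depends on its
    own allele and on the alleles at K other loci, so effects are epistatic
    (non-additive) whenever K > 0."""
    rng = random.Random(seed)
    # each locus i interacts with K randomly chosen other loci
    neighbors = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    # one random lookup table per locus: contribution given the K + 1 relevant alleles
    tables = [{alleles: rng.random()
               for alleles in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(genotype):
        total = 0.0
        for i in range(N):
            key = (genotype[i],) + tuple(genotype[j] for j in neighbors[i])
            total += tables[i][key]
        return total / N

    return fitness

f = nk_fitness_landscape()
# flipping a single locus alters several contributions at once, so the effect
# of a mutation depends on the genetic background it occurs in
print(f((0, 0, 0, 0, 0)), f((1, 0, 0, 0, 0)))
```

With K = 0 the landscape is purely additive and has a single peak; as K grows the interactions multiply and the landscape becomes rugged, which is precisely the kind of behavior that resists a locus-by-locus, reductionist description.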

In the end, a good number of scientists are understandably wary of the notion of emergence, for the simple reason that it sounds a bit too mystical and woolly. Of course, if emergence turns out to be an ontological reality, then these scientists would simply be mistaken and would have to accept a new metaphysics. However, even if emergence is only an epistemic phenomenon, there are good reasons to take it seriously, for instance because it points toward current methodological or theoretical deficiencies that make straightforward reductionist accounts unfeasible in practice, if not in principle.

In order for more scientists to take emergence seriously we need a coherent account of why we see emergent phenomena to begin with. One such account has been provided recently by Brian Johnson, and it is worth considering briefly. I am not suggesting that Johnson is necessarily correct, or that his explanation is the only one on the table. But it represents a good example of the contribution that philosophy of science (in this case, actually done by a scientist) can give to the way in which scientists themselves think of a given issue. Besides, Johnson may very well turn out to be exactly right.

Johnson’s basic idea is simple: (at least some kinds of) emergent properties are the result of a large number of interactions among parts of a complex system, all going on simultaneously in time and space. In order to grasp emergent outcomes, our brains would have to be able to think in parallel at the conscious level (parallel unconscious thinking does occur, but it leads to an “intuitive,” not rational, grasp of phenomena). Since the human brain is not capable of parallel conscious processing of information, we are faced with the impossibility of reasoning our way through the mechanics of emergence.

How do we know that the human brain cannot do parallel processing consciously? There are several reasons to think so, but Johnson provides a simple little exercise in figure 1 of his paper (which is available in full here), and it is worth trying it out to see how difficult that sort of thinking actually is, and how ill-suited we are to carrying it out. (The exercise involves summing up numbers, first on a single row — which is easy to do — then across multiple rows, which becomes immediately overwhelming.)

Interestingly, Johnson’s example of an emergent property that is not mysterious, and yet that we cannot cognitively deal with, comes from cellular automata. Johnson’s figure 2 presents a standard cellular automaton, and he argues that we cannot predict the behavior of the cells in the game because our brains cannot process in parallel the various simple rules that generate such behavior. There is no magic here, as we designed the rules and we can check — time instant by time instant — that the behavior of the automaton is, in fact, the result of the application of those rules. But we cannot help being baffled by the complex and elegant pattern “emerging” from the massively parallel deployment of the same rules. Analogously, there may be no mystery in, say, the emergence of the biological property of robustness from the interactions going on in genetic networks, or the emergence of phenotypes during development (save, of course, for the — remote, at the moment — possibility that some of these behaviors may be ontologically, not just epistemically, emergent).
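
To see the point for yourself, here is a minimal one-dimensional (“elementary”) cellular automaton in a few lines of Python; this is not Johnson’s own example, just a generic illustration, and the choice of rule 110 and of the grid size is arbitrary. The update rule for each cell is trivial and strictly local, yet the global pattern that unfolds is very hard to anticipate just by staring at the rule.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton: each cell's
    next state depends only on itself and its two immediate neighbors."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # encode the triplet as 0..7
        new.append((rule >> neighborhood) & 1)               # read off the corresponding rule bit
    return new

cells = [0] * 40 + [1] + [0] * 40  # start from a single 'on' cell
for _ in range(20):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

Every step is fully determined by the rule, and one can verify it cell by cell; the overall pattern nonetheless strikes us as something over and above the rule, which is exactly the epistemic predicament Johnson describes.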

If Johnson is correct, then emergence is a necessary concept to deploy across scientific disciplines for eminently practical reasons, any time there is a mismatch between the degree of complexity and interactivity of the world we are trying to comprehend and the way the brains with which we try to comprehend it actually work. Nothing spooky or mysterious about it, just the natural result of the fact that brains that evolved to solve Pleistocene problems cannot compute in the way in which cellular automata, and countless other phenomena, would need to be computed in order to be deeply grasped.

Prove it! The burden of proof in science vs pseudoscience disputes

Time to look back at one of my technical papers, this one published in 2013 with my friend and collaborator Maarten Boudry in the journal Philosophia, and entitled “Prove it! The burden of proof in science vs pseudoscience disputes.” (As with all my technical papers, it can be downloaded from my Dropbox, by going to this link.)

The starting point for the paper is that the concept of burden of proof is used in a wide range of discourses, from philosophy to law, science, skepticism, and even everyday reasoning. Maarten and I, then, provide an analysis of the proper deployment of the burden of proof, focusing in particular on skeptical discussions of pseudoscience and the paranormal, where burden of proof assignments are most salient and relatively clear-cut. We argue that the burden of proof is often misapplied or used as a mere rhetorical gambit, with little appreciation of the underlying principles. The paper elaborates on an important distinction between evidential and prudential varieties of burdens of proof, which is cashed out in terms of Bayesian probabilities and error management theory. Finally, we explore the relationship between burden of proof and several (alleged) informal logical fallacies. This allows us to get a firmer grip on the concept and its applications in different domains, and also to clear up some confusions with regard to when exactly some fallacies (ad hominem, ad ignorantiam, and petitio principii) may or may not occur.

To begin with, an important distinction needs to be made between prudential and evidential burden of proof (BoP). The prudential BoP is applicable when there are cost asymmetries in arriving at two judgments about whatever matter is under dispute, whereas the evidential burden of proof applies when there are no such cost asymmetries involved.

Consider, for instance, the question of the safety of food additives. If approached as a straightforward scientific question, the relevant concept is that of evidential BoP: there is no practical “cost” attached to either judgment, only the symmetric epistemic cost of getting a chunk of reality wrong. But if we approach the issue of food additives from the standpoint of its potential consequences for public health, there is a differential cost in getting the wrong answer, so the idea of prudential BoP seems more appropriate.

The (controversial) precautionary principle, which is an application of the prudential burden of proof, states that — if a certain action or policy is suspected to be harmful — the burden falls on those who believe that a new policy or course of action is not harmful. The status quo is perceived as less costly than a potentially dangerous new policy or course of action. In more general terms, the prudential BoP can be applied in situations where the cost of a false positive is significantly different (greater or smaller) from the cost of a false negative.

Examples of prudential BoP where the cost associated with a false negative outweighs that of a false positive include smoke alarms, environmental hazards, cancer screening, and so forth. An example of the opposite case, where false positives are perceived as more costly, is the presumption of innocence in a court of law. This principle in American criminal law clearly skews things in favor of the defendant, but this is done because the risk of a false positive (convicting an innocent person) is treated as much less acceptable than the risk of a false negative (exonerating a guilty party).

Of course, cases of prudential BoP always involve an evidential dimension as well, while the opposite is not the case. In prudential BoP, cost asymmetries have to be taken into account in addition to prior probabilities. For example, in discussions about cancer and cell phones, the initial plausibility of low-energy electromagnetic radiation being carcinogenic has to be taken into account in addition to cost asymmetries. If prior probabilities are ignored, the precautionary principle is misused and can have paralyzing effects on public policy. Conversely, one cannot just invoke a Bayesian perspective (as useful as it is) to settle issues where cost asymmetries are involved, since even when competing claims have equal priors, a prudential approach (but not an evidential one) could easily tip the balance in favor of one claim over the other.
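
One way to make the interplay between priors and cost asymmetries explicit is a standard decision-theoretic gloss (the shorthand here is mine, not a formula from our paper): if \(p = P(H \mid E)\) is the posterior probability of a hypothesis given the evidence, and \(C_{FP}\) and \(C_{FN}\) are the costs of a false positive and of a false negative respectively, then acting as if \(H\) were true minimizes expected cost just in case

\[
(1 - p)\, C_{FP} < p\, C_{FN}, \qquad \text{i.e.} \qquad p > \frac{C_{FP}}{C_{FP} + C_{FN}}.
\]

When the two costs are equal the threshold collapses to 1/2 and only the evidential question remains; when they diverge, the prudential threshold shifts, but the posterior (and hence the prior) still does all the evidential work.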

There are a number of important discussions in science, pseudoscience, and even in straightforward philosophical argumentation, that can reasonably be approached either from an evidential or from a prudential perspective, depending on the interests of the parties involved. For instance, the force of the philosophical argument behind Pascal’s wager is supposed to be that the cost of a false negative (you don’t believe there is a god, but it turns out there is one) is much higher than that of a false positive (because of the threat of eternal damnation in Hell). By contrast, to take another philosophical example dealing with the import of paranormal or supernatural hypotheses: the risk (in terms of practical consequences) of falsely accepting the existence of Bertrand Russell’s teapot orbiting the sun (false positive) seems to be the same as the risk of rejecting the teapot when there really is one (false negative).

Maarten and I then discuss a number of sources of subjectivity in the judgment of where the burden of proof lies, as well as a distinction between “global” and “local” burdens of proof: the global BoP is fixed throughout a discussion, because it is related to what a discussant ultimately wishes to establish (or her opponents wish to deny). Within that broad goal, however, a number of local burdens of proof may arise, which shift during the debate itself, as they pertain to smaller pieces of the overall puzzle.

We move on to consider how BoP should be assigned. In 1970, Brown characterized one side’s demand that its opponents meet the burden of proof as amounting to the claim that, prima facie, the demanding side’s position is more initially plausible than the alternative(s). Brown’s framework does not involve the costs associated with different judgments, and can thus be seen as a characterization of evidential BoP. A major exponent of modern skepticism, Michael Shermer, describes the principle of (evidential) BoP as follows: “The person making the extraordinary claim has the burden of proving to the experts and to the community at large that his or her belief has more validity than the one almost everyone else accepts.” Psychologist Terence Hines, in another compendium on pseudoscience, agrees that the burden should fall on the claimant of the extraordinary, because “it is often impossible to disprove even a clearly ridiculous claim,” such as that Santa Claus exists.

We discuss a formal analysis of these ideas, carried out by Larry Laudan (details in the paper), but warn that such an analysis should not be cause for too much complacency on the part of the skeptic of pseudoscience, since it doesn’t license an automatic rejection of any claim of the paranormal or extranormal, except when the prior probability of the paranormal hypothesis is exactly zero (e.g., when it is logically incoherent). The reason why BoP rests on the believers is also often misconstrued in the skeptical community. The evidential BoP is not on “whoever makes the positive claim.”

First, it is very easy to turn any positive claim into a negative one, and vice versa, by simple application of basic logical rules. In general, affirming P is exactly the same as denying ~P. Any existential claim can be translated into a negative universal, and vice versa. Resorting to such moves would merely amount to sophistic word play rather than a substantive consideration of epistemic burden.
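
A toy illustration of how easily the polarity of a claim can be flipped (the predicate is invented for the example):

\[
\exists x\, \mathrm{AlienCraft}(x) \;\equiv\; \neg\, \forall x\, \neg\, \mathrm{AlienCraft}(x),
\]

the “positive” claim that at least one alien craft has visited Earth says exactly the same thing as the “negative” denial of the universal claim that nothing visiting Earth is an alien craft. Which formulation happens to be on the table clearly cannot, by itself, settle who carries the burden.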

Second, there are cases in which the BoP rests on those who are putting forth what may most plausibly be construed as the “negative” claim, in the sense of denying the material existence of some X. For example, the burden of proof is no longer on historians to provide evidence of the use of Zyklon B in the Nazi concentration camps, even though, logical sophistries aside, they are the ones making the “positive” claim. In this case, then, the BoP rests on those making the “negative” claim.

In most discussions of pseudoscience and the paranormal, admittedly, the believers in pseudoscientific notions are making positive claims, in the sense of affirming the existence of entities (spaceships, psi force, qi energy lines, auras) that are rejected by modern science, but this — per se — is not the reason why the BoP rests on them. Evidential BoP assignment always reflects substantial background knowledge and prior probabilities, and these assumptions of plausibility, we argue, should be based on the expert consensus on the matter.

Maarten and I go on to explore the role of Occam’s razor in this debate and then tackle what we call the technical burden. Believers in the paranormal and supernatural have often tried to turn the tables on skeptics, finding various ways to shift the BoP back to the latter. In particular, rhetorical moves of the type “you can’t prove it wrong” are unfair requests that fail to appreciate the proper BoP procedure. In some cases, such requests can be straightforwardly fulfilled (e.g., it is very easy to prove that the co-authors of this paper, at this very moment, have far less than $1 million in their pockets), but even then the skeptic is doing the accuser a favor in taking on a BoP that does not really fall on him (we are under no obligation to empty our pockets after each such gratuitous insinuation).

Similarly, if ufologists claim that some crop circle was left by a spaceship, the BoP is firmly on their side to come up with extraordinary evidence. If skeptics choose to take on the sophistic challenge to “prove that there was no spaceship” by providing direct or circumstantial evidence that that particular crop circle was in fact a human hoax, they are indulging the believers by taking on a BoP that, rationally speaking, does not pertain to them at all.

For most actual para/extranormal claims, however, the space of possibilities cannot be exhausted in a finite (and suitably short) time. For instance, to prove that there are no alien spaceships visiting earth — at any moment, not just in the case of a specific alleged incident — would require a kind of temporally protracted, exhaustive monitoring of the entire planet’s surface that is so far beyond current technological possibility that the request can easily be dismissed as a simple debating trick.

This, however, leaves the skeptic with a dilemma. Although it may sometimes be rhetorically persuasive for her to take on a BoP that, strictly speaking, does not fall on her (for example, providing a natural explanation of a given UFO sighting), this may be perceived as an implicit acknowledgement that skeptics do carry the negative BoP for every single anomaly that believers come up with. The result is a mug’s game for skeptics: all believers have to do is throw around challenges for the skeptic, who will surely not be able to answer every single one of them. To refer again to the ufological literature, even ardent skeptics do admit that a small percentage (at most 10%, and likely significantly less than that) of alleged UFOs cannot be turned into IFOs (Identified Flying Objects), even after direct investigation of the available evidence.

There are at least three replies the skeptic has available here. To begin with, investigative resources are limited, especially when it comes to likely pseudoscientific claims, so it should not be surprising that on a certain number of occasions the researcher simply does not have sufficient means to carry out a positive identification of the allegedly unexplained phenomenon.

Second, even in the case of genuinely scientific questions one has to contend with limited epistemic access to the relevant phenomena, access that can be affected by the lack of sufficient empirical traces or by the intrinsic epistemic limits of human reason. Think of the long — and so far still largely unsuccessful — quest for an explanation for the origin of life, for instance.

Third, as Thomas Kuhn reminded us, even successful “normal” science constantly has to deal with a number of unsolved “puzzles,” and it is only when the puzzles become numerous and widespread that they genuinely begin to threaten the reigning paradigm, forcing scientists to seek alternative theoretical frameworks. Even if skeptics cannot provide a complete explanation for every single anomaly, what they often can do is to offer promissory notes for explanations, speculating about potential natural interpretations. Given that the BoP really falls on believers to come up with convincing evidence, this is all that can be expected from skeptics under these circumstances.

Intelligent Design proponents and assorted creationists, for instance, have often pointed to alleged instances of “irreducible complexity” in the living world: biological systems that are supposedly so intricate that they could not possibly have evolved. In dealing with such challenges, evolutionary biologists can suggest possible evolutionary pathways leading to a given complex biological structure. Once they have done so, there is an extra BoP on ID advocates to rule out all of the proposed natural explanations. Contrary to what believers think, the BoP is not on skeptics to demonstrate which one of the natural explanations is the correct one. Given the overwhelming evidence for the power of natural selection to produce adaptive complexity, and the difficulty of garnering information about a distant evolutionary past, this kind of informed speculation is all that is needed to put ID arguments to rest (of course, evidence of specific mutations and selection processes further strengthens the case for evolution, but the case no longer hinges on it). The number of anomalies (in casu, evolutionary puzzles) has simply not come even close to the Kuhnian threshold for a paradigm shift, though of course this says nothing about whether it might do so in the future.

At this point the paper changes direction somewhat, and Maarten and I provide a discussion of so-called informal logical fallacies. I strongly suggest that interested readers check the paper for the details, but we basically argue that too often skeptics (and now, increasingly, believers) throw out “you committed logical fallacy X” as if that were the end of all discussion. Sometimes an informal fallacy is not a fallacy at all, but actually a good heuristic, or a relevant piece of information.

Let’s say, for instance, that during court proceedings a lawyer for the defense points out that a hostile witness has a history of being unreliable and of lying, or perhaps has something to gain if the accused is convicted. The prosecution can’t just shout “ad hominem!” and be done with it, since information about the character and/or personal interests of the witness is, in fact, germane to the case, even though it does not, of course, prove that the witness is lying on this particular occasion.

In conclusion, the word “skepticism” has, of course, a long and venerable history in philosophy. When it comes to disputes about allegedly pseudoscientific notions, though, the term may refer to one of two distinct attitudes: the first corresponds to someone who already knows that the para- or extra-normal claim is wrong and is out to prove it. Although this may in fact be the case in many actual instances, such a figure is not at all intellectually interesting. The second meaning is the Humean sense in which “a wise man proportions his belief to the evidence.” If we are to be honest Humean skeptics, though, we need to set the bar for evidence of extraordinary claims at the right level: not as low as a gullible believer would wish, but not so high that the BoP becomes impossible to meet.

Modern skeptics are fond of quoting Carl Sagan’s rendition of the Humean dictum mentioned above: “Extraordinary claims require extraordinary evidence.” This is fine as far as it goes, but we clearly need criteria to credibly establish when a claim is indeed “extraordinary,” and what would count as commensurate evidence. Hume’s own famous argument against miracles is sometimes (uncharitably, we think) interpreted as amounting to a statement of the impossibility, not just very low likelihood, of miracles, and people who believe in ufological or paranormal phenomena echo that sentiment when they claim that skeptics will never be satisfied no matter how compelling the evidence is going to be.

However, Hume’s approach in Of Miracles can be reasonably reformulated in Bayesian terms, with the priors — and consequently the BoP — being set by the accepted background conditions pertinent to the dispute at hand. Seen from this perspective, all we need to avoid are the extremes of setting our priors to 0 (complete skepticism) or to 1 (complete belief), since no amount of data can possibly move us away from those limit cases. Indeed, there are some instances in the skeptical literature on pseudoscience where priors have significantly moved over time. For instance, while acupuncture is still criticized in terms of both the underlying theory and the exaggerated claims of its supporters, there may now be sufficient evidence of its limited efficacy that a skeptic needs to reconsider outright rejection. This is even more so for a variety of transcendental meditation techniques, where again one may reasonably reject the underlying metaphysics while agreeing that qua techniques they do work for a range of claimed effects.
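
The point about the limit cases falls straight out of Bayes’ theorem (a standard observation, not something specific to our paper):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)},
\]

so that if \(P(H) = 0\) the posterior is 0 no matter what the evidence \(E\) is, and if \(P(H) = 1\) it is 1; only priors strictly between those extremes can be moved by data, in either direction.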

If anything, it is harder to find prominent exponents of para- or extra-normal beliefs who have changed their minds in the face of skeptical arguments (though even those can be found, if one digs deep enough). Which brings us to the last point in this paper (which I haven’t discussed above): discussions of BoP in the context of science vs pseudoscience disputes are, of course, a type of Wittgensteinian language game that presupposes a minimum commonality of standards. People cannot agree on how to fairly allocate the BoP unless they find themselves at least in the same ballpark when it comes to the type of background knowledge that constrains the priors pertinent to the dispute at hand. And that is precisely the most common obstacle in debates between skeptics and believers: the former too often simply reject out of hand even the possibility of an anomalous phenomenon turning out to be real, while the latter are equally quick to label the entire scientific enterprise as “too reductionist” or narrow-minded to be able to come to terms with novel phenomena. This sort of impasse depends on a widespread lack of appreciation for the sort of epistemic issues Maarten and I have described in this paper, but it also boils down at least in part to individual psychological attitudes, whereof a philosopher is better served not to speak.

The Extended Evolutionary Synthesis and causality in biology

I have just spent three delightful days at the Konrad Lorenz Institute for theoretical biology in Vienna, participating in a workshop of philosophers and biologists on the question of how to think about causality, especially within the context of the so-called Extended Evolutionary Synthesis, the currently unfolding update to the standard model in evolutionary theory (for more on the EES, see here).

The workshop was organized by my colleagues Kevin Laland and Tobias Uller, and hosted by the delightful Gerd Müller, an old friend of mine. In an informal sense, this was a follow-up to a meeting that Gerd and I organized at the KLI back in 2008, to explore the very meaning and conceptual boundaries of an EES, and which resulted in the publication of this book about the effort.


Socrates: ancient Humanist?


Socrates, Roman National Museum, photo by the Author

As part of my ongoing occasional series aiming at bringing some of my own technical papers to the attention of a wider public (after all, what the hell is the point of doing scholarship if it only benefits other scholars?), below I reprint a paper I recently published in The Human Prospect. It inquires into the possibility of interpreting Socrates as a proto-Humanist of sorts, and it therefore includes a discussion of Humanism as a philosophy of life, as well as of its likely stemming from the ancient Greco-Roman tradition of virtue ethics (via the mediation of the Renaissance Humanists, who were informed by, and yet were reacting against, medieval Christianity).


The mismeasure of machine: why machine metaphors in biology are misleading

Time to indulge in the occasional revisiting of one of my technical papers, in the hope that they may be of more general interest than to the original audiences they were written for. This time I’m going to focus on one that I co-wrote with my long-time collaborator, Maarten Boudry, and published in 2013 in the journal Studies in History and Philosophy of Biological and Biomedical Sciences. The title of the paper is “The mismeasure of machine: synthetic biology and the trouble with engineering metaphors.”


Paul Feyerabend’s defense of astrology, part IV

Time to bring to a close this longer-than-expected (by me, when I started it!) mini-series on the fruitful exchange I’ve had recently in the pages of Social Epistemology with Ian Kidd, concerning Paul Feyerabend’s (in)famous “defense” of astrology and criticism of scientific dogmatism (part I, part II, and part III here). This last entry will focus on my second response to Kidd, which has concluded our exchange, at least so far.
