Between holism and reductionism: a philosophical primer on emergence

A few years ago I was asked by the editor of the Biological Journal of the Linnean Society to write a short paper introducing biologists to the philosophical literature on emergence, given recurrent, and sometimes confused, talk of “emergence” in biology. The result was “Between holism and reductionism,” published in 2014, the full version of which you can download here.

‘Emergence’ is a controversial concept with a convoluted history, in both science and philosophy. It is therefore not surprising that it has been misused and vilified, as well as more often than not misunderstood. Typically, the idea of emergence is brought up by researchers who are — for one reason or another — unhappy with an ultra-reductionist scientific program, preferring instead some kind of holism or interactionism in the way they approach their research questions (think of the perennial debates on gene–environment interactions). Just as surely, biologists embedded in reductionist programs dismiss emergence and similar ideas as obscurantist, leading only to never-ending and fruitless debates.

Given its convoluted and controversial history, it may pay to get a bit clearer about where the concept of emergence came from and how it evolved qua concept. Perhaps the first thing to appreciate is that emergentism is usually put forth as a third way between mechanism-reductionism and vitalism-holism. Emergentists — particularly in biology — reject quasi-mystical appeals to vital forces, but at the same time argue for the appearance of genuinely novel phenomena at various levels of complexity of matter. Of course, much hinges on what ‘novel’ means in this context, as well as on the idea that there are meaningful ‘levels’ of complexity in nature.

One of the earliest articulations of the concept of emergence in biology is due, perhaps surprisingly, to John Stuart Mill, who in 1843 wrote “To whatever degree we might imagine our knowledge of the properties of the several ingredients of a living body to be extended and perfected, it is certain that no mere summing up of the separate actions of those elements will ever amount to the action of the living body itself.” The idea being expressed here is familiar in both biology and statistics nowadays: we know of plenty of examples, from chemistry and biology, where the whole — as the saying goes — is more than (or, to be more accurate, different from) the sum of its parts.

Mill himself contrasted an example from physics and one from chemistry to make his point. In Newtonian physics (which was, of course, the dominant paradigm at the time), mechanical forces can be combined in an additive fashion by means of vector analysis. So, for instance, the total force being applied to a given object from two different sources is simply the sum of the individual forces thus applied. In chemistry, however, the result of a chemical reaction is pretty much never a simple sum of the reactants: we can combine, say, an acid and a base to obtain a salt (and water). The properties of the salt are in no meaningful sense simply the sum of the properties of the pre-reaction acid and base.
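
To put Mill’s contrast in contemporary terms, here is a minimal toy sketch (my own numbers, not Mill’s) of what “additive composition” amounts to in the Newtonian case, and a reminder of why no analogous one-liner exists for the chemical case.

```python
# A toy illustration of Mill's contrast (illustrative numbers only).
# Mechanical forces compose additively: the resultant is just the vector sum.
force_a = (3.0, 0.0)   # newtons, acting along x
force_b = (0.0, 4.0)   # newtons, acting along y
resultant = tuple(a + b for a, b in zip(force_a, force_b))
print(resultant)       # (3.0, 4.0): nothing beyond the components themselves

# No comparably simple composition rule exists for Mill's chemical case:
# the properties of the salt produced by an acid-base reaction are not any
# kind of weighted sum of the properties of the acid and the base.
```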

Arguably the next big step in the development of emergentism in philosophy of science was due to C.D. Broad (circa 1925), who framed the issue in terms of the independence (or lack thereof) of the “special” sciences (i.e., of anything that is not fundamental physics). This is very much still a topic of discussion today (see this treatment of the issue at Footnotes), usually pitting physicists on the side of a strong sense of the unity of science (and its potential theoretical reduction to physics) vs (some) philosophers and non-physical scientists who defend the notion of a fundamental disunity, or at least heterogeneity, of science.

Broad aligned himself with the disunity camp, on the basis that he thought there are some emergent “trans-ordinal” laws that are compatible with, and yet irreducible to, more fundamental principles of physics. However, he did not think these laws presented any particular reason for the mystically oriented to rejoice: “There is nothing, so far as I can see, mysterious or unscientific about a trans-ordinal law or about the notion of ultimate characteristics of a given order. A transordinal law is as good a law as any other; and, once it has been discovered, it can be used like any other to suggest experiments, to make predictions, and to give us practical control over external objects.”

A modern example of the disunity camp is provided by Nancy Cartwright’s work, for instance in her analysis of the concept of natural laws. Cartwright sees laws as empirical generalizations with more or less broad (but, crucially, not universal) domains of application. Indeed, she maintains that the empirical evidence itself strongly suggests that laws cannot be both universal and true: the only circumstances in which we can verify a law of nature (say, Newtonian mechanics) with any precision are those in which we create artificial worlds characterized by highly controlled conditions. In the real world, by contrast, laws only apply given a more or less large number of ceteris paribus conditions. And this holds even for physics, where scientists usually take the very idea of a law of nature to be uncontroversial. In biology things are even worse, given plenty of debates — among both philosophers and biologists — about whether we can sensibly talk about laws (as opposed, again, to mere empirical generalizations) in the first place.

In the paper I then move to make explicit a distinction that will be familiar to readers of Footnotes, but that needed to be brought to the fore in the context of the paper for the Biological Journal: we can think of emergent properties from either an ontological or an epistemological stance, although the two are not mutually exclusive.

Ontology, of course, has to do with the nature of things, while epistemology has to do with how we (think we) come to have knowledge of the nature of things. Ontological claims are inherently metaphysical, while epistemological claims are not (they can be purely pragmatic, or derived from principles of logic). To complicate things further, several philosophers (though by no means all!) from the mid-20th century on began to agree that metaphysical statements ought to be evaluated in terms of our epistemic access to the world, meaning that what we can know empirically should constrain how we think metaphysically (in biology, see this recent project, involving my colleague and friend Alan Love).

In terms of emergence and reductionism, my sense of the literature is that most philosophers nowadays are in agreement with most scientists: they reject ontological emergence and accept ontological reductionism. What this means is that the standard metaphysical position is that there are no true (or “strong”) emergent phenomena, only phenomena that cannot currently (or even ever) be described or understood in terms of fundamental physics, and yet are, in fact, only complex manifestations of the microscopic world as understood by fundamental physics.

A simple way to make sense of this idea is to deploy the concept of supervenience: in philosophy, a property A is supervenient on another one, B, just in case A cannot change unless there is a change in B. For instance, if the total amount of money in my pocket is $20, that total cannot change unless the collection of coins and/or notes in said pocket changes; the converse does not hold, since I can, say, exchange a dollar bill for four 25-cent coins without altering the total. Analogously, higher-order phenomena in physics or biology supervene on micro-physical phenomena just in case the only way to change the former is to change the latter.
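
For readers who like things spelled out, here is a minimal sketch of the pocket-money example (the denominations are my own, purely illustrative choices): the total supervenes on the collection of coins and notes, while the same total can be realized by different collections.

```python
# A minimal sketch of supervenience using the pocket-money example.
from collections import Counter

def total(pocket):
    """The higher-level property: total value, fully fixed by the lower-level state."""
    return sum(value * count for value, count in pocket.items())

state_1 = Counter({1.00: 1})              # one dollar bill
state_2 = Counter({0.25: 4})              # four quarters
state_3 = Counter({0.25: 4, 1.00: 1})     # an extra dollar added

# Multiple realizability: different lower-level states, same higher-level property.
assert total(state_1) == total(state_2) == 1.00

# Supervenience: the total differs only where the underlying state also differs.
assert total(state_1) != total(state_3) and state_1 != state_3
```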

What about epistemological emergence, then? T. O’Connor helpfully describes two types of the latter, which he labels predictive and irreducible-pattern. Predictive emergence is the idea that in practice it is not possible to predict the features of a complex system in terms of its constituent parts, even if one were to know all the laws governing the behavior of said parts. Irreducible-pattern emergentists, instead, maintain that the problem is conceptual in nature, i.e., that the lower-level laws simply do not provide us with the tools to deal with higher-level phenomena.

As O’Connor acknowledges, the distinction between predictive and irreducible-pattern views of epistemic emergence is not sharp, but it does draw attention to the fact that emergent phenomena present both pragmatic and conceptual issues for the practicing scientist and aspiring reductionist. It is not just, for instance, that it would be too computationally cumbersome to develop a quantum mechanical theory of economics (the predictive issue), it is that one would not know where to start with the task of deploying the tools of quantum mechanics (indeterminacy principle, non-locality, etc.) to somehow account for the phenomena studied by economists (relation between supply and demand, boom-and-bust cycles, etc.). It turns out, then, that one does not need to be an ontological emergentist in order to firmly reject a “greedy” reductionist program in biology or the social sciences.

The next section of the paper discusses in some detail two examples of weak emergence in biology: so-called N-K networks, introduced by Stuart Kauffman to account for non-linear, non-additive gene-gene interactions, and my preferred field of study, genotype-environment interactions. I direct interested readers to the full paper for the details.
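
For the curious, here is a minimal toy sketch in the spirit of Kauffman’s NK model (the parameter values and random interaction tables are my own illustrative choices, not taken from the paper): each gene’s fitness contribution depends on its own state and on the states of K other genes, so the contributions do not simply add up.

```python
import random
from itertools import product

# A toy NK-style fitness landscape (illustrative parameters only).
N, K = 8, 2
random.seed(0)

# For each gene, pick K other genes whose states it interacts with.
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

# Each gene's contribution is a random lookup keyed on its own state plus
# the states of its K neighbors (2**(K+1) possible local configurations).
tables = [{cfg: random.random() for cfg in product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(genotype):
    """Mean of the per-gene contributions for a binary genotype."""
    total = 0.0
    for i in range(N):
        cfg = (genotype[i],) + tuple(genotype[j] for j in neighbors[i])
        total += tables[i][cfg]
    return total / N

g = tuple(random.randint(0, 1) for _ in range(N))
print(g, fitness(g))
# Flipping a single gene changes the contributions of every gene that interacts
# with it, which is why the landscape is rugged rather than simply additive.
```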

In the end, a good number of scientists are understandably wary of the notion of emergence, for the simple reason that it sounds a bit too mystical and woolly. Of course, if emergence turns out to be an ontological reality, then these scientists would simply be mistaken and would have to accept a new metaphysics. However, even if emergence is only an epistemic phenomenon, there are good reasons to take it seriously, for instance because it points toward current methodological or theoretical deficiencies that make straightforward reductionist accounts unfeasible in practice, if not in principle.

In order for more scientists to take emergence seriously we need a coherent account of why we see emergent phenomena to begin with. One such account has been provided recently by Brian Johnson, and it is worth considering briefly. I am not suggesting that Johnson is necessarily correct, or that his explanation is the only one on the table. But it represents a good example of the contribution that philosophy of science (in this case, actually done by a scientist) can give to the way in which scientists themselves think of a given issue. Besides, Johnson may very well turn out to be exactly right.

Johnson’s basic idea is simple: (at least some kinds of) emergent properties are the result of a large number of interactions among parts of a complex system, all going on simultaneously in time and space. In order to grasp emergent outcomes, our brains would have to be able to think in parallel at the conscious level (parallel unconscious thinking does occur, but it leads to an “intuitive,” not rational, grasp of phenomena). Since the human brain is not capable of parallel conscious processing of information, we are faced with the impossibility of reasoning our way through the mechanics of emergence.

How do we know that the human brain cannot do parallel processing consciously? There are several reasons to think so, but Johnson provides a simple little exercise in figure 1 of his paper (which is available in full here), and it is worth trying it out to see how difficult that sort of thinking actually is, and how ill-suited we are to carrying it out. (The exercise involves summing up numbers, first on a single row — which is easy to do — then across multiple rows simultaneously, which becomes immediately overwhelming.)

Interestingly, Johnson’s example of an emergent property that is not mysterious, and yet that we cannot cognitively deal with, is cellular automata. Johnson’s figure 2 presents a standard cellular automaton, and he argues that we cannot predict the behavior of the cells in the game because our brains cannot process in parallel the various simple rules that generate such behavior. There is no magic here, as we designed the rules and we can check — time instant by time instant — that the behavior of the automaton is, in fact, the result of the application of those rules. But we cannot help being baffled by the complex and elegant pattern “emerging” from the massively parallel deployment of the same rules. Analogously, there may be no mystery in, say, the emergence of the biological property of robustness from the interactions going on in genetic networks, or the emergence of phenotypes during development (save, of course, for the — remote, at the moment — possibility that some of these behaviors may be ontologically, not just epistemically, emergent).
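
To get a feel for the point without Johnson’s figure, here is a minimal sketch of an elementary cellular automaton (I use Wolfram’s Rule 110 as an illustrative stand-in; Johnson’s own figure may use a different automaton). The local rule is trivial to state, yet the global pattern it generates is hard to anticipate just by staring at the rule.

```python
# A minimal elementary cellular automaton (Rule 110, illustrative choice).
# Each cell's next state depends only on itself and its two neighbors, yet the
# unfolding global pattern is hard to predict from the rule alone.

RULE = 110
WIDTH, STEPS = 64, 32

def step(cells):
    """Apply the local rule to every cell simultaneously (wrapping at the edges)."""
    out = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        index = (left << 2) | (center << 1) | right   # neighborhood code 0..7
        out.append((RULE >> index) & 1)               # look up next state in the rule
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1        # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```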

If Johnson is correct, then emergence is a necessary concept to deploy across scientific disciplines for eminently practical reasons: it is needed whenever there is a mismatch between the degree of complexity and interactivity of the world we are trying to comprehend and the capacities of the brains with which we try to comprehend it. Nothing spooky or mysterious about it, just the natural result of the fact that brains that evolved to solve Pleistocene problems cannot compute in the way in which cellular automata, and countless other phenomena, ought to be computed in order to be deeply grasped.
