Category Archives: Philosophy of Science

Book Club: Darwin’s Unfinished Symphony, 7, the dawn of civilization

Egyptian agriculture

Homo sapiens is the only species on planet Earth to have experienced three phases of evolution: the standard biological one, driven by mutation and natural selection; gene-culture coevolution; and now a period of evolution driven primarily by culture. This is how chapter 10 of Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind begins the transition to the author’s discussion of that last, novel, and crucial phase. (More entries in this ongoing series here.)

It’s an obviously crucial topic for a variety of reasons. First off, it helps explain why on earth we evolved such large and metabolically expensive brains. Keep in mind that the human brain accounts for only 2% of our total body weight, and yet it consumes a whopping 20% of our daily caloric intake. (It’s unfortunate that thinking harder doesn’t lead to weight loss…) Second, as Kevin has documented in the previous chapters of the book, it is our capacity for social learning (and teaching) that accounts for the incredible success of our species, since the third mode of evolution is what has made it possible for us to build giant cities, go to the Moon, and waste our existence on social media.

Kevin begins by addressing a related question: why did it take so long for our species to develop complex civilizations, and why do hunter-gatherer societies still today have very limited technology and simple cultures? The likely answer has to do with the severe limitations imposed by a hunter-gatherer lifestyle. To begin with, of course, hunter-gatherers have to be constantly on the move, changing base location once the local resources are depleted. This means that it is impossible to settle down long enough to develop the large population size and division of labor that foster new technological developments. And even if some new technology were to be developed, it would have to be of limited size and complexity, again because the entire population has to pick up and move every few weeks or so.

Similarly, in hunter-gatherer societies the birthrate is typically low, with new pregnancies well separated in time, as a human female cannot carry and care for many small children when the group is constantly on the move. Small population size and temporary abodes also mean no accumulation of wealth of the kind that makes division of labor possible, leading in turn to the origin of specialized classes of workers who can rapidly accumulate specific technical knowledge over a few generations.

“This helps us understand why hunter-gatherer technology was only slowly changing for such a long time, and also why, even today, many small-scale societies possess limited technology. Hunter-gatherers are effectively trapped in a vicious cycle that severely constrains their rate of cultural evolution.” (p. 248)

That’s also why the invention of agriculture, which took place multiple times after the last Ice Age, is tightly linked with the origin of complex human technological cultures. The reason agriculture did not originate earlier is that the conditions following that Ice Age, about 11,500 years ago, have actually been the most favorable — climatically speaking — for such an event over the last two million years of hominid evolution. And before then our ancestors simply did not have the required brain power and ability to communicate through language.

Plant and animal domestication of some sort preceded the full-blown agricultural revolution, and the first plants to be domesticated were annuals, characterized by a rapid life cycle and hence easy to select artificially. These included peas, wheat, rye, barley, and maize. A new form of wheat, for instance, appeared around 9,600 BCE in the eastern Mediterranean region. Maize was farmed in southern Mexico around 9,000 years ago. Millet appeared in China between 10,300 and 8,700 years ago, and rice around 9,000 years ago.

The invention of agriculture was not without its own problems. The more stable source of food led to population explosions, which in turn caused periodic famines. Indeed, the archeological data show that Europeans became shorter by about 7 cm between 2,300 and 400 years ago, because of poor nutrition.

As Laland points out, agriculture is a great example of niche construction on the part of human beings. The old idea, in ecology, that niches are “out there,” waiting to be filled by new species of organisms, has been questioned for some time now. Rather, living beings actively alter their environment, co-evolving with it, if you will. By far the most spectacular example in the history of the Earth is the high level of oxygen in our atmosphere, a byproduct of photosynthesis, an organic process that made animal life possible in the first place.

Since agriculture was not an unqualified good, it is reasonable to ask how the new mode of life so largely and rapidly replaced the old hunter-gatherer one. Kevin offers two main reasons: first, agriculturalists simply outbred hunter-gatherers, because of the larger population size made possible by a sedentary lifestyle. Before the advent of agriculture the world’s human population had stabilized at around one million people. By the time of the Roman empire it was up to 60 million.
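Those two population figures imply a strikingly slow, but sustained, expansion. A quick back-of-the-envelope calculation (the ~10,000-year span between the two estimates is my own rough assumption; the population numbers are the ones quoted above) makes the point:

```python
import math

pre_agriculture = 1_000_000   # rough world population before agriculture
roman_empire = 60_000_000     # rough population by the time of the Roman empire
years = 10_000                # assumed span between the two estimates

# Continuous growth: N(t) = N0 * exp(r * t), so r = ln(N / N0) / t
r = math.log(roman_empire / pre_agriculture) / years
doubling_time = math.log(2) / r

print(f"implied growth rate: {r:.5%} per year")        # ~0.04% per year
print(f"implied doubling time: {doubling_time:.0f} years")
```

A growth rate of roughly four hundredths of a percent per year — a doubling only every seventeen centuries or so — was nonetheless enough, compounded over millennia, to multiply the population sixty-fold.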

The second factor was a wave of innovations triggered by agriculture. For instance, the invention of the wheel, which appeared nearly simultaneously in Mesopotamia, Russia, and central Europe around 5,500 years ago. The first organized religions also sprang up in agricultural societies, with different cultures, predictably, worshiping gods related to agriculture: Inti, the sun-god of the Inca; Renenutet, the Egyptian goddess of the harvest; Ashnan, the goddess of grain in Mesopotamia; and Ceres, the Roman goddess (counterpart of the Greek Demeter) who was credited with the discovery of wheat, the invention of ploughing, the yoking of oxen, and the like.

Here is another way to appreciate the difference between pre- and post-agriculture humanity:

“Prior to the advent of agriculture, each population would have possessed at most a few hundred types of artifacts, while today the inhabitants of New York are able to choose between 100 billion bar-coded items. … One recent estimate of the amount of information now stored on the internet is 1,200,000 terabytes.” (p. 263, 269)

Kevin points out that all this innovation has had its dark sides: environmental destruction, not just today but throughout the last 10,000 years or so, with humanity responsible for countless extinctions of other species; the scale of war that technology has made possible; and the increasing inequality (compared to hunter-gatherer societies) among human beings themselves. It seems that neither natural nor cultural selection really cares about ethical considerations, although of course we should. But that’s another story.

Book Club: Darwin’s Unfinished Symphony, 6, gene-culture co-evolution

Map of lactose tolerance

Kevin Laland’s book, Darwin’s Unfinished Symphony, which I have been discussing for several posts now, is basically one long argument in favor of the thesis that human evolution has been shaped by a feedback process involving a cultural drive mechanism initiated by natural selection, a mechanism that favored the acquisition of accurate and efficient copying. Chapter 9, to be examined here, is devoted to the classic approach of gene-culture co-evolution, the fundamental notion that cultural changes affect genetic evolution, and indeed that as time passes human evolution is increasingly driven by culture and less so by biology (though biology always remains a fundamental constraint to be reckoned with):

“Genetic propensities, expressed throughout development, influence the cultural traits that are learned, while cultural knowledge, expressed in behavior and artifacts, spreads through populations and modifies how natural selection affects human populations in repeated, richly interwoven interactions.” (p. 217)

While the chapter begins with an interesting treatment of the phenomenon of right-handedness, the standard example of gene-culture co-evolution is, of course, lactose tolerance. In most humans the ability to metabolize milk disappears in adulthood, as it was not needed prehistorically. But some populations have large numbers of adult individuals who retain a functional version of the gene coding for lactase activity, resulting in the phenotype of lactose tolerance. We now know that lactose tolerance evolved independently at least six times, and that this happened after the switch to agriculture following the last glaciation, making it a strong candidate for culture-driven genetic change in humans. Interestingly, mathematical models show that the speed at which the genetic trait spreads depends on the fidelity of transmission of the cultural one: the more likely children of milk drinkers are to become milk drinkers themselves, the stronger the selection coefficient favoring the continued expression of the lactase gene into adulthood.
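The logic of those models can be captured in a toy simulation (my own illustrative sketch, not Laland’s actual model; the selection coefficient, the vertical-transmission rule, and all parameter values are assumptions): the lactase-persistence allele is favored only among milk drinkers, and milk drinking is passed from parent to child with a given fidelity.

```python
def simulate(fidelity, s=0.05, generations=300, p0=0.01, x0=0.1, pickup=0.01):
    """Iterate allele frequency p (lactase persistence) and cultural-trait
    frequency x (milk drinking) over a number of generations."""
    p, x = p0, x0
    for _ in range(generations):
        # Haploid selection: the allele gains advantage s only in the
        # milk-drinking fraction x of the population
        w_allele = 1 + s * x        # marginal fitness of the persistence allele
        w_mean = 1 + s * x * p      # population mean fitness
        p = p * w_allele / w_mean
        # Vertical cultural transmission: children of milk drinkers stay
        # drinkers with probability `fidelity`; children of non-drinkers
        # pick up the habit at a small background rate
        x = x * fidelity + (1 - x) * pickup
    return p, x

# Higher cultural fidelity -> stronger effective selection -> faster spread
p_hi, _ = simulate(fidelity=0.999)
p_lo, _ = simulate(fidelity=0.95)
```

With high fidelity (0.999) milk drinking stays common and the allele sweeps toward fixation; with lower fidelity (0.95) the cultural trait, and hence the effective selection pressure, largely dissipates and the allele spreads only modestly.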

Several other traits have been shown to have evolved in a similar fashion in recent human history, including genes involved in skin pigmentation, salt retention, and heat stress, all obviously related to the sorts of climate changes experienced by human populations during their migrations. Unfortunately for us today, some of these strongly selected genes facilitate a highly efficient usage of food sources, as well as storage of energy as fat. Hence the trouble that many moderns are experiencing with obesity, leading to diabetes and heart problems, among other negative effects. Another fascinating example is the sarcomeric myosin gene MYH16, expressed mostly in the jaw muscles. A sizable chunk of the gene has been deleted, leading hominins to lose much of their jaw musculature. This genetic event occurred at about the time we invented cooking, which made strong jaw muscles unnecessary (and likely metabolically expensive). And of course, many genes involved with brain development, particularly of the neocortex, are now known to have undergone very strong positive selection in recent times.

As Kevin is careful to point out, none of this means that natural selection has stopped working in humans. So long as there is differential survival and reproduction, selection will act on our genomes. But its mode and tempo have been dramatically altered by the onset of cultural evolution, which has become a driver, rather than merely an outcome, of natural selection in our species. As Laland puts it:

“Theoretical models consistently find that gene-culture dynamics are typically faster, stronger, and operate over a broader range of conditions than conventional evolutionary dynamics. … This picture of the evolution of the human mind is radically different from the portrayal advanced by evolutionary psychologists and many popular science writers.” (p. 239)

I think Kevin is a bit too mild when he discusses the limitations of evolutionary psychology (whose initial central hypothesis, a massive modularity of the human mind, has now been definitively rejected empirically). He states that current research in gene-culture co-evolution shows that the degree of mismatch between our genetic endowment and our culturally created environment is “far more limited” than evopsych authors envisioned. I’d say that’s a dramatic understatement, but certainly still an observation that should lead serious evolutionary psychologists to revise a great deal of what they are doing, abandoning the increasingly silly idea that the Pleistocene was a crucial “environment of evolutionary adaptedness” (EEA), as if the genetic evolution of Homo sapiens had suddenly stopped at that point in time.

“Far from being trapped in the past by an outdated biological legacy, humans are characterized by a remarkable plasticity. Our adaptiveness is reinforced by both cultural and biological evolution.” (p. 240)

If people who write about evopsych were to take this conclusion a bit more seriously, especially when they write for a general public, there would be a lot less garbage floating around the pop science literature. But I ain’t holding my breath…

Book Club: Darwin’s Unfinished Symphony, 5, the evolution of language

Language

Why is it that only the species Homo sapiens has evolved language? Well, aside, possibly, from other, now extinct, species of our own genus. Despite much talk of animal communication, that’s just what other species do: communicate. Language is a very special, and highly sophisticated, type of communication: characterized by grammar, capable of recursion, inherently open ended. Nothing like that exists anywhere else in the animal world. Why?

That’s the topic of the eighth chapter of Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind, which we are in the midst of discussing. A major problem here, as Laland points out, is not that we have no idea of the possible answer, but rather that there are too many explanations on offer, none of which seems to quite do the job. Here is a partial list. Language evolved:

  • To facilitate cooperative hunting.
  • As a costly ornament allowing females to assess male quality.
  • As a substitute for the grooming exhibited by other primate species.
  • To promote pair bonding.
  • To aid mother-child communication.
  • To gossip about others.
  • To expedite tool making.
  • As a tool for thought.

And of course it’s very possible that language evolved to fulfill more than one, or even all, of those functions! The stumbling block isn’t the imagination of researchers, but rather the dearth of relevant empirical evidence (a problem, of course, hardly unique to this area of evolutionary biology).

Part of the difficulty stems from the fact that the evolution of language was a singular event, which precludes the use of one of evolutionary biology’s standard tools of investigation, the comparative phylogenetic method. Moreover, languages don’t leave much of a fossil record, thus taking out a second major tool from the biologist’s box.

Kevin proceeds by listing six criteria (and adding a seventh of his own) that a successful theory of language’s origin should meet in order to be further considered (I refer the reader to the chapter itself for more in-depth explanations concerning each criterion):

  1. The theory must account for the honesty of early language. (If words are easy and cost-free, why should anyone believe what others say?)
  2. The theory should account for the cooperativeness of early language. (Why should people, early on, have gone out of their way to help others by passing to them valuable information?)
  3. The theory should explain how language was adaptive from the outset. (As it is hard to imagine how it could have been a spandrel.)
  4. The concepts proposed by the theory should be grounded in reality. (That is, how did words acquire meaning in the first place?)
  5. The theory should explain the generality of language. (As opposed to the specificity characteristic of every other animal communication system.)
  6. The theory should account for the uniqueness of human language. (Why us and not anyone else?)
  7. The theory should explain why communication needed to be learned. (Why is it that language needed to be socially learned and capable of changing rapidly?)

Laland then concludes that no theory suggested so far meets all seven of these criteria, and I think he’s right. His preferred answer should, at this point in our discussion of the book, come as no surprise:

“[This] raises the question of why humans alone should exhibit a culture that ratchets up in complexity. Theoretical studies answer this question by showing that high-fidelity information transmission is necessary for cumulative culture, but then pose the supplementary question of how our ancestors achieved high-fidelity transmission. The obvious answer is through teaching.” (p. 183)

Kevin then proceeds in orderly fashion by comparing his preferred hypothesis — that language evolved in order to teach relatives — to the seven criteria just listed, finding that the language-to-teach scenario satisfies all of them.

At this point it will be good to step back for a second. To begin with, I’m sure that other students of the evolution of language will dispute both of Laland’s claims: (i) that no other hypothesis is a good fit for all seven criteria, and (ii) that only the language-to-teach hypothesis does a good job with the same criteria. Or perhaps (iii) someone will question the adequacy or necessity of one or more of the criteria in the first place.

For me, though, what makes this chapter the least convincing of those we have read so far is that even if we grant Kevin everything he is arguing for, we are still left, at best, with a hypothetical scenario that falls far short of empirical verification. Yes, maybe language evolved so that we could efficiently teach valuable information to our relatives, and things then went on from there. Or maybe there is a clever variant of one of the other hypotheses now on the table that will prove even more convincing than the present analysis. Or perhaps there is yet another scenario that simply nobody has thought up yet. We just don’t know. And to be honest I don’t think we are likely to know any time soon, if ever. Precisely because of a major stumbling block acknowledged by Laland himself: the evolution of language was a unique historical event, and unique historical events are exceedingly difficult (though not impossible) to study.

While reading the chapter, I was reminded of some sharp, and I’m sure very much unwelcome, words written by one of my scientific role models, the Harvard geneticist Richard Lewontin. In a book chapter entitled “The evolution of cognition: questions we will never answer,” he presents a critical analysis of the literature on the topic, building up to the following conclusion:

“I must say that the best lesson our readers can learn is to give up the childish notion that everything that is interesting about nature can be understood. History, and evolution is a form of history, [often] simply does not leave sufficient traces. … Form and even behavior may leave fossil remains, but forces like natural selection do not. It might be interesting to know how cognition (whatever that is) arose and spread and changed, but we cannot know. Tough luck.” (p. 130)

Seems to me that one could easily replace “cognition” with “language” and still be largely in the right. I’m sure Kevin will disagree, and I look forward to his comments.


(Note to the reader: this commentary covers the major part of chapter 8, devoted to the question of the original function of language. The latter part of the chapter addresses a different, if related, question: how was it computationally possible for hominins to learn language, regardless of which selective pressures favored it? While interesting, I elected not to cover this bit, in order to focus discussion on what I think are the more crucial points of the chapter.)

Peter Woit vs Sean Carroll: string theory, the multiverse, and Popperazism

Peter Woit vs Sean Carroll

Peter Woit (left) vs Sean Carroll (right)

The string and multiverse wars are going strong in fundamental physics! And philosophy of science is very much at the center of the storm. I am no physicist, and not even a philosopher of physics (my specialty is evolutionary biology), so I will not comment on the science itself. I take it that the protagonists of this diatribe are more than competent enough to know what they are talking about. But they keep bringing in Karl Popper and his ideas on the nature of science, as well as invoking — or criticizing — Richard Dawid’s concept of non-empirical theory confirmation, so I feel that a bit of modest commentary from a philosopher of science is not entirely out of order.

Let me begin with two caveats: first, there are many people involved in the controversy, including Sean Carroll, Peter Woit, Sabine Hossenfelder, George Ellis, and Joe Silk (not to mention astute commentators such as Lee Smolin and Jim Baggott). Refreshingly, almost all of them have respect for philosophy of science, unlike ignorant (of philosophy) physicists like Lawrence Krauss and Stephen Hawking. So, who knows, some of them may even read the following with some interest. Second, I actually know most of these people, obviously some better than others. I like and respect them all, even though — as we shall see — in this post I will come squarely down on one side rather than the other.

And what are these sides? For this round, I’ll focus on an exchange between Sean Carroll and Peter Woit on the specific issue of multiverse theory, though the two also disagree — for the same reasons — about the status of string theory. I have published an extended commentary on the string wars at Aeon magazine, after having participated in a conference organized by Dawid, to which Peter, unfortunately, had not been invited, and which Sean, equally unfortunately, couldn’t attend.

Sean has recently written a post at Preposterous Universe entitled “Beyond falsifiability,” in which he summarizes a paper of his, “Beyond falsifiability: normal science in a multiverse.” Here is the abstract of that paper:

“Cosmological models that invoke a multiverse — a collection of unobservable regions of space where conditions are very different from the region around us — are controversial, on the grounds that unobservable phenomena shouldn’t play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.”

Not so fast, replies Peter at his blog, Not Even Wrong: “Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research. … None of those references [in the paper] contain anything like the naive argument that if we can’t observe something, it ‘simply shouldn’t matter,’ or one should not speculate about it, or it ‘shouldn’t count as science at all.’”

A good part of the discussion hinges on Sean accusing critics of both string theory and the multiverse of “Popperazism,” a neologism coined by him (as far as I can tell), which refers to the alleged misappropriation of the ideas of influential philosopher of science Karl Popper. Indeed, Sean already wrote a short piece for Edge back in 2014 in response to the question: “What scientific theory is ready for retirement?” His answer: falsificationism, the notion, proposed by Popper, that what demarcates science from non-science (and pseudoscience) is the feasibility of falsifying the tenets of a given theory or hypothesis. If a theory is in principle falsifiable, argued Popper, then it is scientific. If there is no way to subject it to the falsifiability criterion, it isn’t science.

Setting aside that falsificationism is not a scientific theory, but rather a notion in philosophy of science (after all, how would you falsify Popper’s account?), Sean admits that he hasn’t gone over the nuances of what Popper actually wrote. That’s unfortunate, because Popper was a bit more sophisticated a philosopher than he is usually given credit for. Even though his ideas are no longer current in philosophy of science (you know, philosophy does make progress!), if one invokes him to dismiss a scientific theory (as Ellis and Silk do), or, conversely, rejects his insight in order to deflect criticism of one’s favorite theory (as Sean does), it would be good to take a look at what the man actually wrote.

Without going into too much detail (for an in-depth discussion and pertinent quotes see my Aeon article mentioned above), Popper realized that falsification is not a sharp blade capable of neatly cutting off science from non-science. He was also aware of, and discussed at length, the fact that legitimate scientific theories do include ad hoc explanations that are used by scientists as placeholders until (and if) they figure out what is wrong with the theory they are working on. Nobody has ever rejected a scientific theory because all its statements were not immediately falsifiable, nor did Popper suggest such a crude practice in the first place.

To be fair to Sean, he says that what he is after is the naive version of Popper that he thinks others are using as a blunt instrument to dismiss string theory and the multiverse as outright unscientific. But, as Peter points out, evidence of such extreme “Popperazism” is hard to come by. Here, for instance, is the above-mentioned George Ellis, in a response to a critique by Daniel Harlow, which Sean quotes approvingly:

“The process of science — exploring cosmology options, including the possible existence or not of a multiverse — is indeed what should happen. The scientific result is that there is no unique observable output predicted in multiverse proposals. This is because, as is often stated by proponents, anything that can happen does happen in most multiverses. Having reached this point, one has to step back and consider the scientific status of claims for their existence. The process of science must include this evaluation as well.”

Peter comments: “The problem with the multiverse is that it’s an empty idea, predicting nothing. It is functioning not as what we would like from science, a testable explanation, but as an untestable excuse for not being able to predict anything. In defense of empty multiverse theorizing, Carroll wants to downplay the role of any conventional testability criterion in our understanding of what is science and what isn’t.”

Does Sean do that? It appears so when he says: “The best reason for classifying the multiverse as a straightforwardly scientific theory is that we don’t have any choice. This is the case for any hypothesis that satisfies two criteria: (i) It might be true; (ii) Whether or not it is true affects how we understand what we observe.”

Those are exceedingly weak criteria indeed. As an extreme example, take the very fuzzy notion of God: it might be true, and whether it’s true or not this would affect how we understand the world. So what? Neither of those two observations — in itself — provides an iota of reason to believe in God. Or the multiverse.

Sean then moves to another target critics of string theory and the multiverse often aim at: Richard Dawid’s notion, mentioned above, of a new science based on what he calls “non-empirical confirmation.” As Sean acknowledges, that term was probably really bad PR on the part of Dawid:

“It sounds like Dawid is saying that we can confirm theories (in the sense of demonstrating that they are true) without using any empirical data, but he’s not saying that at all. Philosophers use ‘confirmation’ in a much weaker sense than that of ordinary language, to refer to any considerations that could increase our credence in a theory. Of course there are some non-empirical ways that our credence in a theory could change; we could suddenly realize that it explains more than we expected, for example. But we can’t simply declare a theory to be ‘correct’ on such grounds, nor was Dawid suggesting that we could.”

Hmm, as a philosopher, I don’t actually subscribe to this notion that we use “confirmation” in a weak sense at all. Still, Sean is right that we may, in the course of exploring the logical entailments of a given theory, discover that it has many more than we at first thought. Indeed, this is precisely what happened during the early history of string theory, and why it has attracted so much attention for so long. As for Dawid’s not suggesting that a theory should be declared correct on just such grounds, this is true. But it is also true that the whole point of Dawid’s Bayesian-informed approach is to make the argument that our belief in a theory should be updated, and even tilted toward provisional acceptance, on the basis of non-empirical entailments. This is controversial to say the least, both among philosophers and among scientists.
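The weak sense of “confirmation” at stake here is easy to make concrete with a toy Bayesian update (all the numbers below are mine, and purely illustrative):

```python
def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: posterior credence after an observation, given the
    probability of that observation under 'theory true' and 'theory false'."""
    num = prior * p_obs_if_true
    return num / (num + (1 - prior) * p_obs_if_false)

# Start with a modest credence of 0.10 in the theory.
prior = 0.10

# The "non-empirical" consideration: we discover the theory unexpectedly
# explains a known phenomenon. Suppose such a discovery is more probable if
# the theory is on the right track (0.8) than if it is not (0.2).
posterior = bayes_update(prior, 0.8, 0.2)
```

On these (made-up) numbers the credence roughly triples, yet remains below one half: considerations can “confirm” in the weak, credence-raising sense without coming anywhere near licensing acceptance — which is precisely why Dawid’s proposal to tilt toward provisional acceptance on such grounds is contentious.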

Sean continues: “Nobody is trying to change the rules of science; we are just trying to state them accurately. The multiverse is scientific in an utterly boring, conventional way: it makes definite statements about how things are, it has explanatory power for phenomena we do observe empirically, and our credence in it can go up or down on the basis of both observations and improvements in our theoretical understanding. Most importantly, it might be true, even if it might be difficult to ever decide with high confidence whether it is or not.”

There is a lot to unpack in those sentences. Beginning with the end: again, yes, multiverse theory may be true, but if we will never be able to decide the question on the basis of empirical observation, it simply shouldn’t count as a scientific theory. Nor should it be considered “probably” true, pace Dawid’s Bayesian approach. Indeed, at the moment at least, the notion of a multiverse should be classed as scientifically-informed metaphysics. Too bad that so many scientists recoil from the “m-word,” though.

In terms of not trying to change the rules of science, I beg to differ. Maybe Sean isn’t, but Dawid definitely is. That’s a major point of his book on the subject. The question is whether such change is warranted or not. (I don’t think so.)

Moreover, there seems to me — as a naive external observer to the debate — to be nothing “boring” or “conventional” about the multiverse. It is, rather, a radical theory that would dramatically revise our whole conception of what a “universe” is in the first place!

Here Woit again makes some sharp comments: “[What] Carroll ignores is that the evaluation problem is not just ‘hard,’ but actually impossible, and if one looks into the reason for this, one finds that it’s because his term ‘the theory’ has no fixed reference. What ‘theory’ is he talking about? One sort of ‘theory’ he discusses are eternal inflation models of a multiverse in which you will have bubble collisions. Some such models predict observable effects in the CMB [cosmic microwave background]. Those are perfectly scientific and easy to evaluate, just wrong (since we see no such thing). Other such models predict no observable effect, those are untestable. ‘Hardness’ has nothing to do with it, the fact that there is some narrow range of models where tests are in principle possible but hard to do is true but irrelevant.”

Here we get pretty close to the edge of my competence, and I am going to leave it to Sean, Peter and the rest to evaluate what actual (novel) predictions multiverse theory makes, and whether and how they might be tested. But the more time passes (and this goes for string theory as well), the more the burden of proof rests on defenders of the theory, while the skeptics are increasingly justified in their impatience regarding the current dearth of such tests.

Sean concludes his post by writing that “understanding how science progresses is an interesting and difficult question, and should not be reduced to brandishing bumper-sticker mottos to attack theoretical approaches to which we are not personally sympathetic.”

That is most certainly true, though again I see little evidence of bumper-sticker brandishing. But it is curious to me that he seems to imply that his critics attack string theory and the multiverse because they are not “personally sympathetic” to those notions — not because they honestly see intellectual problems with them. This comes close to poisoning the well, a type of elementary logical fallacy that Sean is usually too careful a thinker to indulge in. Besides, what makes him so confident that he and other defenders of strings and the multiverse aren’t just as personally invested in those notions, and hence subject to more or less unconscious biases? As Carol Tavris and Elliot Aronson memorably put it, “mistakes were made, but not by me.”


Postscript: the term “Popperazzi” appears to have been used first by Leonard Susskind, at least as far back as his 2006 “The Cosmic Landscape,” p. 192 (with thanks to various commenters on Twitter, especially Jim Baggott).

Also, entirely coincidentally, Sabine Hossenfelder has published a (critical) piece on the multiverse as well.

The (process) metaphysics of evolution


Heraclitus, by Hendrick ter Brugghen

Metaphysics isn’t exactly the first thing that comes to mind when someone is thinking about the theory of evolution, especially if that someone is an evolutionary biologist who is reasonably skeptical of any metaphysical claim — like yours truly.

Nonetheless, my Exeter University colleague John Dupré has published a stimulating paper that seeks to apply so-called process metaphysics to modern debates in evolutionary biology (full paper here). The result is intriguing, though that doesn’t mean I’m completely on board with what John writes.

Let’s start with the basics. Dupré defines metaphysics as the branch of philosophy that aspires to provide the most general description of reality. You would think that nowadays that aspiration falls squarely within fundamental physics, except for the fact that fundamental physics — as interesting as it is — is largely irrelevant to most of the other sciences, and “the most general description of reality” can’t be just the description of whatever is at the bottom of reality.

John also argues, and I think he is completely right about it, that:

“Though they may sincerely deny it, scientists are almost inevitably committed to metaphysical opinions. … Metaphysics can be ignored but not escaped.”

If that’s true, then we (philosophers) had better get our metaphysics straight, and we (biologists) had better pay attention to the consequences of our own assumptions on the matter, regardless of whether these assumptions are explicitly stated or not (in fact, particularly if they are not explicitly stated).

The way Dupré goes about this is by applying some philosophical reflection to the scientific work done by biologists, to see if certain metaphysical commitments made by scientists don’t turn out to be incorrect in the light of the scientists’ own work. Specifically, in this paper he explores a very old question in metaphysics: whether the world is composed of things or processes. The first position goes back at least to the Ancient Greek atomists, like Leucippus and Democritus (or, in India, the Jain, Ajivika and Carvaka schools, possibly dating back to the 4th century BCE). The second position was espoused by Heraclitus, and made famous by the Latin version of his dictum, panta rhei, “everything flows.” Nowadays, they are known in philosophy respectively as substance and process metaphysics.

Substance metaphysics implies an ontology of things (as opposed to an ontology of processes), and it has been dominant since the beginning of the scientific revolution in the 17th century. It is connected to the mechanicist approach, where everything is made of things with particular functions, and those things constitute the mechanisms that explain how the world works. Mechanicism in turn implies reductionism: things are made more and more complex from the bottom up, and the causal story is unidirectional.

Contrast this with the more fluid (ah!) process ontology, which rejects both mechanicism and reductionism: what maintains patterns of stability in the world isn’t just the behavior of individual entities, but also the network of relations among patterns and between patterns and the environment in which they are situated. I must confess that I started out, decades ago, as a mechanicist who believed in an ontology of things, just like most scientists do (especially physicists). But it was my own scientific research in evolutionary biology (particularly writing this book) that gradually interested me more and more in an ontology of processes and a less reductionist view of things. Only I didn’t know (until I turned professionally to philosophy) that this was called process ontology, and that it was a well worked out position among metaphysicians.

In his essay, Dupré applies this debate to the nature of biological species (see this paper of mine) and to our conception of evolutionary processes in general. As he clarifies, of course substance ontologists recognize that evolution is a process, but they think it is made possible by the more fundamental existence of things. For process ontologists, by contrast, everything — including living organisms — is a process. There is nothing stable. Panta rhei.

John explains very nicely the contrast between substance and process ontology. Consider the difference between a mountain and a storm. For a substance ontologist, the first one is a stable object, the second a process. But the process ontologist sees both as processes, only at very different time scales: the mountain changes continuously, but it takes millions of years for the change to be noticeable by the human eye. The storm also changes, obviously, but much more rapidly.

Then again, some storms are remarkably persistent, though in a dynamic fashion. Think of the famous Great Red Spot on Jupiter, which has been observed now for hundreds of years. Process ontologists think that a dynamic storm is a better paradigm for living organisms than a mountain. After all, when living beings achieve a stationary state we call them dead. As my professor of biophysics back in college, Mario Ageno (a student of Fermi, article in Italian), used to say: death is a sudden increase in entropy.

Why does it matter to think of organisms as processes rather than things? Dupré suggests two reasons:

“The first is that it motivates a significant shift in emphasis with respect to what stands in need of explanation. The traditional concern for thing-centred ontology is change. I do not expect an explanation of why my desk is very much as I left it when I was last in my office. For a process, on the other hand, persistence requires explanation. Physiology is largely concerned with understanding the multitude of internal processes that enable an organism to stay alive, to maintain its thermodynamic disequilibrium with its environment. … The second reason why the processual status of organisms is important is that it places in the proper perspective the search for mechanistic explanation that is often alleged to be central to the contemporary life sciences. I take a mechanistic explanation to be, very roughly, one that involves identifying a set of constituents of a phenomenon and showing how their actions and interactions combine to generate the phenomenon. There is no doubt that this has been an enormously productive scientific strategy. Nonetheless, from a process perspective the mechanisms postulated by such explanations must always be abstractions from the wider biological context, and this always poses potential limits on their application.”

As a result, the organism should be seen not as a set of interlocking things (cells, sub-cellular components, individual molecules), but as a hierarchy of processes acting at different levels (molecular, physiological, and so forth).

If all of this sounds a bit abstract, that’s okay, it’s metaphysics, after all. To make things more concrete, John goes through a fairly in-depth analysis of the question of what evolves, i.e., what is the unit of evolution, in the process making some important clarifications on the nature of biological species.

It is an accepted truism in biology that individuals do not evolve, populations do. Then again, what counts as an individual? This is a very broad and controversial question, but let’s focus on just one aspect of it: are species themselves “kinds” or individuals? Most biologists, I’m guessing, would answer that of course species are kinds, meaning categories to which a number of individual things belong, if these things satisfy certain criteria. For instance, Homo sapiens is a kind, to which all organisms that are statistically characterized by a certain type of genome, certain physiological, anatomical, developmental, and even cultural attributes, belong. Pan troglodytes, a species of chimpanzee, is an evolutionarily related kind, to which all organisms satisfying certain other characteristics belong. And so forth.

However, most philosophers of biology accept some classical arguments put forth by Michael Ghiselin and David Hull, according to which species are not kinds, but rather individuals. If one accepts modern cladistic systematics, species are individuals identified by branches on a phylogenetic tree. This possibility was actually first brought up by a paleontologist, Steven Stanley, in a paper entitled “A theory of evolution above the species level” (pdf here).

Dupré accepts Ghiselin and Hull’s account, with some provisos inspired by his endorsement of process ontology. The most important of these provisos is that a branch of a phylogenetic tree is, of course, a process, not a thing. This neatly resolves a long standing problem for the species-as-individuals view: species change over time, but we said above that individuals do not evolve, populations do, so what gives? The answer is that species are individuals in the same sense that organisms are, even though organisms develop and change in the course of their short lifetime, while species change over longer evolutionary times. In both cases, we are talking about dynamic processes, not static things. That also explains why species typically have fuzzy boundaries: do you expect a thunderstorm to have a sharp and neat boundary, an exact point beyond which it is no longer a storm?

It is important to note, as John does in his paper, that not all processes generate individuals. Geological erosion is a process, but it doesn’t turn up anything like biological species. In order to do that, one needs a stabilizing process. In biology a major, though not the only, stabilizing process is natural selection, which tends to keep variants that are (locally, both spatially and temporally) sufficiently suited to the environment.

One consequence of this view of species as organisms that result from the stabilization of processes is that for most of the evolution of life on Earth there was no such thing as species-as-individuals. Bacteria are not stabilized in the required fashion, because their lineages do not actually identify species at all. And they don’t because they lack sexual reproduction, another major source of stabilization in multicellular species. John sees, correctly I think, the emergence of sex as making possible the emergence of species as individuals.

There is a lot more in the paper, where the author touches on a panoply of fascinating ideas in modern biology, from niche construction to epigenetic inheritance, from parental care to the evolution of culture. John’s treatment of the subject has wide ranging consequences, as he summarizes near the end of the paper:

“If species are what evolve, we should not, for this reason, expect quite general accounts of evolution. The Modern Synthesis [i.e., the increasingly less dominant standard model in biology], specifically, may be more or less true for some kinds of species, but quite inadequate for others. If species have evolved new forms of evolvability [because of the invention of sex], this is surely to be expected. Evolvability of many populations may just be a summative property of organism properties, but as species become integrated processes it is plausible that evolvability might emerge as a specific capacity of lineages.”

Evolutionary mechanisms themselves, in other words, evolve. And they do so because everything in biology is a process, not a thing.

Book Club: Darwin’s Unfinished Symphony, 4, intelligence and high fidelity

Let us resume our discussion of Kevin Laland’s excellent Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by tackling the first two chapters of the second part of the book: 6, on the evolution of intelligence, and 7, on high fidelity (of copying behaviors). Just to give you a heads-up, we are then left with five more chapters, likely each deserving its own post: why we alone have language, gene-culture coevolution, the dawn of civilization, foundations of cooperation, and the arts. The best is yet to come, and we will be at this for a while. But I assure you, it is worth it.

The chapter on the evolution of intelligence (#6) opens with a brief explanation of Allan Wilson’s observation, back in the ‘80s, of an interesting relationship between an animal’s brain size and the time since it had shared a common ancestor with humans: over the last 400 million years animal brains have grown 100-fold, and the growth rate has accelerated over time, a potential indication of a feedback mechanism at work.

Wilson proposed a three-step hypothesis, the so-called cultural drive scenario, to explain the evolution of intelligence:

1. A new advantageous habit arises in an individual through behavioral innovation.

2. The new habit spreads throughout the population by way of social learning.

3. Selection favors mutations that augment either the likelihood of innovation or the ability to engage in social learning.

Moreover, the expectation — which has since been empirically verified — was that new habits generate selection for changes in the animal’s anatomy that are better suited to the new behavior. Indeed, a good portion of this chapter is devoted to fleshing out Wilson’s original hypothesis, as well as to line up empirical evidence for it. (Laland was supposed to work with Wilson as a postdoc in 1991, but Wilson tragically died a month before Laland’s arrival.)

One of the issues in this discussion is that, quite obviously, large brains are not necessary for animals to take advantage of social learning — think of eusocial insects like bees and ants. The idea, however, is that larger brains evolved in primates because they make social learning increasingly efficient, and that trait is favored by natural selection.

Better learning — for instance through copying — in turn requires better perceptual systems, so that it is easier for an animal to copy another’s behavior (e.g., a fishing technique) at a safe distance. This, in turn, mandates an increasingly sophisticated type of integration across perceptual systems, as well as a precise mapping of sensory inputs to behavioral outputs:

“To copy a fishing technique, for instance, the observer’s brain must convert a stream of visual information about how others move their hands and arms into corresponding outputs specifying how the observer must also move its muscles and joints. … Selection for copying proficiency might plausibly favor the evolutionary expansion of circuitry linking the visual and auditory cortex to somatosensory and motor cortex regions of the brain.” (p. 129)

Moreover, effective copying might also promote the evolution of what is often referred to as a “theory of mind,” meaning the ability of an animal to form an idea of the intentions, beliefs, and desires of other members of its own species. Not only that, but the cultural drive hypothesis also predicts that large brained animals should evolve pro-social traits (like tolerance for other members of the group, and cooperation), because learning requires allowing especially the young to hang around so that they can observe and copy advantageous behaviors.

Sure enough, it is known, for instance, that young chimpanzees hang around their mothers well after they are physically independent, until age seven or so, presumably so that they can learn by observing, copying, and experimenting with her behaviors. Notice also that in most animals the innovators are adults with a lot of experience, another reason for the young ones to hang around as much as possible.

Larger brains, of course, are a necessary byproduct of the need for better and better cognition at so many levels. One of these levels has to do with the need to keep track of social interactions in increasingly larger groups, and an interesting observation is that the rate of “tactical deception” of other members of the group correlates positively with brain size in primates: the smarter we get, the more Machiavellian we become.

One of the studies conducted by Laland’s group that I found particularly interesting is a test, across species, of the standard evolutionary psychology hypothesis that minds are highly modular, i.e., that our brains work like cognitive Swiss Army knives, with each module evolved very specifically for a particular task or behavior. No such thing. A study of 62 primate species showed just one principal component explaining a very large amount of the variance in cognitive measures, with species excelling at one domain (say, innovation, social learning, tool use, etc.) also excelling at all others. Not that I sense any indication whatsoever that evolutionary psychologists have taken such results on board, of course. Moreover:

“Those species that were designated ‘smart’ by our statistical measure of primate intelligence turned out to be precisely those species that had performed well in laboratory-based experimental tests of learning and cognition.” (p. 138)

And just to make crystal clear what Laland thinks this means for evolutionary psychology:

“Such findings are inconsistent with the view, widespread within evolutionary psychology, that cognitive abilities evolve independently as separate modules, and the results strongly imply general intelligence [across primate species].” (p. 139)
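For readers who want to see what such a one-component structure looks like, here is a toy demonstration on simulated data (emphatically not the actual 62-species dataset): if species that excel in one cognitive domain tend to excel in all, a single principal component dominates the variance.

```python
# Toy demonstration with simulated data: a shared "general
# intelligence" factor across cognitive domains produces one
# dominant principal component.
import numpy as np

rng = np.random.default_rng(0)
n_species, n_domains = 62, 5

# One underlying general factor per species, plus small
# domain-specific noise.
g = rng.normal(size=(n_species, 1))
scores = g + 0.3 * rng.normal(size=(n_species, n_domains))

# Principal components from the eigenvalues of the covariance matrix.
cov = np.cov(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()
print(f"variance explained by first component: {explained[0]:.0%}")
```

In real data, of course, the inference runs the other way: the empirical finding of one dominant component is evidence for a shared underlying factor.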

The range of brain size (both in absolute and relative terms) in primates is huge, going from 3 g in fat-tailed dwarf lemurs to 1.5 kg in humans. The overwhelming majority of the evolutionary increase is due to development of the neocortex, known in humans to be the locus for problem solving, learning, planning, reasoning, and language. And — no surprise here — our species possesses by far the largest (both in absolute and relative terms) and best connected neocortex of all primate species.

Why would such an increase in brain size, and particularly in the neocortex, be favored in primate lineages? Because productivity increases dramatically with age, once an animal has had sufficient time to learn a range of adaptive behaviors, so long as there is a reliable inter-generational flow of information, from older to younger individuals. Furthermore, mathematical models have shown that selection favors lower mortality rates and longevity when productivity increases over time the way it does in primates:

“By virtue of possessing a big brain, and being able to use it to acquire all kinds of useful survival skills from others, some clever primates had apparently been able to extend their lifespans and live longer. In other words, in primates and primates alone, cultural intelligence facilitates survival.” (p. 147)

Next, let us briefly discuss chapter 7, on “high fidelity.” It opens with a darn good question:

“If cultural drive has operated on all the great apes and some monkeys too, then why haven’t gorillas invented particle accelerators? Why haven’t capuchins put a monkey on the moon, or devised a simian version of Facebook?” (p. 150)

Even though I don’t think a simian version of Facebook would be an improvement on things, and even though some people may be inclined to dismiss the very question as framed by Laland, this does get to the core of the issue of cultural evolution addressed by the book. Why indeed?

Kevin considers a number of proposed answers. Maybe it was just chance. As he admits, chance explanations are hard to rule out, but they are also unappealing and hard to take seriously, when one is talking about such a striking difference between Homo sapiens and everything else, a difference predicated on a large brain that consumes a hefty portion of our daily caloric intake. The answer must lie elsewhere.

Maybe it was demographics: once our species reached a certain population threshold cultural information became less likely to be lost, and it started to accumulate. But as Laland quickly observes, there are plenty of other species with large population sizes, and they haven’t invented vaccines or drafted declarations of individual rights.

Instead, some tantalizing clues come from mathematical modeling exploring how many “cultural parents” are necessary for stable cultural transmission. The details are intriguing, and I refer the reader to the book chapter for much more, but the bottom line is that a small increase in the fidelity of social learning makes a huge difference for the stability of cultural traits, and beyond a certain threshold of fidelity it makes them essentially immortal. High-fidelity mechanisms also support a far greater accumulation of cultural knowledge over time, so that a species’ repertoire becomes both more reliable and much larger.
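To get a feel for the threshold effect, here is a minimal sketch (my own illustration, not one of the models discussed in the book), assuming a trait survives each generation if at least one of n demonstrations from “cultural parents” is copied faithfully:

```python
# Toy model (not Laland's actual model): expected persistence of a
# cultural trait when each new learner observes n "cultural parents",
# each transmitting the trait with fidelity f.

def expected_lifetime(f, n):
    """Expected number of generations before the trait is lost,
    assuming it survives a generation if at least one of n
    demonstrations succeeds (probability 1 - (1-f)**n)."""
    p_survive = 1 - (1 - f) ** n
    return 1 / (1 - p_survive)  # mean of a geometric distribution

for f in (0.5, 0.7, 0.9, 0.95):
    print(f"fidelity {f:.2f}: ~{expected_lifetime(f, 3):,.0f} generations")
```

In this toy version, with three cultural parents, raising fidelity from 0.5 to 0.95 stretches the expected lifetime of a trait from 8 generations to 8,000: a modest change in fidelity, a thousandfold change in persistence.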

Why primates, however, and humans in particular? Because, as it turns out, most other species operate on the basis of low fidelity cultural transmission, including birds, fishes, and insects. As a result, most animal species either have no cultural traditions at all, or are characterized by so-called “lightning traditions,” i.e., patterns that persist in a population only for an evolutionarily short period of time and then are lost, possibly re-invented, and lost again. No cumulative process there.

“Across thousands of simulations with different parameter values, trait fidelity explained more of the variance in the buildup of cumulative culture than novel invention, modification, and combination combined.” (p. 156)

One of the interesting, and perhaps counterintuitive, findings here is that innovation is the least important of the pertinent cultural traits. That’s because cultural evolution takes place largely through continuous, cumulative, alterations on previous designs, reworking and further developing pre-existing technology. The myth of the heroic inventor is, largely, a myth.

Not only that, but teaching — defined by Kevin as behavior that functions to enhance the fidelity of information transmission between tutor and pupil — turns out also to be a crucial factor in ever-accelerating cultural evolution. Even though there is controversy about this point, there are precious few solidly documented cases of teaching outside the human species, while the behavior is universal — and in some cases highly refined — across human populations. Moreover, the known examples of animal teaching do not, in fact, show up where one would expect them to:

“Animal behaviorists have now compiled evidence for teaching in a small and rather curious assortment of species; these include meerkats, ants, bees, and two species of birds called pied babblers and superb fairy wrens; with suggestive, but not yet conclusive, evidence in cats, cheetahs, and tamarin monkeys. The functional similarities between teaching in, say, ants and humans should not [however] obscure the fact that mechanistically, cases of teaching in other animals are entirely different from human teaching, and are reliant on entirely separate psychological and neural processes. … We were struck by the observation that no compelling evidence for teaching had been found in nonhuman apes, dolphins, elephants, or other large-brained mammals celebrated for their intelligence.” (p. 161-162)

Some findings about teaching in the animal world — based on computer modeling — are intuitive, for instance that it is favored by selection the more tutor and pupil are genetically related to each other. But other findings are both counterintuitive and enlightening: teaching is favored when it concerns tasks of intermediate difficulty, not too easy, not too difficult. Why? Because if a task is very easy then the individual is likely to learn it on its own, no teaching required — which in turn explains why a lot of intelligent species do not engage in teaching. By contrast, if a task is too difficult, then there simply won’t be enough teachers available to transmit it efficiently to the next generation. Comparatively few traits are found in the right intermediate range of difficulty, which is probably a major part of the explanation for the rarity of teaching in nature. There is an important lesson to be learned here:

“The incidence of teaching only appeared puzzling because we had the wrong intuitions. We had expected teaching to be exhibited by clever animals that are good at social learning. In fact, with some caveats that we will come to, smart animals rarely need to teach, because most of their skills can be picked up through copying or trial and error.” (p. 165)
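The easy/hard trade-off is simple enough to caricature in code. In this sketch (my own toy model, not one from the book), the benefit of teaching is the product of two probabilities that pull in opposite directions as task difficulty grows:

```python
# Toy sketch (my illustration, not Laland's actual model) of why
# teaching pays off only for tasks of intermediate difficulty:
# easy tasks are learned alone, hard tasks leave too few teachers.
import math

def teaching_benefit(d):
    """Relative benefit of teaching a task of difficulty d (d > 0).
    p_teacher: chance an adult has mastered the task (falls with d).
    p_needed:  chance a pupil fails to learn it alone (rises with d)."""
    p_teacher = math.exp(-d)
    p_needed = 1 - math.exp(-d)
    return p_teacher * p_needed

difficulties = [0.1 * k for k in range(1, 50)]
best = max(difficulties, key=teaching_benefit)
print(f"teaching benefit peaks at intermediate difficulty d = {best:.1f}")
```

The exact functional forms are made up, but any pair of curves with these opposite slopes will put the peak somewhere in the middle, which is the qualitative point of the modeling results Laland describes.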

The attentive reader, however, will have picked up on the fact that human beings seem to flagrantly violate the above rules: we teach very, very complex behaviors indeed! Laland and his collaborators began to suspect that it was cumulative knowledge that had made the difference. Interestingly, once the mathematical models allowed for cumulative knowledge, the results clearly showed that teaching was favored, as the relative fitness of teachers to non-teachers was almost always higher under those circumstances:

“Models that allow for cumulative cultural knowledge gain, however, suggest that teaching evolved in humans despite, rather than because of, our strong imitative capabilities, and primarily because cumulative culture renders otherwise difficult-to-acquire valuable information available to teach. The analyses suggest that human teaching and cumulative culture evolved together, through mutual reinforcement.” (p. 167)

Yet another factor facilitating the evolution of teaching is cooperative breeding, when young ones are raised by a group of relatives, instead of just the parents, or one of the parents. That’s because cooperative breeding makes the fitness cost of being a teacher surprisingly low, thus favoring the behavior. Finally, experimental results show that human children far outperform other primates at cognitive tasks in part because they cooperate with each other in the solution of problems, engaging in a high number of “altruistic events” during the time a group is occupied with a given task. The conclusion, so far in the story, is then that:

“Humans alone possess cumulative culture because humans alone possess sufficiently high-fidelity information transmission mechanisms, including an unusually accurate capacity for imitation, teaching, and language.” (p. 174)

Genotype-phenotype mapping and the genes as blueprint metaphor

One of my occasional series here at Footnotes to Plato is devoted to accessible (or so I hope) versions of some of my technical papers. The idea is to provide a taste of academic scholarship in philosophy of science, but in a form that can be read by more than a few dozen colleagues who specialize in the same exact area. So far, I’ve covered all suitable entries from 2013 to 2017. I don’t plan to go back to the very beginning of my career (for a good chunk of which I was publishing as an experimental biologist anyway), but only to 2010, the year after I moved to philosophy full time. By my reckoning, that means four papers left: on genotype-environment interactions and the “genes as blueprints” metaphor (2010), on the machine-information metaphor in biological research and education (2011), on the metaphor of adaptive landscapes (2012), and on paradigm shifts in evolutionary biology (also 2012). Yeah, that was my “metaphors” period. This entry is dedicated to the first paper mentioned, and over the next few months I’ll cover the remaining three.

The underlying question being considered here is: what is the relationship between genotypes and phenotypes? This question has marked the evolution of evolutionary theory ever since the rediscovery of Mendel’s work at the beginning of the twentieth century, which immediately generated an apparent conflict with the Darwinian view of gradual evolution. Famously, the answer proposed by the architects of the Modern Synthesis (i.e., the “standard model” in evolutionary biology from the 1940s on) is that genes determine phenotypes, as in the oft-cited metaphors of a “genetic blueprint” or a “genetic program.” This sort of answer bypasses the process of development, which is treated as an incidental black box with no direct causal relevance to the evolutionary process. Given this conceptual framework, it is no wonder that developmental biology was famously left out of the Modern Synthesis, and that it has (partially) re-emerged only recently within the so-called “evo-devo” approach.

In the paper, published in the Philosophical Transactions of the Royal Society (2010), I re-examine the question of the relationship between genotype and phenotype by going back to Alberch’s concept of a genotype-phenotype (G>P) “map” and examine what recent research tells us about actual G>P maps. It turns out that computational and empirical studies of three classes of systems (RNA folding, protein function and software development — though in this post I will talk only about the first and last cases) have yielded important generalizations about the problem, as well as novel insight into the evolutionary process more broadly. One of the consequences of these lines of research is that the blueprint metaphor is untenable and in fact positively misleading, and should be replaced by the concept of developmental encoding.

Back in 1991, Alberch introduced a metaphor differing from the standard blueprint view of genetic encoding, talking about a G>P “mapping function,” defined by a given parameter space and at least potentially amenable to mathematical description.

Alberch derived four general conclusions from his conceptualization of the G>P map: (i) the map is (much) more complex than a one-to-one relation between genotype and phenotype, which means that the same phenotype may be obtained from different combinations of genetic informational resources; (ii) the area in parameter space where a particular phenotype exists gives an indication of how stable that phenotype is likely to be; (iii) the parameter space is marked by “transformational boundaries,” i.e. areas where a small change in one or more developmental parameters will cause the transition from one phenotypic state to another; and (iv) the phenotypic stability of a given population will depend on which area of the parameter space it occupies, and in particular whether it is close to a transformational boundary or not.

Alberch’s famous example of a phenotypic transition that is amenable to be described according to his idea of parameter space and mapping function was the evolution of the number of digits in amphibians. In particular, he showed how salamanders tend to lose their fifth toe every time the digit reduction evolves, while anurans tend to lose their first digit. The difference between the two groups can be recreated experimentally by administration of an inhibitor of cell division, a result that Alberch interpreted as telling us that anurans and salamanders find themselves in different areas of the parameter space, and in particular that they are located near different transitional boundaries, so that every time the transition happens within one of the two groups it occurs by the same developmental means, but when the two groups are compared the transitions happen by different developmental routes.
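Alberch’s picture can be caricatured in a few lines of Python. The threshold values below are entirely hypothetical, chosen only to convey the idea: the same perturbation, applied to lineages sitting near different transformational boundaries, removes different digits.

```python
# Toy illustration of Alberch's G>P "mapping function" (all numbers
# are made up): phenotype is a step function of a continuous
# developmental parameter, so lineages near different
# "transformational boundaries" lose different digits under the
# same experimental perturbation.

def digits_formed(proliferation, thresholds):
    """Which digits form, given a cell-proliferation parameter and the
    (group-specific) parameter level each digit requires to develop."""
    return [i + 1 for i, t in enumerate(thresholds) if proliferation >= t]

# Hypothetical parameter requirements for each of the five digits:
salamander = [0.1, 0.2, 0.3, 0.4, 0.75]  # digit 5 sits near the boundary
anuran     = [0.75, 0.2, 0.3, 0.4, 0.5]  # digit 1 sits near the boundary

for name, th in (("salamander", salamander), ("anuran", anuran)):
    print(name, "normal:", digits_formed(0.8, th),
          "inhibitor:", digits_formed(0.7, th))
```

Within each group the same transition happens by the same route every time, but the two groups, occupying different regions of the parameter space, lose different digits, which is exactly the pattern Alberch reported.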


A salamander of the genus Ambystoma, a classic model system for evo-devo studies

A good starting point to tackle the G>P mapping problem is to start simple, and the simplest place to start is the growing literature on RNA folding. RNA folding is relatively well understood at a chemical-physical level, with increasingly sophisticated computer models capable of predicting the three-dimensional folding of a linear sequence of nucleotides based on thermodynamic considerations. Moreover, it is relatively straightforward to verify such predictions experimentally for a subset of simulated folding patterns, and researchers can even carry out competition experiments among RNA molecules for a given catalytic function.

As far as the G>P problem is concerned, the step from genotype to phenotype is in this case as short as is possible in any biological system, and indeed probably reflects something of the ancestral situation in the RNA world hypothesized in the context of the origin-of-life problem. RNA folding is therefore both an extremely suitable system with which to begin examining G>P mapping and one that may yield important clues to how mapping functions historically got started and became more complex and indirect. A crucial advantage of RNA folding studies of G>P mapping is that the fitness function of the molecules is not arbitrarily assumed to follow a particular statistical distribution, but can be studied empirically. In other words, the connections between genotype and phenotype on the one hand, and between phenotype and fitness on the other, are explicit, relatively simple, and biologically meaningful.

Several important generalizations have emerged from studies of RNA folding, generalizations that are crucial to our understanding of phenotypic evolution beyond the relatively simple framework offered by the Modern Synthesis. Consider, for instance, the study of mutational networks, i.e., of the structure of the genotypic landscape in terms of the one-mutation steps surrounding a given focal genotype. The idea goes back to Kauffman & Levin’s work on genotypic landscapes in 1987. The problem to be tackled is how evolution explores phenotypic landscapes by moving across a corresponding genotypic landscape in a non-saltatory manner, as standard Darwinian theory requires. The solution requires an understanding of the connection between the genotypic and phenotypic landscapes, and in the case of RNA folding one can actually explore computationally the totality of both landscapes for a given short sequence length, or statistically sample the properties of landscapes defined by longer sequences.

For instance, all 30-nucleotide-long binary RNA molecules together comprise about one billion unique sequences, a bewildering genotypic space. This space, however, corresponds to only 220,000 unique folding shapes in the Guanine/Uracil nucleotide landscape, and a mere 1,000 shapes in the Adenine/Uracil landscape, the two situations that have been studied extensively. This is a spectacular example of what biologists call “degeneracy” (i.e., redundancy of sequence coding), which in turn is a fundamental concept underlying the neutral theory of molecular evolution — according to which most (but, crucially, not all) mutations are selectively neutral at the molecular level.
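The many-to-one collapse from sequences to shapes can be illustrated with a cartoon. The “folding” rule below is invented purely for the purpose (real RNA folding is a thermodynamic computation, not a mirror-pairing rule), but it shows how an exponentially large genotype space maps onto a far smaller shape space:

```python
from itertools import product

def toy_fold(seq):
    """Cartoon 'fold': pair position i with its mirror n-1-i whenever
    the two bases can pair (G with U), as in a hairpin read outside-in.
    The dot-bracket string stands in for the folded shape."""
    n = len(seq)
    shape = ["."] * n
    for i in range(n // 2):
        j = n - 1 - i
        if {seq[i], seq[j]} == {"G", "U"}:
            shape[i], shape[j] = "(", ")"
    return "".join(shape)

n = 12
genotypes = ["".join(s) for s in product("GU", repeat=n)]
shapes = {toy_fold(g) for g in genotypes}
print(len(genotypes))  # 4096 distinct sequences (2**12)
print(len(shapes))     # only 64 distinct shapes: massive degeneracy
```

Each of the six mirror pairs is either paired or not, so 4,096 genotypes collapse onto 2^6 = 64 shapes; scaled up to 30 nucleotides, the same logic yields the billion-to-thousands collapse just described.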

Genotypes on these landscapes are connected by mutational networks whose properties can then be explored. An interesting result is that the distribution of phenotypes on RNA mutational networks follows regular patterns, characterized by a few abundant RNA shapes and a large number of rare ones. The structure of the landscape is such that evolution can explore most or all of the common structures through one-step mutations that preserve structure while moving the population along a neutral path of constant fitness, until it bumps into a novel phenotype with higher fitness. Interestingly, most genotypes turn out to be located within a few mutational steps of most of the common phenotypes in the landscape, making it predictable that such phenotypes will in fact be found by natural selection in a relatively short period of time. However, the connectivity of the landscape is always asymmetrical, which means that which particular phenotypes will be reached more easily, starting from a given genotype, is a matter of historical contingency.

Research on the general properties of RNA folding evolution has shown that the G>P function is such that small movements in genotypic space do not necessarily correspond to small movements in phenotypic space, a rather flagrant contradiction of one of the standard assumptions of the Modern Synthesis. In particular, if we consider a genotype G with a given phenotype P, it is likely that G is connected to a one-step neighbor associated with a phenotype that is not structurally similar to P. This brings us to a rather surprising general behavior that emerges from studies of RNA folding (as well as of protein function, micro-organisms, and simulated systems — as discussed in the rest of the full paper): a true “punctuated equilibrium” pattern of evolution that does not result from the usual suspects in terms of underlying causes.

Punctuated equilibrium, of course, was one of the early challenges to the Modern Synthesis brought about by palaeontologists Eldredge & Gould back in 1972. The standard explanation for the fossil record pattern of stasis punctuated by occasional rapid shifts in phenotype is that of stabilizing selection. Simulations of RNA folding evolution display the same general pattern that one sees in the fossil record, obviously at a much smaller temporal scale. The mechanism, however, has nothing to do with “stabilizing selection” (a rather vague concept in itself, really simply a way to describe a statistical pattern of constant mean and reduced variance). Rather, the punctuated evolution results from the fact that the population divides itself into smaller chunks, each of which explores a portion of the largely neutral genotypic landscape. From time to time, a population encounters a new phenotypic optimum and “jumps” on it quickly. Stasis, in this context, is then not the result of selection for a constant phenotype, but rather of the largely neutral structure of the landscape, which allows populations to wander around until they find a new functional phenotype and jump into a nearby neutral network, only to resume their evolutionary wanderings.
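This stasis-plus-jumps dynamic can be reproduced with a deliberately crude simulation. Everything below (the binned bit-count “phenotype,” the fitness function, the parameter values) is invented for illustration and not drawn from the RNA literature; the point is only that neutral drift plus rare boundary crossings yields long plateaus punctuated by sudden jumps, with no stabilizing selection anywhere in the model:

```python
import random

random.seed(1)

L = 20

def phenotype(g):
    # Toy many-to-one G>P map: bin the number of 1-bits, so that many
    # genotypes (a "neutral network") share the same phenotype
    return sum(g) // 5

def fitness(p):
    return p  # higher phenotype bin = higher fitness, for illustration

g = [0] * L
history = []
for step in range(2000):
    mutant = g.copy()
    mutant[random.randrange(L)] ^= 1      # one-mutation neighbor
    # accept neutral or beneficial moves: drift along the neutral network
    if fitness(phenotype(mutant)) >= fitness(phenotype(g)):
        g = mutant
    history.append(fitness(phenotype(g)))

# fitness changes only a handful of times in 2000 steps: stasis + jumps
jumps = sum(1 for a, b in zip(history, history[1:]) if b > a)
print(jumps)
```

Most accepted mutations are neutral wanderings within a bin, so the recorded fitness stays flat for long stretches and then jumps when the walk happens to cross a bin boundary.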

RNA-like systems can also serve as a model for the evolution of ecological communities, thereby beginning to forge a direct link, still surprisingly lacking, between ecology and evolutionary biology. For instance, Takeuchi & Hogeweg, in 2008, showed that a population of replicators originally made up of just one genotype evolves into a complex system characterized by four functionally distinct groups of genotypes, which the authors call “species.” Interestingly, the model also evolved “parasites,” which not only were able to coexist with the catalytic molecules, but were in turn themselves catalysts for the evolution of further complexity in the system. While Takeuchi & Hogeweg’s definition of species in this context may appear artificial, the groups of genotypes they identified are in fact both ecologically (functionally) distinct and genealogically related to each other, and a functional-genealogical concept is certainly one of the viable contenders as a definition of biological species.

The examples drawn from research on RNA folding (as well as those not discussed here, on protein sequence space) help bring to the forefront a major limitation of the Modern Synthesis: the almost utter disregard for developmental biology.

Notoriously, that field was essentially left out of the synthesis of the 1940s that gave us the current structure of evolutionary theory. Part of the reason for this is that it has never been conceptually clear what exactly the role of development in evolution is. Mayr, a founding father of the Modern Synthesis, famously made a distinction — harking back to Aristotle — between proximate and ultimate causes in biology, with the genetic bases of phenotypes counting as proximate causes and the evolutionary processes that brought those phenotypes about considered as ultimate causes (see this post for an in-depth discussion). Even if one accepts Mayr’s framework, however, it is not clear whether development should be considered a proximate or an ultimate cause.

The onset of evo-devo and calls for an Extended Synthesis in biology (see this previous post) have reopened that question. The answer is emerging from research on the structure of G>P maps, and in particular from a parallel literature in computational science that attempts to exploit the characteristics of biological development to produce a new generation of “evolvable hardware.” The picture that is forming out of these efforts is that development is a necessary link between proximate and ultimate causality, and that in a sense the G>P map is whatever specific type of “developmental encoding” (as opposed to the classic genetic encoding) a given species of organism uses to produce environmentally apt phenotypes. Developmental encoding refers to situations where information encodes not a detailed description of the full system (as in the blueprint metaphor), but rather the local steps necessary to build the system through a developmental process.

Several authors have pointed out the limitations both of the direct genetic encoding of “information” and of the blueprint metaphor that results from it. Ciliberti and collaborators, in a 2007 paper, have for instance referred to human-engineered systems as being characterized by “brittleness,” i.e., the unfortunate property that if one component ceases functioning properly, there is a high probability that the whole system will unravel. This is most clearly not what happens with biological organisms, which means that the oft-made analogy (ironically, by both some biologists and proposers of intelligent design creationism) between living organisms and “machines” or “programs” is profoundly misleading. Along similar lines, Stanley, also in 2007, reiterated that the amount of direct genetic information present in, say, the human genome (now estimated at around 30,000 protein-coding genes) is orders of magnitude below what would be necessary to actually specify the spatial location, functionality, and connectivity of the trillions of cells that make up a human brain. The answer must lie in the local deployment of information made possible by developmental processes, where the “instructions” can be used in a way that is sensitive (and therefore capable of adjusting) to both the internal and external environments.

According to Hartmann and colleagues (in another 2007 paper), artificial development is increasingly being used to solve computational problems outside of biology by direct analogy with biological systems. The results indicate that replacing direct genetic encoding with indirect developmental encoding dramatically reduces the search space for evolutionary algorithms. Moreover, the resulting systems are less complex and yet more robust (“fault-tolerant” in engineering jargon) than those obtained by evolving standard genetic algorithms. Another way to put the point is that direct genetic encoding is limited by the fact that the length of the genetic string grows proportionally to the complexity of the phenotype, thereby quickly encountering severe limitations in search space. With developmental encoding, instead, the evolving system can take advantage of a small number of genetic instructions mapping to a large number of phenotypic outcomes, because those outcomes are determined by the (local) interactions among parts of the system and by interactions of the system with the environment.

Simulations comparing the evolution of standard genetic systems of information encoding with systems based on developmental encoding clearly show that genetic systems reach a maximum level of fitness for low levels of complexity; at higher levels of complexity developmental encoding “scales” much better, with developmental systems being capable of achieving high fitness more quickly and efficiently. Moreover, developmental encoding leads to the artificial evolution of systems that are both significantly more robust to internal disruptions and significantly more flexible in response to external environmental conditions than standard genetic systems. This is an interesting situation whereby a research area parallel to evolutionary biology, computational science, draws inspiration from the actual structure of biological systems and ends up providing a theoretical underpinning for why, in fact, those biological systems are structured the way they are.

In conclusion, the conceptual and mathematical foundations of evolutionary theory are evolving from a simple beginning as bean-bag genetics, Mayr’s derogatory term for population genetics theory, to a sophisticated patchwork that draws from population genetics, quantitative genetics, bioinformatics and computational science. Medawar & Medawar, in 1983, famously said that “genetics proposes, epigenetics disposes,” where epigenetics here means the whole of developmental processes, a way to highlight that evolutionary theory finally needs a good conceptual understanding of development, and not just of genetics. As I have argued in the paper referenced here, such a broadened theoretical framework cannot come from population genetics alone, but benefits from the input of computational research both on simple biological examples of G>P maps, such as those underlying RNA folding and protein function, and from consideration of broader issues such as the properties of large neutral networks in genotypic space (see full paper) and of developmental versus genetic-encoding systems.

Book Club: Darwin’s Unfinished Symphony, 3, fish tales and creativity

Threespine and Ninespine sticklebacks

Continuing our discussion of Kevin Laland’s Darwin’s Unfinished Symphony, on the evolution of culture, I am going to briefly cover “A tale of two fishes” (ch. 4) and “The roots of creativity” (ch. 5). Together with the chapters we have already discussed, they complete the first part of the book, devoted to the foundations of culture. (After this, we’ll move to the chapters in the second part, on the evolution of the mind.)

Chapter 4 is devoted primarily to research conducted over a period of two decades by Laland’s own lab, focusing on the contrast in behavior between two small species of fish, the threespine and the ninespine sticklebacks. The reason for working with this sort of experimental animal is that if one is interested in social evolution, then one needs to set up replicates of entire populations. Logistically, this is going to be impossible for large vertebrates, especially mammals, but it is eminently feasible with fish. Sticklebacks are a well-studied group of 16 related species, common in rivers, streams, and coastal regions of the Northern hemisphere. Evolutionarily speaking, they are closely related to seahorses.

Laland’s lab focused on sticklebacks’ use of public information, i.e., on how they socially learn from other members of their own or even of other species. When they started the research project, the consensus was that use of public information required a high degree of intelligence on the part of the animal. It turns out that was definitely not the case, thus providing another important piece of the cultural evolution puzzle. Chapter 4 details lots of fascinating experiments with these two species of sticklebacks, but I will summarize only the basic stuff, leaving it to the interested reader to dig deeper.

The basic setup is one in which an aquarium is divided into compartments. In one area some fish of either species are being fed at a high rate (“rich patch”); in a second one they are being fed at a lower rate (“poor patch”); and in a third one they can observe their fellow species members feeding before being allowed to do so themselves: “if the sticklebacks were capable of public-information use, they would be able to distinguish between the rich and the poor patch based solely on the reactions of the demonstrators to the food.” (p. 80).

Interestingly, the ninespines were apparently able to use public information and, when allowed access, swam preferentially to the rich patch. The threespines, by contrast, showed no preference, indicating that the observers in that species had not learned from the demonstrators. Why the difference?

Laland’s group performed several follow-up experiments aimed at eliminating a number of simple explanations, such as that perhaps the demonstrators of one species were not as good as the demonstrators of the other, or that there was an inter-specific difference in the visual acuity of the fish, or maybe some of the relevant cues were olfactory, not just visual. None of that was the case.

“We began to believe that what we had discovered might genuinely be an adaptive specialization in social learning, with ninespines capable of exploiting public information, while their close relatives, the threespines, were not.” (p. 81)

The answer turned out to have to do with the relative cost of social and asocial learning. The cost of asocial learning is different for the two species, because of differences in their anatomical structures. The threespine stickleback has large spines, which are very effective against predation, so much so that often the fish gets stuck in the predator’s mouth, and is forcefully rejected instead of being eaten. This is not the case with the ninespine stickleback, whose spines are more numerous but much smaller and less robust, and therefore not as effective an anti-predator device. Threespines don’t need to engage in public information use because they can afford to explore the various patches and learn on their own. That approach, by contrast, is very dangerous for the ninespines, which accordingly evolved the more advantageous habit of learning socially by observing others. Interestingly:

“The ninespines’ behavior is precisely that predicted by a sophisticated evolutionary game theory analysis conducted by an economist in order to understand human behavior.” (p. 89)

Moreover, comparative research conducted on 50 populations sampled from 8 species belonging to 5 genera showed that only the ninespines and their closest relatives, the brook sticklebacks, are capable of public information use, thus demonstrating the intricate relationship between evolutionary history, ecology, and morphology in shaping cultural evolution.

Chapter 5 of Darwin’s Unfinished Symphony opens with the classic example of animal learning and cultural spreading: the invention of a method for opening home-delivered milk bottles by blue tits in England, starting in 1921. The episode is well documented, and because of the involvement of amateur ornithologists we know how quickly and how far it spread, eventually involving several species other than the blue tits. Interestingly, the “invention” appears to have been relatively easy to come by, so that a number of birds arrived at the same solution independently, not necessarily relying on copying public information. Milk bottle opening is thus a good example of innovation: the devising of a solution to a new problem posed by the environment.

Things like milk bottle opening clearly show that human beings do not have a monopoly on creativity, though Laland immediately qualifies this by reminding his readers that:

“A vast difference exists between dipping food and inventing a microwave cooker, while banging cans together to send a message is a long way from developing e-mail.” (p. 100)

Still, studying innovation is crucial to understanding human creativity and cultural evolution, and it is not easy because it is difficult to recognize a behavior as innovative unless one has a solid baseline of studies on pre-innovation behaviors in whatever species of interest.

One of the classic studies of animal innovation was conducted by Edward Thorndike at Columbia University. He confined cats in small boxes from which it was difficult, but not impossible, to escape. This was something the cats clearly disliked, to put it mildly. Thorndike was able to show that cats, once they learned how to get out of the box, fine-tuned their behavior so that the escape became easier and easier. The interesting part was that the animals arrived at suitable solutions by trying out a bunch of seemingly random moves until something happened to work, even sub-optimally. It’s innovation by trial and error, very much something human beings also do quite well.

One of the most interesting things about this chapter is Laland’s detailed presentation of evidence that, as the saying goes, “necessity is the mother of invention,” meaning that innovations are triggered by new challenges faced by animals, often under unusual or novel environmental conditions. Moreover, studies in callitrichid monkeys clearly showed that it is often the older, more experienced individuals that come up with innovative behaviors, not the young ones, who presumably have not yet accumulated enough experience to have mastered the problems posed by their environment.

While experiments with mammals, and especially primates, are of course the most fascinating, as pointed out above, they are both logistically challenging and expensive. Hence, again, the use of fish, which are much easier to raise and manipulate in statistically sufficient numbers.

Laland then describes a series of experiments his lab has conducted on another common fish, familiar to aquarium enthusiasts: guppies. The results were fascinating:

“Innovators were significantly more likely to be females than males, more likely to be food deprived than not, and typically smaller rather than larger fish. … The observed patterns are best explained by differences among fish in their motivational state. The first individuals to solve the [problem posed by the experimenter] are those driven to find novel foraging solutions by hunger, or by the metabolic costs of growth, or pregnancy [hence the predominance of females among innovators].” (p. 112)

Research on birds yields equally tantalizing clues. For one thing, species of birds that are more capable of innovation tend to be the ones whose populations survive when introduced into a new environment. Moreover, migratory species are less likely to be innovators than non-migratory ones: if you can’t thrive in a given place, change place, seems to be the idea. Migration, in other words, turns out to be an evolutionary alternative to the strategy of staying put, coping, and innovating. Finally, innovative species of birds are more likely to speciate, i.e., to give rise to new species.

Though it is difficult to carry out systematic experiments on primates, it is possible to canvass the extensive literature on primatology, searching for and categorizing examples of innovative behaviors. Laland did this with one of his collaborators, Simon Reader. They found that:

“Consistent with our hypothesis that necessity was the mother of much animal innovation (derived from our fish experiments), across all primates [we] found more reported incidences of innovation in low-status individuals and fewer reports of innovation in high-status individuals than expected in either, given their numbers in the populations. … [We] found that approximately half of the instances of innovation that had taken place among primates had followed some sort of ecological challenge, such as a period of food shortage, a dry season, or habitat degradation.” (pp. 116-117)

And here is the kicker: controlling for phylogenetic relatedness, there is a very strong correlation between a tendency of a species to innovate and both its relative and absolute brain size. This, however, led to a puzzle: while the obvious conclusion to be drawn is that intelligence (measured by brain size as a proxy) has been favored in certain lineages in order to facilitate social learning and innovation, it is also true that several small-brained species — from fruit flies to fish — are capable of both. Why, then, evolve large brains to begin with? That’s going to be the next topic, in the second part of the book.

Book Club: Darwin’s Unfinished Symphony, 2, it’s all about copying, isn’t it?

Human copying behavior

Human copying behavior: Ken Laland’s son imitates his father

I have recently begun a series of posts discussing my colleague Kevin Laland’s recent book on cultural evolution, Darwin’s Unfinished Symphony: How Culture Made the Human Mind. Last time we talked about just how different — quantitatively, for sure — human cognition is from anything else known in the animal world. This doesn’t make us the result of magic, of course, but it does mean that biologists, anthropologists, and cognitive scientists face a rather unique challenge if they really want to understand what makes us human.

Chapters 2 and 3 of the book, with which this post is concerned, deal with a widespread mechanism by which both humans and other species learn: copying other members of the same species. Chapter 2 aims at establishing that copying is, indeed, a very common learning strategy, while chapter 3 deals with the basic question of why, and under what circumstances, copying is an adaptive strategy (it isn’t always, so this is not a trivial question). There is far too much material for me to do a section-by-section commentary, so I will focus on a number of things that struck me as particularly interesting.

Rattus norvegicus, the brown rat, is not Norwegian, but of Chinese origin. It is also one of the animal species best adapted to living with humans, despite our systematic attempts to exterminate what we regard as a pest. Darwin had already noted that the brown rat is so successful because it is very good at copying what works from other members of its species, as well as at avoiding behaviors that turn out to be lethal. Interestingly, rats have been able to adapt, both genetically and — more importantly for our purposes here — behaviorally, even to new slow-acting poisons, designed to stretch the time between cause and effect and thus, in theory, to preclude other rats from learning to stay away from poisoned foods.

Turns out, however, that the strategy adopted by these rodents is rather more canny: they don’t avoid dangerous foodstuff, they actively seek the sort of foods eaten by fellow rats who thrive. In other words, populations of Rattus norvegicus have developed “dietary traditions” in order to stay a step or two ahead of their human nemesis.

Rats are certainly not the only animals to have developed cultural traditions of sorts. Laland details several studies that, beginning back in the 1970s and ‘80s, have clearly shown the phenomenon to be common among apes and monkeys. Interestingly, we don’t really understand the adaptive value — if any — of some of these behaviors. For instance, at least three populations of orangutans are known to blow raspberries as they go to sleep. Nobody knows why. Or consider this: in the Lomas Barbudas reserve, in Costa Rica, pairs of monkeys simultaneously insert their fingers into each other’s nostrils, remaining in this odd position, swaying as if in a trance, for several minutes. Go figure.

Learning from your social environment can be maladaptive, if one does not pay attention. For instance, blue tits and great tits often forage in mixed-species groups. However, blue tits forage among twigs high up in trees, while great tits feed mostly on the ground or on lower branches. Experimenters shifted things around so that young birds of one species would be reared by parents of the other. They observed that the animals adopted the foraging behavior of the rearing species, which sometimes was not exactly a great idea: some great tits attempted to forage hanging upside down from branches, in the manner typical of blue tits. And they kept falling off!

In order to copy, apparently, one does not have to be particularly bright:

“The ubiquitous influence of social learning in nature is beautifully illustrated by the example of mate-choice copying, where an animal’s choice of partner is shaped by the mating decisions of other, same-sex individuals. This form of copying is extremely widespread, with examples known among insects, fishes, birds, and mammals, including humans. The fact that animals do not require a big brain to copy could not be more clearly demonstrated than by the tendency of tiny female fruit flies to select male flies that other females have chosen as mates.” (p. 41)

Another area where copying is fundamental is anti-predator behavior. Obviously, learning by trial and error in that case could easily be fatal, so animals tend to learn predator avoidance by watching what their conspecifics do. Interestingly, at least in some cases, fear of predators does not seem to be innate, but acquired, as demonstrated by the fact that rhesus monkeys reared in captivity are not afraid of snakes, while their wild counterparts certainly are. It’s also fascinating that experiments show that fear of predators is learned very quickly and lasts for a long time, while monkeys tend not to develop long-lasting fears in response to stimuli that are not actually threatening. This, as Kevin puts it, is not only efficient in terms of predator avoidance, but also precludes the acquisition of potentially time-wasting “superstitions,” i.e., fears of things that are not dangerous. If only humans were as good at avoiding superstition as monkeys…

So, copying as a learning strategy is widespread, and does not require large brains, only a cognitive system sophisticated enough to be capable of associative learning.

That said, biologists have long struggled with why, exactly, copying is so widespread in nature. Even though the answer seems intuitive, mathematical models have repeatedly confronted researchers with what is known as Rogers’ paradox, named after University of Utah anthropologist Alan Rogers: such models seem to show that copying is just as likely to lead to learning maladaptive, or outmoded, behaviors as adaptive ones. What gives? That is the topic of chapter 3 of the book.

Other animals aren’t the only ones engaging in copying; humans do it too. Experiments and observations in developmental psychology clearly show that children copy behaviors, especially from their caretakers, from very early on. Their propensity to do so varies over time, peaking around age four, but it never quite disappears. They don’t copy indiscriminately, however, instinctively paying attention to behaviors that seem functional and rapidly discarding others.

Obviously, members of a species cannot learn only by copying each other, or no innovation would ever be introduced, and any significant change in the environment would pose a threat of population extinction. Mathematical models, therefore, predict a mixed evolutionarily stable strategy, where learning by trial and error is in equilibrium with learning by copying. Except for the above mentioned problem posed by Rogers’ paradox. I will skip other fascinating bits of chapter 3 of Kevin’s book in order to focus on how he and his team managed to solve the paradox: by organizing a competition among different learning strategies, implemented by way of computer programs battling each other.

The idea isn’t new. It was successfully implemented, as Laland explains, back in the 1970s by political scientist Robert Axelrod to address another famous biological puzzle: the evolution of cooperation. Axelrod organized a competition for the best program to play the so-called iterated prisoner’s dilemma, a classic situation in game theory where agents have a strong incentive to cheat, even though cooperating would actually be the overall best strategy. Famously, the winning strategy in cooperation games is known as tit-for-tat, and was developed by Anatol Rapoport, a psychologist then at the University of Toronto. The strategy is as simple as it is effective: when encountering a new agent, act cooperatively on the first round, then do whatever it does. If it cooperates, keep cooperating; if it cheats, beat the crap out of it. Cooperation then becomes a matter of so-called reciprocal altruism: I’m nice to you if you are nice to me, and so long as you keep being nice to me.
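Tit-for-tat is short enough to state in full. Here is a minimal sketch, using the conventional textbook payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the cheater and the cheated, respectively):

```python
def tit_for_tat(my_moves, their_moves):
    """Cooperate on the first round, then mirror the opponent's
    previous move."""
    return "C" if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    return "D"

# prisoner's-dilemma payoffs: (first player, second player)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): locked-in cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): cheated once, then retaliates
```

Against itself, tit-for-tat locks into cooperation; against a pure defector it loses only the first round and then matches defection for defection, which is exactly the reciprocal-altruism logic just described.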

Kevin goes into some detail on how his team — inspired by Axelrod’s success — organized a similar tournament to solve Rogers’ paradox and make progress on the issue of social vs asocial (i.e., solitary) learning. The result was indeed very insightful.

The tournament was structured so that players (i.e., computer programs) could, in each round, implement a mixed strategy composed of three possible moves: INNOVATE (introduce a new behavior), OBSERVE (engage in social learning), or EXPLOIT (implement a previously learned behavior). Obviously, no pure strategy based on a single one of these moves would be adaptive, but which combination turned out to be the winner?
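
The flavor of the setup can be conveyed with a toy simulation (my own much-simplified sketch, not the actual tournament code, which involved multiple agents, noisy copying, and a richer payoff structure):

```python
import random

def run_strategy(innovate_p, observe_p, rounds=1000, n_behaviors=100,
                 change_p=0.02, seed=1):
    """Toy single-agent version of the tournament environment. Behaviors
    have payoffs that occasionally change. On each round the agent may
    INNOVATE (learn the payoff of a random behavior), OBSERVE (idealized
    here as learning a currently high-payoff behavior, on the assumption
    that demonstrators tend to exploit good behaviors), or EXPLOIT
    (collect the current payoff of a behavior it already knows).
    Crucially, as in the real tournament, only EXPLOIT earns payoff."""
    rng = random.Random(seed)
    payoffs = [rng.expovariate(1.0) for _ in range(n_behaviors)]
    repertoire = {}  # behavior index -> payoff as it was when learned
    total = 0.0
    for _ in range(rounds):
        for i in range(n_behaviors):        # the environment drifts:
            if rng.random() < change_p:     # payoffs occasionally redrawn
                payoffs[i] = rng.expovariate(1.0)
        r = rng.random()
        if r < innovate_p or not repertoire:
            i = rng.randrange(n_behaviors)          # INNOVATE
            repertoire[i] = payoffs[i]
        elif r < innovate_p + observe_p:
            i = payoffs.index(max(payoffs))         # OBSERVE a demonstrator
            repertoire[i] = payoffs[i]
        else:
            i = max(repertoire, key=repertoire.get)  # EXPLOIT best known
            total += payoffs[i]  # payoff reflects the *current* environment
    return total
```

Even in this crude setup, a strategy that spends about 10% of its moves learning and the rest exploiting handily beats one that spends most of its time learning, anticipating the first of the tournament’s findings below.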

“The first finding that jumped out at us was that it is possible to learn too much! In the tournament, investing lots of time in learning was not at all effective. In fact, we found a strong negative correlation between the proportion of a strategy’s moves that were INNOVATE or OBSERVE, as opposed to EXPLOIT, and how well the strategy performed. Successful strategies spent only a small fraction of their time (5–10%) learning, and the bulk of their time cashing in on what they had learned, through playing EXPLOIT.” (p. 66)

“Among the top-performing strategies that progressed to the melee, by and large, the more the strategy learned through OBSERVE rather than INNOVATE, the better it did. However, among the poorer performing strategies we actually witnessed the reverse relationship — the more they copied the worse they did. That told us something very interesting — copying was not universally beneficial. Copying only paid if it was done efficiently.” (p. 67)

Which was a very good clue toward the solution of the riddle posed by Rogers’ paradox. The winning strategy turned out to be one called DISCOUNTMACHINE by its authors, Dan Cownden and Tim Lillicrap, two graduate students from Queen’s University in Ontario. DISCOUNTMACHINE is so named because it discounts information according to how old it is: the older a behavior is, the more likely it is to be out of step with the ever-changing environment, and the less likely, therefore, to be useful to the agent.
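
The discounting idea itself is easy to sketch. Something along these lines captures the spirit of the strategy (this is my own guess at the mechanism, not Cownden and Lillicrap’s actual code), with information losing value with age at a rate tied to how fast the environment changes:

```python
def best_remembered_behavior(repertoire, current_round, change_rate=0.02):
    """repertoire maps a behavior to (payoff_when_learned, round_learned).
    Discount each remembered payoff by the probability that the environment
    has not changed since the information was acquired, then pick the best."""
    def discounted(item):
        payoff, learned_at = item[1]
        age = current_round - learned_at
        return payoff * (1 - change_rate) ** age
    return max(repertoire.items(), key=discounted)[0]
```

With a 2% chance of change per round, a payoff of 10 learned 100 rounds ago is worth less than a payoff of 8 learned 5 rounds ago, so fresher but nominally poorer information wins out.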

Both DISCOUNTMACHINE and all the other top strategies relied mostly on social learning. Indeed, when Kevin and colleagues introduced a “mutant” of the winning strategy that relied solely on innovation, its performance plummeted. The researchers also found that learning asocially is a viable strategy (perhaps not surprisingly) only under very extreme circumstances, when the environment changes drastically and rapidly, obviously because no previously developed behavior is then adaptive. Finally, and perhaps a bit more surprisingly, mostly copying remained the winning strategy even when the copying error rate was very high, up to 70%, further demonstrating the superiority of even badly executed social learning over the asocial option.

“Simple, poorly implemented, and inflexible social learning does not increment biological fitness, but smart, sophisticated, and flexible social learning does.” (p. 72)

One of the conclusions of the study was that, while copying obviously does not by itself discover and introduce new behaviors, it preserves adaptive behaviors beyond the death of the individual who produced the original innovation. This preservation is the first, widespread step that makes cultural evolution possible.

Book Club: Darwin’s Unfinished Symphony, 1, are humans unique?

Time to get started with a new book for our occasional, in-depth book club. For our next round I have chosen an intriguing recent volume on cultural evolution, Kevin Laland’s Darwin’s Unfinished Symphony, with the provocative subtitle of “How Culture Made the Human Mind.”

Kevin is a professor of behavioral and evolutionary biology at the University of St. Andrews in Scotland, and one of the architects of the Extended Evolutionary Synthesis. In full disclosure, I collaborate on the EES project, which is partly supported by the Templeton Foundation.

The title of the book reminds us that Darwin was the first modern scientist to begin to investigate the evolution of the human mind, but he was unable — given the state of biological and psychological knowledge at the time — to go beyond some suggestive speculations. As Kevin puts it:

“Comprehending the evolution of the human mind is Darwin’s unfinished symphony. Unlike the unfinished compositions of Beethoven or Schubert, which had to be assembled into popular masterpieces using solely those fragmentary sketches left by the original composers, Darwin’s intellectual descendants have taken up the challenge of completing his work. In the intervening decades great progress has been made, and rudimentary answers to the conundrum of the evolution of our mental abilities have started to emerge.” (p. 14)

The question is far from trivial. As Laland notes, if a complex mind, language, and a sophisticated culture are truly advantageous for survival and reproduction, why did they evolve only in the human lineage? And yes, we have discovered that other animals communicate and use tools, but their tool use has not become much more sophisticated over time, and only we have moved from stone artifacts to computers and rockets; as Kevin points out, there is a world of difference between a male chaffinch’s song and Puccini’s arias. Nothing at all exists in between those extremes to cover the gap, even though, possibly, some of the hominid species that are now extinct would have provided a bit (but arguably just a bit) of a bridge between Homo sapiens and the rest of the biological world. And don’t expect our extraordinary capacities to be in any direct relationship with the only true currencies of evolution, survival and reproduction:

“The men and women who design and build computers and iPhones have no more children than those in other professions.” (p. 2)

A lot of what makes human culture so distinct is that it is the result of long term planning and extensive cooperation among unrelated individuals, again something unique in the animal world:

“With a little training, the same people could build a shopping mall, bridge, canal, or dock, but no bird ever built anything other than a nest or bower, and no termite worker deviated from constructing a mound.” (p. 3)

Kevin, as is clear from his careful discussion of other animals’ behavior in chapter 1 (and then even more so in the remaining four chapters of the first part of the book, which I will cover in future posts), is perfectly aware of the variety and even sophistication of what nonhumans do. And yet:

“There is no compelling evidence that other apes will go out of their way to teach their friends or relatives anything at all, let alone build elaborate institutions that dispense vast amounts of knowledge, skills, and values to hordes of children with factory-like efficiency. Teaching, by which I mean actively setting out to educate another individual, is rare in nature.” (p. 5)

Moreover, the bewildering diversity of human culture is hardly explained by the main focus of modern Darwinian explanations: genetics. Pace evolutionary psychology, the human genome actually sports comparatively little variability, and it cannot, per se, explain the infinite diversity of our cultural habits and traditions, although of course it has to be part of the story, if for no other reason than that it provides the boundary conditions within which human behavioral plasticity can express itself.

One of the most interesting points of this first chapter is that it is culture (and thus cooperation) that makes us so successful, not intelligence per se (though, obviously, the two are related). Laland describes in detail the evolution of the paper clip, and it is a fascinating story that I highly recommend people read. Contra what one might expect, the paper clip did not come out of the mind of a single, brilliant individual, but — in its current form — is actually the product of centuries of evolution and improvements, beginning in the Middle Ages, when enough paper began to be produced and used to necessitate some way of binding it together.

The first solution was to use pins as fasteners, but they rusted and left holes that were not aesthetically pleasing. Eventually, the first patent for a paper clip was filed in 1867, but the things were still inefficient, which led to further trial and error:

“A variety of shapes were experimented with for several decades of the twentieth century before manufacturers finally converged on the now standard paper clip design, known as the ‘Gem.’ What appears at first sight to be the simplest of artifacts was in fact fashioned through centuries of reworking and refinement.” (p. 7)

The point is that many cultural artifacts originate gradually, through diffuse cooperation among strangers, a process of cumulative culture unknown in the rest of the animal world. Other species learn, of course, and are capable of copying each other (chapters 2 and 3 of the book are devoted to copying as a basic mechanism of behavioral evolution), but reports of cultural traditions among them are limited in scope, and entirely lack the ratcheting effect that is so normal in the human context:

“The fact remains that humans alone have devised vaccines, written novels, danced in Swan Lake, and composed moonlight sonatas, while the most culturally accomplished nonhuman animals remain in the rain forest cracking nuts and fishing for ants and honey.” (p. 10)

As I reported recently, I was accused of “arrogance” when I stated this simple conclusion during a panel discussion at the New York Academy of Sciences. But the fact remains true, regardless of pious and well-intentioned pleas for getting ourselves off the evolutionary pedestal. As Kevin says later in the chapter, yes, in a trivial sense every species is “unique,” but humans are unique in a highly interesting way, one that is not comparable to the uniqueness of dolphins, birds, or any other species. Indeed:

“Herein lies a major challenge facing the sciences and humanities; namely, to work out how the extraordinary and unique human capacity for culture evolved from ancient roots in animal behavior and cognition.” (p. 11)

As I have pointed out, even brilliant biologists like E.O. Wilson don’t get that culture isn’t going to be reduced to biology, and therefore that the humanities are not, and never will be, a branch of the biological sciences. That way of achieving “consilience” (really, reduction) between social and biological sciences is a dead end. Kevin’s book is a most refreshing and welcome attempt to seek another, more constructive, way to go about this project, one that fully respects the distinct contributions that different disciplines are capable of.

Laland acknowledges the existence of a long scientific tradition, dating, in fact, back to Darwin, of trying to bridge the gap between humanity and the rest of the biological world. He even suggests that this was necessary, especially at the onset of evolutionary biology, in order to convince people that humans too evolved by natural means and were not specially created. But that necessity is no longer with us (well, unless you live in the south of the United States, or in large swaths of the Muslim world), and it is time to reassess the limitations of that particular Darwinian research program.

Again, Kevin is anything but naive in this regard. He knows that a long list of traits at one point alleged to be distinctively human (e.g., use of tools, imitation, use of signals, etc.) turned out not to be so upon closer examination. Yet, the gap is real:

“In my view, too much has been made of superficial similarities between the behavior of humans and other animals, whether by inflating the intellectual credentials of other animals or by exaggerating humanity’s bestial nature.” (p. 15)

One good example is what Laland calls one of the most misunderstood statistics concerning the relation between humans and our primate cousins, the chimpanzees. We often hear that there is a 98.5% similarity between the genomes of these two species, meaning that we are separated by “only” 1.5% differences in DNA. That, however, translates to a whopping 35 million nucleotide differences, a vast informational gulf even at face value, made orders of magnitude vaster when we realize that many important differences between chimpanzees and us are not to be found in the structure of our respective genes, but in the way gene action is regulated. And gene regulation is highly combinatorial, not simply linearly additive.

“An instructive comparison here is between the English and German languages. In terms of their written symbolic form (i.e., the letters used), these two Indo-European languages are identical, although only German speakers make use of the umlaut, recognizable as two dots over a vowel, which changes its pronunciation. Yet it would clearly be ridiculous to claim that all differences between the two languages are attributable to the umlaut, or that to master German, an English speaker merely has to master the rules of umlaut usage.” (p. 17)

And guess what? A disproportionate number of genetic differences between us and chimps concern the human brain and its function, and are therefore especially pertinent to the issue of behavior and culture.

Much has been made, by primatologists like Frans de Waal, of the similarities in behavior between humans and other primates, especially the bonobo, the pygmy chimpanzee. While this research is indeed fascinating, Laland cautions against waxing poetic about having found the building blocks of morality in nonhuman species. Experiments he cites in the book, for instance, clearly show that, when exposed to versions of the so-called ultimatum game, which tests for the tendency to share resources fairly, humans are remarkably fair to strangers, but chimps are not. And the puzzling behavior, evolutionarily speaking, is ours, not theirs. Moreover, while it is true that other species of primates cooperate, they do not do so extensively, certainly not even close to the degree of cooperation found in hunter-gatherer human populations, let alone modern ones.

Kevin attributes this difference, at least in part, to the proportionally limited ability of nonhuman primates to exhibit a “theory of mind,” that is, the ability to understand the perspective of others. Again, while it is the case that some research hints at the existence of a simplified version of this ability in chimps:

“These conclusions remain contested, and crucially, such studies provide no evidence that chimpanzees understand that others may possess false beliefs. In contrast, children typically understand that others can have false beliefs by the age of four years, and possibly much earlier, which implies that this capability evolved in the hominin lineage.” (p. 21)

Humans easily comprehend up to six different levels of beliefs about beliefs, while chimpanzees struggle with first-order intentionality.

Or let us consider language. There certainly is by now abundant evidence of animal communication; vervet monkeys, for instance, possess three distinct alarm calls for avian, mammalian, and snake predators. But primate communication consists of unrelated signals that are rarely combined, i.e., it lacks a grammar. Human language, by contrast, is highly open-ended:

“A romance exists around the notion that animals, such as chimpanzees or dolphins, might covertly harbor complex natural communication systems as yet unfathomed by humans. Many of us quite like the idea that ‘arrogant’ scientists have prematurely assumed that other animals don’t talk to each other. … Sadly, all the evidence suggests that this is just fantasy. Animal communication has been subject to intense scientific investigation for over a century, and few hints of any such complexity have arisen.” (p. 22)

I’m not sure why this is sad. It is what it is. As hinted at above, something similar is the case for the so-called moral instinct:

“Equally romantic is the notion that science has not yet gauged the full depth of the moral lives of animals, a premise that sells an awful lot of popular science books and flushes the coffers of Hollywood moviemakers. … Many popular books claim that animals understand the difference between right and wrong, but precious few scientific papers demonstrate this.” (p. 25)

Again, Kevin is not denying the very clear evidence that some animals have rich emotional lives, that they form attachments, and that they have long term memories of their most traumatic experiences. He simply does not think — very reasonably — that any of this amounts to a sense of morality.

But wait! I’m pretty sure that my dog feels guilty when I reprimand him for something wrong he has done! Maybe, responds Laland. But isn’t it more likely, more parsimonious, to assume that your dog has simply learned that a certain kind of behavior on his part gets you to relent from punishing or yelling at him? What about reports of reconciliation after fights in monkeys, which some authors have interpreted as “forgiveness”? Again, maybe, but that interpretation becomes a bit less convincing once we learn that fish — with much more limited behavioral repertoires and brain structures — behave in the same way.

The evidence appears to point to the conclusion that human intelligence and culture evolved in a particular way, with experimental and simulational studies suggesting that selection for more efficient teaching may have been a crucial factor for the evolution of language in our ancestors. Teaching and language, in turn, led to widespread cooperation and a runaway process in which different components of cognition fed into each other. In the end:

“Humans are creatures of their own making. The learned and socially transmitted activities of our ancestors, far more than climate, predators, or disease, created the conditions under which our intelligence evolved. Human minds are not just built for culture; they are built by culture.” (p. 30)