Category Archives: Massimo’s Technical Stuff

Vindicating science by bringing it down, part II

As we have seen in part I, scientists and early philosophers of science adopted an idealized conception of science, which focuses on the successful end result of scientific activity, and in which there is no place for any influence of the social, or indeed, for any of the actors involved in the scientific endeavor. All of that is swept under the carpet. But the fact that the eventual goal of science is to eliminate the social does not imply that social factors have no important role to play in the process. Science, after all, is nothing but the concerted effort of (sometimes not so) humble human brains, none of which was designed to unravel the mysteries of the world on its own.


In the past couple of decades, science has been brought down to earth again by sociologists, cognitive scientists, evolutionary psychologists, and historians. Unfortunately, the opposition between the rational and the social is still besetting the naturalization of science. The backlash against the traditional conception of science, epitomized by the logical positivists and their intellectual heirs, has swung the pendulum in the opposite direction. Still under the spell of the dichotomy between rational and social which we discussed in part I, many science naturalizers have assumed that, as they bring science down to earth, its pretensions will start to unravel.


The truth of the matter, however, is that all beliefs, the true and the false ones, or the scientific and the pseudoscientific ones, have a causal history, involving cognitive and social factors. If we want to understand how people come to believe stuff, even simple and obvious propositions (e.g., dolphins are mammals) are in need of an explanation. Likewise, if we want to understand how scientists have been able to unearth all sorts of true beliefs about the world, we need to understand what kinds of people scientists are, what kind of cognitive strategies they bring to bear on their research questions, what the social organization of science is, and how hypotheses are tested and evaluated within a scientific community.


The development of a cognitively and socially rich account of science has been delayed by the widespread misconception that such an account would compromise the epistemic standing of science. Because of our habit of pitting social and rational explanations against each other, we assume that the intrusion of sociology and psychology into the citadel of science will eat away at its foundations.


At the heart of this opposition between the social and the rational lies an individualist conception of reason, shared by both camps in the science wars. The notion of the social as a contaminant of the rational, to which even social constructivists seem to subscribe, is more indebted to logical positivism than the latter would like to admit. Radical sociologists during the science wars of the ‘90s were led astray by the very same intuition that made the logical positivists allergic to social explanations – only now they were welcoming the opposite conclusion. As philosopher David Hull put it:


“Because science did not possess the ideal characteristics that the ‘positivists’ insisted that it should, knowledge-claims made by scientists had no more warrant than those of magicians, faith healers, and politicians.”


Both camps are wrong. The simple opposition between the rational and the social-psychological explanations goes against the grain of naturalism. Scientific knowledge does not drop out of thin air: it is embodied in real human beings. If our best scientific theories in some way reflect the world out there, this must have come about through the usual perceptual capacities and cognitive operations, with available technological equipment, and in a complex network of social interactions. How else could it have come about?


The sociologists are right that science is a deeply social endeavor, and that all scientific knowledge is in this sense “socially constructed.” No single individual marooned on a desert island, no matter how brilliant, would be capable of finding out any of the significant truths about the universe that we currently possess. Though the history of science has known some solitary geniuses, working in relative isolation from their peers, even they were still engaged in a collective enterprise, in the sense that they were building on the work of numerous predecessors.


The realization that science is a deeply social enterprise, and that scientific consensus is reached through coalition forming and competition, should not surprise us. The question is what particular social organization is exemplified by science, and whether this is conducive to its epistemic aspirations. Scientists are human beings, warts and all. If scientists collectively succeed in finding out significant truths about the universe, while other endeavors have failed in this regard, this must have come about through the particular social dynamics of science.


Many scientists believe that being objective and impartial are the cardinal virtues of science, and that bias and prejudice make one unsuitable for scientific work. Although the culture of science rightly encourages these virtues, they are by no means necessary for the success of science. Indeed, a certain modicum of bias in this or that direction may actually facilitate the progress of science.


It is not a problem that an individual scientist is biased, or emotionally attached to a particular hypothesis. The social organization of science makes sure that these biases will be balanced by others tilting in different directions. A standard example of this is the reorientation of aspects of medical research as a result of feminist epistemological critiques: it is now increasingly acknowledged that, for example, we cannot conduct drug tests solely on a population of (mostly white, middle aged) men and simply assume that the results can be extrapolated to other human biological populations. Again, Hull:


“The objectivity that matters so much in science is not primarily a characteristic of individual scientists but of scientific communities. Scientists rarely refute their own pet hypotheses, especially after they have appeared in print, but that is all right. Their fellow scientists will be happy to expose these hypotheses to severe testing.”


A desire for fame and success is often viewed as unworthy of a real scientist. The goal of science is truth for its own sake. Although such base motives may indeed compromise one’s scientific work, if allowed to go unchecked, there is no convincing reason why they would stand in the way of significant discoveries. Even spite, jealousy, and the desire to humiliate a rival can result in excellent scientific work, if the competing parties know that they have to abide by certain rules, and will be called out whenever they violate them.


In a good number of cases, social influences are not an impediment to the epistemic ambition of science, but rather a facilitator of scientific progress. Science harnesses some of the baser motives of human behavior in the service of truth, making sure that the interplay of scientists’ individual interests and biases mostly align with epistemic progress.


Darwin may have been right from the start about the fact of common ancestry, but his theory would not have carried the day as swiftly as it did without his indefatigable efforts to enlist allies to the cause and to engage and negotiate with his critics. All the parties in the dispute were trying to enlist nature as their ally, but Darwin of course had one big advantage: nature really was on his side all along. In the long run, therefore, as evidence accumulates and factions wax and wane, the social influences on science will be filtered out, and rightly so.

Vindicating science by bringing it down, part I

What is the role of the social in science? If one consults science textbooks, one will find that the social dimension of scientific knowledge is conspicuously absent. Science is supposed to reflect the way the world really is, independent of our petty human lives. It is, in the classical view, the epitome of a rational endeavor, free from social influences. Of course, science is carried out by human beings, but their individual backgrounds and social lives are simply taken to be irrelevant. What matters are the intellectual merits of a theory, not who conceived it. What matters is the evidence, not who gathered it. This stark contrast between the social and the rational can be found in philosophical accounts of science as well. Because social factors are rendered invisible in the end products of science, many philosophers have underestimated their constructive role in the acquisition of scientific knowledge.


In recent decades, sociologists and historians have tried to bring science back to earth, but many of them have unwittingly bought into the same simplistic opposition. Social influences on science have been relished by its cynical critics and resisted by its admirers, and for the same reason: the fear (or hope) that it would destroy the credentials of science. In a paper I co-wrote with my frequent partner in crime, Maarten Boudry (published in Perspectives in Science and Culture, edited by K. Rutten, S. Blancke, and R. Soetaert, Purdue University Press), we discuss the historical roots of this opposition, culminating in the sorry spectacle of the science wars. This post provides extensive excerpts from that paper; I hope you’ll enjoy it.


When do we feel the need to explain why someone believes something? Not all beliefs held by our fellow human beings appear to produce an epistemic itch. People believe that dolphins are mammals, that the earth orbits around the sun, and that World War II ended in 1945, but we rarely wonder how they arrived at such homely truths. Beliefs such as these are just obvious, and no sane person would dispute them. That said, who told you when WWII ended? Where did you acquire the belief that dolphins are mammals, or that the earth goes around the sun? Your sources for these convictions are hard to track down.


Memories of these facts are called semantic by psychologists, to distinguish them from episodic memories, related to things that happen to us. Episodic memories carry a tag with the time, place, and situation in which we acquired them. Not so for semantic memories, likely because doing so would be a waste of brain resources.


Take the belief that coal is black. If we ask you what your reasons are for believing that, you would probably be puzzled. The first answer that comes to mind is: “Why, because it is black, of course!” It doesn’t matter how you came to know that. You could have learned it in any number of ways. Anyone in doubt about the color of coal can quickly retrieve the answer through any number of sources.


Because the truth of such beliefs is obvious, we rarely question how other people acquired them, or how they can justify them. It seems as if such beliefs just drop out of thin air, without much in the way of a causal history.


That said, how do we account for other kinds of beliefs (as held by others, of course)? Beliefs that are false, quirky, idiosyncratic, or plainly irrational produce an epistemic itch. We want to explain how people end up embracing them. Who told him such nonsense? Did he fall for one of those conspiracy theories circulating on the internet?


We resort to special explanations only when something goes wrong. True beliefs that are part of common knowledge are taken at face value, but false and foolish beliefs cry out for an explanation. This is where social and cultural explanations come in. Such explanations, however, are not invoked when we account for true and justified beliefs. Only when rationality breaks down, it seems, is a space opened up for psychological explanations to fill. We seem to think that there is an association between the irrational and the social, but not between the rational and the social.


In the classical view, science is the epitome of reason. It is objective and impartial. It is ruthless in its indifference to what we fear or fancy. When it comes to the content of science, nature has the final say in the matter. Social, political, and ideological influences on science are anathema. When writing science textbooks, and for many other purposes, the social influences on the development of scientific theories can be safely ignored, just like with many of our mundane beliefs about the world. Sure, there is a story to be told about how scientists pooled their efforts to acquire this or that piece of knowledge, who published it first, who convinced whom, and so on. But the details of this story make no difference: an alternative history of science would ultimately have led to the same result.


As a result, especially in the natural sciences, students are simply taught scientific theories as if they descended from some Platonic heaven. The vagaries of scientific history, the false starts, wrong turns, and dead ends, the protracted controversies between rival views, the forerunners and pioneers of scientific ideas – all of this is rendered invisible.


For long, philosophers of science have also treated science in splendid isolation from the social world. Hans Reichenbach, one of the major proponents of logical positivism, taught us to strictly separate the context of discovery from the context of justification. The first deals with the historical conception of a scientific hypothesis, and is of little interest to philosophers trying to understand the logic of science. Philosophers of science should be solely concerned with how a scientific hypothesis, once it appears on the scene, relates to observations, whether it is internally consistent, whether it is falsifiable, and so on.


(next: bringing science down to earth)

Philosophy as the evocation of conceptual landscapes – part II

In the first installment of this two-part series we have explored different concepts of what “making progress” may mean for disciplines such as science, math, logic, and philosophy. I have then suggested that philosophy makes progress because it explores what Nicholas Rescher has called “aporetic clusters,” i.e., families of alternative solutions to a given particular philosophical problem. I have advanced a similar idea in the past, suggesting that philosophy is in the business of discovering, refining or discarding such solutions as they are “evoked” (i.e., brought into objective existence by the human mind) within the context of a problem. This all sounds very theoretical, however, so let’s take a look at some empirical examples of these aporetic clusters found within specific philosophical conceptual landscapes.


Is there any evidence that philosophy progresses in the way described so far? I think so, and it comes at least in part from a landmark paper by Bourget and Chalmers, published in 2013. In it, they explore quantitatively what professional philosophers think of a number of prominent issues and positions within their own discipline. Even though Bourget and Chalmers did not carry out their study while informed by the ideas of aporetic clusters and progress, their data is suitable for the current discussion. It also represents a rare opportunity to systematically assess the views of an entire profession, the sort of thing that would probably be useful also in other disciplines, from the humanities to the natural sciences, but is all too seldom actually done.


I focus here on a subset of interesting findings that bear directly or indirectly on my overall project concerning progress in philosophy, even though the Bourget-Chalmers paper contains a number of additional stimulating bits of information for anyone interested in the current state of philosophical inquiry.


To begin with, the common adage that if there are X philosophers in a room, they are bound to hold X+1 opinions does not appear to hold at all. Consider some of the main findings of the survey:


A total of 71% of respondents thought that a priori knowledge is possible, while only 18% didn’t think so. There is a clear majority here, and only two major aporetic clusters.


Things are more equally divided when it comes to views on the nature of abstract objects: Platonism gets 39% while nominalism is barely behind, at 38%. Superficially, this may seem an instance of precisely what’s wrong with philosophy, but is in fact perfectly congruent with my model of multiple peaks in conceptual space. Philosophers seem to have settled on two of Rescher’s aporetic clusters here, having eliminated a number of unacceptable alternatives. There may very well not be an ascertainable fact of the matter about whether Platonism or nominalism is “true.” They are both reasonable ways of thinking about the ontology of abstract objects, with each position subject to further criticism and refinement.


Every practicing philosopher knows that W.V.O. Quine thought he had demolished once and for all the distinction between analytic and synthetic propositions, but the bad news for him is that about 65% of philosophers disagree, and only 27% agree that such demise has in fact taken place. The latter may be an example of an aporetic cluster that gained more prominence immediately post-evocation by Quine, but may have eroded somewhat since, to use a geological analogy.


One of the most lopsided outcomes of the survey concerns what epistemic attitude is more reasonable to hold about the existence and characteristics of the external world: 82% of respondents qualified themselves as realists, followed by only 5% skeptics and 4% idealists. This may be as close as it gets for philosophers to actually settle a dispute.


In terms of ethical frameworks, things are pretty evenly split, with deontology barely leading at 26%, followed by consequentialism at 24% and virtue ethics at 18%. Here too, as in the case of Platonism vs nominalism, the result makes sense to me, as it is hard to imagine what it would mean to say that deontology, for instance, is the “true” approach to ethics. These three are reasonable, alternative ways of approaching ethics – and there are a number of unreasonable ones that have been considered and discarded over time (e.g., Ayn Rand’s “Objectivism,” based on a gross distortion, and likely lack of understanding, of Aristotle).


Even more fascinating – and congruent with my general thesis – are the pairwise correlations that the authors uncovered between philosophical views, hinting at the conclusion that philosophers tend to develop fairly internally coherent positions across fields. For instance:


If one thinks that the distinction between analytic and synthetic truths is solid, then one also tends to accept the idea of a priori knowledge – naturally enough.


If a philosopher is a moral realist, she is also likely to be an objectivist about aesthetic value. Interestingly, moral realists also tend to be realists in philosophy of science, and Platonists about abstract objects.


If one thinks that Star Trek’s Kirk survives teleportation (rather than being killed and replaced by a copy), one also – coherently – often adopts a psychological view of personal identity.


Perhaps the most interesting and nuanced approach that Bourget and Chalmers take to their data unfolds when they move from univariate and bivariate to multivariate statistics, in this case principal components analysis. This allows them to examine the many-to-many relationships among variables in their data.
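A minimal sketch of how such a principal components analysis works may be useful here. The data below are simulated for illustration only – the stance labels and numbers are invented, not drawn from the Bourget-Chalmers survey – but they show the logic: if a single latent axis drives several stances, the first principal component recovers it, and the loadings of those stances all point the same way.

```python
import numpy as np

# Illustrative only: simulated responses, NOT the Bourget-Chalmers data.
# Rows = respondents, columns = stances coded +1/-1 (imagine, say, theism,
# libertarian free will, non-physicalism about the mind).
rng = np.random.default_rng(42)
latent = rng.choice([-1.0, 1.0], size=(200, 1))    # a latent "anti-naturalism" axis
flips = rng.choice([1.0, -1.0], size=(200, 3), p=[0.85, 0.15])
X = latent * flips                                 # each stance mostly tracks the axis

Xc = X - X.mean(axis=0)                            # center each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via singular value decomposition
explained = S**2 / np.sum(S**2)                    # fraction of variance per component
loadings = Vt[0]                                   # how each stance loads on PC1
scores = Xc @ loadings                             # each respondent's position on PC1

print("variance explained:", np.round(explained, 2))
print("PC1 loadings:", np.round(loadings, 2))
```

Because all three simulated stances track the same underlying axis, the first component accounts for most of the variance and its loadings share a sign – the numerical analogue of anti-naturalist positions clustering together at one end of the dimension, naturalist positions at the other.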


The first principal component they identify, that is, the one that explains most of the variance in the sample, they label “Anti-naturalism,” as it groups a number of responses that coherently fall under that position: libertarianism concerning free will, non-physicalism about the mind, theism, non-naturalism as a metaphilosophy, and the so-called “further fact” view of personal identity. If one were to plot individual responses along this dimension (which Bourget and Chalmers don’t do, unfortunately), one would see anti-naturalist philosophers clustering at the positive end of it, and naturalist philosophers clustering at the negative end.


The second-ranked principal component is labelled “Objectivism/Platonism” by the authors, and features positive loadings (i.e., multivariate correlations) of cognitivism in moral judgment, realism in metaethics, objectivism about aesthetic value, and of course Platonism about abstract objects.


The third component is about Rationalism, with positive loadings for the possibility of a priori knowledge, the analytic-synthetic distinction, and rationalism about knowledge.


(Should you be curious about where I fall in the above multi-variable landscape, see the post-scriptum to this essay.)


The overall picture that emerges, then, is very much that of a conceptual landscape with a number of alternative peaks, or aporetic clusters, which are internally coherent and well refined by centuries of philosophical inquiry. I suspect that historically many more “peaks” have been explored and eventually discarded, and that the height of the current peaks (as reflected by the consensus gathered within the relevant epistemic communities) is itself heterogeneous and dynamic, with some in the process of becoming more prominent in the landscape and others on their way to secondary status or destined to disappear altogether.


If I am right and philosophy makes progress by evoking and refining aporetic clusters that are constrained by empirical evidence about the world, it makes sense to ask one last question: where is philosophy going, considering its continuous tension with the sciences, themselves making progress in what may be more of a teleonomic fashion? (See part I on teleonomy.)


Philosophers have often advanced a model of their discipline as a “placeholder” for the development of eventually independent fields of inquiry, presenting philosophy as the business of conducting the initial conceptual exploration (and, hopefully, clarification) of a given set of problems, handing it then to a special science as soon as those problems become empirically tractable. There are quite a few historical examples to back up this view, from the emergence of the natural sciences to that of psychology and linguistics. Philosophy of mind is arguably in the midst of this very process, interfacing with the nascent cognitive sciences.


Predictably, this model is often twisted by detractors of philosophy to show that the field has been in a slow process of disintegrating itself, with a hard core (metaphysics, ethics, epistemology, logic, aesthetics) that is the last holdout, and which has shown increasing signs of itself yielding to the triumphal march of Science (with a capital “S”).


If that is the case, of course, so be it. But I seriously doubt it. What we have seen over the last few centuries, and especially the last century or so, is simply a transformation of what it means to do philosophy. Although we use the general word “science” to indicate everything from Aristotle’s forays into biology to what modern physicists are doing with the Large Hadron Collider, the very nature of science has evolved throughout the centuries, and keeps evolving still. What counts as good scientific methodology, sound scientific theorizing, or interesting scientific problems has changed dramatically from Aristotle to Bacon to Darwin to Stephen Hawking. Why should it be any different for philosophy?


One of the most obvious indications that philosophy has been reinventing itself over the past century or so is the stark onset of a panoply of “philosophies of.” “Philosophies of” are the way the field has been responding to the progressive emancipation of some of its former branches: science is no longer natural philosophy, but that simply means that now philosophers are free to philosophize about science (and, more specifically, about biology, quantum mechanics, etc.) without doing science. The same idea applies to linguistics (and philosophy of language), psychology (and philosophy of the social sciences), economics (and philosophy of economics), and so on.


Is this sort of transformation also about to affect philosophy’s core areas of metaphysics, ethics, epistemology, logic, and aesthetics? It depends on how one looks at things. On the one hand, to a greater or lesser extent it certainly has become increasingly difficult to engage in any of the above without also taking on board results from the natural and social sciences. While logic is perhaps the most shielded of all core philosophical areas in this respect (indeed, arguably it has contributed to the sciences broadly construed much more than it has received), it is certainly a good idea to do metaphysics while knowing something about physics (and biology); ethics while interfacing with political and social sciences, and even biology and neuroscience; epistemology while being aware of the findings of the cognitive sciences; and aesthetics with an eye toward biology and the cognitive and social sciences.


Nonetheless, all the core areas of philosophy remain very much recognizable as philosophy. Should they finally spawn their own independent disciplines, there will immediately arise in turn a need for more “philosophies of,” and the process will keep going, the field adapting and regenerating.


Ultimately, philosophy is here to stay for the same reason that other humanities (and the arts) will stay, regardless of how much science improves and expands, or how much narrow-minded politicians and administrators keep cutting humanities’ funding in universities. Human beings need more than facts and formulas, more than experiment and observation. They need to experience in the first person, and they need to reflect critically on all aspects of their existence. They need to understand, in the broadest possible terms, which means they need to philosophize.


_____


Post-Scriptum: where do I fall in the multi-variable conceptual landscape of philosophical positions? Here:


Epistemology: a priori knowledge (e.g., mathematics, logic) is possible.


Knowledge: neither a rationalist nor an empiricist, we need a judicious combination of the two, as Kant first surmised.


Abstract objects: something like nominalism, definitely not Platonism. I prefer Smolin’s “evocation” alternative, briefly discussed in part I.


Analytic / Synthetic distinction: it’s there, despite Quine making some good points about borderline or fuzzy cases.


External world: realism. (Though I do have strong sympathies for instrumentalism in philosophy of science, but the two are not mutually exclusive.)


Aesthetic values: anti-realist (though our aesthetic judgments, at a very basic level, have likely been shaped by evolution, which would make me a quasi-realist of sorts).


Transporter problem: Kirk dies every time. I do not think there is a unique metaphysical answer to personal identity, as it is a human construct constrained by human biology.


Ethics: virtue ethics, obviously.


Naturalism vs anti-naturalism: I’m definitely a naturalist, compatibilist about free will, non-theist, and think that the mind is the result of the activity of the physical (and embodied) brain.


Objectivism vs Platonism: quasi-realist in ethics (as explained here), ethical judgments are constrained by our understanding of human nature and what makes us flourish. More than one reasonable ethical judgment is compatible with any given empirical situation.

Philosophy as the evocation of conceptual landscapes – part I

I come to philosophy from the natural sciences, where the idea that my former academic field (evolutionary biology) makes progress is taken for granted, and where one would receive amused or puzzled looks by simply asking the question of what constitutes progress. And yet, philosophers of science have convincingly argued that it is much more difficult than one might think to provide a good account of how, precisely, science makes progress. When it comes to philosophy, however, it is philosophers themselves who often deny that the field makes progress, no matter how one understands “progress.” Which is puzzling, because the obvious question, then, is why on earth would they bother spending their lives contributing to an area of inquiry that reliably goes nowhere?


This essay is a shortened version of a full paper that appeared in a volume edited by Russell Blackford and Damien Broderick, entitled Philosophy’s Future: The Problem of Philosophical Progress (if you are an “Aurelian” subscriber to this site you can download it for free).


Part of the problem is that “progress” is itself not at all easy to define, with the term taking on different meanings in the natural sciences and, for instance, in mathematics. I suggest that a reasonable approach to this issue is to “go Wittgensteinian,” so to speak, and argue that “progress” is a family resemblance concept. Wittgenstein’s own famous example of this type of concept was the idea of “game,” which does not admit of a small set of necessary and jointly sufficient conditions in order to be defined, and yet this does not seem to preclude us from distinguishing games from not-games, at least most of the time. Progress, in a sense, could then be thought to be like pornography, to paraphrase the famous quip by US Supreme Court Justice Potter Stewart: “I know it when I see it.”


I submit that progress in science is a teleonomic (i.e., goal oriented) process, where the goal is to increase our knowledge and understanding of the natural world in terms of the simplest possible set of general principles. By contrast, progress in mathematics, logic, and – with some important qualification – philosophy, could be described as an advancement toward a better, more complete, or more modern condition, a definition that does not imply any final goal to be achieved.


Consider first mathematics and logic: I do not think it is tenable to understand them as teleonomic disciplines because there is an infinite number of logical-mathematical objects to discover and theorems to prove, so that mathematics and logic look like ever expanding disciplines, not converging toward any “theory of everything” as is (allegedly) the case for science. So I do think of mathematics and logic as advancing toward a better, more complete position, “better” in the sense that the process both opens up new lines of internally generated inquiry (the solution of mathematical and logical problems generates new problems, and so forth) and “more complete” in the sense that mathematicians and logicians are best thought of as engaged in the exploration of a space of conceptual (as distinct from empirical) possibilities.


At this point in the paper I discuss in detail the concept of “evoking,” as distinct from discovering or inventing, new notions in the space of conceptual possibilities pertinent to, respectively, mathematics, logic, and philosophy. It’s a rather technical topic, inspired by Roberto Unger and Lee Smolin’s treatment of mathematical Platonism in their excellent The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. The interested reader can find a summary of it here. The basic notion, though, is simple. Consider the game of chess. It is clearly a human invention, i.e., the rules of chess are not independent of the human mind. Yet, once invented, the game has “evoked” (i.e., brought into existence) a number of objectively provable statements about it, including some that can be proven mathematically. Keeping this in mind, let’s now go back to how philosophy makes progress.


Current discussions on progress in philosophy have a number of precedents, although it is actually surprising how few scholarly papers have been devoted to the topic. One that I think is particularly important to discuss is Moody’s distinction among three concepts of progress, first published in 1986. What he calls progress-1 takes place when there is a specifiable goal about which people can agree that it has been achieved, or what counts toward achieving it. If you are on a diet, for instance, and decide to lose ten pounds, you have a measurable specific goal, and you can be said to make progress insofar as your weight goes down and approaches the specific target. Progress-2 occurs when one cannot so clearly specify a goal to be reached, and yet an individual or an external observer can competently judge that progress has occurred when comparing the situation at time t with the situation at time t+1, even though the criteria by which to make that judgment are subjective. Moody thinks, for example, that composers guided by an inner sense of when they are “getting it right” would be making this sort of progress while composing. Finally, progress-3 is a hybrid animal, instantiated by situations where there are intermediate but not overarching goals.


Interestingly, Moody says that mathematics makes progress-3, insofar as there is no overall goal of mathematical scholarship, and yet mathematicians do set intermediate goals for themselves, and the achievement of these goals (like the proof of Fermat’s Last Theorem) is recognized as such by the mathematical community.


Moody’s next step is to assume provisionally that philosophy is a type of inquiry, and then ask whether any of his three categories of progress applies to it. The first obstacle is that philosophy does not appear to have consensus-generating procedures such as those found in the natural sciences or in technological fields like engineering. Moody claims that “the only thing that philosophers are likely to agree about with enthusiasm is the abysmal inadequacy of a particular theory.” While I think this is actually a bit of a caricature, I do not share Moody’s pessimistic assessment even if the observation were true: negative progress, that is, the elimination of bad ideas, is progress nonetheless.


Moody concludes that philosophy does not make progress-1 or progress-3, because its history has not yielded a trail of solved problems. What about progress-2? He takes up the possibility that perhaps philosophy is not a type of inquiry after all, and analyzes in some detail two alternative conceptions: Wittgenstein’s idea of philosophy as “therapy,” and Richard Rorty’s so-called “conversational model” of philosophy. As Moody summarizes:


“Wittgenstein believed that philosophical problems are somehow spurious and that the activity of philosophy … should terminate with the withdrawal, or deconstruction, of philosophical questions.”


On this view, then, there is progress, of sorts, in philosophy, but it is the sort of “terminus” brought about by committing seppuku. As Moody rather drily comments, while nobody can argue that Wittgenstein’s ideas have not been taken seriously, it is equally undeniable that philosophy has gone forward largely as if the therapeutic approach had never been articulated. If a proposed account of the nature of philosophy has been so blatantly ignored by the relevant epistemic community, we can safely file it away.


Rorty’s starting point is what he took to be the (disputable, in my opinion) observation that philosophy has failed at its self-appointed task of analysis and criticism. Moody quotes him as saying: “The attempts of both analytic philosophers and phenomenologists to ‘ground’ this and ‘criticize’ that were shrugged off by those whose activities were purportedly being grounded and criticized.” Rorty arrived at this because of his rejection of what he sees as philosophy’s “hangover” from the seventeenth and eighteenth centuries, when philosophers were attempting to set their inquiry within a framework that allowed a priori truths to be discovered, even though David Hume dealt that framework a fatal blow during the eighteenth century.


While Moody finds much of Rorty’s analysis on target, I must confess that I don’t. For instance, the fact that other disciplines (like science) marched on while refusing to be grounded or criticized by philosophy is neither entirely true (lots of scientists have paid and still pay attention to philosophy of science, for example), nor necessarily apt as the ultimate test of the value of philosophy even if true: creationists and climate change deniers, after all, shrug off any criticism of their positions, but that doesn’t make such criticism invalid, or futile for that matter (since others are responding to it).


Yet there is something to be said for thinking of philosophy as a “conversation” more than an inquiry, as Rorty did. The problem is that this and other dichotomies presented to us by Rorty are, as Moody himself comments, false: “We do not have to choose between ‘saying something,’ itself a rather empty notion that manages to say virtually nothing, and inquiring, or between ‘conversing’ and ‘interacting with nonhuman reality.’” Indeed we don’t.


What account, then, can we turn to in order to make sense of progress in philosophy, according to Moody? I recommend that the interested reader check Moody’s discussion of Robert Nozick’s “explanational model” of philosophy, as well as John Kekes’ “perennial problems” approach, but my own treatment here will jump to Nicholas Rescher’s 1978 proposal of the concept of “aporetic clusters,” one path that supports the conclusion that philosophy does make progress, and that the progress in question is of the type-2 variety.


Rescher thinks that it is unrealistic to expect consensus in philosophy, and yet does not see this as a problem, but rather as an organic outcome of the nature of philosophical inquiry:


“In philosophy, supportive argumentation is never alternative-precluding. Thus the fact that a good case can be made out for giving one particular answer to a philosophical question is never considered as constituting a valid reason for denying that an equally good case can be produced for some other incompatible answers to this question.”


In fact, Rescher thinks that philosophers come up with “families” of alternative solutions to any given philosophical problem, which he labels aporetic clusters. According to this view, some philosophical accounts are eliminated, while others are retained and refined. The keepers become philosophical classics, like “virtue ethics,” “utilitarianism,” or “Kantian deontology” in ethics, or “constructive empiricism” and “structural realism” in philosophy of science. Rescher’s view is not at all incompatible with my idea of philosophy as evoking (in the sense briefly described above), and then exploring and refining, peaks in conceptual landscapes. As Moody aptly summarizes it: “That there are ‘aporetic clusters’ is evidence of a kind of progress. That the necrology of failed arguments is so long is further evidence.”


(next: empirical examples of progress in philosophy)

Darwinism in the modern era: more on the evolution of evolutionary theory – part II

The many conceptual and empirical advances in evolutionary biology during the second half of the twentieth century that I briefly sketched in part I of this essay naturally led to a broader theoretical turmoil. More and more people felt that the Modern Synthesis (MS) was becoming too restrictive a view of evolution to keep playing the role of biology’s “standard model.” This group included Carl Schlichting and myself, Mary Jane West-Eberhard (2003), Eva Jablonka, and others. But arguably none made a more concerted, if partial, effort than Stephen Jay Gould in his magnum opus, The Structure of Evolutionary Theory, published in 2002.


The Structure comprises two parts: the first traces the history of evolutionary ideas, both pre- and post-Darwin, and the second presents Gould’s view of contemporary theoretical debates within the field. While the constructive part of the book focuses too much on paleontology and multilevel selection, Gould correctly identified three conceptual pillars of Darwinism that got imported wholesale into the Modern Synthesis:

1. Agency: the locus of action of natural selection. For Darwin, this was the individual organism, while within the MS the focus expanded to the gene, thus leading to an overall increase of agency. Gould advocated further expansion, to include multiple levels of selection, from the gene to the individual to kin groups to species. This suggestion is perfectly in line with that of other authors advocating an Extended Evolutionary Synthesis (EES).


2. Efficacy: the causal power of natural selection relative to other evolutionary mechanisms. According to Darwin, natural selection is the chief mechanism of evolutionary change, and certainly the only one capable of producing adaptation. The MS formally described—by means of population genetic theory—four additional mechanisms: mutation, recombination, migration, and genetic drift. Gould adds a positive role for developmental constraints to the picture, and advocates of the EES further expand on this theme, including concepts such as those of evolvability (i.e., change over time of evolutionary mechanisms themselves), facilitated variation (from developmental biology), and niche construction (from ecology), among others.


3. Scope: the degree to which natural selection can be extrapolated from micro- to macro-evolutionary outcomes. As we saw last time, this was controversial early on, with the MS settling for the same basic picture proposed by Darwin: so-called macro-evolutionary processes are simply micro-evolutionary ones writ large. Gould, of course, questions this, on the basis of the already discussed theory of punctuated equilibria. Proponents of the EES also doubt the received view, suggesting that species selection and group-level ecological characteristics may partially, though not entirely, decouple micro- from macro-evolution.


If Gould’s general take is right, then, evolutionary theory has changed over time and the process can best be tracked conceptually by keeping tabs on changes in the agency, efficacy, and scope of natural selection within the theory. This, incidentally, makes natural selection the fundamental idea in biological evolution, and rightly so. No other concept, not even that of common descent, has had such a complex and convoluted history within the field. Moreover, what the EES is attempting to do can also be understood within Gould’s framework.


Now, as we have seen so far, the latter part of the twentieth century and the beginning of the twenty-first century have seen a renewed debate about the status of contemporary evolutionary theory, with a number of calls for an expansion of the Modern Synthesis into an Extended Evolutionary Synthesis. But what does the latter look like, at the current state of the discussion?


I provided an early sketch of it in a paper published in Evolution back in 2007 (available to Socratic level subscribers from my archives), and an updated and expanded version of that sketch has been put out by Laland and collaborators in 2015. My early analysis began by noting that philosopher Karl Popper famously interpreted the MS as a theory of genes, lacking a comparable theory of forms (i.e., phenotypes). The field got started, however, as a theory of forms in Darwin’s days, with genetics taking on a fundamental role only after the rediscovery of Mendel’s work at the turn of the twentieth century. Consequently, I suggested, a major goal that an EES aims for is an improvement and unification of our theories of genes and of forms. This, it seems to me, may best be achieved through an organic grafting of novel concepts onto the foundational structure of the MS, particularly evolvability, phenotypic plasticity (i.e., the ability of a single genotype to produce different phenotypes in response to environmental variation), epigenetic inheritance, complexity theory (from mathematics), and the theory of evolution in highly dimensional adaptive landscapes (from population genetics).


Laland et al.’s paper from 2015 is the most focused and systematic attempt to articulate the EES, explicitly aiming at clearing away inconsistencies in previous works. They begin with a comparison of core assumptions of the MS versus the EES. To give you an idea of what they are getting at, here are the entries for inheritance:


Genetic inheritance (MS): Genes constitute the only general inheritance system. Acquired characters are not inherited.


Inclusive inheritance (EES): Inheritance extends beyond genes to encompass (transgenerational) epigenetic inheritance, physiological inheritance, ecological inheritance, social (behavioural) transmission and cultural inheritance. Acquired characters can play evolutionary roles by biasing phenotypic variants subject to selection, modifying environments and contributing to heritability.


They then run through a series of alternative interpretations of important evolutionary phenomena according to the two frameworks. For instance, in the case of developmental plasticity:


MS: conceptualized as a genetically specified feature of individuals that can evolve under selection and drift. Focus is on the conditions that promote adaptive evolution of plastic versus non-plastic phenotypes. The primary evolutionary role of plasticity is to adjust phenotypes adaptively to variable environments. Plastic responses regarded as pre-filtered by past selection.


EES: considers reducing plasticity to a genetic feature to be explanatorily insufficient. Retains an interest in adaptive evolution of plasticity, but also focuses on how plasticity contributes to the origin of functional variation under genetic or environmental change, and how the mechanisms of plasticity limit or enhance evolvability, and initiate evolutionary responses. Many plastic responses viewed as reliant on open-ended (e.g., exploratory) developmental processes, and hence capable of introducing phenotypic novelty.


Moreover, Laland et al. provide readers with a comparison of different predictions originating from the competing frameworks. For instance, in the case of the relationship between genetic and phenotypic change:


MS: genetic change causes, and logically precedes, phenotypic change, in adaptive evolution.


EES: phenotypic accommodation (a non-genetic process) can precede, rather than follow, genetic change, in adaptive evolution.


Laland et al. also present a graphical outline of the structure of the Extended Evolutionary Synthesis, as they see it. It is instructive to comment on a number of features of their model. Phenotypic evolution—the target of explanation of the entire framework, just as it was for Darwin—is assumed to be affected by three classes of processes: those that generate novel variation, those that bias selection, and those that modify the frequency of heritable variation.


Beginning with the first class, these processes include classical ones like mutation, recombination, gene expression, and developmental regulatory processes, but also EES-specific ones like environmental induction (of developmental processes), niche construction, phenotypic accommodation, and facilitated variation. The second class (processes that bias selection) includes only EES-related entries: developmental bias and niche construction. The third class (processes that affect heritable variation) comprises only classical ones (mutation pressure, selection, drift, and gene flow), but these are in turn affected by the previous class.


The resulting picture is one of a complete and, it seems to me, highly coherent meshing of the MS and the EES perspectives, where the latter adds to but does not really replace any of the previously recognized mechanisms. Which brings me to the next question I wish to address concerning the most recent developments of the now more than 150-year-old Darwinian tradition: is the proposed shift from the MS to the EES akin to a Kuhnian paradigm shift?


One of the most controversial aspects of the discussion surrounding the MS versus EES debate is the extent to which the new framework is claimed to be distinct from the old one. At one extreme, there are scientists who simply reject the idea that the EES presents much that is new, claiming that whatever new concepts are being advanced were in fact already part of the MS, either implicitly or explicitly. At the opposite extreme, some supporters of the EES have been making statements to the effect that the new framework somehow amounts to a rejection of fundamental aspects of Darwinism, akin to what philosopher Thomas Kuhn famously termed a “paradigm shift” within the discipline, thus aligning themselves with a tradition that can be fairly characterized as anti-Darwinian. My own position has always been that the truth lies somewhere in the middle (in this case!): the EES is significantly different from the MS, and yet the change does not reflect any kind of scientific revolution within modern biology, but rather more of the same process that has led us from the original Darwinism to neo-Darwinism to the MS itself.


Kuhn famously argued—on the basis, crucially, of examples drawn exclusively from physics—that science goes through an alternation of two phases: during “normal” or “puzzle solving” science, practitioners are focused on addressing specific issues from within a given theoretical framework and set of methods (the “paradigm”), which itself is not the target of empirical testing or conceptual revision. From time to time, however, a sufficient number of “anomalies,” or unresolved puzzles, accumulate and precipitate a crisis within the field. At that point scientists look for a new paradigm, better suited to take into account the hitherto unresolved issues. If they find it, the new framework is quickly adopted and deployed in turn to guide a new phase of normal science.


Kuhn suggested a number of approaches to tell whether a paradigm shift has occurred (or, in our case, is in the process of occurring). These include five criteria for theory comparison, as well as three classes of potential incommensurability between theories. Let’s begin by examining the five criteria: (1) accuracy, (2) consistency (internal and with other theories), (3) explanatory scope, (4) simplicity, and (5) fruitfulness of the accompanying research program. Here is how the MS and EES compare, in my mind, according to the Kuhnian criteria:


Accuracy, MS: building on the original Darwinism, it has produced quantitative accounts of the change over time of the genetic makeup of natural populations.


Accuracy, EES: incorporates the same methods and results of both the original Darwinism and the MS, adding the explanation of developmental and other self-organizing biological phenomena.


Consistency, MS: as internally consistent as any major scientific theory, features explicit external links to genetics, molecular biology, and ecology.


Consistency, EES: same degree of internal and external consistency as the MS, with the addition of external links to developmental biology, genomics, and complexity theory, among others.


Scope, MS: has consistently uncovered and explained new facts about the biological world for the past several decades.


Scope, EES: further expands the scope of the MS by explicitly including questions about the origin of evolutionary novelties, the generation of biological form, and the problem of genotype–phenotype mapping.


Simplicity, MS: uses a limited number of mechanisms (natural selection, genetic drift, mutation, migration, assortative mating) to account for evolutionary change over time.


Simplicity, EES: makes use of all the mechanisms of the MS, adding a number of others such as epigenetic inheritance, evolvability, facilitated (i.e., self-emergent) variation, etc.


Fruitfulness, MS: has a history of more than 70 years of vigorous research programs, building on the previous fruits of the original Darwinism.


Fruitfulness, EES: builds on the ongoing research program of the MS but has also already led to empirical (e.g., emergent properties of gene networks and of cell assemblages) and conceptual (e.g., evolvability, phenotypic plasticity) discoveries, though of course it is very much a work in progress as of the moment of this writing.


Even this brief survey ought to make it clear that the MS => EES transition is not a paradigm shift, but rather an organic expansion. Then there is the second test proposed by Kuhn to consider, a test in a sense more stringent: that of incommensurability. If two theories are incommensurable in even one of the three classes, a good argument can be made that a paradigm shift is occurring. The classes in question are methodological, observational, and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. The EES takes on board the same puzzles, and the same set of approaches, of the MS, but it also adds new puzzles (such as the appearance of so-called evolutionary novelties, like eyes, feathers, spines, and so forth), which were largely untouched, or dealt with only superficially, by the MS. It further adds new approaches, like interpretations of evolutionary changes in terms of niche construction, developmental plasticity, or epigenetic inheritance.


Observational incommensurability is tightly linked to the idea that observations are theory dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. For instance, in pre-relativity physics there was a (supposed) fact of the matter that some kind of substance, referred to as ether, had to be present in space in order for light to travel through it. After the famous Michelson–Morley experiment demonstrating that there was no such thing as ether, the relevant fact became the constancy of the speed of light and therefore the relativity of frames of reference. Nothing like that seems to be happening in evolutionary biology at the moment: the very same facts that have been catalogued and explained by the MS enter into the empirical corpus of the EES, to be further expanded with new facts that come to the forefront because of the additional conceptual advancements.


Semantic incommensurability has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being that of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity. Again, I do not discern any analogous shift in the terminology used by proponents of the MS versus EES. Key biological concepts, such as species, genes, phenotypes, niche, and so forth, retain similar and perfectly commensurable meanings, even though our understanding of their referents becomes increasingly sharp.


It seems, therefore, that Darwinism after the Modern Synthesis has proceeded along similar lines to those followed by Darwinism before the MS: a continuous expansion of both empirical knowledge and conceptual understanding, an expansion that is likely to continue for the remainder of the current century and beyond.


This discussion is in part an opportunity to call for a bit of house cleaning, so to speak, on the part of evolutionary biologists and philosophers of science. For instance, it is truly astounding that in France the Modern Synthesis, and in particular population genetics, was not included in standardized university curricula, or addressed within main research programs until the 1970s. Against the Darwinian picture that was developing abroad, French life scientists supported various forms of Lamarckism throughout the twentieth century, and some of that attitude still lingers. There is no good scientific reason for that, and it is hard not to pin such an attitude on sheer nationalism and the cultural worship of Lamarck. Needless to say, that sort of thing has no place in a mature science. The French are not the only culprits here, and the fact that there are “German,” “Russian,” and other “traditions” within evolutionary biology is more than a little bizarre.


It’s also somewhat surprising that some behavioral biologists are still clinging to simplistic notions from sociobiology and evolutionary psychology, which have long since been debunked. It’s not the basic idea that behaviors, and especially human behaviors, evolve by natural selection and other means that is problematic. The problem, rather, lies with some of the specific claims made, and methods used, by evolutionary psychologists.


It is also both surprising and problematic that some researchers are still pursuing non-“mechanistic” or non-“physicalist” research programs, whatever that means. Indeed, a major point of the EES is to help bring the focus back on the organism and even the ecosystem, and yet—as I just argued above—this does not require a wholly alternative synthesis at all.


Over time, Darwinism has advanced its own agenda by incorporating a variety of themes proposed by its critics, including “saltationism” (punctuated equilibrium) and “Lamarckism” (epigenetic inheritance, phenotypic plasticity, and niche construction). This is fine, so long as we keep in mind that the terms within scare quotes above are to be understood in a modern, radically updated sense, and not along the lines of what biologists were thinking decades or even centuries ago. It’s this inherent flexibility of Darwinism that has allowed people with views as divergent as Stephen Jay Gould and Richard Dawkins to (rightly) claim the Darwinian mantle.


This ability to incorporate critical ideas is neither just a rhetorical move nor somehow indicative of serious problems inherent in the Darwinian approach. In the end, the various Darwinian traditions in evolutionary biology are best understood as a wide-ranging family of conceptual and research approaches, always in dialectic dialogue with each other, always in a constructive tension that transcends the agendas and (sometimes strong) personalities of the many individual scientists who recognize themselves as intellectual descendants of Charles Darwin. More than a century and a half later, evolutionary theory keeps evolving.

Darwinism in the modern era: more on the evolution of evolutionary theory – part I

Scientific theories are always provisional accounts of how the world works, intrinsically incomplete, and expected to be replaced by better accounts as science progresses. The theory of evolution, colloquially referred to as “Darwinism,” is, of course, no exception. It began in 1858 with joint papers presented to the Linnean Society by Charles Darwin and Alfred Russel Wallace and was formalized shortly thereafter in On the Origin of Species. The original theory featured two conceptual pillars: the idea of common descent (which was accepted by a number of scholars even before Darwin), and that of natural selection as the chief mechanism of evolution, and the only one capable of generating adaptation.


The first bit of tinkering took place shortly thereafter, when Wallace himself, together with August Weismann, proposed to drop any reference to Lamarckian theories of heredity, on the strength of the newly proposed notion of a separation between the sexual and somatic cell lines, thus generating what is properly known as neo-Darwinism. After the theory underwent a temporary crisis, as a result of increasing skepticism among paleontologists and developmental biologists, we enter the two phases of the so-called Modern Synthesis, the biological equivalent of the Standard Model in physics: the first phase consisted in the reconciliation between Mendelism (i.e., genetics) and Darwinism (i.e., the theory of natural selection), leading to the birth of population genetics; the second phase consisted in an expansion of the theory to include fields like natural history, population biology, paleontology, and botany.


What happened to “Darwinism” after 1950? The Modern Synthesis (MS) reigned as the dominant paradigm in the field, largely unchallenged until the late 1980s and early 1990s, at which point a number of authors, coming from a variety of disciplines, began to question not so much the foundations as the accepted structure of the MS. By the very late twentieth and early twenty-first centuries, calls to replace the MS with an Extended Evolutionary Synthesis (EES) had begun to grow loud, and to be countered by equally loud voices raised in defense of the MS. How did this happen, and what does it mean for the current status and future of evolutionary theory? To understand this we need to step back for a moment and take a broad view of conceptual developments in the biological sciences during the second half of the twentieth century.


The second half of the twentieth century has been an incredibly exciting time for biology, a period that has put the discipline on the map at least at the same level of interest as physics, the alleged queen of sciences, and arguably even more so. Let me remind you of some of the major developments that have made this possible, because they all—directly or indirectly—eventually fed into the current discussion over the MS versus the EES as dominant conceptual frameworks in evolutionary biology.


A major breakthrough in one of the foundational fields of the Modern Synthesis, population genetics, came with the invention of a technique called gel electrophoresis, which for the first time made it possible to directly assess protein and gene frequencies in large samples drawn from natural populations. While research on electrophoresis began as early as the 1930s, it was the breakthrough work of Richard Lewontin and John Hubby in 1966 that set population genetics on fire. The unexpected discovery was, as the authors put it, that “there is a considerable amount of genic variation segregating in all of the populations studied …[it is not] clear what balance of forces is responsible for the genetic variation observed, but [it is] clear the kind and amount of variation at the genic level that we need to explain.” This new problem posed by a much larger degree of genetic variation than expected in natural populations eventually led to a revolution in population genetics, and also directly to the origination of the impactful neutral theory of molecular evolution first proposed in 1968 by Motoo Kimura.


The neutral theory was a landmark conceptual development because for the first time since Darwin it challenged the primacy of natural selection as an agent of evolutionary change. To be sure, Kimura and colleagues didn’t think that phenotypic evolution (i.e., the evolution of complex traits, like eyes, hearts, etc.) occurred in a largely neutral fashion, but if it turned out that much of what goes on at the molecular level is independent of selective processes, then the obvious question becomes how largely neutral molecular variation can give rise to non-neutral phenotypic outcomes. Eventually, the debate about the neutral theory—which raged on intensely for a number of years—was settled with a sensible and empirically consistent compromise: a lot of molecular variation is “near-neutral,” which means that the role of stochastic processes such as genetic drift at the molecular level is significantly higher than might have been expected on the basis of a face-value reading of the tenets of the Modern Synthesis.
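For readers who want a concrete feel for what “stochastic processes such as genetic drift” amount to, here is a minimal simulation sketch. It is my own illustration, not anything drawn from Kimura’s work (his results rest on diffusion theory, not simulation): the textbook Wright-Fisher model, in which a selectively neutral allele changes frequency purely because a finite population resamples its gene pool each generation.

```python
import random

def wright_fisher(pop_size, p0, generations, seed=0):
    """Trajectory of a selectively neutral allele under pure genetic drift:
    each generation, all 2N gene copies are resampled at random from the
    previous generation's allele frequency (the Wright-Fisher model)."""
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Binomial resampling of 2N copies; note there is no selection term.
        copies = sum(1 for _ in range(2 * pop_size) if rng.random() < p)
        p = copies / (2 * pop_size)
        trajectory.append(p)
    return trajectory

# In a small population, chance alone carries the allele far from its
# starting frequency of 0.5; many replicate runs end in loss (p = 0) or
# fixation (p = 1) without any selection acting at all.
final_frequencies = [wright_fisher(pop_size=50, p0=0.5, generations=200, seed=s)[-1]
                     for s in range(20)]
print(final_frequencies)
```

The point of the toy model is simply that random sampling in finite populations suffices, on its own, to change, and eventually fix or eliminate, allele frequencies, which is why near-neutral molecular variation can persist and turn over without natural selection driving it.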


What could possibly connect the near-neutral molecular level with the obviously functional and therefore likely selected phenotypic level? The obvious answer was: development. The only problem was that developmental biology had famously been left out of the Modern Synthesis. It looked like something was seriously amiss with modern evolutionary theory.


Things began to change as an offshoot of yet another revolution in biology: the rapid advances made in molecular biology after the discovery of the structure of DNA in 1953. While molecular biology kept accelerating its pace independently of organismal biology for several decades—until their confluence in the era of evolutionary genomics—in the late 1970s the existence of homeotic genes regulating embryonic patterns of development in Drosophila was discovered. It soon turned out that this and similar classes of regulatory genes are both widespread and evolutionarily conserved (i.e., they don’t change much over time), so that they are one of the major keys to the understanding of the complex interplay among genotype, development, and phenotype.


This new approach eventually flourished into a new field, known as evolutionary developmental biology, or evo-devo for short, and one of its major contributions so far has been a marked shift of emphasis in the study of morphology and development, from the sort of classical population genetic studies focused on structural genes to an emphasis on regulatory genes and their potential to help us build a credible theory of the origin of evolutionary novelties (i.e., new structures like wings or flowers). As Prud’homme and colleagues put it in 2007:


Because most animals share a conserved repertoire of body-building and -patterning genes, morphological diversity appears to evolve primarily through changes in the deployment of these genes during development. … Morphological evolution relies predominantly on changes in the architecture of gene regulatory networks and in particular on functional changes within [individual] regulatory elements. … Regulatory evolution: (i) uses available genetic components in the form of preexisting and active transcription factors and regulatory elements to generate novelty; (ii) minimizes the penalty to overall fitness by introducing discrete changes in gene expression; and (iii) allows interactions to arise among any transcription factor and [regulatory genes].


The picture that emerges from this and many other studies is not incompatible with the simple mathematical models that were incorporated into the Modern Synthesis, but it does present us with a much more complex and nuanced understanding of genetic, developmental, and phenotypic evolution, so much so that it is little wonder that people have been increasingly referring to the current, very much in flux, version of evolutionary theory as the Extended Synthesis.


I have already mentioned the molecular biology revolution initiated in the 1950s, which eventually led to the genomic revolution. Both these radical developments initially affected evolutionary biology only indirectly, by providing increasingly powerful new analytical tools, such as gel electrophoresis, and later on gene sequencing. But inevitably genomics itself became an evolutionary science, once technical developments made it possible to sequence entire genomes more quickly and cheaply, and molecular biologists fully internalized, as Theodosius Dobzhansky famously put it, that nothing in biology makes sense except in the light of evolution. The structure and function, as well as the sheer diversity, of genomes are themselves not understandable if not through evolutionary lenses, so that genomics and evolutionary biology currently represent a rare example of synergism between scientific disciplines: the former provides tools for the latter to advance, while the latter provides a theoretical understanding of the data that the former accumulates at such a heady pace.


While of course other disciplines within biology have made progress during the second part of the twentieth century—ecology, for instance—the next bit of this panoramic view I wish to briefly comment on concerns yet another area of inquiry that had played only a secondary role during the Modern Synthesis: paleontology. The field had always been a thorn in the side of Darwinism, since many paleontologists early on had rejected the Darwinian insight, proposing instead the idea that macro-evolutionary change was qualitatively distinct from the sort of micro-evolution that Darwin famously modeled on the basis of plant and animal breeding (and of course, notoriously, creationists have always made a big deal of the distinction between micro- and macro-evolution, often without understanding it). Indeed, it was this very rejection, together with the apparent incompatibility of Mendelism and Darwinism, that led to the above-mentioned period of “eclipse” of the Darwinian theory at the turn of the twentieth century.


Paleontology’s early alternative to Darwinism took the shape of orthogenetic theory (according to which organisms change in the same direction over millions of years), which was essentially a scaled-up version of Lamarckism, since it postulated an inner vital force responsible for long-term evolutionary trends that many paleontologists saw as otherwise inexplicable within the Darwinian framework. It was George Gaylord Simpson’s magisterial role within the Modern Synthesis that cleared away any remnants of orthogenesis from paleontology, doing for that field what Fisher, Haldane, and Sewall Wright had done for Mendelian genetics: he convincingly argued that the sort of so-called “micro”-evolutionary processes accounted for by Darwinism could be extrapolated to geological timescales, thus yielding the appearance of macro-evolutionary change of a qualitatively different nature. In reality, Simpson argued, the latter is simply a scaled-up version of the former.


Simpson, however, was arguably too successful, essentially making paleontology a second-rate handmaiden to population genetics while overlooking the potential for its original contributions—theoretical as well as empirical—to the overall structure of evolutionary theory. Eventually, Simpson’s “conservatism,” so to speak, led to a backlash: Niles Eldredge and Stephen Jay Gould, the enfants terribles of modern paleontology, published in 1972 a landmark paper proposing the theory of punctuated equilibria, according to which evolution, when seen at the macroscopic scale, works by fits and starts: long periods of stasis during which not much appears to be happening in a given lineage, interrupted by sudden “bursts” of phenotypic change. The theory was immediately misunderstood by many population geneticists, who thought that Eldredge and Gould were attempting to revive an old notion known as “hopeful monsters,” i.e., of instantaneous evolutionary change resulting from genome-wide restructuring.


To be fair, at some point Gould’s own anti-establishment rhetoric, and the fact that creationists often mentioned him in their support, contributed to the confusion. But in fact the sort of punctuations that Eldredge and Gould saw in the fossil record take place over tens of thousands of generations, thus leaving plenty of time for standard Darwinian processes to do their work. As they pointed out later on in the debate, the real novel issue is that of prolonged stasis, over millions of years, not the allegedly (but not really) “instantaneous” change. A major class of explanations proposed especially by Gould for this observed stasis had to do with developmental processes and constraints, which nicely connects the new paleontology with the emerging field of evo-devo mentioned above, making both of them into pillars of the ensuing Extended Synthesis in evolutionary biology.


(next time: the Stephen Jay Gould conceptual revolution and the birth of the Extended Evolutionary Synthesis)

Biology’s last paradigm shift and the evolution of evolutionary theory – part II

Last time we saw how evolutionary theory has evolved over the past century and a half, why so many contemporary biologists are calling for what they refer to as the Extended Evolutionary Synthesis (see here and here), and how Darwin, building on David Hume, definitively rebutted the intelligent design argument advanced by William Paley. All of this was part of a discussion of a paper I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism” (full text here). In this second part we are going to look at whether the transition from natural theology to Darwinism constituted a paradigm shift, according to criteria laid out by philosopher of science Thomas Kuhn. As I mentioned last time, in the paper I also apply the same analysis to what happened after Darwinism, to more recent incarnations of evolutionary theory, but I will not discuss that section here.


According to Kuhn, change in science is comprised of two distinct and alternating phases: during “normal science” scientists use the dominant theoretical and methodological tools within a field of inquiry to solve “puzzles”, i.e. problems arising within a particular theory. However, from time to time the number of such problems that cannot be resolved within the adopted framework (“anomalies”) becomes large enough to trigger a crisis, which is then resolved if a new “paradigm” is arrived at to replace the old framework and provide new guidance for further normal-puzzle solving science.


Famously, one of the problems with the Kuhnian approach is that Kuhn did not define exactly what he meant by paradigm, which means that it is not entirely clear what may constitute a paradigm shift. For the purposes of my argument, I will use the commonly accepted interpretation of paradigms as encompassing the “disciplinary matrix”: not just the dominant theory or theories within a given field, but also the accompanying methodologies, the training strategies for the next generation of scientists, and – no less important – the pertinent metaphysical and epistemological assumptions.


Kuhn suggested five criteria for comparing competing paradigms and for theory choice: 1) Accuracy; 2) Consistency, both internal and with other theories; 3) Scope, in terms of how widely the explanatory reach of a theory extends; 4) Simplicity; and 5) Fruitfulness, in terms of further research. Roughly speaking, then, the comparison between the two paradigms of natural theology and Darwinism is striking. Let’s go through it criterion by criterion.


Accuracy


Natural theology: all explanations are ad hoc, since God’s will is inscrutable.


Darwinism: it can explain some surprising facts about the biological world, like the complexities of the flower structure in some orchid species, or the intricacies of the life cycles of some parasites.


Consistency


Natural theology: internally inconsistent with the idea of an all-powerful, all good God (the problem of natural evil).


Darwinism: as internally consistent as any major scientific theory; external links to other sciences, particularly Darwin’s prediction that the age of the earth had to be greater than what was commonly thought by the geologists and physicists of the time (it turns out he was right).


Scope


Natural theology: allegedly all-encompassing, but supernatural “explanations” are epistemologically empty. That is, to say “God did it” sounds like an explanation, but it really doesn’t explain anything.


Darwinism: new facts about the biological world that are explained by the theory have been consistently uncovered for more than one and a half centuries.


Simplicity


Natural theology: deceptively simple, if one neglects the obvious question of the origin and makeup of the Creator.


Darwinism: in its original form invokes a small number of mechanisms to explain biological history and complexity; more recent versions invoke more mechanisms, but still a relatively limited number.


Fruitfulness


Natural theology: did not lead to any research program or discovery.


Darwinism: has maintained a vigorous research program for more than one and a half centuries.
According to the above summary, then, the Darwinian paradigm is definitely preferable to Paley’s natural theology – not surprisingly. More interestingly for our purposes here, these are all clear signs of a paradigm shift – the only one, I argue in the rest of the original paper, ever to have occurred in evolutionary biology.


Kuhn’s theory of paradigm shifts famously included another controversial notion: incommensurability, the idea that crucial concepts within a given paradigm are simply not comparable to what superficially appear to be equivalent concepts within another paradigm. Kuhn identified three distinct types of incommensurability: methodological, observational and semantic.


Methodological incommensurability refers to the notion that different paradigms lead scientists to pick different “puzzles” as objects of research, as well as to the idea that scientists then develop distinct approaches to the solution of those puzzles. Obviously, natural theology and Darwinism are methodologically incommensurable: while they both rely on observation and comparative analyses, their goals are entirely different. For Paley, the focus is on the intricate complexity of living organisms, constantly interpreted as an obvious indication of the will and omnipotence of the Creator. Darwin, instead, pays particular attention to precisely those biological phenomena that are troubling to the notion of intelligent design, as in this famous passage:


“I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of Caterpillars.” (letter collected by Francis Darwin 1887).


More broadly, the sort of “puzzles,” to use Kuhn’s terminology, that Darwinists began to pay attention to concern the historical relationships between different species of organisms (something that is defined out of existence within the natural theological paradigm, since species are specially created), as well as the kind of ecological settings that bring about different adaptations (again, a problem ruled out within natural theology, where adaptations are the direct result of an intelligent act).


Observational incommensurability is tightly linked to the idea that observations are theory-dependent: what is considered a “fact” within one theoretical context may not be such in a different theoretical context. This is perhaps one of the most controversial of Kuhn’s notions, famously illustrated with images from Gestalt psychology, where the same pattern of lines on paper can be interpreted in dramatically different fashions (e.g., a vase or two faces, an old or a young woman, a rabbit or a duck, etc.).


The problem, of course, is that if we take the Gestalt metaphor seriously, we are led to the position that there is no true or even better way to interpret the data, which in turn leads to the constructivist temptation: any theory is just as good as any other, and there really is no way to measure progress in science. Kuhn strongly disavowed such an extreme interpretation of his ideas, and the notion of theory-dependence of observations is now commonly accepted in philosophy of science and embedded in textbook treatments of the subject.


Be that as it may, it is hard to imagine examples of observational incommensurability between natural theology and Darwinism, in part no doubt because no sophisticated way of gathering data was accessible – beyond direct observation and rudimentary experiments – to proponents of the two paradigms.


Finally we get to semantic incommensurability. This has to do with shifts in the meaning of terms used by scientists, one of Kuhn’s examples being the concept of “mass,” which is a conserved, static quantity in Newtonian mechanics, but becomes interchangeable with energy within the framework of Einstein’s relativity.


For the purposes of our discussion, one could make the argument that a similar situation holds for the shifting concept of species between natural theology and Darwinism. Both paradigms do refer to “species,” but the meaning of the term is entirely different. For Paley, species were fixed entities set in place by the action of the Creator – in that sense not far from Newton’s own conception of the physical world, and particularly of the laws governing it. For Darwin, however, species are ever changing entities with no sharp boundaries, which are altered by evolutionary processes in a continuous, gradualistic fashion.


All in all, then, it appears that whether we use the first set of Kuhnian criteria or the various notions of incommensurability, there are very strong reasons to conclude that the shift between natural theology and Darwinism was, in fact, a paradigm shift. It was also, in a very important sense, a shift from a proto-scientific to a scientific view of biology: Darwin and Wallace abandoned any reference to supernatural forces, thus literally establishing a whole new field of science, which keeps, ahem, evolving even today.

Biology’s last paradigm shift and the evolution of evolutionary theory – part I

I find myself devoting more and more time to public outreach and what is increasingly referred to as public philosophy. But, as you might know, I am also an academic, and I publish scholarship in philosophy of science. Which is why from time to time I like to combine the two and write a post that highlights some of my technical publications for the general public. (A series of these, covering the range 2010-2016, can be found here.) The present essay refers to a paper that I published back in 2012, entitled “Biology’s last paradigm shift. The transition from natural theology to Darwinism.”


The theory of evolution, which provides the conceptual framework for all modern research in organismal biology and informs research in molecular biology, has gone through several stages of expansion and refinement. Darwin and Wallace proposed the original idea back in 1858, centering on the twin concepts of natural selection and common descent. Shortly thereafter, Wallace and August Weismann worked toward the complete elimination of any Lamarckian vestiges from the theory, leaning in particular on Weismann’s concept of the separation of soma and germ lines, and resulting in what is sometimes referred to as “neo-Darwinism.”


The theory then experienced a period of “eclipse” at the turn of the 20th century. Many biologists accepted the idea of common descent but either rejected or greatly de-emphasized the importance of natural selection as an evolutionary mechanism. The situation was exacerbated by the rediscovery of Mendel’s work, which pointed to an apparent incompatibility between discrete inheritance particles (“genes”) and the sort of continuous quantitative variation necessary for natural selection to produce gradual evolutionary change.


Famously, the crisis was overcome with the reconciliation of Mendelian and statistical genetics made possible by Ronald Fisher, J.B.S. Haldane and Sewall Wright in the 1930s, and that later on culminated in the Modern Synthesis (henceforth, MS) of the 1940s, to which several additional authors made important contributions, including but not limited to Theodosius Dobzhansky, Julian Huxley, Ernst Mayr, and George G. Simpson. The MS is still the version of the theory largely accepted by the scientific community, and it is what graduate students in the discipline are trained on.


More recently, several authors have pushed for an Extended Synthesis (henceforth, ES) in evolutionary biology, initially from a variety of individual perspectives largely rooted in particular fields of inquiry, such as Evo-Devo or phenotypic plasticity, and now with a more concerted effort aimed explicitly at the formalization of a broader conceptual framework for evolutionary biology (see here).


The ES is very much a work in progress, but the idea is to accomplish a number of goals that have so far proven somewhat elusive: first and foremost, to finally bring developmental biology – famously left out of the MS – into the fold. Second, to provide a coherent way to reconcile the “holistic” tendencies of organismal biological research with the decidedly more reductionist approach of molecular biology and its most recent products, the various “-omics” (genomics, proteomics, metabolomics, etc.). Third, to incorporate as primary players a number of biological phenomena and processes that had been either discarded or minimized within the context of the MS, e.g., phenotypic plasticity, genetic accommodation, epigenetic inheritance, etc. Fourth, to expand the standard theoretical toolkit of the MS – which is primarily grounded in population and quantitative genetic theory – to include elements from computational biology and complexity theory. Fifth, to incorporate into evolutionary theory new concepts that have emerged from theoretical research during the past several years, chief among them the triad constituted by evolvability (i.e., the possibility of the evolution of evolutionary mechanisms), modularity (of different components of an organism’s phenotype), and robustness (i.e., the degree of resilience of developmental mechanisms).


In the paper I address the question of whether any of the above amounts to something akin to Thomas Kuhn’s famous paradigm shifts, i.e., whether evolutionary biology has ever undergone anything like what Kuhn describes as a moment of revolutionary science. I argue that it has not, and that it will not, even if the ES succeeds in establishing itself. Rather, I think the only time in the history of biology when such a transition occurred was during the 19th century, when Darwin’s original theory replaced the dominant “paradigm” of the day, Paley-style natural theology. In the following I will discuss that particular paradigm shift, together with the general Kuhnian notion of alternation between revolutionary and “normal” science. People interested in the same analysis applied to what happened to evolutionary theory after the onset of Darwinism are referred to the full paper.


William Paley is responsible for the most articulate defense of the idea that living organisms are the result of a special creation by supernatural forces. In his Natural Theology (1802) he famously introduced the metaphor of a watch and its watchmaker:


“In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there: I might possibly answer, that for any thing I know to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch, as well as for the stone? Why is it not as admissible in the second case as in the first? For this reason, and for no other, viz., that when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose (…) This mechanism being observed (…) the inference, we think, is inevitable, that the watch must have had a maker; that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.” (1802, p. 5).


Paley argued by analogy from the watch/watchmaker inference to the complex living organism/supernatural intelligent designer inference, despite the fact that such inference had been dealt a devastating philosophical blow by Hume, writing several decades before Paley, in 1779. Contrary to what has been assumed for a long time, Paley appeared to have been familiar with Hume, though he was obviously unconvinced by the latter’s arguments.


Darwin was initially persuaded by Paley’s reasoning, but eventually of course provided the decisive counterpoint that was missing in Hume: an alternative mechanism (natural selection) to generate both biological complexity and the pattern of common descent that was denied by Paley’s natural theology. It behooves us to briefly examine Darwin’s answer, so that we may then proceed to compare Paley’s and Darwin’s “paradigms” in view of Kuhn’s ideas to see whether we can sensibly talk of a paradigm shift occurring at the very onset of evolutionary biology as an independent discipline.


Although Paley is mentioned by name only once in The Origin (on p. 201 of the first edition in the chapter on “Difficulties on Theory”), Darwin mounts a concerted and sustained attack on natural theology in chapters VI and XIII of his magnum opus. Here are some relevant quotes to establish the case. First Darwin explicitly contrasts the type of “explanation” provided by natural theologians with a naturalistic explanation typical of the new science:


“He who believes in separate and innumerable acts of creation will say, that in these cases [of organisms’ behavior that have changed in response to a new environment, without – yet – an accompanying change in the structure of the proper organs] it has pleased the Creator to cause a being of one type to take the place of one of another type; but this seems to me only restating the fact in dignified language. He who believes in the struggle for existence and in the principle of natural selection, will acknowledge that every organic being is constantly endeavouring to increase in numbers; and that if any one being vary ever so little, either in habits or structure, and thus gain an advantage over some other inhabitant of the country, it will seize on the place of that inhabitant, however different it may be from its own place.” (1859, p. 186)


Then he proceeds to directly criticize Paley’s use of analogies to draw a parallel between the inference to human design and the inference to supernatural design:


“It is scarcely possible to avoid comparing the eye to a telescope. We know that this instrument has been perfected by the long-continued efforts of the highest human intellects; and we naturally infer that the eye has been formed by a somewhat analogous process. But may not this inference be presumptuous? Have we any right to assume that the Creator works by intellectual powers like those of man?” (p. 188)


Immediately afterwards, he goes so far as laying out the criteria for the falsification of his hypothesis, in sharp contrast of course with the natural theologian’s ideas, which cannot be falsified:


“If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. … Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each. No organ will be formed, as Paley has remarked, for the purpose of causing pain or for doing an injury to its possessor. If a fair balance be struck between the good and evil caused by each part, each will be found on the whole advantageous.” (pp. 189, 201)


To summarize, then, the idea of intelligent design – which had been around at least since Plato’s Timaeus – had been severely crippled on philosophical grounds by Hume in the 18th century. Still, Paley was able to mount a spirited and detailed defense of it at the onset of the 19th century, while Darwin provided the final blow to it (except of course for the modern resurgence of creationism, which is not an intellectually meaningful movement) for the first time on scientific grounds during the second part of the 19th century. It is on the latter transition that I wish to focus in part II, from the perspective of Kuhn’s paradigm shifts.


(next: the Paley-Darwin transition as a paradigm shift)

Biological landscapes, surfaces, and morphospaces: what are they good for?

ammonite

Metaphors are rampant in both everyday language and in science, and while they are inevitable, readers of this blog also know by now that I’m rather skeptical of their widespread use, both in professional publications and, especially, when addressing the general public. (See here, here, here, and here.) One such problematic metaphor is that of so-called adaptive landscapes, or surfaces, in evolutionary biology, something on which I did a fair amount of research when I was running a laboratory of ecology and evolutionary biology.

My criticism of the way the landscape metaphor has sometimes warped biologists’ thinking is detailed in a chapter that was published back in 2012 as part of a very interesting collection entitled The Adaptive Landscape in Evolutionary Biology, edited by Erik Svensson and Ryan Calsbeek for Oxford University Press. As often happens, mine was the lone contribution from the token skeptic…

Few metaphors in biology are more enduring than the idea of adaptive landscapes, originally proposed by Sewall Wright in 1932 as a way to visually present his ideas about the relative roles of natural selection and genetic drift in the course of evolution to an audience of typically non-mathematically savvy biologists. The metaphor was born troubled, not least because Wright presented different diagrams in his original paper that simply cannot refer to the same concept and are therefore hard to reconcile with each other. For instance, in some usages the landscape’s non-fitness axes represent combinations of individual genotypes, while in other usages the points on the diagram represent gene or genotypic frequencies, and so are actually populations, not individuals.

typical (hypothetical) fitness landscape

Things got even more confusing after the landscape metaphor began to play an extended role within the Modern Synthesis in evolutionary biology and was appropriated by G.G. Simpson to further his project of reconciling macro- and micro-evolution, i.e., of reducing paleontology to population genetics. This time the non-fitness axes of the landscape were phenotypic traits, not genetic measures at all. How one would then translate from one kind of landscape to the other (i.e., from genes to morphologies) is entirely unaddressed in the literature, except for vague gestures toward an ill-defined and very rarely calculated “genotype-phenotype mapping function.”

These are serious issues, if we wish to use the landscape metaphor as a unified key to an integrated treatment of genotypic and phenotypic evolution (as well as of micro- and macro-evolution). Without such unification evolutionary biology would be left in the awkward position of having two separate theories, one about genetic change, the other about phenotypic change, and no conceptual bridge to connect them.

To try to clarify things a bit, I went through the available literature and arrived at a typology of four different kinds of “landscapes” routinely used by biologists:

Fitness landscapes. These are the sort of entities originally introduced by Wright. The non-fitness dimensions are measures of genotypic diversity. The points on the landscape are typically population means, and the mathematical approach is rooted in population genetics. (see figure above)

Adaptive Landscapes. These are the not-so-straightforward “generalizations” of fitness landscapes introduced by Simpson, where the non-fitness dimensions are now phenotypic traits. The points on the landscape are populations speciating in response to ecological pressures, or even above-species-level lineages (i.e., this is about macro-evolution). There is — with very special exceptions discussed in my paper — no known way to move from fitness to adaptive landscapes or vice versa, even though such a move is usually assumed by authors.

Fitness surfaces. These were introduced by Russell Lande and Steve Arnold back in the ’80s to quantify the study of natural selection. Here phenotypic traits are plotted against a surrogate measure of fitness, and the landscapes are statistical estimates used in quantitative genetic modeling. The points on the landscape can be either individuals within a population or population means, in both cases belonging to a single species (i.e., this is about micro-evolution).

Morphospaces. These were first articulated by paleontologist David Raup in the mid-’60s, and differ dramatically from the other types for two reasons: (a) they do not have a fitness axis; and (b) their dimensions, while representing phenotypic (“morphological”) traits, are generated via a priori geometrical or mathematical models, i.e. they are not the result of observational measurements. They typically refer to across species (macro-evolutionary) differences, though they can be used for within-species work as well.

The first thing to note is that there are few actual biological examples of fitness landscapes (Wright-style) or Adaptive Landscapes (Simpson-style) available, while there is a good number of well understood examples of morphospaces (Raup-style) and particularly of fitness surfaces (Lande–Arnold style). These differences are highly significant for my discussion of the metaphor. The paper summarizes examples — both conceptual and empirical — of each type of landscape and the complex, often barely sketched out, relationships among the different types.
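To give a concrete flavor of the Lande-Arnold approach, here is a deliberately bare-bones sketch (mine, with made-up numbers; the real method handles multiple traits at once and quadratic terms for stabilizing or disruptive selection): the directional selection gradient for a single trait is simply the slope of the regression of relative fitness on the trait.

```python
def selection_gradient(trait, fitness):
    """Lande-Arnold style directional selection gradient for one trait:
    the slope of the least-squares regression of *relative* fitness
    (individual fitness divided by mean fitness) on the trait value."""
    n = len(trait)
    mean_w = sum(fitness) / n
    rel_w = [w / mean_w for w in fitness]          # relative fitness, mean = 1
    mean_z = sum(trait) / n
    cov = sum((z - mean_z) * (w - 1.0) for z, w in zip(trait, rel_w)) / n
    var = sum((z - mean_z) ** 2 for z in trait) / n
    return cov / var

# Hypothetical data in which larger trait values confer higher fitness:
beta = selection_gradient([-2, -1, 0, 1, 2], [0.4, 0.7, 1.0, 1.3, 1.6])
# beta ≈ 0.3: relative fitness rises by 0.3 per unit of trait
```

The statistical nature of the exercise is the point: a fitness surface here is an estimate from measured individuals, not a theoretical construct.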

When it comes to asking what the metaphor of landscapes in biology is for, we need to distinguish between the visual metaphor, which is necessarily low-dimensional, and the general idea that evolution takes place in some sort of hyper-dimensional space. Remember that Wright introduced the metaphor because his advisor suggested that a biological audience at a conference would be more receptive toward diagrams than toward a series of equations. But of course the diagrams are simply not necessary for the equations to do their work. More to the point, subsequent research by my former University of Tennessee colleague Sergey Gavrilets and his collaborators has shown in a rather dramatic fashion that the original (mathematical) models were far too simple and that the accompanying visual metaphor is therefore not just incomplete, but highly misleading. It turns out that hyper-dimensional dynamics are very much qualitatively different from the low-dimensional ones originally considered by Wright.

In a very important sense Wright’s metaphor of fitness landscapes was meant to have purely heuristic value, to help biologists think in general terms about how evolution takes place, not to provide a rigorous analysis of, or predictions about, the evolutionary process (that work was left to the math). Seen from this perspective, fitness landscapes have been problematic for decades, generating research aimed at solving problems that do not actually exist as formulated, like the so-called peak shift problem (how do populations stuck on a local fitness peak “shift” to a higher one?): high-dimensional landscapes don’t have “peaks” at all, since their topology is radically different.
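To get a feel for why intuitions honed on low-dimensional landscapes break down, consider a toy calculation (my illustration, far simpler than Gavrilets’ actual models): on a landscape where fitnesses are assigned to genotypes independently at random (a “House of Cards” assumption), a genotype with n one-mutant neighbors is a local peak only if it happens to be the fittest of the n + 1 genotypes involved, which occurs with probability 1/(n + 1). On an L-locus binary genotype space n = L, so peaks become rarer as dimensionality grows:

```python
import random

def peak_fraction(L, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that a genotype is a
    local peak on an L-locus binary landscape with i.i.d. random
    fitnesses: it must beat all L one-mutant neighbors, which for
    independent continuous draws happens with probability 1/(L + 1)."""
    rng = random.Random(seed)
    peaks = 0
    for _ in range(trials):
        focal = rng.random()  # fitness of the focal genotype
        # A peak means the focal genotype out-competes every neighbor.
        if all(focal > rng.random() for _ in range(L)):
            peaks += 1
    return peaks / trials

for L in (2, 10, 100):
    print(L, round(peak_fraction(L), 4), round(1 / (L + 1), 4))
```

With 100 loci (still a tiny genome) fewer than one genotype in a hundred is a local peak; in Gavrilets’ richer models the high-fitness genotypes instead form connected networks that populations can move along, which is why the “peak shift” problem largely dissolves.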

There are problems also with the Lande-Arnold type landscapes (discussed in the paper), but here I want to shift to some good news: the actual usefulness of the fourth type of landscape: Raup-style morphospaces. One of the best examples was produced by Raup himself, with crucial follow-up by one of his graduate students, John Chamberlain. It is a study of potential ammonoid forms that puts the actual (i.e., not just heuristic) usefulness of morphospaces in stark contrast with the cases of fitness and adaptive landscapes. Ammonoids, of course, were beautiful shelled marine invertebrates that existed in a bewildering variety of forms for a good chunk of Earth’s biological history, and eventually went extinct 65 million years ago, together with the dinosaurs. This is going to be a bit technical, but stick with me, it will be worth it.

Raup explored a mathematical-geometrical space of ammonoid forms defined by two variables: W, the rate of expansion of the whorl of the shell; and D, the distance between the aperture of the shell and the coiling axis. Raup arrived at two simple equations that can be used to generate pretty much any shell morphology that could potentially count as “ammonoid-like,” including shells that — as far as we know — have never actually evolved in any ammonoid lineage. Raup then moved from theory to empirical data by plotting the frequency distribution of 405 actual ammonoid species in W/D space and immediately discovered two interesting things: first, the distribution had an obvious peak around 0.3 < D < 0.4 and W near 2. Remember that this kind of peak is not a direct measure of fitness or adaptation, it is simply a reflection of the frequency of occurrence of certain forms rather than others. Second, the entire distribution of ammonoid forms was bounded by the W = 1/D hyperbola, meaning that few if any species crossed that boundary on the morphospace. The reason for this was immediately obvious: the 1/D line represents the limit in morphospace beyond which successive whorls no longer overlap with one another. This means that for some reason very few ammonoids ever evolved shells in which the whorls did not touch or overlap.
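The geometry behind that boundary is easy to check. In a simplified planispiral reading of Raup’s parameters (the function names below are mine, not Raup’s), the outer edge of the whorl grows by the factor W per revolution while the inner edge sits at fraction D of the outer radius; successive whorls therefore touch or overlap exactly when the new whorl’s inner edge, W·D·r, still lies at or inside the previous outer edge r, i.e. when W·D ≤ 1, equivalently W ≤ 1/D:

```python
def whorl_radii(W, D, revolutions=5, r0=1.0):
    """Inner and outer radii of the shell tube at each whole revolution:
    the outer edge grows geometrically by the whorl expansion rate W,
    and the inner edge sits at fraction D of the outer radius."""
    return [(D * r0 * W ** k, r0 * W ** k) for k in range(revolutions)]

def whorls_overlap(W, D):
    """Successive whorls touch or overlap iff the inner edge of whorl
    k+1 lies at or inside the outer edge of whorl k, i.e. W * D <= 1."""
    first, second = whorl_radii(W, D, revolutions=2)
    inner_next, outer_prev = second[0], first[1]
    return inner_next <= outer_prev

# Near Raup's observed peak (W near 2, 0.3 < D < 0.4) whorls overlap...
print(whorls_overlap(2.0, 0.35))  # W*D = 0.7, inside the boundary
# ...while beyond the W = 1/D hyperbola the coil opens up.
print(whorls_overlap(4.0, 0.5))   # W*D = 2.0, outside the boundary
```

Nothing here is specific to ammonoids; the W = 1/D hyperbola falls out of the geometry alone, which is exactly what makes the empirical clustering below it a fact in need of functional explanation.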

one-peak ammonoid morphospace

Raup’s initial findings were intriguing, but they lacked a sustained functional analysis that could account for the actual distribution of forms in W/D space. Why one peak, and why located around those particular coordinates? Here is where things become interesting and the morphospace metaphor delivers much more than just heuristic value. John Chamberlain carried out experimental work to estimate the drag coefficient of the different types of ammonoid shells. His first result clarified why most actual species of ammonoids are found below the W = 1/D hyperbola: shells with whorl overlap have a significantly lower drag coefficient, which makes for more efficiently swimming animals.

However, Chamberlain also found something more intriguing: the experimental data suggested that there should be two regions of the W/D morphospace corresponding to shells with maximum swimming efficiency, while Raup’s original frequency morphospace had detected only one peak. It seemed that for some reason natural selection had found one peak but not the other. Four decades had to pass after Raup’s paper before the mystery of the second peak was cleared up: the addition of 597 new species of ammonoids to the original database showed that the second peak had indeed also been occupied! This is a rather spectacular case of a confirmed prediction in evolutionary biology, not exactly a common occurrence, particularly in paleontology.

two-peak ammonoid morphospace, with representative shell forms

So, is the landscape metaphor in biology useful? It depends. The original versions, those introduced by Sewall Wright to make his math accessible to his colleagues, have been highly influential for decades, and yet have channeled both empirical and theoretical research in unproductive directions, inventing problems (like the peak shift one) that arguably do not exist, at least not as formulated. The Lande-Arnold landscapes, which I have not discussed in this post, but do treat in the paper, have a mixed record. They have been heuristically useful for biologists interested in quantifying natural selection in the field, but have also arguably brought about a degree of tunnel vision in both the theoretical and empirical study of that most important concept in modern evolutionary theory. Morphospaces, by contrast, have a very good record of being useful in terms of generating insight into the evolution of animal (and plant) form, and yet, they are actually the least commonly deployed version of the landscape idea in the technical literature. And because population genetics, with its mathematical approach, is considered more sophisticated than paleontology, things are unlikely to change in the near future. Unfortunately.

Why machine-information metaphors are bad for science education, part II: the search for new metaphors

metaphor vs simile

While discussing some sections of a paper I wrote with Maarten Boudry, we have seen a number of reasons why using machine-information metaphors is bad for science education. As I pointed out before, the full paper also devotes quite a bit of space to arguing that those metaphors haven’t been particularly good in actual scientific research. One of the fascinating things to watch after I posted the first part of this commentary was the number of people who vehemently defended the “biological organisms are machines” take, both here on the blog and on my Twitter feed. Here we are, in the second decade of the 21st century, and there are still a lot of Cartesians around who have apparently never heard of David Hume. Oh well.

In the conclusion of this two-part series I am going to focus on the last section of my paper with Maarten, where we discuss the search for alternative metaphors, and in the end (spoiler alert!) suggest that the best thing to do at this point is just to describe things as they are, staying as clear as possible of metaphorical language. And when one really cannot avoid it, then use multiple metaphors and be very clear on the limits of their use. Let’s take a look.

In their classic work on metaphors, Lakoff and Johnson argue that the basic function of metaphorical concepts is to structure a new kind of experience in terms of a more familiar and delineated experience. In science as well as in everyday language, metaphors highlight particular aspects of whatever it is we are trying to grasp, but they will inevitably distort others. For example, the image of the “tree of life,” with new species branching off as budding twigs and extinct species as dead branches, is an instructive approximation of the relations of evolutionary descent. However, it can also foster misconceptions about “progress” in evolution, lead to a simplistic conception of speciation events, or encourage a downplaying of horizontal gene transfer and of reticulate (i.e., by interspecies hybridization) speciation. To give one more example, in physical chemistry the model of the atom as a miniature solar system, with electrons orbiting the nucleus as planets, though still having wide public appeal, is fundamentally inaccurate.

Of course, no metaphor will do its job perfectly, but it is crucial to realize, as Lakoff and Johnson have shown, that the widespread deployment of a particular metaphor can have a feedback effect on the way we perceive things, not just how we present them to others. In the examples discussed in my paper with Maarten, the lure of machine-information metaphors in the history of biology has invited scientists to think of genomes as “blueprints” for organisms, written in the four-letter alphabet of DNA and readable in a manner analogous to a computer code. But as we argue, the machine-information conception of living systems has led both the public and the scientific community astray.

In response to this problem, some scientists and science educators have proposed several alternative and improved metaphors to characterize the relationship between genotype and phenotype. Biologist Patrick Bateson, for instance, was probably the first to compare the DNA sequence of living organisms with a recipe for a cake. The idea of a genetic recipe has several advantages over the blueprint metaphor, the most important being that it takes into account pleiotropy (one gene affecting more than one trait) and epistasis (gene–gene interactions). As a consequence, the simple picture of a one-to-one (or close to it) correspondence between particular genes and phenotypic traits is abandoned, which becomes clear when one considers that there is no way to locate particular ingredients in individual crumbs of a cake. Accordingly, there is no possibility of reverse-engineering the end product to recover the set of procedures (the “recipe”) that generated it. This has important consequences not just for science education, but for research agendas, as the idea of ‘‘reverse engineering’’ is commonly invoked everywhere from genomic studies to the understanding of the brain.
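The point about irreversibility can be made concrete with a toy genotype–phenotype map (entirely invented for illustration): let each of two gene values affect both traits (pleiotropy), with a multiplicative interaction between them (epistasis). The map is then many-to-one, so the finished “cake” cannot be traced back to a unique set of “ingredients”:

```python
def phenotype(g1, g2):
    """Toy genotype-phenotype map: both genes affect both traits
    (pleiotropy), and trait B depends on a gene-gene interaction
    (epistasis)."""
    trait_a = g1 + g2        # additive contribution of both genes
    trait_b = g2 * (g1 + 1)  # non-additive (epistatic) contribution
    return (trait_a, trait_b)

# Two different genotypes yield exactly the same phenotype, so the
# map cannot be inverted ("reverse-engineered"):
print(phenotype(2, 1))  # (3, 3)
print(phenotype(0, 3))  # (3, 3)
```

Nothing hangs on the particular arithmetic; the point is only that once genes interact and each affects several traits, distinct genotypes can converge on indistinguishable phenotypes.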

Of course, if carried too far, the recipe metaphor can in turn be quite misleading. To get the desired result, a cook has to lump together different ingredients in the correct proportions and follow a set of instructions for handling the dough and preparing the oven. But actual developmental encoding in living organisms is an enormously more complex and very different sort of procedure, which is also highly dependent on epigenetic factors and unpredictable vagaries of the external environment. The expression of specific genes in the course of development bears no resemblance to the way a cook handles the ingredients of a recipe. Living organisms are also highly differentiated into a number of functional parts or components (cell types, tissues, etc.), in contrast with the homogeneous cake that comes out of the oven. Moreover, the genome is not written in anything like a ‘‘language,’’ as in the case of a recipe, and it certainly does not contain a description of the desired end product in any meaningful sense of the word ‘‘description.’’

Condit and colleagues have discussed the recipe metaphor as an alternative to talk of blueprints, pointing out that it was adopted ‘‘with surprising swiftness’’ by science popularizers and the media in the 1990s. However, they also remark that, as a new ‘‘master metaphor’’ to capture the relationship between genotype and phenotype, the image of a recipe for a cake has little to recommend it either. For example, evoking recipes can invite people to think of the genome as a step-by-step manual that describes ‘‘how to make a human,’’ in that sense falling into the same trap as the idea of a blueprint.

That being said, if contrasted with the blueprint metaphor, the recipe metaphor conveys the point about the lack of one-to-one correspondence between genes and phenotypes very well, and hence it highlights an important fact about development and what biologists call the Genotype => Phenotype map. If the recipe metaphor is used within this restricted context, for example in explicit contrast with the characteristics of a blueprint, it is immediately clear what the salient points of connection with living systems are, and people are less likely to be misled by stretching the metaphor beyond usefulness. If the recipe metaphor is presented as a stand-alone replacement for the blueprint, however, it is bound to mislead people no less than its rival.

The same point applies to other interesting metaphors that have been proposed in this context, for example Lewis Wolpert’s comparison of early embryonic development with the Japanese art of origami. The analogy highlights the circuitous step-by-step development of the early embryo, but of course in a piece of origami art the structure is imposed top-down by an intelligent agent, whereas the functional differentiation in the embryo is regulated bottom-up by a complex interaction between genes and environment. Moreover, a piece of origami simply folds to yield the final product, which in a very real sense is already there from the beginning. This is definitely not the way embryos develop, with their ability to respond to local and external environmental fluctuations.

The general problem that we have been discussing seems to us to be not just that one kind of metaphor or another is woefully inadequate to conceptualize biological organisms and their evolution. It is that it simply does not seem to be possible to come up with a metaphor that is cogent and appropriate beyond a very limited conceptual space. Although some of the alternatives are more accurate than the blueprint metaphor (in some respects), Maarten and I certainly have not found one that we would recommend as a replacement. Should we therefore try to avoid the use of metaphors in biological teaching and research altogether? Or do we simply expect too much from metaphors in science and education?

Analogical and metaphorical thinking is widespread among human beings, although of course different cultures and historical moments inspire people to use different metaphors. After all, a metaphor is an attempt to make sense of novel concepts by pairing them with known ideas to increase our overall understanding. Metaphorical thinking is therefore part of our language, and language is inextricably connected to our thinking, but to put it as Wittgenstein did: ‘‘It is, in most cases, impossible to show an exact point where an analogy starts to mislead us.’’ Yet a great part of doing philosophy consists precisely in clarifying our language in an attempt to advance our thinking. To quote Wittgenstein again: ‘‘Philosophy is a battle against the bewitchment of our intelligence by means of our language.’’ To complicate matters further, there is emerging empirical evidence that the human brain processes metaphors in a specific fashion: research on Alzheimer’s patients, for instance (see ref. in the paper), found that impairment of the brain’s ‘‘executive’’ function, associated with the prefrontal cortex, leads to poor understanding of novel metaphors (while, interestingly, comprehension of familiar metaphors is unaffected). Metaphorical thinking seems to be a biologically entrenched functional mode of our brains, and may therefore be hard to avoid altogether.

Both science and philosophy have made ample use of metaphorical and analogical thinking, sometimes with spectacularly positive results, at other times more questionably so. Nonetheless, it seems that nowhere is metaphorical thinking so entrenched — and so potentially misleading — as in biology. Given the maturity of biology as a science, and considering that it deals with objects whose nature is not as alien to our daily experience as, say, those of quantum physics, Maarten and I do not actually see any good reason for clinging onto outdated metaphors in biological education and research for characterizing living organisms, their genomes and their means of development. Taking into account the fact that machine-information metaphors have been grist to the mill of ID creationism, fostering design intuitions and other misconceptions about living systems, we think it is time to dispense with them altogether. Still, we are also not so naive as to expect that this advice will be followed by scientists and science educators any time soon, precisely because the machine/information metaphor is so entrenched in biology education. What to do then? We propose two approaches, one for science educators, the other for practicing scientists.

In science education, talk of metaphorical thinking can be turned into a teaching moment. Students (and the public at large) would actually greatly benefit from explanations that contrast different metaphors with the express goal of highlighting the limitations intrinsic in metaphors and analogies. So, for instance, science educators and writers could talk about the human genome by introducing the blueprint metaphor, only to immediately point out why it does not capture much of what genomes and organisms are about; they could then proceed to familiarize their students and readers with alternative metaphors, say the recipe one, focusing on differences with the original metaphor while of course not neglecting to point out the (different) deficiencies of the new approach as well. The goal of this process would be to foster a cautious attitude about metaphorical thinking, as well as to develop a broader understanding of how unlike commonsense modern science really is. On the latter point, it is interesting to note, for instance, that a popular refrain among evolution or global warming deniers is that ‘‘simple commonsense’’ shows that the scientists are wrong, a position that ignores the proper weight of technical expertise in favor of a folk understanding of nature. It is therefore crucial that the public appreciates the limitations of common sense thinking about science.

There is an analogous teaching moment that can be brought to bear when research scientists engage in unbridled metaphorical thinking: we could refer to this as a philosophy appreciation moment. Scientists are notoriously insensitive to, or even downright dismissive of, considerations arising from the history and philosophy of their discipline, and often for good practical reasons: modern science is a highly specialized activity, where there is barely enough time to keep up with the overwhelming literature in one’s own narrow field of research, and certainly not enough incentive to indulge in historical readings or philosophical speculation. Nonetheless, historians and philosophers of science can easily show the pitfalls of metaphorical thinking (by using well-documented historical examples) and even get across to their colleagues some basic notions of philosophy (by analyzing the effects of particular metaphors on the development of specific lines of scientific inquiry). None of this will quickly amount to overcoming C.P. Snow’s infamous divide between ‘‘the two cultures,’’ but it may bring about better understanding and appreciation of philosophy by scientists, and perhaps even help science see new horizons that have been hitherto obscured by a superficially illuminating metaphor.