On the different ways of doing theory in biology

‘Theoretical biology’ is a surprisingly heterogeneous field, partly because it encompasses ‘‘doing theory’’ across disciplines as diverse as molecular biology, systematics, ecology, and evolutionary biology. Moreover, it is done in a stunning variety of ways, using anything from formal analytical models to computer simulations, from graphic representations to verbal arguments. A few years ago I co-organized a workshop on this topic at the Konrad Lorenz Institute for Theoretical Biology in Vienna, and then published an edited special issue of the journal Biological Theory collecting all the contributions.

In my paper I surveyed a number of aspects of what it means to do theoretical biology, and how they compare with the allegedly much more restricted sense of theory in the physical sciences. I also tackled a somewhat recent trend toward the presentation of all-encompassing theories in the biological sciences, from general theories of ecology to an attempt to provide a conceptual framework for the entire set of biological disciplines. Finally, I discussed the roles played by philosophers of science in criticizing and shaping biological theorizing. The full paper is available for download here (free), and the edited volume can be found here (articles behind a paywall). Let me, however, summarize my main points to facilitate a general discussion.

First, I discussed the issue of alleged laws in biology. If there is anything that characterizes physics as a science it is its unending quest for universal laws, from Newton’s mechanics to the current (and highly controversial) string theory. This is the case despite the fact that influential philosophers of science like van Fraassen and Giere maintain that laws play a marginal and mostly didactic role, even in physics. Regardless, it is not surprising that discussions of general laws in biology are a recurrent staple of the literature and—interestingly—one that provides a good example of positive interactions between theoretically inclined biologists and philosophers of science.

In a number of cases authors draw a direct parallel between physical laws and proposed biological equivalents. For instance, M. Elgin argues that the ‘‘epistemic functions of a priori biological laws in biology are the same as those of empirical laws in physics.’’ Elgin begins by acknowledging the (almost) universal agreement among philosophers who subscribe to the concept of laws that these must be both universal and empirical in nature, though he hastens to add that these conditions are necessary but not sufficient to distinguish laws from ‘‘accidental’’ generalizations. He then discusses Elliott Sober’s proposal that the Hardy–Weinberg principle in population genetics is an example of a biological law, even though it is universal but not empirical.
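For reference, the principle itself is a single line of algebra (the standard textbook statement, not anything specific to Elgin’s paper): with two alleles at frequencies p and q = 1 − p, random mating, and no selection, mutation, migration, or drift, the genotype frequencies are given by the binomial expansion and remain constant across generations:

\[
(p + q)^2 \;=\; \underbrace{p^2}_{AA} + \underbrace{2pq}_{Aa} + \underbrace{q^2}_{aa} \;=\; 1 .
\]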

There are several problems with this proposal, chiefly the fact that Hardy–Weinberg cannot meaningfully be thought of as a ‘‘zero-force law’’ analogous to, say, the law of inertia (as Elgin suggests), as well as the above-mentioned lack of empirical content. Jonathan Kaplan and I, back in 2006, discussed in detail why the various evolutionary mechanisms that can cause a population to deviate from Hardy–Weinberg equilibrium are not conceptually equivalent, and should not be thought of as ‘‘forces’’ characterized by intensity and direction. Moreover, it simply seems strange to suggest that a scientific law can have no empirical content and instead be true a priori (as Hardy–Weinberg surely is, mathematically speaking). This risks sending philosophy of science down the slippery slope of considering logical and mathematical principles themselves as ‘‘laws,’’ a usage that clearly does not accord with scientific practice at all. Apparently, however, this point is not at all clear in the minds of some biologists, since it is possible to find statements like the following: ‘‘The global-optimum model is not so much a predictor of nature as a definition of nature. It must be true that a perfectly adapted organism leaves the most possible offspring!’’ (from a paper by Nonacs and Dill, published in 1993). Or: ‘‘The existence of a global-optimum point is a ‘deep axiom’: a tautology that guarantees logical consistency at the core of a theory’’ (by Stearns and Schmid-Hempel, in 1987). This is surely one area where more communication between theoretically minded biologists and philosophers of science would be welcome.

Besides appeals to Hardy–Weinberg as an example of biological laws, the next most popular area of discussion on this topic is perhaps the possible existence of laws in ecology. For instance, G.M. Mikkelson makes a case for moving ecology from an idiographic (historical) mode of explanation to a nomothetic (law-based) one. He maintains that—contrary to what he perceives as the practice among ecologists—ecological generalizations should be interpreted as law-like, because functional kinds (such as ‘‘predators’’) and structural kinds (like the various community types) correlate better than taxa (historical kinds) with fundamental ecological patterns and processes. As Mikkelson puts it, ‘‘Imagine being dropped at a random spot on the land surface of the Earth. Which would allow you to predict the density of plant species around you—that is, the number of species per 10,000 square kilometers—most precisely: knowing the climate, or knowing the landmass on which you stand? Answer: climate wins, hands down.’’ Well, yes, but it is debatable whether such predictions are the result of ‘‘laws’’ in any way like those that physicists are after, and it is telling that Mikkelson is in fact cautious enough to talk about ‘‘law-like generalizations.’’

Interestingly, the issue of, shall we say, “physics envy” shows up explicitly in yet another author’s treatment of the question of laws in ecology, that of D.R. Lockwood. In this case it is an ecologist who takes on the matter, and he comes down rather negatively on the possibility of laws in his discipline. Lockwood discusses two frequent suggestions as examples of ecological laws: Malthusian growth and the logistic equation. He quickly finds them inadequate to the task, as they do not support counterfactuals, are not temporally universal, and in fact repeatedly fail empirical tests. In the end, Lockwood agrees with philosopher W.C. Wimsatt’s suggestion that ‘‘aggregative systems’’ (those typically studied by physics) do follow robust laws, while emergent systems (like those studied in biology) do not. This does not mean that biologists cannot generalize their empirical findings (within certain limits), or that such generalizations cannot be used to make reasonable predictions about the behavior of the systems of interest to them. And that, after all, is what actually matters.
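For concreteness, the two candidates Lockwood examines are elementary differential equations (given here in their standard textbook form), where N is population size, r the intrinsic growth rate, and K the carrying capacity:

\[
\frac{dN}{dt} = rN \quad \text{(Malthusian growth)}
\qquad\text{and}\qquad
\frac{dN}{dt} = rN\!\left(1 - \frac{N}{K}\right) \quad \text{(logistic growth)} .
\]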

If not laws, are there general theories in biology? Theodosius Dobzhansky famously said that ‘‘nothing in biology makes sense except in the light of evolution.’’ Adding that to Richard Dawkins’ quest for ‘‘universal Darwinism’’ and to Daniel Dennett’s contention that Darwinism is a ‘‘universal acid’’ of sorts that cuts across disciplines, extending the idea of Darwinian evolution well beyond biology itself, one would think that biologists settled on their version of a theory of everything long ago. One would be surprised. A perusal of the recent literature shows quite a bit of activity in this department, again largely on the side of ecologists. I will briefly comment on one such attempt, referring the interested reader to two more case studies discussed in the paper.

Stephen Hubbell’s unified neutral theory of biodiversity and biogeography attempts to do precisely what its name implies: to propose a combined theoretical framework for biodiversity (measured by species–abundance curves) and biogeography (measured by species–area curves), where the ‘‘neutrality’’ consists in assuming that the differences among species that belong to the same trophic level within a given ecological community do not matter for the dynamics of that community. Hubbell’s theory draws explicit parallels with the neutral theory of molecular evolution proposed by Motoo Kimura back in the late 1960s, and with the above-mentioned Hardy–Weinberg equilibrium in population genetics.

The unified theory has generated a significant literature, including a number of critiques and empirical tests. It is important to realize a couple of things, however. First, the scope of the theory is crucially limited by the clause that it applies to species of similar trophic level within a given community, which makes it quite a bit narrower in scope than its name (and some of the discussion that has followed the publication of Hubbell’s book) might otherwise suggest. Moreover, the theory is notoriously difficult to test, because while it does make distinctive predictions when compared to, say, niche assembly theories (which are non-neutral), the predicted differences are very small, and easily lost in the noise characteristic of ecological data sets. This is not the place to get into an in-depth discussion of Hubbell’s theory, but I can hazard a prediction based on the similar history of the neutral theory of molecular evolution: in that case more than a decade of discussion led to the conclusion that a modified ‘‘quasi-neutral’’ theory was the best bet, which basically means that stochastic as well as selective processes affect the outcome of evolution, just as it would be reasonable to expect.
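To give a sense of what ‘‘neutrality’’ amounts to operationally, here is a minimal sketch of zero-sum ecological drift (my own toy illustration, not Hubbell’s full model, which adds speciation and immigration from a metacommunity). At each step one randomly chosen individual dies and is replaced by the offspring of another randomly chosen individual, with no fitness differences among species:

import random

def neutral_drift(community, steps, seed=0):
    """Zero-sum ecological drift: one death and one birth per step,
    with every individual equally likely to die or to reproduce."""
    rng = random.Random(seed)
    community = list(community)
    for _ in range(steps):
        dead = rng.randrange(len(community))    # a random individual dies
        parent = rng.randrange(len(community))  # a random individual reproduces
        community[dead] = community[parent]     # the offspring fills the gap
    return community

# Start with 10 species of 20 individuals each; drift alone erodes diversity.
start = [species for species in range(10) for _ in range(20)]
print("species remaining:", len(set(neutral_drift(start, steps=50_000))))

Run long enough, drift alone drives such a community toward monodominance, which is why the empirically interesting question is how well the intermediate, drift-generated species–abundance distributions match real data.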

My more general point in the paper was that even a cursory look at the literature allows one to distinguish four modalities for theoretical biology (though similar distinctions can also be found in, say, physics, especially if one considers the entire discipline, and not just specific subsets like particle physics). I refer to these as analytical modeling, statistical modeling, computer modeling, and conceptual analysis.

The classic example of the analytical approach in theoretical biology is represented by much of the body of work that makes up population genetics theory, beginning again with the Hardy–Weinberg principle and arriving at more recent advances such as coalescent theory. The basic approach here is to use mathematical formalism to arrive at analytical (i.e., exact, non-statistical) solutions of sets of equations describing the behavior of idealized populations of organisms.
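A standard example of the genre (textbook single-locus theory, not tied to any particular paper discussed here) is the deterministic recursion for allele frequency under viability selection: given genotype fitnesses and current allele frequencies p and q = 1 − p, the frequency in the next generation follows exactly, with no statistical machinery involved:

\[
\bar{w} = p^2 w_{AA} + 2pq\, w_{Aa} + q^2 w_{aa},
\qquad
p' = \frac{p^2 w_{AA} + pq\, w_{Aa}}{\bar{w}} .
\]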

The second general type of approach to biological theorizing is statistical in nature, beginning with Ronald Fisher’s famous ‘‘fundamental’’ theorem of natural selection, which was proposed as explicitly analogous to one of the most solid pieces of theory in classical physics, the second principle of thermodynamics. Fisher laid the foundations of statistical genetics, which—once reconciled with the apparently discrepant Mendelian genetics—resulted in the Modern Synthesis of the 1940s, basically still the current standard model in evolutionary theory (but see this).
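In its simplest textbook rendering (setting aside the long exegetical debate about what Fisher actually meant), the theorem states that the per-generation change in mean fitness attributable to natural selection equals the additive genetic variance in fitness divided by the mean fitness:

\[
\Delta \bar{w} \;=\; \frac{V_A(w)}{\bar{w}} .
\]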

The third way of doing theoretical biology is based on computer modeling, and it is in a sense the continuation of a long-standing trend in the field: when things get too complicated even for a quantitative (i.e., statistical) genetic approach (let alone for a population genetic, analytical one), researchers move toward computationally intensive simulations of biological populations. There are many examples of this, some of which are continuous with the population and quantitative genetic issues just discussed, some having to do with broader questions concerning the evolution of evolutionary mechanisms (evolvability), and some concerning the relationship between structural biology and evolutionary dynamics.
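As a minimal illustration of this style of work (my own toy sketch, not any particular published model), here is a Wright–Fisher-type simulation of genetic drift at a single neutral locus, the kind of computation one falls back on when analytical solutions become unwieldy:

import random

def wright_fisher(p0, pop_size, generations, seed=1):
    """Follow the frequency of one neutral allele in a haploid Wright-Fisher
    population: each generation is a binomial resampling of the previous one."""
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Each of the pop_size offspring inherits the allele independently,
        # with probability equal to its current frequency.
        copies = sum(rng.random() < p for _ in range(pop_size))
        p = copies / pop_size
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed: drift has run its course
            break
    return trajectory

# A neutral allele starting at frequency 0.5 in a population of 100 drifts
# to loss or fixation; different seeds give different fates.
print(wright_fisher(p0=0.5, pop_size=100, generations=1000)[-1])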

The fourth and last modality of biological theorizing is based on the articulation of verbal-conceptual models, and obviously comes closest to what philosophers of biology themselves engage in when they analyze the concepts deployed by working biologists. Verbal-conceptual models in science have the reputation of being second-rate when compared to ‘rigorous’ mathematical modeling, even though both the original work by Darwin and much of the work done during the Modern Synthesis (except for the part that was explicitly population-genetic) fall into this category. Indeed, there seems to be a resurgence of this approach as a necessary complement to the increasingly ‘‘experimental’’ mathematical treatments discussed above. Verbal-conceptual models also include a broad category of biological theorizing that is particularly popular in molecular biology and biochemistry, where many papers present the results of complex experiments on the structure of genetic networks or biochemical pathways in the form of conceptual diagrams, meant both to summarize the current state of knowledge and to provide food for thought for the development of new hypotheses and subsequent empirical tests.

My conclusions at the end of the full paper: the term ‘‘speculation’’ has a rather bad reputation in science, often associated with the much-dreaded accusation hurled at philosophers that they engage in ‘‘armchair theorizing.’’ But of course all theory is armchair speculation, and unless one thinks of mathematics in a special Platonic fashion, mathematical approaches are simply continuous with, and complementary to, all the other ways of doing theory in science.

Which brings me to the role of philosophy of science in all of this. I think that philosophy of science itself is characterized by different modalities, some of which have little to do with helping scientists and reflect instead on the logic of scientific theories, the epistemology underlying scientific claims, and so on. Indeed, philosophy of science itself is continuous with the history of science, since it would be difficult to attempt generalizations about the nature of science while focusing only on currently ongoing (and therefore far from settled) scientific research.

To begin with, then, classic philosophy of science is concerned with the study of the logic of scientific discovery, as exemplified by the well-known names (even among scientists!) of Popper, Kuhn, and—to a lesser extent—Feyerabend and Lakatos (and, of course, a number of contemporary scholars, too many to mention). This type of philosophy of science is, arguably, of very little direct relevance to scientists themselves (except insofar as they are curious about how outsiders see and analyze their own activity). It is perhaps this sort of philosophizing that has brought a number of physicists (e.g., Steven Weinberg, Stephen Hawking, and Lawrence Krauss) to claim that ‘‘philosophy is dead’’ on the grounds that, of late, it has not managed to solve any scientific problem with which physics is concerned. In so arguing, these scientists are committing an elementary category mistake, prompted by a combination of intellectual hubris and a surprising amount of ignorance.

Philosophy of science, however, also functions in modalities that are (or ought to be) of more direct interest to practicing scientists themselves—whether the latter realize it or not. One such modality is the always necessary (if prone to annoying the targeted scientists) external criticism of socially relevant scientific claims (e.g., concerning race, gender, or the validity and application of certain types of medical research). I hesitate to use the label ‘‘science criticism’’ for this activity—even though it is arguably the most appropriate one available—because the term has possibly been irreparably tainted by much postmodern-inspired nonsense at the height of the so-called ‘‘science wars’’ of the 1990s. Regardless of what we end up calling it, it is the sort of philosophical inquiry that has actual practical implications, analogous to the better-known ones usually associated with, say, business ethics, medical ethics, and bioethics, and one that should develop into an earnest dialogue between philosophers and scientists about the social implications of science itself.

The third and final modality for philosophy of science is in an even closer symbiotic relationship with science, one that seems to be welcomed by scientists themselves. Indeed, recent years have seen an increasing number of philosophers of physics, biology, and other disciplines publishing conceptual papers on a large variety of topics that are hard to distinguish from theoretical physics, biology, and so forth. This is, I think, a most welcome development, and a small (but, hopefully, growing) number of scientists have started to collaborate with philosophers and/or to publish in philosophical journals, as the debates about laws in biology discussed above exemplify. As I pointed out elsewhere, this is along the lines of what Hasok Chang called ‘‘the continuation of science by other means’’:

Complementary science [based on history and philosophy of science] is critical but not prescriptive in relation to specialist science. … Complementary science identifies scientific questions that are excluded by specialist science. … The primary aim of complementary science is not to tell specialist science what to do, but to do what specialist science is presently unable to do. It is a shadow discipline, whose boundaries change exactly so as to encompass whatever gets excluded in specialist science. (pp. 249–250)

From this perspective, then, philosophy of biology represents a fifth type of theoretical biology, albeit one that is practiced from the outside looking into the core discipline. Because of that, it is uniquely positioned, I think, to perceive the threads connecting the other four modalities, as well as the advantages and limitations of each. The idea, of course, is not to make philosophers the ultimate arbiters of theoretical biology (or of anything else, for that matter). Rather, it is a recognition that it takes some distance from the nitty-gritty of the specialized literature to perceive the broad picture that is necessary for the advancement of theoretical biology, broadly construed. Accordingly, it is not by chance that when biologists themselves step back to contemplate a more inclusive level of analysis they begin to sound like philosophers. Perhaps, then, ongoing cross-fertilization—like the one fostered by that special issue of Biological Theory—will bring less distrust and more fruitful collaboration between the two disciplines.


109 thoughts on “On the different ways of doing theory in biology”

  1. Bunsen Burner

    DM:

    ‘Oh, I see. You should pass this on to Nick Bostrom’

    Bostrom’s ideas have problems that even I cannot help with…

    ‘Because you can get around the problems posed by Bell’s Theorem by allowing superluminal communication’

    Not within the current formalism of QM. You need to come up with a valid hidden variables theory. However, my point was that you cannot reduce QM to a stochastic process. Such approaches have been tried, and all have failed to reproduce the correlations of entanglement. They also contradict contextuality as shown by Kochen-Specker.


  2. synred

    >observable must correspond to a self-adjoint operator. Not only am I unaware of anyone constructing self-adjoint operators for macroscopic observables, but I suspect that the whole idea is nonsense.

    A self-adjoint operator or a ‘state’ for a planet or a cat is absurd. Nonetheless, in Coel-like manner, you might in principle run a simulation at the level of quarks or something and observe (in the plain English, non-technical sense) planets and cats appear. What you could not do is understand this process any better than in the real world without introducing higher-level concepts like planets and cats (dead or alive [a]) as well as thermodynamics, chemistry, etc.

    [a] https://goo.gl/N4dBp2


  3. synred

    I once tried to write a ‘many worlds’ simulation in which you keep all the ‘worlds’ generated. Even for a simple two-state system it quickly runs out of memory. You have to ‘collapse the wave function’.

    Also, ‘many worlds’ doesn’t give the Born rule if you base it on finding yourself in a world. You have to impose that in the ‘collapses.’

    I too find it useful to think in terms of simulations, but I don’t think I am one, even though simulating me would be less difficult than simulating universes (in detail).


  4. synred

    >It will always just be a bunch of molecules.

    Indeed, it always is just a bunch of molecules (bound by gravity). You can observe these clumps even though they are not ‘observables’ and even though they were likely seeded by QM fluctuations. We do it all the time.


  5. synred

    Bunsen Burner: In the BaBar experiment we simulate Bell-type correlations for the events used to measure mixing and CP violation. It’s done as I described – selecting events from the amplitude with the Born rule.

    I think I remember seeing a paper where somebody tested Bell’s inequalities, but I didn’t read it.


  6. Disagreeable Me (@Disagreeable_I)

    Hi Bunsen,

    You need to come up with a valid hidden variables theory.

    This doesn’t seem right to me. First of all, as I understand it, Bell showed there can be no valid hidden variables theory. A hidden variables theory is supposed to show how the correlations between entangled particles can be explained without spooky action at a distance, by assuming that each particle is “pre-programmed” with regard to how it will respond to any measurement in advance. But the correlations we see in Bell type experiments show that this is impossible. So you need spooky action at a distance, to communicate instantly what measurement was taken from one particle to the other. Which you can do in a simulation.

    However, my point was that you cannot reduce QM to a stochastic process

    And I agree, sort of. You certainly can’t reduce QM to a classical stochastic process. It really isn’t just that there is some random stuff we can’t measure. It is fundamentally weirder than that. But that weirdness can be simulated (Arthur seems to know of such cases). You need a random number generator. I believe a pseudo random number generator will do. And you need to simulate spooky action at a distance. (Or else superdeterminism, which is a bit daft — you could do it by simulating all possible worlds including those which violate Bell’s inequality and somehow declaring “real” all those that happened not to violate it).
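    To make that concrete, here is a toy sketch of the kind of simulation I have in mind (the angles and helper functions are just illustrative, and the singlet probabilities are put in by hand): Bob’s simulated outcome is generated using both detector settings, which is exactly the “spooky” nonlocal step, and the resulting statistics violate the CHSH bound of 2 that any local hidden variable model must respect.

    import math, random

    def singlet_trial(angle_a, angle_b, rng):
        """One measurement on a simulated spin singlet. Alice's outcome is a
        fair coin; Bob's outcome uses BOTH settings (the nonlocal step), so
        that P(outcomes equal) = sin^2((a - b) / 2), as QM prescribes."""
        a_out = 1 if rng.random() < 0.5 else -1
        p_same = math.sin((angle_a - angle_b) / 2) ** 2
        b_out = a_out if rng.random() < p_same else -a_out
        return a_out, b_out

    def correlation(angle_a, angle_b, trials, rng):
        total = 0
        for _ in range(trials):
            a_out, b_out = singlet_trial(angle_a, angle_b, rng)
            total += a_out * b_out
        return total / trials

    rng = random.Random(42)
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    n = 200_000
    S = (correlation(a, b, n, rng) - correlation(a, b2, n, rng)
         + correlation(a2, b, n, rng) + correlation(a2, b2, n, rng))
    print(abs(S))  # about 2.83 > 2: the CHSH bound is violated, as QM predicts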


  7. Massimo Post author

    Synred,

    “thermodynamics can not be understood purely in terms of atoms and other particles. You need to add statistics of large numbers”

    Right, but, again, let’s keep distinct the epistemic and the ontological issues. Everyone, I think, agrees that one cannot, in practice, do what Laplace thought was possible in theory. But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle. That’s a much stronger position.


  8. synred

    Massimo:

    But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle.

    That’s not correct. You generate an event flat in phase space. You calculate the event’s probability by calculating its amplitude from theory. You throw a random number (flat between 0 and 1) and decide whether to keep it or not
    (‘collapse’ the wave function, if you like that terminology).

    You get out a set of simulated events that follows the quantum mechanical predictions for the process under consideration (whew).

    If you consider an entangled process, the events will exhibit violations of Bell’s inequalities.

    This was my business in the good old days. It’s simple and straightforward. Operators only enter in setting up the theory. In the case of an e+e- –> B-Bbar experiment that does involve entanglement (BaBar), I redid these calculations myself, but in other cases I just took them from a theory paper. A bare-bones sketch of the accept/reject step follows below the reference.

    CP violation in B-mesons:
    http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-12440.pdf
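    In outline, the accept/reject step looks something like this (a bare-bones sketch with a made-up toy distribution, not BaBar code; the amplitude-squared function stands in for whatever the theory paper gives you):

    import math, random

    def accept_reject(generate_flat, amplitude_sq, max_weight, n_events, seed=7):
        """Generate candidate events flat in phase space, then keep each one
        with probability |amplitude|^2 / max_weight (the 'collapse' step)."""
        rng = random.Random(seed)
        kept = []
        while len(kept) < n_events:
            event = generate_flat(rng)                  # flat in phase space
            weight = amplitude_sq(event) / max_weight   # from the theory
            if rng.random() < weight:                   # Born-rule accept/reject
                kept.append(event)
        return kept

    # Toy example: events are decay angles drawn from a 1 + cos^2(theta) shape.
    events = accept_reject(
        generate_flat=lambda rng: rng.uniform(0.0, math.pi),
        amplitude_sq=lambda theta: 1.0 + math.cos(theta) ** 2,
        max_weight=2.0,
        n_events=10_000,
    )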


  9. davidlduffy

    It may be a bit of a diversion into physics, but it’s useful to me in thinking out the idea of “lawful mechanism” as it applies to explanation in biology. In the case of Mendel’s First Law of segregation of genes, this was a phenomenological, approximate law, i.e. a regularity in data that was nicely explained by a model in which pairs of alleles (in diploid organisms) determine phenotype, and in which there is a random reduction of the number of alleles contributed by each parent to their offspring from two to one. We now know a great deal about the actual biochemical apparatus that underlies this mechanistically, as well as the failures of the mechanism that break this First Law. So is it still a law? In one way it is not that different from the laws of thermodynamics, in that it still describes simple properties of macroscopic objects.

    In the case of Hardy-Weinberg Equilibrium, it seems this is a direct extension of the First Law combined with the idea that mating is approximately random enough with respect to any one gene you are looking at. I don’t see that as law-like. If one chooses a genetic variant to examine in the population, then often the genotype proportions in the population will be close to HWE expectations, and this is extremely useful when screening genome-wide genotyping data for technical errors, but reasoning about such statistics must be probabilistic.
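    As a rough sketch of that screening use (illustrative only, not production QC code), one can compare observed genotype counts with HWE expectations, estimating the allele frequency from the same data:

    def hwe_chisq(n_AA, n_Aa, n_aa):
        """One-degree-of-freedom chi-square comparing observed genotype counts
        with Hardy-Weinberg expectations, with p estimated from the data."""
        n = n_AA + n_Aa + n_aa
        p = (2 * n_AA + n_Aa) / (2 * n)   # estimated frequency of allele A
        q = 1 - p
        expected = {"AA": n * p * p, "Aa": 2 * n * p * q, "aa": n * q * q}
        observed = {"AA": n_AA, "Aa": n_Aa, "aa": n_aa}
        return sum((observed[g] - expected[g]) ** 2 / expected[g] for g in expected)

    # A marker far above ~3.84 (the 5% cutoff for 1 df) gets flagged as a
    # possible genotyping artifact rather than interpreted biologically.
    print(hwe_chisq(n_AA=300, n_Aa=500, n_aa=200))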


  10. saphsin

    Massimo

    I understand and am inclined to agree with your distinction on first impression but on second thought, isn’t the distinction between epistemic & ontological kind of blurred at the quantum level? Like not just in terms of overlap but what it “means” to calculate an electron’s position in the electron cloud.


  11. ejwinner

    Dan,
    “Biology anyone?”
    It is highly instructive that the physicists and mathematicians commenting here have a difficult time wrapping their heads around biology. How then are they going to respond to a different level of discussion concerning social ontology or human behavior?

    It is notable that biologist Jerry Coyne allows much physicist discussion in comments on his site, because he’s a strict reductionist and a strict determinist; it’s as though biologists are somehow poor relations to physicists. I don’t think this carries any discussion further at all.

    Anyway, I didn’t comment further because I admit I am an amateur at biology and its theory, and thought I might learn something. I believe that’s one reason Massimo posts here.


  12. Bunsen Burner

    synred:

    ‘None the less in Coel like manner you might in principle run a simulation at the level of quarks or something and observe (in the plain English, non-technical sense) planets and cats appear’

    How will you do this? You know of a way of taking the probabilities of a multi-particle state and assembling them into a cat? In fact, into all cats, right, since evaluating your Hamiltonian will give you an exponentially increasing number of possible states.


  13. Bunsen Burner

    DM:

    ‘First of all, as I understand it, Bell showed there can be no valid hidden variables theory’

    No. Bell showed what constraints need to exist on a hidden variable theory. Bohm’s theory is a valid hidden variable theory. Anyway, as Dan has pointed out this is not the topic of the OP, and I feel this digression has pretty much gone on as far as it needs to. I will make a few more points in my reply to Massimo and then I’m done.


  14. Disagreeable Me (@Disagreeable_I)

    Hi Massimo,

    I agree with you on the importance of distinguishing the epistemic and ontological issues.

    But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle. That’s a much stronger position.

    A couple of things. First, nobody is actually suggesting that what Laplace envisioned is possible even in principle. Laplace imagined that a demon who had perfect knowledge of the positions of all the particles in the universe could predict what was going to happen. But on QM there are many possible futures (and on some interpretations all these futures happen), so no computational process can pick out a single future.

    However I claim that (a) one can in principle simulate all possible futures (and their respective probabilities) and (b) that one can in principle simulate a single arbitrary possible timeline using pseudorandom number generators. Bell’s theorem means the simulation would have to be non-local (spooky action at a distance), not that such a simulation is impossible.

    Bunsen seems to disagree with me, although I would be curious to see what he says in response to my last clarification that I’m not talking about a classical stochastic process and my point that the only reason to have a hidden variable theory is to avoid nonlocality.

    But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle.

    This is not likely to be resolved by us here now, but I would encourage you if not to believe me at least to keep an open mind for now. I don’t think this is actually a contentious issue. I think if you asked an expert (e.g. Sean Carroll) whether there is consensus on whether Bell’s Inequality means it is impossible to simulate a given viewpoint within QM (i.e. with a non-local simulation), I think you would find that there is such a consensus and I suspect that it would agree with me.


  15. Bunsen Burner

    Massimo:

    ‘But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle’

    Exactly. I think people here are conflating two different views. The fact that we can make QM models of molecules, and use computers to calculate things like amplitudes, is uncontroversial. Thinking that you can create QM models of macroscopic objects like galaxies or bacteria, or that you can identify macroscopic objects in a QM model of particle states, that is controversial. I think people here who understand QM need to think very carefully about this.

    I would also say that philosophy helps here, especially the ideas of Sellars regarding the manifest and scientific images. Galaxies and bacteria exist because people have identified these patterns in nature. Nature itself isn’t divisible into such human-constructed categories. The ideas of these things exist in the manifest image, and to the degree we try to model them in the scientific image, we need to use techniques that are commensurate with the length and time scales of the objects themselves. Thinking you can somehow ‘discover’ concepts from the manifest image in a QM Hamiltonian strikes me as some species of nonsense.

    Also, please allow me a technical aside. The question of how a classical world emerges is still an open one. All current research shows it’s not as simple as just squinting at a suitable Hamiltonian correctly. A lot of the ideas expressed here are rooted in classical intuitions about these things that are known to be false.

    Anyway, I’m now going to stop disrupting the main focus of this discussion.


  16. Coel

    Hi Massimo,

    But my understanding — per Bunsen — is that quantum mechanics actually makes it impossible to do in principle.

    What Bunsen is saying is that the current “official” and rigorous formalism of quantum mechanics doesn’t lend itself to computer Monte-Carlo simulations. But (unless you think that one-big-wavefunction, many-worlds QM is the final answer) that is because the current formalism is incomplete: it doesn’t do the “collapse” of the wavefunction and thus doesn’t handle quantum indeterminacy properly.

    But, computer simulations are never a matter of doing things fully rigorously, that is always way too impractical. Such simulations are always good-enough approximations, and the art of being such a theorist is to develop ways of approximating that work well enough.

    Thus, to computer-simulate a quantum system, what one does is to throw in some random-number probabilities. This works fine because, in any complex system, the probabilistic aspects almost always average out. The example I gave of modelling the mass of the proton is a good example. Internally the proton is a huge mess of virtual particles, all coming and going according to QM probabilities. But these probabilities average out to give a definite value for the total overall mass.

    For any macro-scale ensemble we know that QM probabilities average out because classical physics does work very well in the macro-scale world. Thus computer modelling techniques are routine in many areas of physics.


  17. brodix

    I think the non-collapse of probabilities is similar to determinism, as it projects future probabilities onto the past, as determinism projects past determination onto the future.
    Biology, at least neurology, is presentist;
    https://www.newscientist.com/article/2132847-your-brain-is-a-time-machine-why-we-need-to-talk-about-time/?utm_campaign=Echobox&utm_medium=Social&utm_source=Twitter#link_time=1496743708
    I realize the point I keep making, that time is change, by which future possibility coalesces into the present and fades into the residual, future becomes past, making it an effect of activity, similar to temperature, gets roundly ignored, but consequently, no one is disproving it either.
    By treating time as a measure of duration, physics explicitly assumes it is the flow from past to future, with duration as evidence of this temporal dimension, between two events, just as distance is a measure of a single spatial dimension, between two points.
    Yet duration is only the present, as events unfold.
    This lack of distinction between past and future leads to projecting them onto each other, thus determinism, or Everett’s many worlds. The present is where the input of events is calculated and determined.


  18. Coel

    Hi ej,

    … didn’t comment further because I admit I am an amateur at biology, …

    Then how can you be sure that “the physicists and mathematicians commenting here have a difficult time wrapping their heads around biology”? Do you just mean that you disagree with them?


  19. Massimo Post author

    Bunsen,

    Again, thanks for the clear explanation.

    Synred,

    Maybe I misunderstand either you or Bunsen, but it seems like you are not talking about a Laplace-type problem, nor about what DM is suggesting. It is perfectly possible, as you say, to simulate quantum behavior on a classical computer. The question is whether one can, by using only the equations of QM, predict the unfolding of the current universe. The answer to that question seems to be no. My understanding of physics is limited, but I gather also from previous discussions that the reason for the negative answer is that while the equations describing QM systems are deterministic, actual physical QM events are fundamentally random. So, as someone (Bunsen?) said earlier, all you can do is simulate all possible universes, which is of no help at all.

    DM,

    “Laplace imagined that a demon who had perfect knowledge of the positions of all the particles in the universe could predict what was going to happen. But on QM there are many possible futures (and on some interpretations all these futures happen), so no computational process can pick out a single future.”

    Right, but then you say:

    “I claim that (a) one can in principle simulate all possible futures (and their respective probabilities) and (b) that one can in principle simulate a single arbitrary possible timeline using pseudorandom number generators”

    That sounds to me awfully close to saying that you want to do what Laplace’s demon was supposed to do, especially your move from (a) to (b).

    Saphsin,

    “Like not just in terms of overlap but what it “means” to calculate an electron’s position in the electron cloud”

    Good point, but that’s why the discussion has been mostly about “calculating” macroscopic events from QM principles. And as Bunsen said above, we have no idea, really, of how to do that. So at the macroscopic level the distinction between ontology and epistemology still holds clearly, I think.

    David,

    Back to biology! Mendel’s “law” has all sorts of exceptions, so it is a fairly limited empirical generalization, and nothing like laws in physics. Again, I suspect early 20th century biologists called it a law because of their pronounced physics envy. Fisher explicitly modeled his “fundamental” theorem of natural selection (which also has all sorts of known exceptions) after the second principle of thermodynamics, precisely because he thought that the new science of biology should model itself after physics. Fortunately, newer generations of biologists seem to have gotten over that particular hang-up.


  20. Disagreeable Me (@Disagreeable_I)

    Hi Massimo,

    My understanding of physics is limited, but I gather also from previous discussions that the reason for the negative answer is that while the equations describing QM systems are deterministic, actual physical QM events are fundamentally random.

    That depends on your interpretation. On MWI, for instance, there is no randomness fundamentally. Everything happens. It just looks random subjectively because you’re seeing an arbitrary set of events, not all of them.

    That sounds to me awfully close to saying that you want to do what Laplace’s demon was supposed to do, especially your move from (a) to (b).

    (a) You can simulate all futures and predict how the overall wavefunction of the universe will evolve. But that doesn’t tell you what you will subjectively experience, so it differs from Laplace in this respect.

    (b) You can simulate one possible future, but there is no reason to suspect this will be the future you will subjectively experience, so again it differs from Laplace.

    So I agree with you that it is fundamentally impossible to predict the future (unless by predict the future you mean predict all possible futures, which on MWI will all happen to you, so that’s actually an accurate prediction if not all that helpful).


  21. Daniel Kaufman

    EJ: Yeah, I found this essay interesting too, but don’t know enough about the subject to really push the discussion. Unfortunately, it is clear that no one else does either, as the discussion has slid back to the old favorites: “supervenience” (one of the most explanatorily useless words ever concocted by philosophers) and “computable universes” and “mathematical descriptions of everything,” none of which is even remotely serious, but is sort of the internet’s version of bar chatter.


  22. Disagreeable Me (@Disagreeable_I)

    From my point of view, the problem is only that Massimo covered the topic very well in the original article. There doesn’t seem like there’s much to say on that apart from congratulating him. I’m indulging my interest in the tangents only because Massimo also seems interested. I’m not sure that it comes at the cost of a deeper discussion on Massimo’s article.


  23. Daniel Kaufman

    Massimo: Here is an effort at several questions, regarding the article’s topic. If they are completely inapt, please say so.

    Does part of the difficulty with theory in the biological sciences have to do with the fact that biological explanations are teleonomic? Or at least, that some are? That not simply material and efficient causality are involved, but a type of cause that is dependent on the sorts of variables that do not easily admit of lawlike generalization?
    On a related note, I take it that biological laws, to the extent that there are any, rely far more on ceteris paribus clauses than do laws in physics or chemistry. Is this also part of the problem as you see it? After all, once a ceteris paribus clause gets too large, it’s unclear in what sense the generalization is a law at all.
    I wonder to what extent many explanations in biology, while grammatically appearing to be causal in a roughly mechanistic way, are really more natural-historical in nature; i.e., as a matter of their underlying logic, rather than their surface structure.

    **Again, please let me know if these questions are inapt or so wrong as to be unanswerable.


  24. couvent2104

    “On a related note, I take it that biological laws, to the extent that there are any, rely far more on ceteris paribus clauses than do laws in physics or chemistry. Is this also part of the problem as you see it?”

    Excellent question, I think. One of my problems when I try to understand ET is that the theory does not always explain which ceteris are supposed to be paribus.


  25. Daniel Kaufman

    I’m not sure that it comes at the cost of a deeper discussion on Massimo’s article.

    = = =

    Well, obviously, a number of us disagree. And the idea that there is nothing to discuss in an essay on a subject of this complexity and depth isn’t credible. Also, it isn’t just some coincidence that this particular cluster of tangents is brought up over and over and over and over again, regardless of the topic of the initial post.


  26. synred

    It can’t be implemented. I take it to be a thought experiment.

    It does not involve implementing a cat. You just run it from the bottom up and see what turns up, planets or cat-like creatures. Most random seeds would not yield cats qua cats.


  27. Massimo Post author

    Dan,

    Those are all damn good questions!

    “That not simply material and efficient causality are involved, but a type of cause that is dependent on the sorts of variables that do not easily admit of lawlike generalization?”

    Right. That is the basis on which Fodor — mistakenly — thinks that Darwin “got it wrong.” Because Fodor seems to think that the only acceptable scientific theories are those that lend themselves to law-like generalizations. I don’t have to explain to you why that’s very clearly off the mark.

    “After all, once a ceteris paribus clause gets too large, its unclear in what sense the generalization consists of a law at all.”

    Right, that was also part of Fodor’s problem. But all it means is that there are no law-like generalizations to be made in biology, not that the theory of evolution is somehow wrong or unscientific.

    And of course, Cartwright argues that a very similar problem holds also for physics, only to a different degree.

    “I wonder to what extent many explanations in biology, while grammatically appearing to be causal in a roughly mechanistic way, are really more natural-historical in nature”

    That’s an ongoing open debate in the philosophy of biology. I wouldn’t exactly say natural-historical, but certainly there is disagreement about mechanistic vs. causal explanations, and “cause” means different things in different biological contexts. I participated in a workshop at the KLI this past spring on that very topic; the proceedings are due out next year.

    As for your debate with DM concerning why we keep going off topic into physics, I strike a balance between the two positions. In this case I was genuinely curious about Laplace vs QM, but yes, it gets rapidly away from the focus of the OP…


  28. SocraticGadfly

    Per Dan, Couvent, et al, the environmental factor in biology is always an issue. But, how much, where, and when, certainly varies. I just saw a piece a couple of days ago that prion-like proteins could be part of the reason for the spread of Type II diabetes. http://www.sciencemag.org/news/2017/08/could-diabetes-spread-mad-cow-disease

    ==

    Otherwise, Dan’s question kind of gets at what I was getting at in the question I wasn’t sure how to ask yesterday. That’s how much environment and other ceteris paribus factors, sometimes stochastic, can be addressed by a theory. Darwin himself used the famous “tangled bank” phrase, after all.

