On the different ways of doing theory in biology

‘Theoretical biology’ is a surprisingly heterogeneous field, partly because it encompasses ‘‘doing theory’’ across disciplines as diverse as molecular biology, systematics, ecology, and evolutionary biology. Moreover, it is done in a stunning variety of ways, using anything from formal analytical models to computer simulations, from graphic representations to verbal arguments. A few years ago I co-organized a workshop on this topic at the Konrad Lorenz Institute for Theoretical Biology in Vienna, and then edited a special issue of the journal Biological Theory collecting the contributions.

In my paper I surveyed a number of aspects of what it means to do theoretical biology, and how they compare with the allegedly much more restricted sense of theory in the physical sciences. I also tackled a somewhat recent trend toward the presentation of all-encompassing theories in the biological sciences, from general theories of ecology to an attempt to provide a conceptual framework for the entire set of biological disciplines, and discussed the roles played by philosophers of science in criticizing and shaping biological theorizing. The full paper is available for download here (free), and the edited volume can be found here (articles behind paywall). Let me, however, summarize my main points to facilitate a general discussion.

First, I discussed the issue of alleged laws in biology. If there is anything that characterizes physics as a science it is its unending quest for universal laws, from Newton’s mechanics to the current (and highly controversial) string theory. This is the case despite the fact that influential philosophers of science like van Fraassen and Giere maintain that laws play a marginal and mostly didactic role, even in physics. Regardless, it is not surprising that discussions of general laws in biology are a recurrent staple of the literature and—interestingly—one that provides a good example of positive interactions between theoretically inclined biologists and philosophers of science.

In a number of cases authors draw a direct parallel between physical laws and proposed biological equivalents. For instance, M. Elgin argues that the ‘‘epistemic functions of a priori biological laws in biology are the same as those of empirical laws in physics.’’ Elgin begins by acknowledging the (almost) universal agreement among philosophers who subscribe to the concept of laws that these must be both universal and empirical in nature, though he hastens to add that these conditions are necessary but not sufficient to distinguish laws from ‘‘accidental’’ generalizations. He then discusses Elliott Sober’s proposal that the Hardy–Weinberg principle in population genetics is an example of a biological law, even though it is universal but not empirical.
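
Since the principle comes up repeatedly below, here is its content in brief: with two alleles at frequencies p and q = 1 − p, one round of random mating yields genotype frequencies p², 2pq, and q², and the allele frequencies then stay constant in the absence of evolutionary forces. A minimal Python sketch (mine, purely illustrative) makes vivid why the result holds a priori rather than empirically:

```python
# Hardy-Weinberg as a mathematical identity: one round of random
# mating with allele frequencies p and q = 1 - p gives genotype
# frequencies p^2, 2pq, q^2, and p is unchanged in the next
# generation. No empirical input is needed.

def hardy_weinberg(p):
    q = 1.0 - p
    genotypes = {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}
    # frequency of allele A among the offspring:
    p_next = genotypes["AA"] + 0.5 * genotypes["Aa"]
    return genotypes, p_next

for p in (0.1, 0.5, 0.9):
    genotypes, p_next = hardy_weinberg(p)
    assert abs(p_next - p) < 1e-12  # equilibrium holds identically
    print(p, genotypes)
```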

There are several problems with this proposal, chiefly the fact that Hardy–Weinberg cannot meaningfully be thought of as a ‘‘zero force law’’ analogous to, say, the law of inertia (as Elgin suggests), as well as the above-mentioned lack of empirical content. Jonathan Kaplan and I, back in 2006, discussed in detail why the various evolutionary mechanisms that can cause a population to deviate from Hardy–Weinberg equilibrium are not conceptually equivalent, and should not be thought of as ‘‘forces’’ characterized by intensity and direction. Moreover, it simply seems strange to suggest that a scientific law can have no empirical content and instead be true a priori (as Hardy–Weinberg surely is, mathematically speaking). This risks sending philosophy of science down the slippery slope of considering logical and mathematical principles themselves as ‘‘laws,’’ a usage that clearly does not accord with scientific practice at all. Apparently, however, this point is not at all clear in the minds of some biologists, since it is possible to find statements like the following: ‘‘The global-optimum model is not so much a predictor of nature as a definition of nature. It must be true that a perfectly adapted organism leaves the most possible offspring!’’ (from a paper by Nonacs and Dill, published in 1993). Or: ‘‘The existence of a global-optimum point is a ‘deep axiom’: a tautology that guarantees logical consistency at the core of a theory’’ (by Stearns and Schmid-Hempel, in 1987). This is surely one area where more communication between theoretically minded biologists and philosophers of science would be welcome.

Besides appeals to Hardy–Weinberg as an example of biological laws, the next most popular area of discussion on this topic is perhaps the possible existence of laws in ecology. For instance, G.M. Mikkelson makes a case for moving ecology from an idiographic (historical) mode of explanation to a nomothetic (law-based) one. He maintains that—contrary to what he perceives as the practice among ecologists—generalizations should be interpreted in terms of law-like generalizations, because functional kinds (such as ‘‘predators’’) and structural kinds (like the various community types) correlate better than taxa (historical kinds) with fundamental ecological patterns and processes. As Mikkelson puts it, ‘‘Imagine being dropped at a random spot on the land surface of the Earth. Which would allow you to predict the density of plant species around you—that is, the number of species per 10,000 square kilometers—most precisely: knowing the climate, or knowing the landmass on which you stand? Answer: climate wins, hands down.’’ Well yes, but it is arguable whether such predictions are the result of ‘‘laws’’ in any way like those that physicists are after, and it is telling that Mikkelson is in fact cautious enough to talk about ‘‘law-like generalizations.’’

Interestingly, the issue of, shall we say, “physics envy” shows up explicitly in yet another author’s treatment of laws in ecology, that of D.R. Lockwood. In this case it is an ecologist who takes on the matter, and he comes down rather negatively on the possibility of laws in his discipline. Lockwood discusses two frequent suggestions as examples of ecological laws: Malthusian growth and the logistic equation (see below). He quickly finds them inadequate to the task, as they do not support counterfactuals, are not temporally universal, and in fact repeatedly fail empirical tests. In the end, Lockwood agrees with philosopher W.C. Wimsatt’s suggestion that ‘‘aggregative systems’’ (those typically studied by physics) do follow robust laws, while emergent systems (like those studied in biology) do not. This does not mean that biologists cannot generalize their empirical findings (within certain limits), or that such generalizations cannot be used to make reasonable predictions about the behavior of the systems of interest to them. And that, after all, is what actually matters.
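
For reference, the two candidates Lockwood examines are standard textbook equations, given here in the usual notation (not necessarily his):

$$\frac{dN}{dt} = rN \quad \text{(Malthusian growth)}, \qquad \frac{dN}{dt} = rN\!\left(1 - \frac{N}{K}\right) \quad \text{(logistic growth)},$$

where N is population size, r the intrinsic growth rate, and K the carrying capacity. Both are enormously useful idealizations, which is precisely Lockwood’s point: usefulness is not the same thing as lawhood.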

If not laws, are there general theories in biology? Theodosius Dobzhansky famously said that ‘‘nothing in biology makes sense except in the light of evolution.’’ Adding that to Richard Dawkins’ quest for ‘‘universal Darwinism’’ and to Daniel Dennett’s contention that Darwinism is a ‘‘universal acid’’ of sorts that cuts across disciplines, extending the idea of Darwinian evolution well beyond biology itself, one would think that biologists settled on their version of a theory of everything long ago. One would be surprised. A perusal of the recent literature shows quite a bit of activity in this department, again largely on the side of ecologists. I will briefly comment on one such attempt, referring the interested reader to two more case studies discussed in the paper.

Stephen Hubbell’s unified neutral theory of biodiversity and biogeography attempts to do precisely what its name implies: to propose a combined theoretical framework for biodiversity (measured by species–abundance curves) and biogeography (measured by species–area curves), where the ‘‘neutrality’’ consists in assuming that the differences among species belonging to the same trophic level within a given ecological community do not matter for the dynamics of that community. Hubbell’s theory draws explicit parallels with the neutral theory of molecular evolution proposed by Motoo Kimura back in 1968, and with the above-mentioned Hardy–Weinberg equilibrium in population genetics.

The unified theory has generated a significant literature, including a number of critiques and empirical tests. It is important to realize a couple of things, however. First, the scope of the theory is crucially limited by the clause that it applies to species of similar trophic level within a given community, which makes it quite a bit narrower in scope than its name (and some of the discussion that has followed the publication of Hubbell’s book) might otherwise suggest. Moreover, the theory is notoriously difficult to test: while it does make distinctive predictions when compared to, say, niche assembly theories (which are non-neutral), the predicted differences are very small, and easily lost in the noise characteristic of ecological data sets. This is not the place to get into an in-depth discussion of Hubbell’s theory, but I can hazard a prediction based on the similar history of the neutral theory of molecular evolution: in that case more than a decade of discussions led to the conclusion that a modified ‘‘quasi-neutral’’ theory was the best bet, which basically means that stochastic as well as selective processes affect the outcome of evolution, just as it would be reasonable to expect.
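
To see how little machinery the neutrality assumption requires, here is a deliberately minimal caricature in Python (my sketch, not Hubbell’s actual model, which adds metacommunity structure and explicit speciation and dispersal rates): a zero-sum community in which each death is replaced either by local offspring or by a newly arriving species, with species identity never affecting the dynamics.

```python
import random
from collections import Counter

# A caricature of neutral zero-sum dynamics: J individuals; each step
# one random individual dies and is replaced either by a new species
# arriving with probability m (immigration/speciation lumped together)
# or by the offspring of a randomly chosen member of the community.
# Species identity never affects birth or death: that is neutrality.

def neutral_drift(J=500, m=0.02, steps=100_000, seed=1):
    rng = random.Random(seed)
    community = [0] * J                      # start as a single species
    next_species = 1
    for _ in range(steps):
        parent = community[rng.randrange(J)] # chosen before the death
        dead = rng.randrange(J)
        if rng.random() < m:
            community[dead] = next_species   # new species arrives
            next_species += 1
        else:
            community[dead] = parent         # local reproduction
    return community

abundances = sorted(Counter(neutral_drift()).values(), reverse=True)
print(abundances)  # a species-abundance distribution from drift alone
```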

My more general point in the paper was that even a cursory look at the literature allows one to distinguish four modalities for theoretical biology (though similar distinctions can also be found in, say, physics, especially if one considers the entire discipline, and not just specific subsets like particle physics). I refer to these as analytical modeling, statistical modeling, computer modeling, and conceptual analysis.

The classic example of analytical approaches in theoretical biology is represented by much of the body of work that makes up population genetics theory, beginning again with the Hardy–Weinberg principle and arriving at more recent advances such as coalescent theory. The basic approach here is to use mathematical formalism to arrive at analytical (i.e., exact, non-statistical) solutions of sets of equations describing the behavior of idealized populations of organisms.
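
To give a flavor of the kind of exact result this approach delivers (standard coalescent textbook material, not specific to my paper): for a sample of k lineages from an idealized diploid Wright–Fisher population of effective size N, the expected waiting time until the next coalescence, and the expected time to the most recent common ancestor of a sample of n lineages, are

$$E[T_k] = \frac{2N}{\binom{k}{2}} = \frac{4N}{k(k-1)}, \qquad E[T_{\mathrm{MRCA}}] = \sum_{k=2}^{n} \frac{4N}{k(k-1)} = 4N\left(1 - \frac{1}{n}\right)$$

generations: closed-form answers about an idealized population, which is analytical theory in the strict sense.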

The second general type of approach to biological theorizing is statistical in nature, beginning with Ronald Fisher’s famous ‘‘fundamental’’ theorem of natural selection, which was proposed as explicitly equivalent to one of the most solid pieces of theory in classical physics, the second law of thermodynamics. Fisher laid the foundations for statistical genetics, which—once reconciled with the apparently discrepant Mendelian genetics—resulted in the Modern Synthesis of the 1940s, basically still the current standard model in evolutionary theory (but see this).
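
In one common modern rendering (the precise interpretation of the theorem is itself a matter of continuing scholarly debate), the theorem says that the change in mean fitness attributable to natural selection equals the additive genetic variance in fitness divided by the mean fitness:

$$\Delta \bar{w} = \frac{\mathrm{Var}_A(w)}{\bar{w}},$$

where w is fitness, \(\bar{w}\) its population mean, and \(\mathrm{Var}_A\) the additive genetic variance.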

The third way of doing theoretical biology is based on computer modeling, and it is in a sense a continuation of a long-standing trend in the field: when things get too complicated even for a quantitative (i.e., statistical) genetic approach (let alone for a population genetic, analytical one), researchers move toward computationally intensive simulations of biological populations. There are many examples of this, some of which are continuous with the population-quantitative genetic type of issues just discussed, some having to do with broader questions concerning the evolution of evolutionary mechanisms (evolvability), and some concerning the relationship between structural biology and evolutionary dynamics.
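
As a minimal illustration of the simulation route (a toy model of genetic drift with purely illustrative parameter values, not one of the research programs just mentioned), one can simply iterate the sampling process directly rather than solving equations:

```python
import random

# Wright-Fisher drift, simulated directly: each generation, 2N allele
# copies are drawn binomially from the current allele frequency p.
# Where analytics gets hard, one just runs the process many times.

def wright_fisher(N=100, p0=0.5, generations=200, replicates=5, seed=42):
    rng = random.Random(seed)
    trajectories = []
    for _ in range(replicates):
        p, history = p0, [p0]
        for _ in range(generations):
            copies = sum(rng.random() < p for _ in range(2 * N))
            p = copies / (2 * N)
            history.append(p)
        trajectories.append(history)
    return trajectories

for t in wright_fisher():
    print(f"final allele frequency: {t[-1]:.2f}")
```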

The fourth and last modality of biological theorizing is based on the articulation of verbal-conceptual models, and obviously comes closest to what philosophers of biology themselves engage in when they analyze the concepts deployed by working biologists. Verbal-conceptual models in science have the reputation of being second rate compared to ‘rigorous’ mathematical modeling, even though both the original work by Darwin and much of the work done during the Modern Synthesis (except for the part that was explicitly population-genetic) fall into this category. Indeed, there seems to be a resurgence of this approach as a necessary complement to the increasingly ‘‘experimental’’ mathematical treatments discussed above. Verbal-conceptual models include a broad category of biological theorizing that is particularly popular in molecular biology and biochemistry, where many papers present the results of complex experiments on the structure of genetic networks or biochemical pathways in the form of conceptual diagrams, meant both to summarize the current status of knowledge and to provide food for thought for the development of new hypotheses and subsequent empirical tests.

My conclusions at the end of the full paper: the term ‘‘speculation’’ has a rather bad reputation in science, often associated with the much-dreaded accusation hurled at philosophers that they engage in ‘‘armchair theorizing.’’ But of course all theory is armchair speculation, and unless one thinks of mathematics in a special Platonic fashion, mathematical approaches are simply continuous with, and complementary to, all the other ways of doing theory in science.

Which brings me to the role of philosophy of science in all of this. I think that philosophy of science itself is characterized by different modalities, some of which have little to do with helping scientists and reflect instead on the logic of scientific theories, the epistemology underlying scientific claims, and so on. Indeed, philosophy of science itself is continuous with the history of science, since it would be difficult to attempt generalizations about the nature of science while focusing only on currently ongoing (and therefore far from being settled) scientific research.

To begin with, then, classic philosophy of science is concerned with the study of the logic of scientific discovery, as exemplified by the well-known names (even among scientists!) of Popper, Kuhn, and—to a lesser extent—Feyerabend and Lakatos (and, of course, a number of contemporary scholars, too many to mention). This type of philosophy of science is, arguably, of very little direct relevance to scientists themselves (except insofar as they are curious about how outsiders see and analyze their own activity). It is perhaps this sort of philosophizing that has brought a number of physicists (e.g., Steven Weinberg, Stephen Hawking, and Lawrence Krauss) to claim that ‘‘philosophy is dead’’ on the grounds that, of late, it has not managed to solve any scientific problem with which physics is concerned. In so arguing, these scientists are committing an elementary category mistake, prompted by a combination of intellectual hubris and a surprising amount of ignorance.

Philosophy of science, however, also functions in modalities that are (or ought to be) of more direct interest to practicing scientists themselves—whether the latter realize it or not. One such modality is represented by the always necessary (if prone to annoy the targeted scientists) external criticism of socially relevant scientific claims (e.g., concerning race, gender, or the validity and application of certain types of medical research). I hesitate to use the label ‘‘science criticism’’ for this activity—even though it is arguably the most appropriate one available—because the term has been possibly irreparably tainted by much postmodern-inspired nonsense at the height of the so-called ‘‘science wars’’ of the 1990s. Regardless of what we end up calling it, it is the sort of philosophical inquiry that has actual practical implications, analogous to the better known ones usually associated with, say, business ethics, medical ethics, and bioethics, and one that should develop into an earnest dialogue between philosophers and scientists about the social implications of science itself.

The third and final modality for philosophy of science is in an even closer symbiotic relationship with science, one that seems to be welcomed by scientists themselves. Indeed, recent years have seen an increasing number of philosophers of physics, biology, and other disciplines publishing conceptual papers on a large variety of topics that are hard to distinguish from theoretical physics, biology, etc. This is, I think, a very welcome development, and a small (but, hopefully, growing) number of scientists have started to collaborate with philosophers and/or to publish in philosophical journals, as the case of debates about laws in biology discussed above exemplifies. As I pointed out elsewhere, this is along the lines of what Hasok Chang called ‘‘the continuation of science by other means’’:

Complementary science [based on history and philosophy of science] is critical but not prescriptive in relation to specialist science. … Complementary science identifies scientific questions that are excluded by specialist science. … The primary aim of complementary science is not to tell specialist science what to do, but to do what specialist science is presently unable to do. It is a shadow discipline, whose boundaries change exactly so as to encompass whatever gets excluded in specialist science. (pp. 249–250)

From this perspective, then, philosophy of biology represents a fifth type of theoretical biology, albeit one that is practiced from the outside looking into the core discipline. Because of that, it is uniquely positioned, I think, to perceive the threads connecting the other four modalities, as well as the advantages and limitations of each. The idea, of course, is not to make philosophers the ultimate arbiters in theoretical biology (or in anything else, for that matter). Rather, it is a recognition that it does take some distance from the nitty-gritty of the specialized literature to be able to perceive the broad picture that is necessary for the advancement of broadly construed theoretical biology. Accordingly, it is not by chance that when biologists themselves step back to contemplate a more inclusive level of analysis they begin to sound like philosophers. Perhaps, then, ongoing cross-fertilization—like the one fostered by that special issue of Biological Theory—will bring less distrust and more fruitful collaboration between the two disciplines.

109 thoughts on “On the different ways of doing theory in biology”

  1. Daniel Kaufman

    Massimo: Interesting point re: Fodor, which is why I thought that his book, while mistaken, was mistaken in an interesting way, such that it did not deserve the abuse it got. In a sense, you, Coel, and Fodor are on a spectrum, considering what should be called “science.” Coel has the broadest sense in mind. You a narrower one. And Fodor an even narrower one. (In a sense, Fodor’s stance is not entirely surprising, given the generation he comes from and the influence that Hempel had over the philosophy of science.)

    But, as a virtue ethicist, of course, you choose the mean between extremes. 😉

  2. ejwinner

    My remark was really a footnote to an idea that is not yet fully formed. I’m getting the notion that biology and physics require two very different epistemic strategies for research and understanding, and that the epistemic strategies of physics are similar to those of math, while those of biology are closer to the strategies we use to understand language. I don’t know why that would be so. Nor is it clear to me why some scientists (or social scientists) trust the epistemic strategies dominant in fields of research other than their own as somehow superior or more explanatory than those in their own field, while others are quite at home with the strategies dominant in their own field. But this raises another issue, namely, the manner by which these choices of epistemic strategy lead to further choices in epistemic or even ontological claims.

    All this would help explain, or rather raise the question, why it is that scientists from one field would try to bring a conversation in another field into the domain of the epistemic strategies they are most comfortable with.

    In any event, this article, and the comment thread, triggered these thoughts, because at the base of Massimo’s discussion here of theory in biology there does seem to be a problem of bringing together stories of how we know what is learned in research into a viable explanation of what we know – what I mean by “epistemic strategy.”

  3. SocraticGadfly

    EJ, per your comment: at a minimum, biology’s greater feedback loops vis-à-vis physics would alone require a different research strategy, I would think. Or rather, a different research mindset. I think that’s the difference between different sciences, and between social sciences and hard sciences, for that matter.

    And, of course, this would directly militate against the likes of Coyne.

    It would also underscore Massimo noting that there is no “philosophy of science,” but rather various philosophies of different sciences. For that matter, we should probably speak of different philosophies for different social sciences.

    And, to go meta on Massimo … “philosophy of philosophy”? Ask Doug Hofstadter to work that into a GEB type essay. 😉

  4. Coel

    Massimo,

    … the reason for the negative answer is that while the equations describing QM systems are deterministic, actual physical QM events are fundamentally random.

    If you count the Born rule as part of the equations of QM then it does do the non-deterministic part, and one can then use QM to do computer simulations.

    Dan,

    And Fodor an even narrower [view of science]

    If Fodor is insisting “that the only acceptable scientific theories are those that lend themselves to law-like generalizations” (where “laws” need to be both simple and universal) then that would rule out large swathes of physics, much of chemistry, nearly all of biology, all of geology, meteorology, oceanography, archaeology, psychology et cetera.

    ej,

    … and that the epistemic strategies of physics are similar to those of math, while those of biology are closer to the strategies we use to understand language.

    It’s important to realise that physics covers a broad swathe of topics. Fundamental physics deals with the simplest and most basic entities, and that lends itself to simple and general descriptions (“laws”). But fundamental physics is only one part of physics, and physicists are just as at-home dealing with complex aggregate systems.

    For example, astronomers study the nature, formation and evolution of stars, galaxies and planetary systems. These are all vast, complicated agglomerations, and while the fundamental laws still apply, the ensembles are heavily affected by their environment, by historical contingency and by stochasticity. Further, they are way too complex and multi-part to model exactly (a galaxy can contain 10^68 particles), so absolutely all models about them are rough approximations at best. Again, physicists are quite at home dealing with this. Ditto lots of things that physics gets applied to, such as meteorology and climate, physical geology, physical chemistry, et cetera.

    The point is that the nature of biology is not that different. Many of the ways in which it is different from fundamental physics also apply to many other areas of physics. (Though obviously biological organisms have a much greater degree of complexity.) When I read Massimo’s account of how theory is done in biology, there is nothing there that seems alien to a physicist, nothing radically different from how a physicist might think if they turned to that subject.

    There are obviously differences in subject matter, and that dictates lots of pragmatic differences in how to approach it, but beyond that it doesn’t seem very different in epistemological terms. As you have noted, some successful biologists, such as Jerry Coyne, think pretty much like physicists on plenty of issues.

  5. Massimo Post author

    Ej,

    “I’m getting the notion that biology and physics require two very different epistemic strategies for research and understanding”

    In part, mainly for two reasons: (i) biology deals with incredibly complex and variable systems compared to physics; and (ii) biological systems evolve, i.e., they change through time, which makes biology an inherently historical science (more like astronomy, say, than fundamental physics).

    Socratic,

    I wouldn’t say that there is no philosophy of science. There is, but it has taken a back seat to the more specialized philosophies “of.” We are no longer in the times of Popper, Kuhn, Lakatos and Feyerabend.

    As for meta-philosophy, as you know, here is my contribution to it: http://tinyurl.com/y8rb2r6e

    Coel,

    “If you count the Born rule as part of the equations of QM then it does do the non-deterministic part, and one can then use QM to do computer simulations”

    I don’t know nearly enough about this, but from what I’ve read it still doesn’t get us to anything like Laplace-style predictions of the macroscopic world, does it?

  6. SocraticGadfly

    Massimo, I may have overstated you a bit, but yes, that’s the gist. And a philosophy of an individual science overlaps, at least in fair part, with that science’s “mindset,” does it not?

    ==

    Per the other dialog here, given that there are at least, what, 15 different interpretations of what QM “actually” is, isn’t most of this really spit-balling?

  7. Daniel Kaufman

    Coel: Fodor does think there are psychological laws. About a third of his quite substantial output was devoted to demonstrating that. As for evolutionary biology, he thinks it really is a form of natural history, rather than explanatory science, which is why I asked Massimo what I did about natural-historical explanations.

    Fodor comes from a generation when the Hempelian covering-law model of scientific explanation still loomed large over the philosophy of science. It’s not surprising that his views are what they are in light of that fact. While ostensibly anti-positivistic, like many others, he is still strongly affected by that framework and way of thinking.

  8. Coel

    Hi Massimo,

    … but from what I’ve read it still doesn’t get us to anything like Laplace-style predictions of the macroscopic world, does it?

    Correct, it doesn’t. Laplace dreamt of being able to make exact predictions, such as predicting whether it will rain in a certain place on a day 100 years into the future. There’s no way that’s even in-principle possible, given the combination of quantum indeterminacy and chaos (sensitivity to initial conditions).

    But that doesn’t negate the general usefulness of computer simulations. For example, if you build a computer model of the climate, and how it responds to CO2, you’ve no chance of the rain-on-a-certain-day prediction, but you could in-principle predict the climate 100 years into the future and thus the average yearly rainfall.

    Further, many systems are not that affected by quantum indeterminacy, chaos and stochasticity, because the system averages over those things and the behaviour is robust to them. To give an example, if you want to predict whether a certain star will go supernova in the next 10 million years, a computer simulation can (in principle) predict its future evolution reliably and give you a pretty certain yes/no answer.

  9. wtc48

    Socratic: “And, to go meta on Massimo … “philosophy of philosophy”? Ask Doug Hofstadter to work that into a GEB type essay. 😉”

    In grad school, we used to chuckle over the category “bibliography of bibliographies.”

  10. synred

    >Exactly. I think people here are confusing two different views. The fact that we can make QM models of molecules, and use computers to calculate things like amplitudes, is uncontroversial. Thinking that you can create QM models of macroscopic objects like galaxies or bacteria, or that you can identify macroscopic objects in a QM model of particle states, that is controversial. I think people here who understand QM need to think very carefully about this.

    I at least use QM theory to calculate the amplitudes, and then use those amplitudes to model the behavior of particles. This includes all the QM effects of interference and entanglement.

    If the currently (and I suspect always) impractical ‘Coel’ experiment of simulating the Big Bang were undertaken, planets and cats might turn up. We would not perform QM measurements on them. They would not be QM objects [decoherence], and we would observe them by just looking at the arrangements of matter that show up on macroscopic scales. You don’t need a ‘planet operator’ to observe planets that ‘emerge’. Being a simulation, this might be rather easier than having to build telescopes on the tops of mountains (which are also not QM objects).

  11. synred

    Thinking you can somehow ‘discover’ concepts from the manifest image in a QM Hamiltonian strikes me as some species of nonsense.

    And indeed nobody thinks that. Not even Coel. At most (even DM, I think) one would claim that nature is the result of a QM Hamiltonian operating on stuff that obeys QM ‘law.’

    Even QM, with some considerable digging, was discovered using eyes, ears and brains, not ‘operators.’

  12. synred

    >Maybe I misunderstand either you or Bunsen, but it seems like you are not talking about a Laplace-type kind of problem, nor what DM is suggesting. It is perfectly possible, as you say, to simulate quantum behavior on a classical computer. The question is whether one can, by using only the equations of QM, predict the unfolding of the current universe. The answer to that question seems to be no. My understanding of physics is limited, but I gather also from previous discussions that the reason for the negative answer is that while the equations describing QM systems are deterministic, actual physical QM events are fundamentally random. So, as someone (Bunsen?) said earlier, all you can do is to simulate all possible universes, which is of no help at all.

    When we do a simulation, we do collapse the wave function, usually using a pseudo-random number. One could use a ‘real’ random number by, say, using the time between radioactive decays to ‘generate’ them. There can be problems with pseudo-random number generators that have correlations in them, but the art of making random number generators is well advanced, and such problems are minimal if you choose one of the good ones (freely available in, e.g., CERN software products).

    It’s also possible to make a ‘many worlds’ MC, but the amount of memory and CPU needed grows exponentially, so it’s not practical except for a very small problem run for a very short time. It’s better to ‘collapse’ (a.k.a., pick one) as you go. It’s not an approximation, but how QM is done in the standard interpretation, which gives the same answers as ‘many worlds’.
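
    To make that concrete, a toy sketch (illustrative only, nothing like real detector code): collapse a superposition to one outcome with a single pseudo-random number, with probabilities given by the Born rule.

    ```python
    import random

    # "Collapse as you go": sample one measurement outcome from the
    # Born-rule probabilities |amplitude|^2, using one pseudo-random
    # number per measurement.

    def collapse(amplitudes, rng=random):
        probs = [abs(a) ** 2 for a in amplitudes]  # Born rule
        r, acc = rng.random() * sum(probs), 0.0
        for outcome, p in enumerate(probs):
            acc += p
            if r < acc:
                return outcome
        return len(probs) - 1  # guard against rounding at the edge

    # Equal superposition of two basis states -> roughly 50/50:
    counts = [0, 0]
    for _ in range(10_000):
        counts[collapse([2 ** -0.5, 2 ** -0.5])] += 1
    print(counts)
    ```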

    I really don’t understand Bunsen’s argument, but then I’m only an experimentalist, and we use eyes and ears and brains to observe our results. I do not know what Bunsen’s field is, but to me his argument appears meaningless.

    As Coel points out, no simulations today are done w/o approximation. Simulations of the evolution of the Universe are mostly done using Newtonian gravity with a little GR; QM is only put in at the beginning, to predict the spectrum of fluctuations that seed galaxy formation. They do not work even at the level of atoms, much less quarks and such, and still take every bit of CPU and memory we can muster.

    Any thought of simulating all the way from whatever the fundamental physics is to a cat is just a ‘thought simulation’. It would be nice. You could do experiments on ‘MC cats’ that would be considered unethical on real cats – or could you? </:-]

    https://goo.gl/lJbcx5 ‘Causality — a tale of many worlds’, in which a truly super computer is imagined and the programmers are plants (very rough draft)

  13. synred

    Newton’s first law states that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. … The third law states that for every action (force) in nature there is an equal and opposite reaction.

    Ceteris paribus? Seems so to me.

    Of course you can just write F = ma and fit the first two into one line.

    Or you can invoke the principle of least action (God is lazy <|:-o) ) and make it appear teleological.

  14. synred

    “If you count the Born rule as part of the equations of QM then it does do the non-deterministic part, and one can then use QM to do computer simulations”
    I don’t know nearly enough about this, but from what I’ve read it still doesn’t get us to anything like Laplace-style predictions of the macroscopic world, does it?

    Hi Massimo, w/o the Born rule QM predicts nothing and ‘many worlds’ gets the answers wrong (if you just count worlds in the obvious way).

    Advocates of ‘many worlds’ (E.g., Oxford philosophers) do elaborate mathematical hand waving to hide the fact that they are assuming the Born rule too.

  15. davidlduffy

    “a fairly limited empirical generalization, and nothing like laws in physics”: I guess I would be closer to those who argue there are no “universal and exceptionless (across all space–time points)” laws even in physics, let alone chemistry. I suspect the term Law was applied to Mendel’s theories because he found nice geometrical ratios like 1:2:1 and 3:1 in such messy stuff as the results from biological experiments – Galton’s attempts to find similar numbers were not quite right.

  16. Disagreeable Me (@Disagreeable_I)

    Hi Arthur,

    At most (even DM I think) would claim that nature is the result of a QM Hamiltonian operating on stuff that obeys QM ‘law.’

    I guess so, although I wouldn’t necessarily put it in those terms. I’m just imagining that if you had a perfect simulation of physical law on an infinitely powerful computer, you ought to be able to look inside it and explore it much as we do with much simpler virtual worlds all the time (e.g., World of Warcraft). You could do so by simulating a viewpoint within the simulation which would act as an “observer”, and we could render to a screen what that “observer” would see. That “observer” would see stars and planets much as we do. The simulation wouldn’t think of them as stars and planets — they’re just collections of particles from the point of view of the code, but we humans peering into the simulation would recognise them.

  17. brodix

    Given the feedback loops and the epistemic nature of knowledge, philosophy of philosophy would probably amount to the politics of philosophy.
