Progress in Science — III

[for a brief explanation of this ongoing series, as well as a full table of contents, go here]

Progress in science: different philosophical accounts

The above discussion has largely been framed in terms that do not explicitly challenge the way most scientists think of their own enterprise: as a teleonomic one, whose ultimate goal is to arrive at (or approximate as far as possible) an ultimate, all-encompassing theory of how nature works, Steven Weinberg’s (1994) famous “theory of everything.” However, the epistemic, semantic and functionalist accounts do not all sit equally comfortably with that way of thinking. Bird’s epistemic approach can perhaps be most easily squared with the idea of teleonomic progress, since it argues that science is essentially about the accumulation of knowledge about the world. The obvious problem, however, is that accumulation of truths is certainly necessary but also clearly insufficient to provide a robust sense of progress, since there are countless trivial ways of accumulating factual truths that no one in their right mind would count as scientific advances. (For example, I could spend a significant amount of grant funds to count exactly how many cells there are in my body, then in the body of the next person, and so on. This would hardly lead to any breakthrough in human cell biology.)

Niiniluoto’s semantic approach, based as it is on the idea of verisimilitude, is a little less friendly to the idea of a single ultimate goal for science. We have seen above that Niiniluoto’s way of cashing out “verisimilitude” is locally defined, and provides no way to compare how close we are to the truth on one specific problem, or in one relatively broad domain of science, with how close we are on another such problem or domain. So, for instance, progress toward the truth about, say, the neuronal underpinnings of human consciousness has prima facie nothing at all to do with progress in understanding how general relativity and quantum mechanics can be reconciled in cases in which they give divergent answers.

Finally, the functionalist approach that can be traced to Kuhn and has been brought forth by Laudan, among several others, is even less friendly to a broad scale teleonomic view of science. Just recall Kuhn’s own skepticism about the possibility of a “coherent direction of ontological development” of scientific theories and his qualified distancing himself from a “relativist” view of scientific knowledge. If science is primarily about problem-solving, as both Kuhn and Laudan maintain, then there is only a limited sense in which the enterprise makes progress, Kuhn’s “evolutionary” metaphor notwithstanding.

But things can get even more messy for defenders of a straightforward concept of scientific progress — as, again, I take most scientists to be. As a scientist myself, I have always assumed that there is one thing, one type of activity, we call science. More importantly, though I am a biologist, I automatically accepted the physicists’ idea that — in principle at least — everything boils down to physics, that it makes perfect sense to go after the above mentioned “theory of everything.” Then I read John Dupré’s The Disorder of Things (Dupré 1993), and that got me to pause and think hard.

I found Dupré’s book compelling not just because of its refreshing and admittedly iconoclastic tone, but also because a great deal of it is devoted to subject matters, like population genetics, that I actually know a lot about, and am therefore in a good position to judge whether the philosopher got it right (mostly, he did). Dupré’s strategy is to attack the idea of reductionism by showing how it doesn’t work in biology. He rejects the notion of a unified scientific method (a position that is nowadays pretty standard among philosophers of science), and goes on to advocate a pluralistic view of the sciences, which he claims reflects both what the sciences themselves are finding about the world (with a multiplication of increasingly disconnected disciplines and the production of new explanatory principles that are stubbornly irreducible to each other) and a more sensible metaphysics (there aren’t any “joints” at which the sciences “cut nature” — Kitcher’s “natural kinds” from above — so that there are a number of perfectly equivalent ways of thinking about the universe and its furnishings).

Dupré’s ideas have a long pedigree in philosophy of science, and arguably trace back to a classic and highly influential paper by Jerry Fodor (1974), “Special sciences (or: the disunity of science as a working hypothesis)”; they are also connected to Nancy Cartwright’s (1983) How the Laws of Physics Lie and Ian Hacking’s (1983) already mentioned Representing and Intervening.

Let me start with Fodor, whose target was, essentially, the logical positivist idea that the natural sciences form a hierarchy of fields and theories that are (potentially) reducible to each next level, forming a chain of reduction that ends up with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, the latter into chemistry, and then we are almost there. But what does “reducing” mean, in this context? At least two things: call them ontological and theoretical. Ontologically speaking, most people would agree that all things in the universe are indeed made of the same substance, be it quarks, strings, branes or whatever; moreover, complex things are made of simpler things. For instance, populations of organisms are collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism.

Theoretical reduction, however, is a different beast altogether, because scientific theories are not “out there in the world,” so to speak; they are creations of the human mind. This means that theoretical reduction, contra the popular assumption among a number of scientists (especially physicists), most definitely does not logically follow from ontological reduction. Theoretical reduction was the holy (never achieved) grail of logical positivism: it is the ability to reduce all scientific laws to lower level ones, eventually reaching our by now infamous “theory of everything,” formulated of course in the language of physics. Fodor thinks that this will not do. Consider a superficially easy case. Typically, when one questions theory reduction in science one is faced with both incredulous stares and a quick counter-example: just look at chemistry. It has successfully been reduced to physics, so much so that these days there basically is no meaningful distinction between chemistry and physics. But it turns out on closer scrutiny that there are two problems with this move: first, the example itself is questionable; second, even if true, it is arguably more an exception than the rule.

As Weisberg et al. (2011) write: “Many philosophers assume that chemistry has already been reduced to physics. In the past, this assumption was so pervasive that it was common to read about ‘physico/chemical’ laws and explanations, as if the reduction of chemistry to physics was complete. Although most philosophers of chemistry would accept that there is no conflict between the sciences of chemistry and physics, most philosophers of chemistry think that a stronger conception of unity is mistaken. Most believe that chemistry has not been reduced to physics nor is it likely to be.” For instance, both Bogaard (1978) and Scerri (1991, 1994) have raised doubts about the feasibility of reducing chemical accounts of molecules and atoms to quantum mechanics. Weisberg et al. (2011) add examples of difficult reductions of macroscopic to microscopic theories within chemistry itself (let alone between chemistry and physics), even in what are at first glance obviously easy cases, like the concept of temperature. I will refer the reader to the literature cited by Weisberg et al. for the fascinating arguments that give force to this sort of case, but for my purposes here it suffices to note that the alleged reduction has been questioned by “most” philosophers of chemistry, which ought to cast at least some doubt on even this oft-trumpeted example of theoretical reduction. Another instance, closer to my own academic home field, is Mendelian genetics, which has also not been reduced to molecular genetics, contrary to what is commonly assumed by a number of geneticists and molecular biologists (Waters 2007). In this case one of the problems is that there are a number of non-isomorphic concepts of “gene” being used in biology, which gets in the way of achieving full inter-theoretical reduction.

Once we begin to think along these lines, the problems for the unity of science thesis — and hence for straightforward accounts of what it means to have scientific progress — are even worse. Here is how Fodor puts it, right at the beginning of his ’74 paper: “A typical thesis of positivistic philosophy of science is that all true theories in the special sciences [i.e., everything but fundamental physics, including non-fundamental physics] should reduce to physical theories in the long run. This is intended to be an empirical thesis, and part of the evidence which supports it is provided by such scientific successes as the molecular theory of heat and the physical explanation of the chemical bond. But the philosophical popularity of the reductivist program cannot be explained by reference to these achievements alone. The development of science has witnessed the proliferation of specialized disciplines at least as often as it has witnessed their reduction to physics, so the widespread enthusiasm for reduction can hardly be a mere induction over its past successes.” In other words, echoing both Fodor and Dupré, one could argue that the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, historical induction points the other way around from the commonly accepted story.

Turns out that even some scientists seem inclined toward at least some bit of skepticism concerning the notion that “fundamental” physics is so, well, (theoretically) fundamental. (It is, again, in the ontological sense discussed above: everything is made of quarks, or strings, or branes, or whatever.) During the 1990s the American scientific community witnessed a very public debate concerning the construction of a Superconducting Super Collider (SSC), which was the proposed predecessor of the Large Hadron Collider that recently led to the discovery of the Higgs boson. The project was eventually nixed by the US Congress because it was too expensive. Steven Weinberg testified in front of Congress on behalf of the project, but what is less known is that some physicists testified against the SSC, and that their argument was based on the increasing irrelevance of fundamental physics to the rest of physics — let alone to biology or the social sciences. Here is how solid state physicist Philip W. Anderson (1972) put it early on, foreshadowing the arguments he later used against Weinberg at the time of the SSC hearings: “The more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for a fundamental theory of everything.

Let us go back to Fodor and why he is skeptical of theory reduction, again from his ’74 paper: “If it turns out that the functional decomposition of the nervous system corresponds to its neurological (anatomical, biochemical, physical) decomposition, then there are only epistemological reasons for studying the former instead of the latter [meaning that psychology couldn’t be done by way of physics only for practical reasons, it would be too unwieldy]. But suppose there is no such correspondence? Suppose the functional organization of the nervous system cross cuts its neurological organization (so that quite different neurological structures can subserve identical psychological functions across times or across organisms). Then the existence of psychology depends not on the fact that neurons are so sadly small, but rather on the fact that neurology does not posit the natural kinds that psychology requires.” And just before this passage in the same paper, Fodor argues a related, even more interesting point: “If only physical particles weren’t so small (if only brains were on the outside, where one can get a look at them), then we would do physics instead of paleontology (neurology instead of psychology; psychology instead of economics; and so on down). [But] even if brains were out where they can be looked at, as things now stand, we wouldn’t know what to look for: we lack the appropriate theoretical apparatus for the psychological taxonomy of neurological events.”

The idea, I take it, is that when physicists say that “in principle” all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask what principle, exactly, they are referring to. Fodor contends that if one were to call the epistemic bluff, the physicists would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics. There is, it seems, no known “principle” that would guide anyone in pursuing such a quest — a far more fundamental issue than the one imposed by merely practical limits of time and calculation. To provide an analogy, if I told you that I could, given the proper amount of time and energy, list all the digits of the largest known prime number, but then declined to actually do so because, you know, the darn thing’s got 12,978,189 digits, you couldn’t have any principled objection to my statement. But if instead I told you that I can prove to you that there is an infinity of prime numbers, you would be perfectly within your rights to ask me for at least the outline of such a proof (which exists, by the way), and you should certainly not be content with any vague gesturing on my part to the effect that I don’t see any reason “in principle” why there should be a limit to the set of prime numbers.
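For the curious, the outline in question is Euclid’s classic argument, which can be sketched in a few lines:

```latex
% Euclid's proof that the primes are infinite (outline).
% Suppose, for contradiction, that p_1, p_2, ..., p_k are ALL the primes,
% and consider the number
\[
  N = p_1 p_2 \cdots p_k + 1 .
\]
% N leaves remainder 1 when divided by each p_i, so no p_i divides N.
% But every integer greater than 1 has at least one prime factor, so N
% must have a prime factor missing from the supposedly complete list:
% contradiction. Hence no finite list can contain all the primes.
```

Note the contrast the analogy trades on: here the “principle” can actually be produced on demand, whereas the “in principle” reducibility claim comes with no comparable recipe.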

Tantalizing as the above is for a philosopher of science like myself, in order to bring us back to our discussion of progress in science we need some positive reasons to take seriously the notion of the impossibility of ultimate theory reduction, and therefore to contemplate the idea of a fundamental disunity of science and what it may mean for the idea of progress within the scientific enterprise. Cartwright (1983) and Hacking (1983) do put forth some such reasons, even though of course there have been plenty of critics of their positions. Cartwright has articulated a view known as theory anti-realism, which implies a denial of the standard idea — almost universal among scientists, and somewhat popular among philosophers — that laws of nature are (approximately) true generalized descriptions of the behavior of things, especially particles (or fields, doesn’t matter). Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.

The implication here is that our models of reality are not true, and therefore that — strictly speaking — laws of nature are false. The idea of laws of nature (especially with their initially literal implication of the existence of a law giver) has been controversial since it was championed by Descartes and opposed by Hobbes and Galileo [5], but Cartwright’s suggestion is rather radical. She distinguishes between two ways of thinking about laws: “fundamental” laws are those postulated by the realists, and they are meant to describe the true, deep structure of the universe. “Phenomenological” laws, by contrast, are useful for making empirical predictions, they work well enough for that purpose, but strictly speaking they are false.

Now, there are a number of instances in which even physicists would agree with Cartwright. Take the laws of Newtonian mechanics: they do work well enough for empirical predictions (within a certain domain of application), but we know that they are false if they are understood as being truly universal (precisely because they have a limited domain of application). According to Cartwright, however, all laws and scientific generalizations, in physics as well as in the “special” sciences, are just like that, phenomenological. [6] And there are plenty of other examples: nobody, at the moment, seems to have any clue about how to even begin to reduce the theory of natural selection, or economic theories, for instance, to anything below the levels of biology and economics respectively, let alone fundamental physics. If Cartwright is correct (and Hacking argues along similar lines), then science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.

Here is how Cartwright herself puts it, concerning physics in particular: “Neither quantum nor classical theories are sufficient on their own for providing accurate descriptions of the phenomena in their domain. Some situations require quantum descriptions, some classical and some a mix of both.” And the same goes, a fortiori, for the full ensemble of scientific theories, including all those coming out of the special sciences. So, are Dupré, Fodor, Hacking and Cartwright, among others, right? I don’t know, but it behooves anyone who is seriously interested in the nature of science to take their ideas seriously. If one does that, then it becomes far less clear that “science” makes progress, although one can still articulate a very clear sense in which individual sciences do.

The goal of this chapter was to show that the concept of progress in science — which most scientists and the lay public seem to think is uncontroversial and self-evident — is anything but. This does not mean at all that we do not have good reasons to think that science does, in fact, make progress. But when scientists in particular loudly complain that philosophy doesn’t progress they should be reminded that it is surprisingly difficult to articulate a coherent and convincing theory of progress in any discipline, including their own — where by their account it ought to be a no-brainer. In the next chapter we will pursue our understanding of progress in different fields of inquiry by turning to mathematics and logic, where I think the concept definitely applies, but in a fashion that is interestingly distinct from the sense(s) in which it does in science. And it will be from a better understanding of progress in both science(s) and mathematics-logic that we will eventually be in a position to articulate how philosophy (or at least certain fields within philosophy) can also be progressive.


[5] See my lay summary of this in: Are there natural laws?, by M. Pigliucci, Rationally Speaking, 3 October 2013 (accessed on 6 August 2015).

[6] Interestingly, some physicists (Smolin 2007) seem to provide support for Cartwright’s contention, to a point. In his The Trouble with Physics Smolin speculates that there are empirically intriguing reasons to suspect that Special Relativity “breaks down” at very high energies, which means that it would not be a law of nature in the “fundamental” sense, only in the “phenomenological” one. He also suggests that General Relativity may break down at very large cosmological scales.


Anderson, P.W. (1972) More is different. Science, 177:393-396.

Bogaard, P.A. (1978) The limitations of physics as a chemical reducing agent. Proceedings of the Philosophy of Science Association 2:345–356.

Cartwright, N. (1983) How the Laws of Physics Lie. Oxford University Press.

Dupré, J. (1993) The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Harvard University Press.

Fodor, J. (1974) Special sciences (Or: the disunity of science as a working hypothesis). Synthese 28:97-115.

Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge University Press.

Scerri, E. (1991) The electronic configuration model, quantum mechanics and reduction. British Journal for the Philosophy of Science 42:309–325.

Scerri, E. (1994) Has chemistry been at least approximately reduced to quantum mechanics? In: D. Hull, M. Forbes and R. Burian (eds.), PSA 1994 (Vol. 1), Philosophy of Science Association.

Smolin, L. (2007) The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next. Mariner Books.

Waters, K. (2007) Molecular genetics. Stanford Encyclopedia of Philosophy (accessed on 6 August 2015).

Weinberg, S. (1994) Against philosophy. In: Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. Vintage.

Weisberg, M., Needham, P., and Hendry, R. (2011) Philosophy of chemistry. Stanford Encyclopedia of Philosophy (accessed on 6 August 2015).

144 thoughts on “Progress in Science — III”

  1. synred

    Sorry, YouTube cut that one off. Here’s Fred’s whole ‘gravity defying’ act from Royal Wedding. It fits with Massimo’s introductory cartoon.

    Love is a higher level concept than gravity, but not completely logically decoupled, though Fred makes it look easy.


  2. brodix


    Thanks. I like the word “quixotic.”

    Money is necessarily a contract. Presumably every asset is backed by a debt. The Federal Reserve issues money to buy Treasury notes, so it is backed by the obligations of the government. Even a gold backed currency is presumably backed by the assumption it could be traded for a quantity of gold.

    The problem is that we experience it as quantified hope. With 5 dollars, you could buy virtually anything for sale for 5 dollars, but when you do buy something, it collapses to that particular item.

    So since the two tools of social control are hope and fear, there is a strong political initiative to keep society fairly well greased with lots of money. Which consequently means creating lots of debt. In Japan and the EU, it seems they have taken to buying debt other than government securities and that gets to be another slippery slope.

    Money, like lots of things, from religions to ideologies, is propelled by hope, but it still needs that modicum of stability and solidity to give it some foundation.

    Like the Titanic, it is unsinkable. Until it isn’t.


  3. Daniel Kaufman

    Synred: I am not attributing these stronger views to you. But there have been others here who have wanted to claim unity of the sciences to varying degrees, and I just wanted to be clear that they are only unified in the most minimal, least interesting sense of the word, which means, they are effectively dis-unified. At least if we are talking about the physical sciences versus the social sciences.


  4. Robin Herbert

    Hi Coel,

    “If physics and economics were logically independent, as Robin has suggested, then they could each operate under their own systems of maths and logic, that could be totally inconsistent with each other.”

    Doesn’t follow. Or at least, if it does follow I would like to see the reasoning.

    But if all Weinberg was saying was that they could all be described using the same mathematical and logical framework then fine. I am not sure why he would have thought that worth saying.

    I would point out that even two logically inconsistent things could be described using the same system of logic and mathematics.


  5. davidlduffy

    “Money”: I don’t think it peculiar that the cells making up my body use arbitrary codes to organise distribution of essential resources, that the constraints that make this organisation necessary are physical, and that they are studied by biologists. I use the term arbitrary in the sense that the molecules and changes in distribution of electrical charges over time and space could be different but have the same functional effects providing there is an “agreement” on their meaning, and where disagreement leads to death. Unlike Dupre, I do think the mathematics of game theory and economics and biology (and population genetics!) actually have something to say about the world in a deep way, and the reduction will be underpinned by the thermodynamics of information. This would be reduction only in the sense of offering a mechanism by which brute particle physics allows complexity to flourish. Of course, I could be mistaken ;).


  6. synred

    Something I just learned from Weinberg:

    De Broglie guessed that the electron can be regarded as some sort of wave, with a wavelength that is related to the electron’s momentum in the same way that light wavelengths according to Einstein are related to the photons’ momentum: the wavelength in both cases is equal to a fundamental constant of nature known as Planck’s constant, divided by the momentum. De Broglie did not have any idea of the physical significance of the wave, and he did not invent any sort of dynamical wave equation; he simply assumed that the allowed orbits of the electrons in a hydrogen atom would have to be just large enough for some number of complete wavelengths to fit around the orbit: one wavelength for the lowest energy state, two wavelengths for the next lowest, and so on. Remarkably, this simple and not very well motivated guess gave the same successful answers for the energies of the orbits of the electron in the hydrogen atom as Bohr’s calculation a decade earlier.

    Weinberg, Steven (2011-04-20). Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature (p. 70). Knopf Doubleday Publishing Group. Kindle Edition.

    Well this is something I didn’t know (or perhaps forgot). As I recall it, the pedagogical presentation of the Bohr atom relied on de Broglie waves fitting around the orbit. Whereas, in fact, Bohr predated de Broglie by about 10 years. Bohr got the answer for the spectral line spacing just based on quantization of light and orbits. As I recall, it was presented as if de Broglie came first – not possible, but easier to follow.
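The arithmetic synred describes can be checked in a few lines. A minimal sketch (my own illustration, not from any of the cited texts): de Broglie’s standing-wave condition n·λ = 2πr, with λ = h/(m·v), is equivalent to Bohr’s quantization rule m·v·r = n·h/2π, and combined with the Coulomb force it yields the familiar hydrogen energy levels.

```python
# Physical constants (SI units, CODATA values)
h    = 6.62607015e-34    # Planck's constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def bohr_energy_eV(n):
    """Energy of the n-th hydrogen orbit, in eV.

    De Broglie's condition n*lambda = 2*pi*r with lambda = h/(m*v)
    gives m*v*r = n*h/(2*pi); balancing the Coulomb attraction against
    the centripetal force then yields E_n = -m e^4 / (8 eps0^2 h^2 n^2).
    """
    E_joules = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
    return E_joules / e  # convert joules to electron-volts

for n in (1, 2, 3):
    print(f"n={n}: E = {bohr_energy_eV(n):.3f} eV")
```

The ground state comes out at the familiar −13.6 eV, the same answer Bohr got a decade before de Broglie’s guess.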

    This alone made the book worth buying. Thanks, PF.

    My QM education started off a little rocky. Our Professor, when our text failed to show up on time, decided to teach us from the master himself, Wolfgang Pauli, using his famous article in the ‘Handbuch der Physik’ in German. This was a disaster. Pauli likely got the Bohr–de Broglie order right, but we could not make heads or tails of him, with our German dictionary beside us, trying to unscramble his half-page sentences with the verbs at the end. Our text never did show up.

    I had to take elementary QM over again in grad school. I’m pretty sure the history was misleading, but don’t have my text to check.

    Ah! This is one of the books:

    I may even still have this puppy somewhere. It’s not available on Kindle.

    Found it at Open Library, a nice free lending service with a lot of books. Their free web-based reader is a bit slow. There’s a downloadable one too, but I haven’t figured out how to download it. Maybe a place to get books I don’t want to buy.

    drich#page/182/mode/2up/search/'Bohr+atom '

    Cut and paste doesn’t work, so I had to use snip.

    The resulting image proving my recollection wrong may not show up on WordPress – an experiment.

    Anyway it shows that at least this version of ‘Modern Physics’ got the Bohr–de Broglie thing right. Likely, I didn’t read it carefully enough or just misremember. It’s been a hell of a long time.

    It’s a stretch to relate this to the OP, but an interesting side-bar to me.


  7. synred

    The further apart you get, the weaker the coupling.

    Still, even for money you need to be able to pick up the coin or flip the bit, which involves a bit of physics (though with no need to understand it): more the physical than the physics.

    Some market economists do ignore physics, at our peril. We will run out of oil. We will heat up the atmosphere if we burn all that’s left.

    But you don’t need quarks or string theory to figure that out. I doubt there’s a climatologist out there that’s into string theory — they might read Weinberg for fun.


  8. Robin Herbert

    Hi Coel,

    “Hold on, can you define what you mean by “unity of the laws of science” and “common observer language”?

    With all the miscommunication over this issue, and all the different people meaning different things by the same terms, it’s not at all clear to me what either of those means.”

    As usual I am perplexed that I have again failed to make myself understood.

    My very point was that before Nagel, people were using the expression “unity of laws” without there being a clear idea of what this meant.

    Nagel addressed this very question and showed what a unity of laws would entail. Are you asking me to summarise Nagel?

    “Common observer language” is, as I said, the criterion suggested by Neurath and Carnap as an alternate way of unifying science. “Common observer language ” seems to be a reasonably self explanatory concept and I am not sure how I could put it better. In any case it refers to the kind of “unity of science” that we are not discussing, so it doesn’t really matter.

    “Hold on, hold on. Supervenience (with nothing else added) by itself means that different areas of science cannot be “logically independent”.”

    As I have said often enough, I don’t really know what you mean by ‘supervenience’.

    But let me ask you a key question, if A is logically dependent on B doesn’t that imply that there would be, at least in principle, a statement of that logical dependency?

    Or do you think that there is a kind of logical dependency that is not even in principle stateable?


  9. synred

    From google:

    In philosophy, supervenience is an ontological relation that is used to describe cases where (roughly speaking) the upper-level properties of a system are determined by its lower level properties.

    This seems stronger than what Dan K told me. That higher levels depend on lower levels I don’t doubt.

    It seems to me that the rules must be determined by the underlying reality. What else could they be determined by?

    It doesn’t depend on what we think about the underlying reality (theory). The theories are connected by trying to describe reality.

    Even if you had a complete theory of the lowest level, you couldn’t do more than ‘simulate’ higher levels. To understand them, you need to add new concepts (like the statistics of large numbers and averages in the clear-cut case of thermodynamics).

    Pre-Boltzmann thermodynamics borders on being logically independent of Newtonian mechanics. It had its own three laws. If, however, you had averaged over particles and found something inconsistent with thermodynamics, that would have been a problem: a logical inconsistency between two theories depending on the same reality.

    At a minor level this happened. In thermodynamics, increasing entropy is one of the laws.

    In statistical mechanics 2nd law violations are possible, just not very probable (to say the least). Statistical mechanics explains the 2nd law in terms of the reduction to particles + the statistics of large numbers. The details of the particle interactions don’t matter too much — at least for gases.
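To put a number on “not very probable,” here is a toy sketch (my own addition, assuming N molecules independently occupying either half of a box): the chance that all of them spontaneously gather in one half, the textbook example of a second-law-violating fluctuation.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def prob_all_left(N):
    """Chance that all N molecules sit in the left half of the box,
    each molecule landing in either half independently with probability 1/2."""
    return 0.5 ** N

def entropy_drop(N):
    """Entropy decrease of that fluctuation: Delta S = N * k_B * ln 2, in J/K."""
    return N * k_B * math.log(2)

for N in (10, 100, 1000):
    print(f"N={N}: probability {prob_all_left(N):.3g}, "
          f"entropy drop {entropy_drop(N):.3g} J/K")
```

Already at N = 1000 (a minuscule puff of gas) the probability is of order 1e-301; for a mole of molecules it is zero for all practical purposes, which is exactly the sense in which statistical mechanics recovers the second law.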

    Levels that are further apart are more weakly coupled. It seems unlikely we will figure out the connections anytime soon, and the additional concepts needed are likely more subtle than statistics.


  10. Daniel Kaufman

    Synred wrote:

    In philosophy, supervenience is an ontological relation that is used to describe cases where (roughly speaking) the upper-level properties of a system are determined by its lower level properties.

    This seems stronger than what Dan K told me. That higher levels depend on lower levels I don’t doubt.

    I don’t believe that “the higher levels” — being a nation state; being a prime minister; being a currency; etc. — “depend on” the lower levels in any meaningful or interesting sense.


  11. synred

    Well, I don’t get that. Without stuff there’d be no higher levels or lower levels, from a realist perspective.

    I take a realist’s stance because without it science makes no sense to me. E.g., see ‘Grass is Green.’

    I believe we can’t understand higher levels without adding concepts, as in the example of deriving thermodynamics from statistical mechanics, but the low levels doing their thing create or generate the higher levels.

    If we had a complete theory of the low level and could simulate it (impossible of course) then higher levels would be generated by the simulation. Some details of the lower levels might not matter in which case you could leave them out and save CPU time (as is done in simulating galaxy evolution).

    Which exact higher levels you get would be somewhat random, due to QM if nothing else. As in Gould’s rewinding-the-tape metaphor, getting the same creatures each time you ran the simulation would be unlikely. Sometimes you might get something pretty much like people and money; other times it might just be bacteria and viruses. Some runs might be sterile. Sometimes something ‘completely different.’

    If the underlying stuff always produces the standard model you’d get thermodynamics and chemistry all the time.

    If basic underlying physical nature doesn’t generate higher levels and us, what does?

    The thing itself is just doing whatever it does. All those gas molecules create pressure in our balloon whether we understand it or not, and yes, the relationship between the gas and its molecules is really there. Pressure is really there; lots of molecules are hitting the wall.

    Life is really there, but it’s made by stuff interacting according to whatever rules govern the interaction of stuff. I don’t take the view that only the elementary things are there; the stuff they do is there as well.

    I don’t take the view that only the most elementary interactions are ‘fundamental’. Natural selection, e.g., is fundamental too.

    Maybe QM, which appears to be the rules for the rules, is more fundamental in some way. It is the underlying structure that everything, bottom to top, depends on.

    And yes, it can be a good approximation to ignore it at large sizes and high temperatures. It’s not clear what role it plays in biology, but there is some evidence that it plays one. I don’t know how it could, other than as a source of underlying randomness like mutations, but who knows?


  12. Massimo Post author


    # I am curious whether Massimo could expand on why he was so impressed by Dupre’s book. #

    It is true that Dupré’s book was written before recent advances in molecular genomics, but those pertain mostly to empirical discoveries or statistical descriptions. Dupré’s criticism is aimed at the alleged centrality of population genetic theory within the broader scheme of evolutionary theory. There, I think, he is right on target. Popgen is far too simplistic, riddled with assumptions that contradict the empirical data, and adds comparatively little to evolutionary theory. It doesn’t even come close to the analogous role of, say, quantum mechanics or general relativity in physics.

    Moreover, Dupré and I suspect that this state of affairs isn’t the result of biologists being stupid in comparison with physicists, but rather of the fact that biological phenomena are too complex and too contingent to be described by a mathematical theory of that sort (possibly by *any* mathematical theory), except at a very basic and simplistic level.


  13. Coel

    Hi synred,

    Use of such jargon is bad PR, though to be fair I think (but I’m not sure) that Coel brought it up here. He likely got it from Fodor or one of those guys.

    The term “supervenience” is a philosophers’ term, and the reason you’ve not encountered it among physicists is that physicists generally use the term “reductionism” to mean what philosophers mean by “supervenience” — whereas by “reductionism” philosophers generally mean a much stronger notion as espoused by Ernst Nagel. Hence rampant misunderstanding among philosophers of what physicists such as Weinberg intend.

    The concept “supervenience” would usually be defined along the lines: replicate a complete low-level description of an ensemble, and the high-level behaviour would be entailed.

    So, if you got a list of every molecule in a chair (including position, motion, etc), and used that to assemble a duplicate, then the duplicate would function as a “chair” and its high-level properties would be manifest. You would not have to add in high-level properties separately and in addition to the low-level description, since they would be entailed.

    [One then has to make caveats about quantum indeterminacy and deterministic chaos, such that two duplicates would diverge over time.]

    Regarding money: if you had nothing except a low-level molecule-by-molecule description of an entire room containing poker players, and used that low-level description alone to assemble a duplicate of the room and its contents, then the high-level properties of the room — “poker game” and “money” — would also be present in the duplicate. In that sense they supervene on the low-level physical description.

    You would not find that the duplicated room was some sort of “zombie” room without high-level properties such as a “game in progress”, and then find that you have to add in such high-level properties independently, as you would if they were entirely “logically independent” of the low-level description.


  14. Coel

    Hi Robin,

    My very point was that before Nagel, people were using the expression “unity of laws” without there being a clear idea of what this meant. Nagel addresses this very question and showed what a unity of laws would entail. Are you asking me to summarise Nagel?

    The reason I’m not understanding you is that I can interpret that two very different ways. By “what a unity of laws would entail”, do you mean

    “what one particular conception of a unity of laws might entail, though this would not apply to other conceptions of a unity of laws”.

    Or do you mean:

    “what any and all conceptions labelled “unity of laws” would have to entail”? This whole discussion is bedeviled by lack of clarity of exactly that type.

    As I have said often enough, I don’t really know what you mean by ‘supervenience’.

    See my comment just above to synred.


  15. Coel

    Hi Massimo,

    … biological phenomena are too complex and too contingent to be described by a mathematical theory of that sort (possibly by *any* mathematical theory), except at a very basic and simplistic level.

    At the risk of over-repetition, physicists (including Weinberg et al) would entirely agree with you. No-one thinks one could write down a mathematical theory of the behaviour of an elephant, as one can for, say, a planetary orbit. Isn’t that entirely obvious to everyone?

    [Though one could, in principle though of course not in practice, brute-force simulate the entire ensemble, molecule by molecule.]


  16. Robin Herbert


    First you complained that I didn’t have a definition.

    Then, when I pointed out that I was referring to a rather specific definition, you complained about that too.

    As I said, before Nagel nobody had a good idea of what they meant by “unity of laws,” and the discussion didn’t get anywhere. Nagel clearly wasn’t content with arguing eternally at cross purposes using vague and nebulous concepts, so he sat down and thought about what it meant to say that a set of theories was “unified,” about what exactly it was that could unify them.

    So he did, and people were able to show why theories could not be unified in this sense, and we could all move on.

    Now, what about my ‘key question’ from my previous post? That will help me to understand what your claim is.


  17. brodix


    Then there is feedback throughout the entire system. You don’t get pressure from just the atoms bouncing around without the wall, so you can’t build out from any particular frame without taking into account all possible input into that frame and the feedback between frames; you need the whole system in place to truly model the system. Map versus territory.


  18. Coel

    Hi Robin,

    [Nagel] sat down and thought about what it meant to say that a set of theories were “unified”, about what exactly it was that could unify them.

    Sure, but that unification scheme is only one way in which the set of theories could be regarded as “unified”.

    If Nagel’s scheme is wrong, that does not mean that scientific theories are not unified; it only means they are not unified in the way described by Nagel.

    As to your key question:

    But let me ask you a key question: if A is logically dependent on B, doesn’t that imply that there would be, at least in principle, a statement of that logical dependency?

    Sure, there would indeed be a statement of that dependency. That statement would be supervenience (as in my comment to synred just above) along with the doctrine that scientific theories must be mutually consistent.


  19. Coel

    On “logically independent” models:

    If someone were writing science fiction or counter-factual historical novels, the events could be inconsistent with reality and indeed inconsistent with other science-fiction books. That would not be a problem; there is no expectation of mutual consistency between sci-fi books or counter-factual historical novels — indeed, that’s the whole point of them. They are logically independent.

    Now suppose one is writing true historical accounts, “novels” but fully true, about different people’s lives. These would all have to be consistent with basic facts about how the world works: children have mothers and fathers, pigs don’t fly, etc.

    Yet, much of the detail would be independent from book to book. If one’s account were of a citizen in the French Revolution, the day-to-day life would have little dependence on the happenings in the life of a peasant boy in Mongolia two hundred years earlier. There would absolutely be no “bridge law” relations linking everyday events in the French-Revolution account to those in the Mongolian account.

    Yet, the underlying requirement for consistency would still be there. And now suppose that one considered the set of all such truthful historical novels — one for each person in history. The requirement that they all be consistent would then be a very strong thesis, linking them all together and unifying them in a way that would not be the case for the set of all sci-fi novels. The same events would be being seen from different points of view, and being the same events the accounts would have to be consistent.

    Science is, if you like, the set of all “novels” truthfully describing the world. And since the world is unified and consistent, that set is unified by the requirement for consistency. But that is not arguing that one can read an account of the Mongolian peasant boy, and from there apply elementary logic or lines of algebra, and so derive the novel of the citizen in the French Revolution. That idea is, of course, absurd.


  20. Robin Herbert

    Hi Coel,

    Sure, there would indeed be a statement of that dependency. That statement would be supervenience (as in my comment to synred just above) along with the doctrine that scientific theories must be mutually consistent.

    ‘Supervenience’ is a word, not a statement, and doctrinal statements cannot demonstrate a logical dependence.

    So it really does not answer my question.


  21. Robin Herbert

    The supply and demand effect depends upon there being needs and scarcity. So I don’t see how that logically depends upon there being, for example, atoms.

    There is no inconsistency in there being a reality in which there were no particles and no dimensions, and yet there were still needs and scarcity, and therefore a supply and demand effect.


  22. synred

    Not even ’caused by’?

    What do they depend on then? Surely, language doesn’t just float about in nothing?


Comments are closed.