[for a brief explanation of this ongoing series, as well as a full table of contents, go here]
Progress in science: different philosophical accounts
The above discussion has largely been framed in terms that do not explicitly challenge the way most scientists think of their own enterprise: as a teleonomic one, whose ultimate goal is to arrive at (or approximate as closely as possible) an ultimate, all-encompassing theory of how nature works, Steven Weinberg’s (1994) famous “theory of everything.” However, the epistemic, semantic and functionalist accounts do not all sit equally comfortably with that way of thinking. Bird’s epistemic approach can perhaps be most easily squared with the idea of teleonomic progress, since it argues that science is essentially about the accumulation of knowledge about the world. The obvious problem, however, is that accumulation of truths, while certainly necessary, is clearly insufficient for a robust sense of progress, since there are countless trivial ways of accumulating factual truths that no one in their right mind would count as scientific advances. (For example, I could spend a significant amount of grant funds to count exactly how many cells there are in my body, then in the body of the next person, and so on. This would hardly lead to any breakthrough in human cell biology.)
Niiniluoto’s semantic approach, based as it is on the idea of verisimilitude, is a little less friendly to the idea of a single ultimate goal for science. We have seen above that Niiniluoto’s way of cashing out “verisimilitude” is locally defined: it provides no way to compare how close we are to the truth on one specific problem or domain of science with how close we are on another. So, for instance, progress toward truth about, say, the neuronal underpinnings of human consciousness has prima facie nothing at all to do with progress in understanding how general relativity and quantum mechanics can be reconciled in cases where they give divergent answers.
Finally, the functionalist approach, which can be traced to Kuhn and has been carried forward by Laudan, among several others, is even less friendly to a broad-scale teleonomic view of science. Just recall Kuhn’s own skepticism about the possibility of a “coherent direction of ontological development” of scientific theories, and his qualified distancing of himself from a “relativist” view of scientific knowledge. If science is primarily about problem solving, as both Kuhn and Laudan maintain, then there is only a limited sense in which the enterprise makes progress, Kuhn’s “evolutionary” metaphor notwithstanding.
But things can get even messier for defenders of a straightforward concept of scientific progress — as, again, I take most scientists to be. As a scientist myself, I had always assumed that there is one thing, one type of activity, we call science. More importantly, though I am a biologist, I automatically accepted the physicists’ idea that — in principle at least — everything boils down to physics, and that it makes perfect sense to go after the above-mentioned “theory of everything.” Then I read John Dupré’s The Disorder of Things (Dupré 1993), and that got me to pause and think hard.
I found Dupré’s book compelling not just because of its refreshing, consciously iconoclastic tone, but also because a great deal of it is devoted to subject matters, like population genetics, that I actually know a lot about, and on which I am therefore in a good position to judge whether the philosopher got it right (mostly, he did). Dupré’s strategy is to attack the idea of reductionism by showing how it doesn’t work in biology. He rejects the notion of a unified scientific method (a position that is nowadays pretty standard among philosophers of science), and goes on to advocate a pluralistic view of the sciences, which he claims reflects both what the sciences themselves are finding about the world (with a multiplication of increasingly disconnected disciplines and the production of new explanatory principles that are stubbornly irreducible to each other) and a more sensible metaphysics (there aren’t any “joints” at which the sciences “cut nature” — Kitcher’s “natural kinds” from above — so that there are a number of perfectly equivalent ways of thinking about the universe and its furnishings).
Dupré’s ideas have a long pedigree in philosophy of science, arguably harking back to a classic and highly influential paper by Jerry Fodor (1974), “Special sciences (or: the disunity of science as a working hypothesis),” and they are connected to Nancy Cartwright’s (1983) How the Laws of Physics Lie and Ian Hacking’s (1983) already mentioned Representing and Intervening.
Let me start with Fodor, whose target was, essentially, the logical positivist idea that the natural sciences form a hierarchy of fields and theories, each (potentially) reducible to the next level down, forming a chain of reduction that ends with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, the latter into chemistry, and then we are almost there. But what does “reducing” mean in this context? At least two things: call them ontological and theoretical. Ontologically speaking, most people would agree that all things in the universe are indeed made of the same substance, be it quarks, strings, branes or whatever; moreover, complex things are made of simpler things. For instance, populations of organisms are collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism.
Theoretical reduction, however, is a different beast altogether, because scientific theories are not “out there in the world,” so to speak; they are creations of the human mind. This means that theoretical reduction, contra a popular assumption among a number of scientists (especially physicists), most definitely does not logically follow from ontological reduction. Theoretical reduction was the holy (never achieved) grail of logical positivism: the ability to reduce all scientific laws to lower-level ones, eventually reaching our by now infamous “theory of everything,” formulated of course in the language of physics. Fodor thinks this will not do. Consider a superficially easy case. Typically, when one questions theory reduction in science one is faced with both incredulous stares and a quick counter-example: just look at chemistry. It has been so successfully reduced to physics, the story goes, that these days there basically is no meaningful distinction between chemistry and physics. But on closer scrutiny there are two problems with this move: first, the example itself is questionable; second, even if true, it is arguably more an exception than the rule.
As Weisberg et al. (2011) write: “Many philosophers assume that chemistry has already been reduced to physics. In the past, this assumption was so pervasive that it was common to read about ‘physico/chemical’ laws and explanations, as if the reduction of chemistry to physics was complete. Although most philosophers of chemistry would accept that there is no conflict between the sciences of chemistry and physics, most philosophers of chemistry think that a stronger conception of unity is mistaken. Most believe that chemistry has not been reduced to physics nor is it likely to be.” For instance, both Bogaard (1978) and Scerri (1991, 1994) have raised doubts about the feasibility of reducing chemical accounts of molecules and atoms to quantum mechanics. Weisberg et al. (2011) add examples of difficult reductions of macroscopic to microscopic theories within chemistry itself (let alone between chemistry and physics), even in what are at first glance obviously easy cases, like the concept of temperature. I will refer the reader to the literature cited by Weisberg et al. for the fascinating arguments that give force to these sorts of cases, but for my purposes here it suffices to note that the alleged reduction has been questioned by “most” philosophers of chemistry, which ought to cast at least some doubt on even this oft-trumpeted example of theoretical reduction. Another instance, closer to my own academic home field, is Mendelian genetics, which has also not been reduced to molecular genetics, contrary to what is commonly assumed by a number of geneticists and molecular biologists (Waters 2007). In this case one of the problems is that there are a number of non-isomorphic concepts of “gene” being used in biology, which gets in the way of achieving full inter-theoretical reduction.
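To see why even the temperature case is less trivial than it first appears, it helps to spell out the textbook identification that the philosophers of chemistry press on. The following is my own toy sketch, not drawn from Weisberg et al.: for an ideal monatomic gas at equilibrium, kinetic theory identifies temperature with mean molecular kinetic energy via T = 2⟨KE⟩/(3k_B).

```python
# Toy sketch of the kinetic-theory "reduction" of temperature.
# Valid only for an ideal monatomic gas at equilibrium: T = 2<KE> / (3 k_B).
k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_kinetic_energies(kinetic_energies_joules):
    """Macroscopic temperature recovered from microscopic kinetic energies."""
    mean_ke = sum(kinetic_energies_joules) / len(kinetic_energies_joules)
    return 2 * mean_ke / (3 * k_B)

# Sanity check: at T = 300 K the mean kinetic energy is (3/2) k_B T,
# and feeding that back in recovers roughly 300 K.
mean_ke_300K = 1.5 * k_B * 300
print(temperature_from_kinetic_energies([mean_ke_300K] * 10))
```

The point of the philosophical literature is precisely that this tidy identification does not generalize: for solids, radiation, or systems out of equilibrium the microscopic story underlying “temperature” is different, so even the favorite “easy” reduction is patchwork rather than a single bridge law.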
Once we begin to think along these lines, the problems for the unity of science thesis — and hence for straightforward accounts of what it means to have scientific progress — are even worse. Here is how Fodor puts it, right at the beginning of his ’74 paper: “A typical thesis of positivistic philosophy of science is that all true theories in the special sciences [i.e., everything but fundamental physics, including non-fundamental physics] should reduce to physical theories in the long run. This is intended to be an empirical thesis, and part of the evidence which supports it is provided by such scientific successes as the molecular theory of heat and the physical explanation of the chemical bond. But the philosophical popularity of the reductivist program cannot be explained by reference to these achievements alone. The development of science has witnessed the proliferation of specialized disciplines at least as often as it has witnessed their reduction to physics, so the widespread enthusiasm for reduction can hardly be a mere induction over its past successes.” In other words, echoing both Fodor and Dupré one could argue that the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, historical induction points the other way around from the commonly accepted story.
It turns out that even some scientists seem inclined toward at least a bit of skepticism concerning the notion that “fundamental” physics is so, well, (theoretically) fundamental. (It is, again, in the ontological sense discussed above: everything is made of quarks, or strings, or branes, or whatever.) During the 1990s the American scientific community witnessed a very public debate concerning the construction of the Superconducting Super Collider (SSC), the proposed predecessor of the Large Hadron Collider that recently led to the discovery of the Higgs boson. The project was eventually nixed by the US Congress because it was too expensive. Steven Weinberg testified in front of Congress on behalf of the project, but what is less known is that some physicists testified against the SSC, and that their argument was based on the increasing irrelevance of fundamental physics to the rest of physics — let alone to biology or the social sciences. Here is how solid state physicist Philip W. Anderson (1972) put it early on, foreshadowing the arguments he later used against Weinberg at the time of the SSC hearings: “The more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for a fundamental theory of everything.
Let us go back to Fodor and why he is skeptical of theory reduction, again from his ’74 paper: “If it turns out that the functional decomposition of the nervous system corresponds to its neurological (anatomical, biochemical, physical) decomposition, then there are only epistemological reasons for studying the former instead of the latter [meaning that psychology couldn’t be done by way of physics only for practical reasons, it would be too unwieldy]. But suppose there is no such correspondence? Suppose the functional organization of the nervous system cross cuts its neurological organization (so that quite different neurological structures can subserve identical psychological functions across times or across organisms). Then the existence of psychology depends not on the fact that neurons are so sadly small, but rather on the fact that neurology does not posit the natural kinds that psychology requires.” And just before this passage in the same paper, Fodor argues a related, even more interesting point: “If only physical particles weren’t so small (if only brains were on the outside, where one can get a look at them), then we would do physics instead of paleontology (neurology instead of psychology; psychology instead of economics; and so on down). [But] even if brains were out where they can be looked at, as things now stand, we wouldn’t know what to look for: we lack the appropriate theoretical apparatus for the psychological taxonomy of neurological events.”
The idea, I take it, is that when physicists say that “in principle” all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask what principle, exactly, they are referring to. Fodor contends that if one were to call the physicists’ epistemic bluff, they would have no idea where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics. There is, it seems, no known “principle” that would guide anyone in pursuing such a quest — a far more fundamental issue than the one imposed by merely practical limits of time and calculation. To provide an analogy: if I told you that I could, given the proper amount of time and energy, list all the digits of the largest known prime number, but then declined to actually do so because, you know, the darn thing’s got 12,978,189 digits, you couldn’t have any principled objection to my statement. But if instead I told you that I can prove that there is an infinity of prime numbers, you would be perfectly within your rights to ask me for at least the outline of such a proof (which exists, by the way), and you should certainly not be content with any vague gesturing on my part to the effect that I don’t see any reason “in principle” why there should be a limit to the set of prime numbers.
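Both halves of the analogy can be made concrete in a few lines. Assuming the prime in question is the 2008 record Mersenne prime 2^43112609 − 1 (which does have 12,978,189 digits, matching the figure above), its digit count can be computed without ever writing the number out; and Euclid’s classic proof outline of the infinitude of primes rests on an equally simple construction. A sketch of my own:

```python
import math

# Decimal digit count of the Mersenne number 2^p - 1: since 2^p is never
# a power of 10, it has the same number of digits as 2^p itself.
def mersenne_digits(p):
    return math.floor(p * math.log10(2)) + 1

print(mersenne_digits(43112609))  # 12978189, the figure quoted in the text

# Euclid's outline: given any finite list of primes, their product plus one
# is divisible by none of them, so some prime must be missing from the list.
# Hence no finite list can contain all the primes.
def euclid_witness(primes):
    product = 1
    for p in primes:
        product *= p
    return product + 1

w = euclid_witness([2, 3, 5, 7, 11, 13])  # 30031 = 59 * 509
assert all(w % p != 0 for p in [2, 3, 5, 7, 11, 13])
```

The contrast is exactly the one in the text: the first computation is a concrete promissory note anyone could in principle redeem, while the second is an actual outline of a proof, not a vague gesture at one.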
Tantalizing as the above is for a philosopher of science like myself, in order to bring us back to our discussion of progress in science we need some positive reasons to take seriously the notion of the impossibility of ultimate theory reduction, and therefore to contemplate the idea of a fundamental disunity of science and what it may mean for the idea of progress within the scientific enterprise. Cartwright (1983) and Hacking (1983) do put forth some such reasons, even though of course there have been plenty of critics of their positions. Cartwright has articulated a view known as theory anti-realism, which implies a denial of the standard idea — almost universal among scientists, and somewhat popular among philosophers — that laws of nature are (approximately) true generalized descriptions of the behavior of things, especially particles (or fields, doesn’t matter). Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.
The implication here is that our models of reality are not true, and therefore that — strictly speaking — laws of nature are false. The idea of laws of nature (especially with their initially literal implication of the existence of a law giver) has been controversial since it was championed by Descartes and opposed by Hobbes and Galileo [5], but Cartwright’s suggestion is rather radical. She distinguishes between two ways of thinking about laws: “fundamental” laws are those postulated by the realists, and they are meant to describe the true, deep structure of the universe. “Phenomenological” laws, by contrast, are useful for making empirical predictions, they work well enough for that purpose, but strictly speaking they are false.
Now, there are a number of instances in which even physicists would agree with Cartwright. Take the laws of Newtonian mechanics: they work well enough for empirical predictions (within a certain domain of application), but we know that they are false if understood as truly universal (precisely because they have a limited domain of application). According to Cartwright, however, all laws and scientific generalizations, in physics as well as in the “special” sciences, are just like that: phenomenological. [6] And there are plenty of other examples: nobody, at the moment, seems to have any clue about how to even begin to reduce the theory of natural selection, or economic theories, for instance, to anything below the levels of biology and economics respectively, let alone fundamental physics. If Cartwright is correct (and Hacking argues along similar lines), then science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each of which would of course be characterized by its proper domain of application.
Here is how Cartwright herself puts it, concerning physics in particular: “Neither quantum nor classical theories are sufficient on their own for providing accurate descriptions of the phenomena in their domain. Some situations require quantum descriptions, some classical and some a mix of both.” And the same goes, a fortiori, for the full ensemble of scientific theories, including all those coming out of the special sciences. So, are Dupré, Fodor, Hacking and Cartwright, among others, right? I don’t know, but it behooves anyone who is seriously interested in the nature of science to take their ideas seriously. If one does that, then it becomes far less clear that “science” makes progress, although one can still articulate a very clear sense in which individual sciences do.
The goal of this chapter was to show that the concept of progress in science — which most scientists and the lay public seem to think is uncontroversial and self-evident — is anything but. This does not mean at all that we do not have good reasons to think that science does, in fact, make progress. But when scientists in particular loudly complain that philosophy doesn’t progress, they should be reminded that it is surprisingly difficult to articulate a coherent and convincing theory of progress in any discipline, including their own — where, by their account, it ought to be a no-brainer. In the next chapter we will pursue our understanding of progress in different fields of inquiry by turning to mathematics and logic, where I think the concept definitely applies, but in a fashion that is interestingly distinct from the sense(s) in which it does in science. And it will be from a better understanding of progress in both science(s) and mathematics-logic that we will eventually be in a position to articulate how philosophy (or at least certain fields within it) is also progressive.
Notes
[5] See my lay summary of this in: Are there natural laws?, by M. Pigliucci, Rationally Speaking, 3 October 2013 (accessed on 6 August 2015).
[6] Interestingly, some physicists (Smolin 2007) seem to provide support for Cartwright’s contention, to a point. In his The Trouble with Physics Smolin speculates that there are empirically intriguing reasons to suspect that Special Relativity “breaks down” at very high energies, which means that it would not be a law of nature in the “fundamental” sense, only in the “phenomenological” one. He also suggests that General Relativity may break down at very large cosmological scales.
References
Anderson, P.W. (1972) More is different. Science, 177:393-396.
Bogaard, P.A. (1978) The limitations of physics as a chemical reducing agent. Proceedings of the Philosophy of Science Association 2:345–356.
Cartwright, N. (1983) How the Laws of Physics Lie. Oxford University Press.
Dupré, J. (1993) The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Harvard University Press.
Fodor, J. (1974) Special sciences (Or: the disunity of science as a working hypothesis). Synthese 28:97-115.
Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge University Press.
Scerri, E. (1991) The electronic configuration model, quantum mechanics and reduction. British Journal for the Philosophy of Science 42:309–325.
Scerri, E. (1994) Has chemistry been at least approximately reduced to quantum mechanics? In: D. Hull, M. Forbes and R. Burian (eds.), PSA 1994 (Vol. 1), Philosophy of Science Association.
Smolin, L. (2007) The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next. Mariner Books.
Waters, K. (2007) Molecular genetics. Stanford Encyclopedia of Philosophy (accessed on 6 August 2015).
Weinberg, S. (1994) Against philosophy. In: Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. Vintage.
Weisberg, M., Needham, P., and Hendry, R. (2011) Philosophy of chemistry. Stanford Encyclopedia of Philosophy (accessed on 6 August 2015).

Hi Massimo,
This whole article has an undercurrent of scientists, “especially physicists”, being a bit naive about reductionism, whereas the philosophers have realised that there are problems with it.
So let me (at the risk of repeating well-trodden ground) put the counter-argument. The idea that is here called “theoretical reduction” didn’t originate from physicists; it originated — as the article says — from the Logical Positivists, who were engaged in a highly normative program of how they thought science and academic enquiry should be conducted.
Unfortunately for the Logical Positivists, nature isn’t like that. Their brand of “theoretic reduction” doesn’t work — not just in the (weirdly named) “special sciences”, but in any science, not even fundamental physics.
This brand of “theoretic reduction” has never had much standing or traction among physicists (most would not even know about it). Some philosophers have attributed this view to physicists because that’s how the Logical Positivists saw it, but that has never been what physicists meant by “reduction” or by phrases like “theory of everything”.
Rather, the physicists’ view of what “reductionism” means is the ontological thesis. That alone is sufficient for a “unity of science” thesis. Why? Because ontological reductionism has one big implication for epistemology: namely, that different models at different levels of analysis need to be fully consistent with each other.
That requirement for inter-consistency between different levels of analysis is what physicists mean by “reductionism”. It is a vastly weaker thesis than the Logical Positivists’ “theoretic reductionism”, and yet it is still a very powerful thesis that underpins a lot of how science is actually done. Ensuring that different models are consistent with each other is a very powerful tool of science.
It is in that sense that chemistry “reduces to” physics. But no-one who has ever done chemistry thinks that chemistry inter-theoretically reduces to physics in the Logical Positivists’ sense. Nor have I heard physicists asserting that.
The incredulous stares, I suggest, arise largely from people not understanding that philosophers and physicists mean different things by the term “reductionism”.
Another good addition to the book, even though I’m looking forward to reading it all together to keep it more coherent.
I think we should drop the word reductionism in an ontological sense all together as it is so misleading. Perhaps ontological integration is better.
Coel,
Would you also say it the other way around — that physics reduces to chemistry, or even psychology? If your view is that all sciences are fully consistent with one another, then wouldn’t that apply in every direction?
Hi Imad,
What (it seems to me) a physicist means by “reductionism” is the ontological thesis of supervenience physicalism. Thus psychology and chemistry supervene on physics but not the other way round.
That doctrine then implies the doctrine that theories at different levels must be fully consistent with each other, and yes that is fully symmetric, running both upwards and downwards between levels of description.
It is worth emphasizing that of course physics does not start from the fundamental level and build from there. It starts at the middle-scale level (everyday objects and telescopes and microscopes) and from there extends — epistemologically — both down to the microscopic fundamental level and also to the larger-scale ensemble level (e.g. galaxies).
Thus the only way we can ever do “fundamental physics” is through the doctrine that theories must be consistent with the middle-scale level (and indeed all levels).
Thus epistemologically “reductionism” does not privilege the lower-level descriptions. The name “reductionism”, though, comes from the ontological side of the thesis.
Hi Coel
I would have to disagree with Massimo that the Logical Positivists thought that science was reducible in this sense. In fact, they would have regarded such a claim as metaphysical, and entirely counter to their position.
They wanted to unify science at the level of observational language. See, for example, Neurath’s essay “Physicalism”.
As I’ve pointed out multiple times on this blog (and its predecessor) this whole topic is bedeviled by miscommunication.
The physicist assumes a supervenience thesis requiring consistency between different levels of description, and uses the term “reductionism” for it (having no idea that philosophers mean something else by the term).
The philosopher interprets that as declaring for the much stronger thesis of Nagel-style inter-theoretic reductionism and so tells the physicist that “reductionism” doesn’t work.
The physicist interprets that as a declaration that supervenience is false and that inter-theoretic consistency is not required. The physicist thus looks aghast.
The philosopher then interprets the physicist’s reaction as being naive and hide-bound to Nagel-style inter-theoretic reductionism, and concludes that the physicists needs to think harder and learn some philosophy.
Meanwhile the physicist, thinking that the philosopher is effectively rejecting supervenience, concludes that philosophers are New-Agey Post-Modernists who have little idea how the world actually is, and who can thus be dismissed and ignored.
I try my best to resolve miscommunication between the two camps, though so far have not got anywhere. 😦
Coel,
In your first comment, I get the impression your historical analysis is inaccurate, as is what you’re reading into Massimo’s tone. As for Massimo’s point you’re replying to, I think your comment might actually be supporting it:
# ontological reductionism has one big implication for epistemology: that is that different models at different levels of analysis need to be fully consistent with each other #
I read that as ontological reductionism implies that theoretical models at different levels of analysis need to reduce to each other or “be fully consistent with each other” (which at least seems to be as strong a claim if not stronger). But then you say:
# This brand of “theoretic reduction” has never had much standing or traction in among physicists (most would not even know about it) #
Can you elaborate on those two quotes with a short and clear contrast between what you think this brand of “theoretic reduction” is and what you think your brand of “theories fully consistent with each other” is?
Massimo,
I’m glad I had the morning off and could take my time reading the last section of the chapter, and I enjoyed it, in particular, what you said about Cartwright’s views, and I’ll also definitely be looking at Dupré’s The Disorder of Things.
Hi marc,
The “inter-theoretic reduction” as defined by Ernst Nagel required very tight relations between statements at one level of description and those at another level of description. For example, quoting from SEP:
“As a consequence, Nagel introduces two “necessary formal conditions” required for the reduction to take place:
1) Connectability. “Assumptions of some kind must be introduced which postulate suitable relations between whatever is signified by ‘A’ [the term to be reduced, that is, an element of the vocabulary of theory T′] and traits represented by theoretical terms already present in the primary [reducing] science.”
2) Derivability. “With the help of these additional assumptions, all the laws of the secondary science, including those containing the term ‘A,’ must be logically derivable from the theoretical premises and their associated coordinating definitions in the primary discipline.” (Nagel, 1961, pp. 353–354)”
This idea doesn’t work (perhaps it might in particular cases, but not in general) and it is this idea that was refuted by Fodor and others.
The above is a vastly stronger conception than mere consistency.
Take, for example, the two descriptions:
“Tom loves chocolate and that’s why he chose the chocolate ice cream over the lemon sorbet”.
“Here is a list of every particle in Tom’s body and every one of them is behaving in a manner described by the Schroedinger equation”.
Those two descriptions are entirely consistent, in the sense that both of them can be entirely true at the same time. But, the two descriptions cannot be linked together with the “bridge laws” required by Nagel’s conception.
Thus, the above two descriptions are entirely compatible with a physicist’s conception of reduction (supervenience, and consistency between levels of description) but not with Nagel-style inter-theoretic reduction.
Possibly it should be kept in mind that perception and rationality are fundamentally reductionistic processes. Signal from the noise, remember?
So reductionism is to extract some pattern and order from the larger dynamic, necessarily editing away all that is superfluous to one’s preferred goal. Thus all the various sciences gravitate into their particular domains and assume there must be some larger domain into which they all fit, from TOE to God.
Sad to say, there isn’t. As order/structure accumulates, energy dissipates.
“The more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.”
” Ontologically speaking, most people would agree that all things in the universe are indeed made of the same substance, be it quarks, strings, branes or whatever; moreover, complex things are made of simpler things. For instance, populations of organisms are collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism.”
Yet the simplest form, be it a circle, flatline, point, etc, is the most stable and thus least dynamic. So energy necessarily moves to the less stable and more disordered forms. The point at which the wave is most defined is when it crests.
Consequently, the more we grasp at nature, the more slippery it seems to become.
I think of two analogies when explaining reductionism. The first is the recipe analogy: one gets a bunch of ingredients and rules or directions for putting them together and voilà, a birthday cake. The second is the building analogy: one gets a bunch of wood and the directions for building and — a house. The first analogy is the more usual one in the experiential world, while the second is the more atomic, with things being built up of the same element in different configurations.
The logical positivist reduction was a mathematical reduction which was supposed to mirror physical reality.
Coel,
I don’t think I’m following you.
Could you elaborate on what you mean by scientific theories at different levels being “fully consistent with each other” in terms of scientific fields, say physics and biology?
Coel,
Fair enough. I think supervenience, as I understand it from Wiki and the SEP, is not typically seen as reductionistic; it basically says that each field is constrained by the other, and that there is information at one level that cannot be acquired at another. That is consistent with your point, though, that physicists and philosophers are using the word reductionism differently (see more on that below).
I would probably speak about this topic more in terms of integration, and how different fields and bodies of knowledge fit together and do not contradict each other. That’s partly because I’m not as familiar with supervenience, but also partly because it has the potential of being misinterpreted as “lower levels are more important”. As before, Haack puts the view I’m trying to articulate better, so I’m going to quote her:
Eh, blockquote fail. Everything starting with “My only note of skepticism…” is my writing.
Great cartoon! Caption is hard to read on blog, but I guess it will show up better in book.
Synred,
If you click on any image it will open in all its splendor in a new window.
brodix: Possibly it should be kept in mind that perception and rationality is a fundamentally reductionistic process
One of your better points! As I understand it, the brain has already applied a lot of conceptual filtering before ‘you’ are aware of it.
Exploiting this is one of the ‘tricks’ used by magicians… and why scientists should not have been testing Uri Geller by themselves.
Yes, I did that. Thanks.
Also, copied it because I like it so much.
Hi Imad,
That Susan Haack quote seems very sensible to me; I agree. And yes, my perspective does come from being a physicist and knowing about physics culture. I’m much less familiar with the behavioural sciences.
Hi marc,
What I mean by “consistent” is that they can both be fully true at the same time. Thus they cannot be inconsistent in the sense of one level of analysis saying “X happens” and another saying “X does not happen”.
Arthur,
Thank you.
It does go to a point I’ve made previously:
“For instance, populations of organisms are collections of individuals.”
What gets overlooked here is that as individual forms coalesce and dissolve, i.e., are born, live and die, they essentially go from being in the future to being in the past. Meanwhile the population as a whole, at the species level, is constantly moving on to future generations and shedding old ones, going past to future.
Consequently those forms essentially move one direction of time, as the process moves the opposite direction. Much as thoughts come into being and fade, future to past, while the mind moves on to new ones, past to future.
Or in a factory, the product goes start to finish, while the process moves the other way, consuming raw material and expelling finished product.
So this process of reducing/refining to ever more distilled formulations exists within a larger context, that is recycling all that is excised.
Reductionism constitutes an action and so there is the equal and opposite reaction.
How much does this have to do with the OP, on “progress” in science, when the result is ever more specialization and intersectional conflict? Not to say connections are not made, but they are also part of the process, in that every new relationship places stresses on other relationships.
Reductionism constitutes an action and so there is the equal and opposite reaction
…but there you go again stringing words together w/o knowing what they mean. Reductionism is not that kind of action. Newton’s Laws don’t apply. It’s not even a good metaphor.
You need to lay off physics or learn some.
-Arthur
Hi marc et al,
Let me amplify my previous comment (at the risk of hogging the thread) since this is important for how “reduction” in science does work.
Take the situation: “Tom loves chocolate and that’s why he chose the chocolate ice cream over the lemon sorbet”.
Now, if we were to do the thought experiment of exactly replicating, particle by particle, everything about Tom and his local environment, then, since every particle is obeying the same physical laws, the ensemble would behave the same way, and thus we would expect the replicated-Tom to also like chocolate and make the same choice.
[Let’s leave aside that this is impossible in practice, and also ignore quantum indeterminacy and deterministic chaos, which would lead to the two identical Toms then diverging in behaviour over time.]
The above is what I mean by a “consistency” thesis. Thus, if one had a *perfect* low-level, particle-by-particle description, then one could work out what the high-level-description behaviour actually was by the suck-it-and-see, brute-force approach of replicating or simulating the entire ensemble, and then watching what it does. In that sense the high-level behaviour is entailed by the low-level description.
But, that brute-force, simulate-the-entire-ensemble approach might be the only way of linking together the high-level description with the low-level description. Even then, the high-level **description** would not be present in the simulated ensemble, only the higher-level *behaviour* would be present. That high-level behaviour would still need *describing*.
That is directly contrary to Nagel’s conception, which wanted relatively straightforward and simple “bridge laws” linking the *descriptions* at the different levels.
Now, on to why Nagel’s conception does not work even in fundamental physics:
Take a proton, which is made up of quarks and gluons. We also have a theory quantum chromodynamics, which does a very good job of describing quarks, gluons and their interactions. So, it should be straightforward enough to calculate the mass of the proton in terms of the properties of the constituent quarks and gluons, should it not?
Well, no. In practice the complications are so messy that the only way to arrive at the mass of the proton is to do a full brute-force simulation of the ensemble on a supercomputer, involving (literally) millions of possibilities. In other words, you cannot even write down a “bridge law” relating something as basic as the mass of something as simple as a proton to the properties of its constituents.
The way that nature seems to be is that larger-scale things are ensembles of smaller-scale stuff, but their interactions are always so messy and complicated that Nagel-style inter-theoretic reduction just doesn’t work. Yet, despite that, the fact that a full-scale simulation of the low-level behaviour of a system must be entirely consistent with the high-level behaviour of the system, and thus that everything must mesh together, is still a powerful statement and one of the central tools of science.
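Coel’s “simulate the whole ensemble and watch” point can be illustrated with a deliberately crude toy (my own sketch, not anything from the thread): a one-dimensional cellular automaton whose low-level update rule is trivial, yet whose high-level behaviour is, in practice, obtained only by running the whole thing and then describing what one sees. The automaton here merely stands in for “particles obeying the same physical laws”; it is an analogy, not a physical model.

```python
# Toy illustration of the "consistency without bridge laws" idea:
# the low-level rule is fully specified and simple, but the only way
# to get at the high-level behaviour is brute-force simulation of the
# whole ensemble, followed by *describing* what emerges.

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule to one row of cells.

    The neighbourhood (left, centre, right) is read as a 3-bit number,
    which indexes into the bits of the rule number (wrapping at edges).
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=100):
    """Evolve a single 'seed' cell for a number of steps."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = step(cells)
    return cells

final = run()
# The high-level quantity (here, just the density of live cells) is not
# given by any simple "bridge law" written in terms of rule 110; we
# obtain it only by running the ensemble and looking.
density = sum(final) / len(final)
print(density)
```

The design point of the toy: nothing in the 8-bit rule table mentions “density” at all, just as nothing in the particle-level description mentions “Tom likes chocolate”; the high-level description has to be supplied after the fact.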
Coel,
# What I mean by “consistent” is that they can both be fully true at the same time. Thus they cannot be inconsistent in the sense of one level of analysis saying “X happens” and another saying “X does not happen”. #
Going back to your original proposition:
# but that has never been what physicists meant by “reduction” or by phrases like “theory of everything”. Rather, the physicists’ view of what “reductionism” means is the ontological thesis. That alone is sufficient for a “unity of science” thesis. Why? Because ontological reductionism has one big implication for epistemology: that is, that different models at different levels of analysis need to be fully consistent with each other. #
So rather than scientific ‘models at different levels of analysis need to be fully consistent with each other’ you mean scientific ‘models at different levels of analysis need to not contradict each other’, and this is what you think most physicists mean when they talk of reduction?
Last question for now: do you think that within the field of physics today, between different levels of analysis within the field, there is or isn’t conflict or contradiction between models in the above sense?
> “If one does that, then it becomes far less clear that “science” makes progress, although one can still articulate a very clear sense in which individual sciences do.”
If individual sciences make progress in a very clear sense, then how is it possible that it’s far less clear that “science” makes progress?
It’s obvious that progress in one science does not always count as progress in another science. If somebody formulates a new theory of continental drift, chances are almost zero that there’ll be something new in it for physicists. When physicists discovered the Englert-Brout-Higgs boson, I don’t think this counted as progress in paleontology.
But to go from this obvious observation to the conclusion that it’s far less clear that “science” makes progress when there’s a new, better theory of continental drift or when physicists discover the Englert-Brout-Higgs boson, one needs a pretty particular definition of what constitutes science. Perhaps the definition is the problem?
Apparently, we don’t really know why nor in what sense scientific statements about objective reality are true or how they correspond to this objective reality. Moreover, every scientific theory is, perhaps, in principle false. Science is a patchwork of fundamentally unrelated disciplines, each with their own methods and aims. Sometimes they make very clear individual progress – but, and this is important, they make progress by the standards of their own individual disciplines (there are no generally accepted scientific standards, or so it seems).
Yet, scientists and other people think “science” makes progress.
If scientists are allowed to claim progress in those circumstances, then philosophers are allowed to claim progress too. That much is sure.
However, we’re coming close to admitting that psycho-analysis makes progress too, and that phrenology made progress in the 1830s. Is there anything left that doesn’t make progress?
Arthur,
So formulating a thought or perception out of compiled information (thus discarding excess information) is not an act?
Maybe we should wait until the logic section, so there isn’t as much territorialism over this.
I want an edit function now!
A thorough philosophical evaluation of scientific progress needs criteria that tell you what *doesn’t* count as progress too (and what counts as regress). Moreover, the same criteria should be used to judge progress and to judge regress (or standstill).
If one claims that phrenology didn’t make scientific progress because it is “unscientific” or “doesn’t describe reality correctly”, one has, implicitly or explicitly, a clear idea of what constitutes science or what it means that statements “correspond with reality”. It means that in an evaluation of scientific progress, we also need to know what science is, and how and to what extent it describes reality.
Coel: Take a proton, which is made up of quarks and gluons. We also have a theory quantum chromodynamics, which does a very good job of describing quarks, gluons and their interactions. So, it should be straightforward enough to calculate the mass of the proton in terms of the properties of the constituent quarks and gluons, should it not?
In practice things are worse than that. You end up having to put the masses of the quarks, etc., in by hand, as they come out infinite when you attempt to calculate them from first principles. This is fixed by the awkward procedure called renormalization.
String theory is supposed to tame these infinities, but so far can’t compute things like quark or electron masses either. Some infinities do vanish in 10 dimensions. The taming is supposed to be due to the fact that the elementary constituents are finite sized strings rather than infinitely small point particles. Why 10 dimensions are needed, I have no idea: (D-10) just appears multiplying some of the infinite terms.
http://smile.amazon.com/Superstring-Theory-Introduction-Monographs-Mathematical/dp/0521323843?ie=UTF8&keywords=Ed%20Witten%20string%20theory&qid=1462566650&ref_=sr_1_fkmr2_1&sr=8-1-fkmr2
I always ask string theorists if they can compute g-2 [a] w/o resorting to renormalization. They can’t. I’m not sure they can even get the ‘2’.
[a] g is the ratio of the magnetic moment of an electron to its spin. To first order it is 2 in QED, but there are corrections. We call the corrections g-2. Using renormalization, g-2 can be computed to very high precision. It’s been experimentally verified to something like 8 decimal places. Last I looked, small discrepancies were showing up at the 9th, though even those may have been resolved by now.
One of the greatest theoretical successes of all time and a place where I think a tiny change would have no effect on day-to-day life. No need for WAP or gods. If strings could do something like that they’d be in business.
See https://en.wikipedia.org/wiki/Precision_tests_of_QED
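As a hedged aside (my own sketch, not synred’s): the leading term of the QED series for the electron anomaly is Schwinger’s famous α/2π, which by itself already lands within roughly 0.15% of the measured value. The many further digits synred mentions come from the higher-order, renormalization-heavy terms.

```python
# Leading-order QED value of the electron anomaly (g-2)/2.
# Only the first (Schwinger) term of the perturbation series is shown;
# the constants below are approximate published values, quoted from memory.
import math

alpha = 1 / 137.035999084        # fine-structure constant (approx. CODATA value)

a_leading = alpha / (2 * math.pi)  # Schwinger's first-order term for (g-2)/2

a_experiment = 0.00115965218       # measured electron anomaly, approximate

print(f"leading-order (g-2)/2 = {a_leading:.8f}")
print(f"experimental  (g-2)/2 = {a_experiment:.8f}")
# The single first-order term already agrees with experiment to ~0.15%;
# the higher-order corrections close most of the remaining gap.
```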
If all the variants of string theory ruled out DNA, then we could I think safely exclude string theory. The prospects of this happening are small.
Lisa Randall in ‘Dark Matter and the Dinosaurs’ does claim that dark matter makes progress in paleontology.
The book is, however, over-hyped crap full of bad statistics (admitted), bad paleontology and even some bad physics.
I don’t think it represents a threat to the consistency of physics and paleontology.