Disclaimer: I’m neither a physicist nor a philosopher of physics. Moreover, I don’t play either role on television! Nonetheless, I’m fascinated by physics, as well as by debates amongst physicists, or between physicists and philosophers. So I perked up when, a couple of weeks ago, the regular colloquium at the Philosophy Program of CUNY’s Graduate Center featured Nina Emery, of Brown University, who gave an unusually lucid talk (given the topic) entitled “Against radical quantum ontologies.”
We have all heard of the wave function, hopefully from a real physicist rather than, say, from Deepak Chopra. It is a fundamental concept in quantum mechanics, being a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. Okay, you may ask, but what is a wave function, physically — rather than mathematically or statistically — speaking? Hell if I know. And apparently, hell if anyone else knows either.
Which is where Nina’s talk comes in. I’m going to follow her handout from now on, adding a few comments here and there. Near the end of the post I will get to why the issue may be of broader interest than “just” understanding what the wave function actually is.
To begin with, Nina introduced wave function realism as the view that all that exists at the fundamental level is a field defined in a highly dimensional physical space, the configuration space, where none of its physical dimensions correspond to the standard three spatial and one temporal ones we are all familiar with. There are two types of wave function realism out there: wave function monism, which claims that all that exists is a field in configuration space, which gives rise directly to our everyday experience of the world; and wave function fundamentalism, which says that what exists at the fundamental level is a field in configuration space, which then gives rise to ordinary objects in 3D space, which in turn we then somehow perceive (i.e., fundamentalists allow for additional transitions when compared to monists).
What Nina set out to do was to build an argument against wave function realism, based on something she calls the minimal divergence norm, which states: “insofar as we have two empirically adequate theories (i.e., two theories that both accurately predict the phenomena we observe), we ought to choose the one that minimizes the difference between how the theory says the world is and the way the world appears to be (to us).”
To use the classical distinction famously introduced by philosopher Wilfrid Sellars, the minimal divergence norm says that we should try to minimize the distance between the scientific and the manifest images of the world.
Nina explained that we should care about this for a couple of reasons: first, because wave function realism is taken increasingly seriously by a number of philosophers and physicists; second, because the minimal divergence norm may be helpful in metaphysics against what she amusingly called “incredulous stare arguments” (i.e., arguments based on some sophisticated version of “are you f**ing kidding me?”).
Nina’s argument can be summarized in the following way:
P1: wave function realism (either of the monist or the fundamentalist type) violates the minimal divergence norm.
P2: we should accept the minimal divergence norm.
C: therefore, we should reject wave function realism.
The argument is valid, which means that the only way to reject it is to doubt one or the other of the two premises (i.e., to question its soundness). Accordingly, Nina proceeded to defend her premises. Before doing that, however, she cautiously added a few caveats, which I’m going to briefly examine here.
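If we read P2 as the rule that any theory violating the minimal divergence norm ought to be rejected, the validity Massimo mentions can be sketched formally. Here is a minimal illustration in Lean 4; the names (`Theory`, `violatesMDN`, `shouldReject`) are mine, chosen for the sketch, not anything from Emery's paper:

```lean
-- Illustrative types and predicates (not Emery's formalism):
variable (Theory : Type) (wfr : Theory)
variable (violatesMDN shouldReject : Theory → Prop)

example
    (p1 : violatesMDN wfr)                        -- P1: WFR violates the norm
    (p2 : ∀ t, violatesMDN t → shouldReject t) :  -- P2: norm-violators should be rejected
    shouldReject wfr :=                           -- C: reject WFR
  p2 wfr p1
```

The conclusion follows by simply applying P2 to P1, which is why the only way to resist the argument is to attack one of the premises, i.e., its soundness rather than its validity.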
First, she explained that her focus is on ontologies compatible with Everett-type (so-called “many-worlds”) interpretations of quantum mechanics, but that the argument applies also (straightforwardly, she said) to Bohmian-type dynamics. Don’t ask, I can’t tell.
Second, she distinguished between her proposed minimal divergence norm and a similar, but more restrictive, no-divergence norm. The latter says that we should not endorse a scientific theory that says that the world is significantly different from the way it appears to us.
This is crucial, and of high interest to me. Basically, while the minimal divergence norm attempts to put a leash on scientific theories and keep them as close as empirically and theoretically possible to the manifest image, the no-divergence norm says that, no matter what, priority should go to the manifest image. The first one is, I think, a reasonable attempt to remind us that scientific theories are human constructions, not god’s eye-views of the world, and that one of their virtues is to make the world understandable to us. The second norm, however, is basically what flat-earthers and creationists happily support: no matter what science tells me, what I see is what I get. Clearly, the minimal divergence norm is a reasonable (if debatable) constraint on science, while the no-divergence norm quickly degenerates into a rejection of science, and possibly into support of pseudoscience.
Nina’s third caveat was that she wasn’t going to propose any positive account of the ontology of the wave function, since that’s a much more complex issue, on which there simply isn’t any agreement, either among physicists or among philosophers.
Caveats duly put out there and set aside, Nina proceeded to defend her first premise, that wave function realism violates the minimal divergence norm. To do that, she identifies one possible alternative to wave function realism, mass-density ontology, according to which what exists fundamentally is a mass-density field in 3D space. (Just like wave function realism, mass-density ontology comes in two flavors, monism and fundamentalism, but the difference is irrelevant here.)
Nina claims that wave function realism diverges more from the manifest image than mass-density ontology, for instance because mass-density ontology at the least includes objects in 3D space, which wave function realism does not.
The general idea is that neither wave function realism nor mass-density ontology contradicts the manifest image (because they both somehow recover the appearance of everyday objects), but they both go beyond the manifest image. The difference is that wave function realism goes further beyond when compared to mass-density ontology. We could say that it is a less parsimonious departure from the manifest image.
Nina then turns to her defense of the second premise, that we should accept the minimal divergence norm. This, for me, is the more crucial point, and one that has far wider applications than this particular issue in fundamental (meta)physics.
Her move is interesting, though certainly controversial. She claims that the minimal divergence norm is the chief — indeed the only — reason that keeps our theorizing from sliding into so-called radical metaphysical notions. Here are some examples of the latter:
* Solipsistic idealism, the notion that I don’t have a physical body and brain, and that all that exists is my mental states.
* Brain-in-a-vat hypothesis. My brain is floating in a vat, receiving sensorial inputs indirectly. The physical world is nothing like what it appears to be.
* Bostrom simulation hypothesis. The physical universe is nothing like physics describes it, it is, rather, a simulation in someone else’s computer.
* Boltzmann brain hypothesis. My brain formed spontaneously in an empty region of space, as a result of a number of coincidences. Again, the physical universe is nothing like what it appears to be.
At first I thought that Nina’s claim that these radical metaphysical hypotheses are incompatible with science was a bit too strong, and that it would have sufficed to say that they are in no way entailed by the current scientific worldview. But upon further reflection I think she is right. Notice the recurrence above of a specification along the lines of “… and the world is nothing like it appears to be.” If any of the radical metaphysical hypotheses were true (and it is possible that one of them is!), then it would not just be the manifest image that would be incorrect, but also the scientific one. When physicists talk about electrons, quarks, strings, and what not, they most certainly do mean that these things are fundamental physical components of our reality. Which would be false if any of the above scenarios actually held.
Further, Nina makes clear two additional points to be considered insofar as this discussion is concerned: i) while it is true that the radical metaphysical hypotheses can be designed so as to present a number of extra-empirical virtues (such as simplicity, elegance, etc.), this is irrelevant unless one also has a reasonable story to explain how those virtues are acquired by scientists and why they should be deployed in a way that favors the metaphysically radical scenarios; ii) her argument for the second premise goes through even if one limits the radical scenario to just the fundamental level, for instance by saying that the Bostrom simulation hypothesis claims that what exists fundamentally is a computer simulation, which is then capable of giving rise to a world of 3D objects.
Nina’s conclusion, which seems reasonable to me, is that “anyone who rejects the minimal divergence norm must either take seriously the radical metaphysical scenarios, or give up a plausible story about how they are ruled out.”
Obviously, there is plenty of room for disagreement with Nina’s argument and conclusions, though I find them quite plausible. Nevertheless, the reason this is very interesting — other than its application to the ontology of quantum mechanical concepts such as the wave function — is because of the broader issue I mentioned earlier: the difference (and tension) between Sellars’ manifest and scientific images of the world.
Indeed, I have been invited to contribute a chapter to a forthcoming book on the common sense tradition in philosophy, to be published by Oxford University Press. My chapter will be on the challenges posed by science to that tradition. As a scientist and philosopher, of course, I wholly realize that science has introduced, and will continue to introduce, notions that depart from “common sense,” or from the manifest image. But as I said above, I also think that science is a human-centered activity that never has achieved, nor ever will achieve, a god’s eye-view of things. Science, in other words, isn’t just in the neutral business of discovering how the world works, it is in the partially subjective business of facilitating human understanding of how the world works.
That is why I find Nina’s notion of the minimal divergence norm useful: we have to allow the scientific image to diverge from the manifest one, or we give up on science altogether. But we also want such divergence to be kept at a minimum, because otherwise we have no criteria to reject non- or even anti-scientific hypotheses, such as the radical metaphysical ones mentioned in Nina’s talk (and a number of others, I would add, like Max Tegmark’s mathematical universe). To give up on the norm of minimal divergence would basically give free rein to metaphysical speculation in science, which I’m pretty positive would not be a good idea.

OK. That’s good.
I agree that if we were really ‘a simulation’ it might be possible in principle to figure it out.
However, such hypotheses are usually framed such that it is impossible to figure it out, making them purely ‘philosophical’ devices.
Thus, there’s really nothing to disagree about. These are two different cases.
Hi DM,
I knew you would be the first one to comment, and to disagree… The two of us are getting a bit predictable after all these years!
“I’m not convinced that it does violate the minimal divergence norm. I certainly agree that it does diverge from the manifest view, but I’m not sure that the divergence is greater than minimal”
I think Nina makes it clear that divergence is relative, so don’t take “minimal” to be an indication of any absolute quantity. Her point is that wave function realism diverges more than mass-density ontology from the manifest image, and she explains why. It has nothing to do with scientific plausibility, since both views are scientifically plausible. If one were actually empirically favored over the other, then the whole argument would be irrelevant, and that view would have to be provisionally accepted.
“I don’t know how you get all the quantum weirdness we have empirically verified out of such a classic view of the world”
Here I can only direct you to check Nina’s papers. It seems to me that she knows what she’s talking about, in terms of the physics, and none of the people present at the colloquium (including some physicists) took her to task on the science.
“I don’t think we should, or at least we should accept it only weakly. A much more important norm for me is parsimony.”
The norm is a type of parsimony, but one that is constrained by a fixed reference point: the manifest image. Frankly, the norm seems very commonsensical to me. As Nina put it, without it there is no way to stop runaway metaphysical speculation.
“I think we ought to try to keep an open mind on such radical metaphysical notions”
Of course, so long as we don’t mistake them for actual science. And that we are willing to pay the price for accepting them — that the scientific image of the world is fundamentally wrong. I had actually never thought of it that way, but it strikes me that Nina is right on target about this.
“Metaphysical speculation in philosophy of science is entirely appropriate.”
Actually, no. Philosophy of science is in the business of studying science. Metaphysical speculation is admissible only in, well, metaphysics!
Coel,
“The “minimal divergence norm” seems sensible, but also seems to be pretty much a restatement of Occam’s razor and parsimony.”
As I wrote above, it is a type of parsimony, but not directly analogous to Occam’s razor, which is concerned with comparing theories among themselves, not with the fixed reference point of the manifest image.
Synred,
“I don’t know what the hell ‘mass density realism’ is. Does it predict anything?”
No, it’s an interpretation, not a scientific theory. Just like there are several interpretations of quantum mechanics, equally compatible with the empirical evidence. (I know Sean Carroll claims otherwise in the case of the multiverse version, but apparently many others disagree.)
“”Stuff” emerges as a consequence of lots of “excitations” and the Pauli”
So you are proposing an excitation ontology…
“I would say that with the ‘should’ in there this not a valid syllogism. Shouldn’t a premise be a premise?”
First, that was simply Nina’s specific phrasing, I think “should” in this context means “ought.” But no, the argument is still valid, even when phrased in probabilistic terms. A lot of arguments are.
Arthur,
Consider that the premise of “reductio ad absurdum” isn’t that when things get hopelessly muddled we need to find a way to smooth everything over, but that back when things did seem obvious some errors were overlooked, and because of them things have become hopelessly muddled.
For the last several decades, the tendency has been to patch the holes, rather than really go back and question why they are there. The premise of Massimo’s post is that science and math seem to be leading us ever deeper into the swamp of detachment from actual experience and so rather than keep adding enormous invisible forces, alternate mathematical dimensions, etc, to patch the theories, maybe we should return to the experienced world and see what might have been overlooked, what shortcuts were taken and became foundational assumptions, etc.
Given that generations of physicists have been trained to the “shut up and calculate” philosophy, as well as the sense that the stranger ideas are, the more likely they must be true, it does seem as though it will take some time to turn the ship.
I can see how probabilistic and even moral ‘ought/should’ syllogisms work. I don’t see how they get us anywhere in this case. P2 is not verifiable in any meaningful way. If somebody thinks I should accept minimalism, then they can draw the conclusion that they think I should not accept things that violate it. Ok.
Synred,
But Nina defended P2, and I gave a sketch of her defense. The short version of the argument simply lists the premises and identifies the conclusion. It is the longer version that defends both premises.
People are always patching holes. Nothing wrong with that. Quite often the patches are successful. When they are not the holes grow, “then comes the revolution.”
It takes awhile. It’s part of the process. You can’t force it. 50 or a 100 years is nothing.
I’d love to be around for the next revolution, but it ain’t likely.
Nina tries to convince us we should accept ‘minimalism.’ It’s still not a crisp argument. It’s still only a ‘should/ought’ that we can take or leave.
I think jbonnicerenoreg has it right. It’s a bit unclear to me what “manifest image” means in this discussion, but it’s safe to assume that we can take the “classical worldview” (like in classical, Newtonian mechanics) as an example of a manifest image when we’re talking about QM.
In CM there are no objects that are waves and particles, you can’t walk through a wall (no QM tunneling) etc. Yet the only thing we have are needles moving on macroscopic (classical) instruments, even when we measure QM properties. There cannot be a “distance” between QM and the manifest image. If there were such a distance, then how could we learn about the QM world by using macroscopic, classical instruments?
I don’t think that the minimum divergence norm is the same as parsimony, in fact they pull different ways in some cases.
For example wave function realism can be defended on the basis of parsimony, as Sean Carroll does for example.
>P1: wave function realism (either of the monist or the fundamentalist type) violates the minimal divergence norm.
This premise is also debatable. QFT (in the form of the Standard Model) does allow for a world like ours. What do they mean, it violates ‘minimal divergence’? Just because electrons don’t behave like little balls? But they don’t. That is observed. Throw one at two slits and it goes through both! As a wave!
We can calculate the ratio of the magnetic moment (which produces a field) to the spin of the electron to about ten decimal places: g/2 = 1.001 159 652 18, using QFT.
Rocks are not the solid stuff of our perception. We can measure that too (x-ray diffraction).
Is that not part of ‘the way the world is and the way the world appears to be (to us)’?
Indeed, I don’t really understand what is meant by diverging from ‘the world as it appears to us.’ Of course a theory has to allow for the ‘world as it appears to us,’ but why should it have to resemble it at the ‘nanoscopic’ level, as long as the ‘nanoscopic’ level can produce the ‘world as it appears to us’?
First, thanks for a really interesting post. Made me think a lot about this stuff on and off all day. I completely agree that the minimal divergence norm is right. The question seems to be what justifications we can provide for it.
Second–and some of my previous posts emphasizing pragmatism might prepare you for this–could it be that one strong justification has to do with the reliable ways that human minds connect with the world, where “reliable” is parsed as efficiency for survival and thriving. What I’m thinking here is that some forms of non-human consciousness, however minimal, can very effectively utilize perceptual cues to navigate effectively in the world. Bees appear to build “mental maps” based on some sort of visual cues; bats seem to do much the same with respect to finding prey and avoiding obstacles but by echolocation. So often single perceptual channels are sufficient to function reliably (as I define it). Moving closer to us through other mammals we have lots of creatures with multiple perceptual channels that integrate lots of information in producing very complex behaviors (mating, predation, avoiding danger, etc.) that are still reliable in the same ways. Certainly the survival of human beings is built upon the same kind of reliability of mind in dealing with reality–at least our manifest image of it. While that isn’t enough to easily justify any particular scientific image of the world, it should give us some pragmatic assurance that minds have dealt with an outside world reliably well, and that provides some justification for a minimal divergence norm. FWIW.
Couvent,
“There cannot be a “distance” between QM and the manifest image. If there were such a distance, then how could we learn about the QM world by using macroscopic, classical instruments?”
Of course there is a divergence. We don’t perceive quarks and electrons, we perceive mid-size objects in three dimensions. That’s the manifest image, the world as it appears to us.
The idea is that: I) as a matter of history, science has produced an image of the world that diverges from the manifest one, and to reject such image would mean to reject science; but II) science is in the business of furthering human understanding, which is inextricably connected to the manifest image, because we are creatures of a particular sort, both in terms of sensorial capacity and intellectual ability. Hence the idea that we want to keep the scientific image from running away from us, so to speak, and give rise to unleashed metaphysical speculation.
Something I forgot to mention: I don’t like the idea that QM is “constrained” by the manifest image (the manifest image taken as classical mechanics).
The manifest image has its own limitations, and there are regions where it can constrain nothing at all. In classical mechanics a particle with insufficient energy can’t get out of a deep potential well. In QM it can (tunneling). You can’t walk through a wall; an electron, in a certain sense, can. Both are experimentally verified facts, therefore they are compatible, and a complete theory would explain both.
Unfortunately, we don’t have this theory. But the manifest image (classical mechanics) should show some humility and stay out of regions where it has nothing to contribute.
I don’t know about science in general. I’m only writing about QM and the manifest image, which I define here as classical, Newtonian mechanics. To believe that the image of the world of QM and of classical mechanics “diverge”, you would have to believe that you can extrapolate from an electron to a football. I don’t see a reason why such an extrapolation would be correct.
Couvent,
The manifest image is not Newtonian mechanics. The latter is already a scientific theory, which already goes beyond the manifest image. It isn’t that QM is constrained, it is that we want science itself not to wander too far, or we don’t understand what it’s telling us.
Ah, I don’t agree. When you’re talking about QM the manifest image is Newtonian mechanics. You have to start somewhere. We all know that the manifest image of heavy objects is that they fall faster than light objects. But I don’t want to be constrained by that particular manifest image.
Massimo;
In what sense does quantum mechanics violate the ‘manifest image’? Electrons and neutrons and other ‘particles’ do manifest both wave-like and particle-like behavior.
Is this all because we can’t picture in our minds what’s going on in realms we have no experience of?
Is there a difference between the norm of minimal divergence and the idea that we should trust our intuitions?
Arthur,
In your first post you made the very lucid observation: “In Quantum Field Theory there are only fields and excitations of the fields (waves).”
Now it would seem to me that space would be a requirement for “fields,” but time would be an effect of “excitations.”
This whole dimensionality of time is due to our minds being based on a recording of events, i.e. narrative, not some underlying physical requirement. Frankly, if a dimension of time were a requirement for “excitations,” why wouldn’t temperature be an even more foundational dimensional requirement?
We can use ideal gas laws to correlate temperature and volume. Wouldn’t “spacetemperature” be a necessary basis for “excitations of the fields”? Or just descriptive of it?
So it seems that the fundamental premise is “we should not take radical metaphysical scenarios seriously.”
Why not? Why not just accept Cartesian doubt and get on with life?
The reason for universal time is that everything exists in this point of the present, as it flows from past to future; but since all actions are effects within the present, each is its own clock, and it doesn’t matter that some are faster or slower.
Also, the question of whether an ontology is radical or not seems to be subjective.
Given she is considering ontologies compatible with EQM it does appear to me that she is already fairly deep into radical territory.
I wonder if there is an objective way to distinguish whether any EQM ontology is less radical than, say, Idealism, BIV, simulation hypothesis etc.
Finishing that thought; That such actions often synchronize would seem to be a consequence of thermal activities.
To further that thought; “entangling particles,” i.e, synchronizing the waves, would be a thermal exchange.
Couvent,
It isn’t a question of you agreeing or not. The manifest image is a concept introduced by Wilfrid Sellars (see SEP link in the OP), and it is not equivalent to Newtonian mechanics. Also, the manifest image is not theory dependent: it’s not that QM has one corresponding manifest image while another physical theory has a different manifest image.
Synred,
QM doesn’t “violate” anything, it departs from the manifest image for the obvious reason that everyday human discourse about the world does not include concepts from quantum mechanics.
Robin,
The norm of minimal divergence is different from intuitions because one can specify in what sense two scientific images depart from the manifest one, and therefore assess whether one does so more than the other. As Nina does in her paper.
As to why we shouldn’t take Cartesian solipsism seriously, three reasons: I) we have no positive reason/evidence for doing so; II) it would amount to a rejection of the scientific image of the world, which is substantiated by plenty of evidence; III) it would be practically useless.
>QM doesn’t “violate” anything, it departs from the manifest image for the obvious reason that everyday human discourse about the world does not include concepts from quantum mechanics.
And why is that a problem? No theory that works is going to be explainable in terms of everyday human discourse. And this is not just theory but experiment. We were driven to QM by experiment: by the black body radiation spectrum (Planck), by the stability of atoms (Bohr and Pauli), by the decay of nuclei (tunneling), etc. The ‘manifest image’ is built on the foundation of QM behavior. Without it there would be no ‘manifest image’. No rocks or blocks or bowling balls or people to throw them.
And QFT doesn’t need fine tuning to get the most basic basics. Fine tuning is needed for ‘details’ like getting the neutron to be heavier than the proton, so the world is not all neutrons and thus likely boring,[a] but ‘stuff’ (neutron stars galore) would exist anyway.
[a] Deep in a neutron star I can imagine it becoming energetically favorable for protons to form. The Pauli exclusion principle prevents neutrons from compacting further than a certain point, but if a proton could go deeper, thus extracting more energy from the gravitational field, then there could be some chemistry going on deep in the stars where protons and electrons could accumulate. I sort of doubt this actually works out, or somebody would already have thought of it. A black hole likely forms before things go to that point, so this is just a fantasy about a different sort of ‘life.’
Cartesian doubt is not solipsism.
The statement “I don’t know if X is true or not” is not the same as “X is not true”.
And in particular it doesn’t mean “Some particular alternative to X is true”
And what exactly is the scientific image of the world?
Is the scientific image of the world that there are infinitely many parallel multiverses?
If not then what?
P.S. A good article by W. Sellars on the manifest image is “Philosophy and the Scientific Image of Man”.
Hi Massimo,
Occam says that you should not multiply entities “unnecessarily”, where the “necessary” is what is demanded by empirical adequacy. Thus Occam has a reference point also, in that to be empirically adequate a theory must explain the “manifest image”. Thus I still don’t think that the concepts are that different. Both could be construed as an instruction to minimize one’s speculation beyond what is empirically substantiated.