Disclaimer: I’m neither a physicist nor a philosopher of physics. Moreover, I don’t play either role on television! Nonetheless, I’m fascinated by physics, as well as by debates among physicists, or between physicists and philosophers. So I perked up when, a couple of weeks ago, the regular colloquium at the Philosophy Program of CUNY’s Graduate Center featured Nina Emery, of Brown University, who gave an unusually lucid talk (given the topic) entitled “Against radical quantum ontologies.”
We have all heard of the wave function, hopefully from a real physicist rather than, say, from Deepak Chopra. It is a fundamental concept in quantum mechanics, being a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. Okay, you may ask, but what is a wave function, physically — rather than mathematically or statistically — speaking? Hell if I know. And apparently, hell if anyone else knows either.
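To put some flesh on “probability amplitude” (this is standard textbook quantum mechanics, my own illustration rather than anything from the talk), the Born rule is what connects the wave function to measurable probabilities:

```latex
% Born rule (standard QM): for a normalized single-particle wave
% function \psi, the probability of finding the particle near x is
% the squared modulus of the amplitude:
P(x)\,dx \;=\; |\psi(x,t)|^2\,dx,
\qquad
\int_{-\infty}^{\infty} |\psi(x,t)|^2\,dx \;=\; 1 .
```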
Which is where Nina’s talk comes in. I’m going to follow her handout from now on, adding a few comments here and there. Near the end of the post I will get to why the issue may be of broader interest than “just” understanding what the wave function actually is.
To begin with, Nina introduced wave function realism as the view that all that exists at the fundamental level is a field defined in a highly dimensional physical space, the configuration space, where none of its physical dimensions correspond to the standard three spatial and one temporal ones we are all familiar with. There are two types of wave function realism out there: wave function monism, which claims that all that exists is a field in configuration space, which gives rise directly to our everyday experience of the world; and wave function fundamentalism, which says that what exists at the fundamental level is a field in configuration space, which then gives rise to ordinary objects in 3D space, which in turn we then somehow perceive (i.e., fundamentalists allow for additional transitions when compared to monists).
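For concreteness (again standard quantum mechanics, not something specific to Nina’s talk): for a system of N particles the configuration space has 3N dimensions, one triple of coordinates per particle, which is why the field’s home space outstrips the familiar three spatial dimensions as soon as N > 1.

```latex
% The wave function of an N-particle system is a complex-valued
% field on 3N-dimensional configuration space, not on ordinary 3D space:
\psi : \mathbb{R}^{3N} \times \mathbb{R} \to \mathbb{C},
\qquad
\psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)
```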
What Nina set out to do was to build an argument against wave function realism, based on something she calls the minimal divergence norm, which states: “insofar as we have two empirically adequate theories (i.e., two theories that both accurately predict the phenomena we observe), we ought to choose the one that minimizes the difference between how the theory says the world is and the way the world appears to be (to us).”
To use the classical distinction famously introduced by philosopher Wilfrid Sellars, the minimal divergence norm says that we should try to minimize the distance between the scientific and the manifest images of the world.
Nina explained that we should care about this for a couple of reasons: first, because wave function realism is taken increasingly seriously by a number of philosophers and physicists; second, because the minimum divergence norm may be helpful in metaphysics against what she amusingly called “incredulous stare arguments” (i.e., arguments based on some sophisticated version of “are you f**ing kidding me?”).
Nina’s argument can be summarized in the following way:
P1: wave function realism (either of the monist or the fundamentalist type) violates the minimal divergence norm.
P2: we should accept the minimal divergence norm.
C: therefore, we should reject wave function realism.
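In schematic form (my own formalization, not Nina’s), the argument is a straightforward modus tollens: write R for “wave function realism is true” and N for “the minimal divergence norm holds.”

```latex
% P1: accepting wave function realism means violating the norm;
% P2: we should accept the norm; C follows by modus tollens.
\begin{aligned}
P1&: \; R \rightarrow \neg N \\
P2&: \; N \\
C&: \; \therefore\, \neg R
\end{aligned}
```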
The argument is valid, which means that the only way to reject it is to doubt one or the other of the two premises (i.e., to question its soundness). Accordingly, Nina proceeded to defend her premises. Before doing that, however, she cautiously added a few caveats, which I’m going to briefly examine here.
First, she explained that her focus is on ontologies compatible with Everett-type (so-called “many-worlds”) interpretations of quantum mechanics, but that the argument applies also (straightforwardly, she said) to Bohmian-type dynamics. Don’t ask, I can’t tell.
Second, she distinguished between her proposed minimal divergence norm and a similar, but more restrictive, no-divergence norm. The latter says that we should not endorse a scientific theory that says that the world is significantly different from the way it appears to us.
This is crucial, and of high interest to me. Basically, while the minimal divergence norm attempts to put a leash on scientific theories and keep them as close as empirically and theoretically possible to the manifest image, the no-divergence norm says that, no matter what, priority should go to the manifest image. The first one is, I think, a reasonable attempt to remind us that scientific theories are human constructions, not god’s eye-views of the world, and so that one of their virtues is to make the world understandable to us. The second norm, however, is basically what flat-earthers and creationists happily support: no matter what science tells me, what I see is what I get. Clearly, the minimal divergence norm is a (debatable, for sure) reasonable constraint on science, while the no-divergence norm quickly degenerates into a rejection of science and possibly a support of pseudoscience.
Nina’s third caveat was that she wasn’t going to propose any positive account of the ontology of the wave function, since that’s a much more complex issue, on which there simply isn’t any agreement, either among physicists or among philosophers.
Caveats duly put out there and set aside, Nina proceeded to defend her first premise, that wave function realism violates the minimal divergence norm. To do that, she identifies one possible alternative to wave function realism, mass-density ontology, according to which what exists fundamentally is a mass-density field in 3D space. (Just like wave function realism, mass-density ontology comes in two flavors, monism and fundamentalism, but the difference is irrelevant here.)
Nina claims that wave function realism diverges more from the manifest image than mass-density ontology, for instance because mass-density ontology at the least includes objects in 3D space, which wave function realism does not.
The general idea is that neither wave function realism nor mass-density ontology contradicts the manifest image (because both somehow recover the appearance of everyday objects), but both go beyond it. The difference is that wave function realism goes further beyond the manifest image than mass-density ontology does. We could say that it is a less parsimonious departure from the manifest image.
Nina then turns to her defense of the second premise, that we should accept the minimal divergence norm. This, for me, is the more crucial point, and one that has far wider applications than this particular issue in fundamental (meta)physics.
Her move is interesting, though certainly controversial. She claims that the minimal divergence norm is the chief — indeed the only — reason that keeps our theorizing from sliding into so-called radical metaphysical notions. Here are some examples of the latter:
* Solipsistic idealism, the notion that I don’t have a physical body and brain, and that all that exists is my mental states.
* Brain-in-a-vat hypothesis. My brain is floating in a vat, receiving sensorial inputs indirectly. The physical world is nothing like what it appears to be.
* Bostrom simulation hypothesis. The physical universe is nothing like physics describes it, it is, rather, a simulation in someone else’s computer.
* Boltzmann brain hypothesis. My brain formed spontaneously in an empty region of space, as a result of a number of coincidences. Again, the physical universe is nothing like what it appears to be.
At first I thought that Nina’s claim that these radical metaphysical hypotheses are incompatible with science was a bit too strong, and that it would have sufficed to say that they are in no way entailed by the current scientific worldview. But upon further reflection I think she is right. Notice the recurrence above of a specification along the lines of “… and the world is nothing like it appears to be.” If any of the radical metaphysical hypotheses were true (and it is possible that one of them is!), then it would not just be the manifest image that would be incorrect, but also the scientific one. When physicists talk about electrons, quarks, strings, and what not, they most certainly do mean that these things are physical components of fundamental aspects of our reality. Which would be false if any of the above scenarios actually held.
Further, Nina makes clear two additional points to be considered insofar as this discussion is concerned: i) while it is true that the radical metaphysical hypotheses can be designed so as to present a number of extra-empirical virtues (such as simplicity, elegance, etc.), this is irrelevant unless one also has a reasonable story to explain how those virtues are acquired by scientists and why they should be deployed in a way that favors the metaphysically radical scenarios; ii) her argument for the second premise goes through even if one limits the radical scenario to just the fundamental level, for instance by saying that the Bostrom simulation hypothesis claims that what exists fundamentally is a computer simulation, which is then capable of giving rise to a world of 3D objects.
Nina’s conclusion, which seems reasonable to me, is that “anyone who rejects the minimal divergence norm must either take seriously the radical metaphysical scenarios, or give up a plausible story about how they are ruled out.”
Obviously, there is plenty of room for disagreement with Nina’s argument and conclusions, though I find them quite plausible. Nevertheless, the reason this is very interesting — other than its application to the ontology of quantum mechanical concepts such as the wave function — is because of the broader issue I mentioned earlier: the difference (and tension) between Sellars’ manifest and scientific images of the world.
Indeed, I have been invited to contribute a chapter to a forthcoming book on the common sense tradition in philosophy, to be published by Oxford University Press. My chapter will be on the challenges posed by science to that tradition. As a scientist and philosopher, of course, I wholly realize that science has introduced, and will continue to introduce, notions that depart from “common sense,” or from the manifest image. But as I said above, I also think that science is a human-centered activity that has never achieved, nor ever will achieve, a god’s eye-view of things. Science, in other words, isn’t just in the neutral business of discovering how the world works, it is in the partially subjective business of facilitating human understanding of how the world works.
That is why I find Nina’s notion of the minimal divergence norm useful: we have to allow the scientific image to diverge from the manifest one, or we give up on science altogether. But we also want such divergence to be kept at a minimum, because otherwise we have no criteria to reject non- or even anti-scientific hypotheses, such as the radical metaphysical ones mentioned in Nina’s talk (and a number of others, I would add, like Max Tegmark’s mathematical universe). To give up on the norm of minimal divergence would basically give free rein to metaphysical speculation in science, which I’m pretty positive would not be a good idea.

I don’t know, I seem to annoy people here when I fail to make unnecessary assumptions.
Just thought I’d echo something synred said:
Current physics could of course be wrong (or, more precisely, not right enough). But to show that was the case we’d need to come up with something better. That’s the hard part.
So let’s suppose that you don’t like string theory. OK, no problem. Now come up with a better approach.
Unlike in metaphysics, where one can scheme up whatever suits one’s intuition, in physics one needs to make one’s new ideas compatible with a vast, vast array of already known empirical evidence about how the world works. That really is the hard part.
That deriving some fairly objective explanations for our complex and subjective experience should yield many interesting and unforeseen insights is to be expected. Not only in physics, but many other fields as well.
That this process has wandered far away from our actual experience in the world is increasingly evident at the extremes, with concepts like Everett’s many-worlds and the block-time aspect of spacetime, where our experience of the dynamic present effecting specific change is not resolved, but simply dismissed as an illusion. Apparently the theorists talked to God and that’s just the way it is.
The question is whether there are truly insurmountable issues, or whether more mundane, possibly sociological issues are involved. To argue for the possibility of the latter, here is an interesting essay by a very practical-minded person, one of the chief engineers of the Hubble telescope, on his experiences dealing with cosmologists: http://www.americanscientist.org/issues/pub/2007/9/modern-cosmology-science-or-folktale
His impression was that evidence contradicting theory was actively avoided.
So it does seem, in some cases at least, that it will be funerals, as much as insights, that clear these roadblocks. Future generations of scientists are not going to chase these unicorns and yetis ever further into the woods.
However long it is put off, a reset will occur.
Hi Massimo,
I get that. So what I’m saying is that I neither reject nor accept that premise. I suspend my acceptance, because it depends on how much sense mass-density ontology makes, and I don’t know enough about it to judge. I haven’t heard any prominent physicists promoting it so it’s already at a disadvantage compared to wavefunction realism, promoted by the likes of Sean Carroll.
I googled it a bit and all I could glean is that it has very few mentions and that it seems to be some sort of objective wavefunction collapse theory, of which there are a few. These theories are interesting and worth investigating, but I do think they have (elusive) empirical consequences, namely that under certain conditions (e.g. when an observation is made by a conscious observer, or when a system attains a certain mass density), the wavefunction collapses and something like the classical world emerges. In principle, it should be possible to design an experiment to test whether the wavefunction collapses, e.g. by testing whether there is some mass threshold beyond which systems can no longer be in superposition, but in practice it’s difficult to keep systems isolated enough to keep superpositions stable.
Did she go into detail on mass-density ontology? If yes, fair enough. If not, then it’s likely nobody there was familiar enough with it to object. As I said, it’s not a phrase you find very much in Google, so it’s not widely used.
What is the value of this constraint? Ordinary parsimony is enough to dispense with ideas such as idealistic solipsism. And even if it weren’t, then perhaps these ideas should not be dismissed.
Of course there is. We have parsimony. Or just deciding that metaphysical speculation is worthless, at least from a scientific point of view. Scientifically, agnosticism on wavefunction realism is all we need. There’s no need to accept it or reject it. Nina is engaging in metaphysical speculation just as much as Carroll or Tegmark or whoever.
The norm is basically enshrining common sense as a criterion by which to judge metaphysical ideas. Of course it is commonsensical. That’s precisely what’s wrong with it. The question is whether common sense has any use in judging metaphysical ideas or interpretations of physical theory. I don’t think it does.
Are you saying that the philosophical unpacking and interpretation of scientific theories is not part of philosophy of science? So philosophy of science, like science studies, is then only concerned with the practice of science as opposed to the ideas that come out of it? Because ideas such as the Mathematical Universe Hypothesis and wavefunction realism are certainly philosophical unpackings and interpretations of scientific theories. In my mind this would put them in the intersection of philosophy of science and metaphysics.
You can specify in what sense two scientific images depart from the manifest one. But isn’t this just another way of saying we can assess how much each theory accords with intuition?
I’m a bit surprised by the stares of incomprehension that Nina’s ideas are generating. I mean, one may or may not agree, but I thought the main concepts were pretty clear. Perhaps I’m oversimplifying though and some of you guys see through that better than I do.
Synred,
The “problem” is that we want the scientific image to depart as little as possible from the manifest one, in order to avoid runaway metaphysical speculation of the kind that DM finds so attractive… 😉
But there is no requirement that a scientific theory work entirely within the manifest image.
Robin,
Cartesian doubt by itself is simply a skeptical position, but if one bases one’s metaphysics on it then it becomes solipsism. That’s the way it was used in Nina’s talk, as one of the four runaway metaphysical hypotheses.
While there is one manifest image of the world (the one that normally functioning human beings perceive), the scientific image is a moving target, for instance it got further away from the manifest image during the transition from classical to quantum mechanics. That’s the nature of science.
Infinitely parallel universes are not (yet?) part of the accepted scientific image. Indeed, they are half-way between science and metaphysics, at this point.
Coel,
Yes, according to Occam’s razor any scientific theory also has to explain the scientific image. But, again, the difference between it and minimal divergence is that in the first case the comparison is between theories, in the second it is between theories and the manifest image.
Wonderful suggestion Robin! Accepting the possibility that you might be under the domain of an “evil demon” (or whatever) doesn’t mean that you must entertain possible but amazingly implausible notions. The reason that it might be helpful for a person to take René Descartes seriously, however, is because apparently “thought” is all that anything can ever know to exist with perfect certainty. Given the many bizarre notions that we hear about both inside and outside of academic studies, shouldn’t we make sure that our science begins from the beginning? This is not a rejection of the scientific image of the world, but rather a potential foundation for it. Hopefully we all seek more effective epistemic positions? Regardless, I have two such suggestions.
First off, I suspect that “definition” very much holds us back today. Because there are no true definitions, but rather just more and less useful ones, I’d have us all accept someone else’s definitions in order to potentially understand their ideas.
Then secondly, it would seem that there is only one process by which anything conscious, consciously figures anything out (whether for the human, the bird, the conscious robot, or whatever). It takes what it thinks it knows (evidence), and then checks to see how consistent this happens to be with what it’s not so sure about (theory).
But the whole thing seems circular.
She says on the one hand that we should accept the minimal divergence norm because it is the only thing that makes radical metaphysical scenarios incompatible with science.
But then she says that the reason for rejecting radical metaphysical scenarios is that they are incompatible with science.
But by her own thesis they are only incompatible with science if one accepts the minimal divergence norm.
So that implies there is no reason to accept the minimal divergence norm unless we first accept the minimal divergence norm.
On the other hand, if radical metaphysical scenarios are incompatible with science with or without the MDN then there is no reason to accept it in the first place.
Hi Robin,
She doesn’t say that, I don’t think. On the other hand Massimo said something similar — that these scenarios mean that the physical world is very different from what our very successful scientific theories say it is.
I think Massimo has got this wrong, as I would say our scientific theories only really model and predict observations and don’t really make any claims about the ultimate nature of the foundation of the world we see. For instance, if we are all in a simulation, then our scientific theories are successfully modelling that simulation, not claiming that the world is not a simulation — they are neutral on this question. I don’t see any reason here to reject the simulation hypothesis, although such reasons abound elsewhere.
Robin, no. The radical metaphysical hypotheses are incompatible with science period, regardless of the minimal divergence criterion. They don’t just diverge from the manifest image, they are contradictory of both the manifest and the scientific image. The divergence criterion is what stops us from getting on a runaway situation that ends up with the radical hypotheses.
DM,
I really don’t think it is right to say that our scientific theories don’t make any claims about the ultimate foundations of reality. Indeed, that is precisely the business of fundamental physics (hence the word “fundamental”). I truly believe that Steven Weinberg would be astonished at your claim.
Hi Massimo,
How so?
The MUH is not incompatible with science. On the MUH, science is investigating the mathematical object that is our world.
The Simulation Hypothesis is not incompatible with science. On the simulation hypothesis and the BiV, science is investigating the simulation.
Solipsistic Idealism is not incompatible with science. On solipsistic idealism, science is investigating the mind of the one person who exists.
These ideas are not scientific. But that doesn’t make them incompatible with science. They are orthogonal to science.
Hi Massimo,
But in the article you say
“She claims that the minimal divergence norm is the chief — indeed the only — reason why our scientific theories are incompatible with so-called radical metaphysical notions. ”
So if radical metaphysical notions are incompatible with science with or without the minimal divergence norm then there goes her reason for accepting it in the first place.
Hi Massimo,
They make claims as to the mathematical structure of the foundations of reality. They don’t make claims as to its nature — e.g. whether it is a simulation or the dream of a God or a physical world.
Robin,
Thanks for pointing that out, it was sloppy writing on my part. I went back to check Nina’s original handout and I have rephrased accordingly. I hope it is now clear why her argument is not circular (which, really, would have been a pretty elementary mistake for a philosopher…).
DM,
The radical metaphysical hypotheses directly contradict the scientific image of the world. It is either the case that, for instance, superstrings and branes are the fundamental constituents of reality, or that we live in a simulation. I know you want to have it both ways, but you can’t.
And, relatedly, no, scientific theories don’t just make claims as to the mathematical structure of reality but not its nature. That is a wholly mistaken view of science. Science is in the business of describing and understanding the world as is, not just as a mathematical construct.
Hi Massimo,
I think that’s a misunderstanding.
String theory is not the hypothesis that superstrings or branes are the fundamental constituents of reality. String theory is the hypothesis that we will find that there are strings and branes one level down from our current most fundamental theories. String theory is agnostic on whether there are deeper levels to describe how these strings come to be, including on whether they are the products of a simulation.
I’m talking about its nature vis a vis whether it’s a simulation or not. Scientific theories don’t make claims about this sort of thing.
Fundamental physics is very much about describing the underlying mathematical structure of the world (as is). That’s what fundamental physics is. I’m not talking about biology or the special sciences or whatever.
Hi Massimo,
Physics starts off from the mid-level of the everyday world, and works downwards from there to the “fundamental” levels (and upwards to cosmology and other ensembles). That means that fundamental physics tries to find the fundamentals, pushing down as far as possible, but does not necessarily claim to have succeeded in getting to “ultimate foundations”.
It is widely accepted that any current physics models are provisional, rather than being final answers in any sense (after all, everyone knows that current models of QM and GR are inadequate; they don’t fully work, in the sense of not being mutually consistent, and attempts to solve that are bogged down). Thus I would suggest that leading physicists are a lot more “instrumentalist” in how they think about such things as wavefunctions than philosophers of science tend to suppose.
That includes Weinberg (whose philosophy of science has always seemed to me much more sensible than he gets credit for among philosophers).
Coel,
I’m fully aware that physicists don’t claim to actually have reached bottom. But as you say, they get closer and closer, and that is the goal.
DM,
All I can do is to repeat that you are working from a fundamentally mistaken view of the nature of science. And I say this pretty confidently as both a scientist and a philosopher of science. But I also realize that I won’t sway you, so this will be my last comment on that particular sub-topic.
What Coel said.
I would add that I’m sure Steven Weinberg would take as dim a view of metaphysical speculation as you do, Massimo, but I don’t think he would do so on the basis that it contradicts or is not compatible with science. He would probably rather do so on the basis that it is unscientific and a waste of time. As Coel said, he would be instrumentalist, and simply not care about ontological questions such as whether the wavefunction is the basis of reality or not — he would perhaps even reject such questions as meaningless.
And, again, being unscientific does not mean that it contradicts or is incompatible with science — it just means it’s outside the domain where science applies (and so outside the scope of Weinberg’s interest).
Hi Massimo
That makes it a lot clearer, thanks
“(which, really, would have been a pretty elementary mistake for a philosopher…).”
Which is why I said “seems”, not to imply you had actually made a circular argument.
Hi Massimo,
We can set aside the question of what it is science is about.
As Arthur said, these thought experiments are often set up to be unfalsifiable. As well as making them unscientific, doesn’t that mean they can’t really be incompatible with science? If we can’t falsify them with science then how can they contradict science? I don’t see how this makes them anything other than orthogonal to science.
What is “bottom?” Strings?
What is “top?” The universe as a singular entity, evolving over a timeline?
It is interesting that when we get to these scalar extremes, rather than radical metaphysicalism, we seem to find extremely conventional concepts like objects and histories.
I don’t know anything about it. Google turns up nothing useful.
Does it have Fermions? Is it field-theory by another name? Fields have mass=energy…
Hi Massimo,
Well it’s not going to solve that ‘problem’. DM, Tegmark and some of the wilder string theorists will do what they do regardless [a].
And undergraduates will continue to drink too much and babble all night about the Matrix (or whatever succeeds it).
We just have to live with it. It’s not that important. BS is always with us. _{;>)=
[a] DM at least doesn’t pretend it’s scientific.
As a rule-of-thumb BS detector I can see it being a little useful. If it claims QM fails the test, it’s BS.
Mostly I can detect such BS as the BIV w/o a formal rule. If some people take such things seriously there’s not much anyone can do about it.
It would be interesting to study the history and conceptual antecedents of the various radical premises, such as Boltzmann brains, brains in a vat, artificial intelligence, etc. For example, it would seem cartesian dualism would be a significant influence.
Then to study the actual evolution of neurology and how feedback from the senses of the external world evolved the mental processes, from emotions to rationalism. Which might better frame the fact that various of these ideas really are not so much revolutionary, as rest on fairly primitive assumptions.
Well I agree with that, but not everybody does. Positivists didn’t.
While I assume there is a physical nature underlying everything (a ‘Ding an sich’), it may turn out that we cannot construct a mental image of it. It certainly does not resemble the ‘image’ constructed by our brains at the lowest level of theory, though it allows [a] for the existence of such images/maps/models.
[a] I’m tempted to say predicts, but it is, of course, post-diction and a minimal requirement of any theory.
Just because strings, fields, atoms or rocks are not fundamental does not make them less real (I’m pretty sure you agree!)
Science assumes there is a reality underneath it all. We may never figure out exactly how that works, but keep trying.
If it’s all just the caprice of a god or programmer why bother? The rules could shift at any moment.
Hi Arthur,
In fairness to Nina and Massimo, I don’t think anyone is claiming that. It just interprets QM differently to how Carroll interprets it. In general, Emery is hostile to Everettian many worlds and Bohmian mechanics and she seems to favour some sort of objective collapse theory.
DM, QM failing is P1 in Nina’s syllogism.
Arthur,
“it may turn out that we can not construct a mental image of it.”
It would seem the very first place to start would be to keep in mind that our mental images are static impressions of an exceedingly dynamic reality. For example, our eyesight necessarily functions like a movie camera, taking flashes of perception from particular bands of the radiant spectrum, which is obviously traveling at the speed of light.
A logical conclusion to draw from this is that at its most elemental stage, reality is inherently dynamic and we can only grasp at straws to reconstruct some deeper sense of this dynamic.
Yet much of our theorizing is based on trying to extract some static model that is foundational to this reality. To do so, we search the most extreme scales, from the micro to the macro cosmic and the most abstracted formulations, i.e., math, because they appear to be the most ordered and stable. It seems a bit like the drunk looking for his keys under the street light.