The problem of wave function ontology

Disclaimer: I’m neither a physicist nor a philosopher of physics. Moreover, I don’t play either role on television! Nonetheless, I’m fascinated by physics, as well as by debates amongst physicists, or between physicists and philosophers. So I perked up when, a couple of weeks ago, the regular colloquium at the Philosophy Program of CUNY’s Graduate Center featured Nina Emery, of Brown University, who gave an unusually lucid talk (given the topic) entitled “Against radical quantum ontologies.”

We have all heard of the wave function, hopefully from a real physicist rather than, say, from Deepak Chopra. It is a fundamental concept in quantum mechanics, being a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. Okay, you may ask, but what is a wave function, physically — rather than mathematically or statistically — speaking? Hell if I know. And apparently, hell if anyone else knows either.
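
To make the “probabilities can be derived from it” part concrete, here is a minimal sketch (a toy two-outcome example of the Born rule; the amplitudes are made up, and nothing here comes from Nina’s talk):

```python
# Born rule, toy version: probabilities are the squared magnitudes of the
# complex amplitudes, normalized so they sum to one.
import numpy as np

psi = np.array([1 + 1j, 2 - 1j])   # hypothetical amplitudes for two outcomes
probs = np.abs(psi) ** 2           # |psi_i|^2
probs /= probs.sum()               # normalize

print(probs)                       # -> [0.2857..., 0.7142...]
```

That, of course, is only the operational part, the recipe for making predictions; the hard question is what in the world (if anything) those amplitudes describe.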

Which is where Nina’s talk comes in. I’m going to follow her handout from now on, adding a few comments here and there. Near the end of the post I will get to why the issue may be of broader interest than “just” understanding what the wave function actually is.

To begin with, Nina introduced wave function realism as the view that all that exists at the fundamental level is a field defined on a high-dimensional physical space, the configuration space (for a system of N particles, a space of 3N dimensions), none of whose dimensions corresponds to the standard three spatial and one temporal ones we are all familiar with. There are two types of wave function realism out there: wave function monism, which claims that all that exists is a field in configuration space, which gives rise directly to our everyday experience of the world; and wave function fundamentalism, which says that what exists at the fundamental level is a field in configuration space, which then gives rise to ordinary objects in 3D space, which in turn we somehow perceive (i.e., fundamentalists allow for additional transitions compared to monists).

What Nina set out to do was to build an argument against wave function realism, based on something she calls the minimal divergence norm, which states: “insofar as we have two empirically adequate theories (i.e., two theories that both accurately predict the phenomena we observe), we ought to choose the one that minimizes the difference between how the theory says the world is and the way the world appears to be (to us).”

To use the classical distinction famously introduced by philosopher Wilfrid Sellars, the minimal divergence norm says that we should try to minimize the distance between the scientific and the manifest images of the world.

Nina explained that we should care about this for a couple of reasons: first, because wave function realism is taken increasingly seriously by a number of philosophers and physicists; second, because the minimal divergence norm may be helpful in metaphysics against what she amusingly called “incredulous stare arguments” (i.e., arguments based on some sophisticated version of “are you f**ing kidding me?”).

Nina’s argument can be summarized in the following way:

P1: wave function realism (either of the monist or the fundamentalist type) violates the minimal divergence norm.

P2: we should accept the minimal divergence norm.

C: therefore, we should reject wave function realism.
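
Read schematically, with “accepting the norm” understood as committing one to rejecting any empirically adequate theory that violates it, the inference can even be checked mechanically. Here is a toy Lean sketch (my own gloss, not Emery’s formalization):

```lean
-- Toy formalization: P2 is read as "any theory that violates the minimal
-- divergence norm should be rejected"; P1 says wave function realism (WFR)
-- violates it; the conclusion follows by instantiation and modus ponens.
variable (Theory : Type) (WFR : Theory)
variable (violatesNorm shouldReject : Theory → Prop)

example
    (p1 : violatesNorm WFR)                      -- premise 1
    (p2 : ∀ T, violatesNorm T → shouldReject T)  -- premise 2, as read above
    : shouldReject WFR :=
  p2 WFR p1
```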

The argument is valid, which means that the only way to reject it is to doubt one or the other of the two premises (i.e., to question its soundness). Accordingly, Nina proceeded to defend her premises. Before doing that, however, she cautiously added a few caveats, which I’m going to briefly examine here.

First, she explained that her focus is on ontologies compatible with Everett-type (so-called “many-worlds”) interpretations of quantum mechanics, but that the argument applies also (straightforwardly, she said) to Bohmian-type dynamics. Don’t ask, I can’t tell.

Second, she distinguished between her proposed minimal divergence norm and a similar, but more restrictive, no-divergence norm. The latter says that we should not endorse a scientific theory that says that the world is significantly different from the way it appears to us.

This is crucial, and of high interest to me. Basically, while the minimal divergence norm attempts to put a leash on scientific theories and keep them as close as empirically and theoretically possible to the manifest image, the no-divergence norm says that, no matter what, priority should go to the manifest image. The first is, I think, a reasonable reminder that scientific theories are human constructions, not god’s-eye views of the world, and that one of their virtues is to make the world understandable to us. The second norm, however, is basically what flat-earthers and creationists happily endorse: no matter what science tells me, what I see is what I get. Clearly, the minimal divergence norm is a (debatable, for sure) reasonable constraint on science, while the no-divergence norm quickly degenerates into a rejection of science, and possibly into support of pseudoscience.

Nina’s third caveat was that she wasn’t going to propose any positive account of the ontology of the wave function, since that’s a much more complex issue, on which there simply isn’t any agreement, either among physicists or among philosophers.

Caveats duly put out there and set aside, Nina proceeded to defend her first premise, that wave function realism violates the minimal divergence norm. To do that, she identifies one possible alternative to wave function realism, mass-density ontology, according to which what exists fundamentally is a mass-density field in 3D space. (Just like wave function realism, mass-density ontology comes in two flavors, monism and fundamentalism, but the difference is irrelevant here.)

Nina claims that wave function realism diverges more from the manifest image than mass-density ontology, for instance because mass-density ontology at the least includes objects in 3D space, which wave function realism does not.

The general idea is that neither wave function realism nor mass-density ontology contradicts the manifest image (because they both somehow recover the appearance of everyday objects), but both go beyond it. The difference is that wave function realism goes further beyond it than mass-density ontology does. We could say that it is a less parsimonious departure from the manifest image.

Nina then turns to her defense of the second premise, that we should accept the minimal divergence norm. This, for me, is the more crucial point, and one that has far wider applications than this particular issue in fundamental (meta)physics.

Her move is interesting, though certainly controversial. She claims that the minimal divergence norm is the chief — indeed the only — reason that keeps our theorizing from sliding into so-called radical metaphysical notions. Here are some examples of the latter:

* Solipsistic idealism, the notion that I don’t have a physical body and brain, and that all that exists is my mental states.

* Brain-in-a-vat hypothesis. My brain is floating in a vat, receiving sensory inputs indirectly. The physical world is nothing like what it appears to be.

* Bostrom simulation hypothesis. The physical universe is nothing like physics describes it; it is, rather, a simulation in someone else’s computer.

* Boltzmann brain hypothesis. My brain formed spontaneously in an empty region of space, as a result of a number of coincidences. Again, the physical universe is nothing like what it appears to be.

At first I thought that Nina’s claim that these radical metaphysical hypotheses are incompatible with science was a bit too strong, and that it would have sufficed to say that they are in no way entailed by the current scientific worldview. But upon further reflection I think she is right. Notice the recurrence above of a specification along the lines of “… and the world is nothing like it appears to be.” If any of the radical metaphysical hypotheses were true (and it is possible that one of them is!), then it would not just be the manifest image that would be incorrect, but also the scientific one. When physicists talk about electrons, quarks, strings, and what not, they most certainly do mean that these things are physical components of fundamental aspects of our reality. Which would be false if any of the above scenarios actually held.

Further, Nina makes clear two additional points to be considered insofar as this discussion is concerned: i) while it is true that the radical metaphysical hypotheses can be designed so as to present a number of extra-empirical virtues (such as simplicity, elegance, etc.), this is irrelevant unless one also has a reasonable story to explain how those virtues are acquired by scientists and why they should be deployed in a way that favors the metaphysically radical scenarios; ii) her argument for the second premise goes through even if one limits the radical scenario to just the fundamental level, for instance by saying that the Bostrom simulation hypothesis claims that what exists fundamentally is a computer simulation, which is then capable of giving rise to a world of 3D objects.

Nina’s conclusion, which seems reasonable to me, is that “anyone who rejects the minimal divergence norm must either take seriously the radical metaphysical scenarios, or give up a plausible story about how they are ruled out.”

Obviously, there is plenty of room for disagreement with Nina’s argument and conclusions, though I find them quite plausible. Nevertheless, the reason this is very interesting — other than its application to the ontology of quantum mechanical concepts such as the wave function — is the broader issue I mentioned earlier: the difference (and tension) between Sellars’ manifest and scientific images of the world.

Indeed, I have been invited to contribute a chapter to a forthcoming book on the common sense tradition in philosophy, to be published by Oxford University Press. My chapter will be on the challenges posed by science to that tradition. As a scientist and philosopher, of course, I fully realize that science has introduced and will continue to introduce notions that depart from “common sense,” or from the manifest image. But as I said above, I also think that science is a human-centered activity that has never achieved, nor ever will achieve, a god’s-eye view of things. Science, in other words, isn’t just in the neutral business of discovering how the world works; it is in the partially subjective business of facilitating human understanding of how the world works.

That is why I find Nina’s notion of the minimal divergence norm useful: we have to allow the scientific image to diverge from the manifest one, or we give up on science altogether. But we also want such divergence to be kept at a minimum, because otherwise we have no criteria to reject non- or even anti-scientific hypotheses, such as the radical metaphysical ones mentioned in Nina’s talk (and a number of others, I would add, like Max Tegmark’s mathematical universe). To give up on the norm of minimal divergence would basically give free rein to metaphysical speculation in science, which I’m pretty positive would not be a good idea.

218 thoughts on “The problem of wave function ontology”

  1. Even a non-local hidden value could, in principle, be used to infer the spin of a particle without explicitly measuring it.

    Indeed! Pretty much the point of ‘hidden variables’. That doesn’t make it over-determined, just determined.

  2. In the usual collapse picture a photon propagates as a wave but mysteriously shows up in only one place when measured. As the field carries energy this seems odd; not only is ‘information’ collapsed, but energy is ‘magically’ transported from all over to one ‘phototube’.

    This is sometimes called Einstein’s bubble. The bubble bursts and all the soap ends up at the point where it starts bursting. That might work for a bubble (surface tension), but not faster than light.

    I find it useful to think of the problem in terms of entangled states. The state ‘here, not there’ is entangled with ‘there, not here’. You pick one. It is, I think, equivalent to alternate histories, but I find it useful to think of it in terms of entanglement. There never is a state where the photon is in ‘two places’, but an entangled superposition, (|1>_here |0>_there + |0>_here |1>_there). There is still ‘collapse/selection’, but there’s no super-luminal energy transport. All the energy transport has already occurred.

    Entanglement Interpretation
    or decoherence by another name

    https://1drv.ms/w/s!AopZ_VaOySSavHmQ2qe7qw6HG9N7
    8/30/2016 12:10 PM, Version 1.8
    A. Snyder, SLAC
    What I am grandiosely calling the ‘entanglement interpretation’ is pretty much just decoherence by any other name. I find it a useful way of thinking about what happens. I can think about ‘selection’ rather than ‘collapse.’
    This is in part a response to this terrible book:
    Musser, George. Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time–and What It Means for Black Holes, the Big Bang, and Theories of Everything (p. 3). Farrar, Straus and Giroux. Kindle Edition.
    which seems to go to absurd lengths in labeling quantum phenomena non-local and does not seem to explain anything.
    I’ll explain ‘entanglement interpretation’ with a simple example.
    Picture an experimental setup where a photon is aimed at a partially reflecting mirror. If the photon goes through, it lands in phototube A; if it is reflected, it lands in phototube B. Let’s say the amplitude to pass through the mirror is α and that to reflect is β, with |α|^2+|β|^2=1.
    Now in the ‘collapse’ picture the state of the photon is taken to be
    ψ = α|1>_A + β|1>_B    (1)
    where if the photon is measured to be in phototube A, the wave function in B collapses.
    Now, considering that the state of the phototubes is |0>_A |0>_B before the photon is sent through the system, it is natural to take the state after the photon reaches the phototubes to be the entangled state
    ψ = α|1>_A |0>_B + β|0>_A |1>_B    (2)
    where the state of one photon and no photon are entangled. A measurement finding the photon in A selects a state where there is no photon in B. It works just like correlated polarizations. You don’t have to talk about anything collapsing. No energy needs to be magically transported from B to A; you just selected the state in which the photon is in A.
    What is being ‘collapsed’, if you still want to use that word, is not a state of the particle being here and there, but a state of the particle being here and not there or a state of a particle being there and not here.
    You can think of that as only ‘information’.
    The state in which the photon was in two places at once never existed. There is no state like this
    α|1>_A + β|1>_B    (3)
    for which one of the states needs to ‘collapse’ when you look.
    The state is a superposition of one where it’s here and not there with one where it’s not here, but there! There is no action at a distance, just selection of a state either by finding ‘yourself’ in one or by the ‘gods’ playing dice.
    So how did we get to this happy state of affairs? Decoherence!
    Let’s examine this in a little more detail and consider the photon that is transiting the experimental setup. We start with a photon. When it hits the splitter its state becomes
    ψ_γ = (α ψ_A + β ψ_B)    (4)
    before hitting the phototubes, but on the way toward them, as indicated by the subscripts.
    The phototubes remain empty, or off, so
    ψ = (α ψ_A + β ψ_B) |0>_A |0>_B    (5)
    is the state of the system before the photon reaches the phototubes.
    After the photon reaches the tubes, it interacts. Only ψ_A can affect phototube A and only ψ_B can affect B. The interaction is strictly local. The interaction Hamiltonian H_A does this
    H_A ψ_A |0>_A = |1>_A    (6)
    which is not affected by the norm of the state, and where I’m assuming that a photon that hits A (or B, for that matter) is always detected.
    The expectation value of the energy in A does not have to be high enough to kick out an electron, just the energy of the photon. There is no problem with energy being ‘magically’ propagated at superluminal speeds.
    The resulting entangled state is
    ψ = |0>_γ × (α|1>_A |0>_B + β|0>_A |1>_B)    (7)
    where I’ve tacked on |0>_γ to indicate the now-empty photon state. Nothing non-local has happened. There is no state where the photon is detected here and there. The photon wave-packet splitting into a part traveling toward A and a part traveling toward B was a local effect — at the mirror. The Schrödinger equation doesn’t do anything that isn’t local. The photon wave got to different places by traveling to them.
    This is much like ‘many worlds’ or ‘consistent histories’.
    There is a final step. In the density matrix corresponding to the above state, there are still cross terms that would allow one, in principle, to reassemble things. I think it’s called ‘quantum erasure’. If the states of A and B corresponded to some states of identical particles you could in principle bring them together and observe some interference effects. It is decoherence that really selects this particular superposition of states.
    However, interaction of the ‘phototube’ states with the environment suppresses these terms exponentially, making it virtually impossible to undo the ‘measurement’. And that’s all local too.
    Pick one; pick only one! Pick it with the Born rule. This is what we do in Monte Carlo. I think this is equivalent to the consistent histories approach.
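
    A minimal NumPy sketch of the toy model above (illustrative only: the amplitudes are arbitrary, and the brute-force zeroing of the cross terms stands in for the environmental decoherence described in the comment):

```python
# Build the entangled state alpha|1>_A|0>_B + beta|0>_A|1>_B, let
# "decoherence" suppress the off-diagonal terms of the density matrix,
# then sample outcomes with the Born rule (the Monte Carlo "pick one" step).
import numpy as np

rng = np.random.default_rng(0)

alpha = np.sqrt(0.3)            # amplitude to pass through the mirror
beta = np.sqrt(0.7) * 1j        # amplitude to reflect; |alpha|^2 + |beta|^2 = 1

# Two-dimensional basis: index 0 = "photon in A, none in B",
#                        index 1 = "photon in B, none in A".
psi = np.array([alpha, beta])                 # the entangled state, eq. (2)/(7)
rho = np.outer(psi, psi.conj())               # density matrix, cross terms included

# Decoherence: interaction with the environment suppresses the off-diagonal
# (interference) terms; here we simply zero them out.
rho_dec = np.diag(np.diag(rho))

# Born rule: outcome probabilities are the diagonal of the density matrix.
probs = np.real(np.diag(rho_dec))
picks = rng.choice(2, size=100_000, p=probs)  # 0 -> tube A fires, 1 -> tube B fires

print(probs)                # ~ [0.3, 0.7]
print((picks == 0).mean())  # ~ 0.3
```

    Nothing in the sampling step moves anything around; the correlations are already written into the state, and “measurement” just reads off one branch of its diagonal.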

  3. Well DM, I’ve had a bit of a look at Bell’s theorem and Superdeterminism. Thanks for the suggestion! Here’s a narrative of the situation as I see it:

    John Bell proposed, “No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.” The point seemed to be, “Hey, you guys shouldn’t be determinists for the reason that quantum mechanics can’t be predicted, and thus all of your ‘determination’ ideas must actually be crap.” So then other physicists came along and said, “Screw you dude! We can be ‘superdeterminists,’ or demonstrate a position by which quantum mechanics could conceivably be predicted.”

    I oppose John Bell, but consider there to be a far better way to do so than this “superdeterminism” stuff. Apparently each of these sides is in a debate regarding determinism, though they all believe that the human itself needn’t apply — surely it must be different from the rest of nature? (Or at least the rest of non-conscious nature?) While these people may be “actual scientists,” they seem lost to the anthropocentric notion that when the human thinks, somehow this is not a natural process. Thus it seems to me that each side believes in “magical souls.”

    I oppose John Bell, as well as the mental sadism of theorizing how something could conceivably predict a quantum mechanical state (superdeterminism). Instead I’m a “hard determinist,” which is to say that I believe in perfect causality regarding all that’s natural. Does the human apply as well? Of course it does! Thus billions of years ago it was already quite well mandated that I would think these very thoughts right now, and so write them down for your potential entertainment. (According to Wikipedia, John Bell did admit that his theorem was unable to challenge hard determinism.)

    If the quest is to “preserve freedom,” I would hope for others to consider my own such perspective: Freedom doesn’t exist “ultimately,” though it does effectively exist to the extent of a given subject’s ignorance about what’s going to happen. This seems substantial to me!

    Anyway my ideas here came up when Coel observed that science and philosophy have not yet developed a reasonable way of distinguishing “natural” from “non-” or “super-” alternatives. I suggest that we use the concept of “causality.”

  4. Hi Eric,

    I’m glad you’ve looked into it but I think you’re confused on a few things. You don’t seem to have understood the implications of Bell’s theorem.

    “Hey, you guys shouldn’t be determinists for the reason that quantum mechanics can’t be predicted, and thus all of your ‘determination’ ideas must actually be crap.”

    That’s not it at all.

    Bell’s theorem proved, beyond a shadow of a doubt, that it is impossible for there to be local hidden variables as long as the choices of experimenters (whether determined or not) are independent of the values of the hidden variables they are trying to measure.

    The concern is not that QM cannot be predicted, it is that experiments can show that any naive version of the idea that there are hidden values we just can’t see has to be wrong.

    So then other physicists came along and said, “Screw you dude! We can be ‘superdeterminists,’

    Actually it was Bell who proposed superdeterminism as a way out of his theorem, but even he didn’t find it plausible. Also, not even a superdeterminist would think QM can be predicted. Superdeterminism isn’t about humans making predictions. It is about explaining away Bell’s theorem with something like the idea that the universe just happened to have an initial configuration where, as if by a massive coincidence, all the measurements scientists would ever perform would happen to conform to Bell’s theorem. Alternatively it could be about nature simulating its own future in advance so that it knows what is going to happen and so have future events impacting present events to yield a kind of retrocausality. You can think of it as the idea that nature is conspiring to trick scientists into thinking there are no hidden variables while there secretly are, by predicting what scientists will do and preparing accordingly so as to mislead them. That’s perhaps not a fair characterisation but superdeterminism is only a little less weird and ad hoc than that.

    Thus it seems to me that each side believes in “magical souls.”

    Nope. Nobody in this debate is assuming there is anything special about human beings.

    I suggest that we use the concept of “causality.”

    Except that the version of causality you seem to be working with just doesn’t work in light of Bell’s theorem.

    If you really want to preserve determinism, there are other ways you can go. I favour many worlds. You might want to consider the de Broglie-Bohm interpretation. Superdeterminism isn’t a serious option.
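
    For anyone who wants to see the gap Bell’s theorem opens up in numbers, here is a small illustrative sketch (not part of the discussion above; the hidden-variable strategy shown is just one example, though the bound of 2 holds for all local ones, given the independence assumption):

```python
# CHSH comparison: quantum mechanics on a spin singlet predicts
# |S| = 2*sqrt(2) ~ 2.83, while local hidden-variable models obey |S| <= 2.
import numpy as np

rng = np.random.default_rng(1)

# Detector settings (angles in radians) that maximize the quantum violation.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

def E_quantum(x, y):
    """Singlet correlation between outcomes at settings x and y."""
    return -np.cos(x - y)

S_qm = abs(E_quantum(a, b) - E_quantum(a, b2)
           + E_quantum(a2, b) + E_quantum(a2, b2))

def E_lhv(x, y, n=200_000):
    """A toy local hidden-variable model: each pair carries a hidden angle
    lam, fixed at the source; each side's outcome depends only on its own
    setting and lam."""
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))
    return float(np.mean(A * B))

S_lhv = abs(E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2))

print(f"quantum S          = {S_qm:.3f}")   # ~ 2.828
print(f"hidden-variable S  = {S_lhv:.3f}")  # ~ 2.0 (the classical bound), up to noise
```

    The point is only to make the claim vivid: no assignment of pre-existing local values reproduces the quantum correlations, unless the settings and the hidden values are allowed to be correlated, which is the superdeterministic loophole.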

  5. Arthur,

    So the energy is always “somewhere,” while it is only information that is being created/erased/determined, instantaneously, by measuring/extracting information from the system?

  6. Eric,

    “Freedom doesn’t exist “ultimately,” though it does effectively exist to the extent of a given subject’s ignorance about what’s going to happen.”

    What if there is no “ultimate,” as a finite proposition?
