The problem of wave function ontology

Disclaimer: I’m neither a physicist nor a philosopher of physics. Moreover, I don’t play either role on television! Nonetheless, I’m fascinated by physics, as well as by debates among physicists, or between physicists and philosophers. So I perked up when, a couple of weeks ago, the regular colloquium at the Philosophy Program of CUNY’s Graduate Center was given by Nina Emery, of Brown University, who delivered an unusually lucid talk (given the topic) entitled “Against radical quantum ontologies.”

We have all heard of the wave function, hopefully from a real physicist rather than, say, from Deepak Chopra. It is a fundamental concept in quantum mechanics, being a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. Okay, you may ask, but what is a wave function, physically — rather than mathematically or statistically — speaking? Hell if I know. And apparently, hell if anyone else knows either.
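
For the computationally minded, here is a minimal sketch (in Python, with made-up amplitudes) of the statistical role just described: the Born rule turns the wave function’s complex amplitudes into measurement probabilities.

```python
import numpy as np

# A toy two-level quantum state (e.g., a spin-1/2 particle), written as
# complex amplitudes in some chosen measurement basis. The numbers are
# arbitrary and purely illustrative.
psi = np.array([1 + 1j, 2 - 1j], dtype=complex)
psi = psi / np.linalg.norm(psi)  # normalize so the probabilities sum to 1

# Born rule: the probability of each outcome is the squared magnitude
# of the corresponding amplitude.
probabilities = np.abs(psi) ** 2
print(probabilities)        # e.g. [0.286, 0.714]
print(probabilities.sum())  # 1.0
```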

Which is where Nina’s talk comes in. I’m going to follow her handout from now on, adding a few comments here and there. Near the end of the post I will get to why the issue may be of broader interest than “just” understanding what the wave function actually is.

To begin with, Nina introduced wave function realism as the view that all that exists at the fundamental level is a field defined on a high-dimensional physical space, the configuration space, none of whose dimensions correspond to the standard three spatial and one temporal dimensions we are all familiar with. There are two types of wave function realism out there: wave function monism, which claims that all that exists is a field in configuration space, which gives rise directly to our everyday experience of the world; and wave function fundamentalism, which says that what exists at the fundamental level is a field in configuration space, which then gives rise to ordinary objects in 3D space, which in turn we somehow perceive (i.e., fundamentalists allow for additional transitions when compared to monists).
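
To get a feel for why configuration space is so unlike ordinary space: for N particles the field lives on a 3N-dimensional space, with one complex amplitude per joint configuration of all the particles. Here is a minimal sketch (two particles on a one-dimensional grid, purely illustrative):

```python
import numpy as np

n = 50  # grid points per spatial coordinate

# Two particles on a 1D line: the wave function is not two separate
# functions of one coordinate each, but a single function of BOTH
# coordinates (x1, x2), i.e. a field on a 2D configuration space.
psi = np.zeros((n, n), dtype=complex)

# For N particles in ordinary 3D space the array would need 3N axes,
# psi[x1, y1, z1, x2, y2, z2, ...], which is why wave function realism
# places the fundamental field in a high-dimensional space rather than
# in the familiar three spatial dimensions.
print(psi.ndim)  # 2 here, and 3*N in general
```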

What Nina set out to do was to build an argument against wave function realism, based on something she calls the minimal divergence norm, which states: “insofar as we have two empirically adequate theories (i.e., two theories that both accurately predict the phenomena we observe), we ought to choose the one that minimizes the difference between how the theory says the world is and the way the world appears to be (to us).”

To use the classical distinction famously introduced by philosopher Wilfrid Sellars, the minimal divergence norm says that we should try to minimize the distance between the scientific and the manifest images of the world.

Nina explained that we should care about this for a couple of reasons: first, because wave function realism is taken increasingly seriously by a number of philosophers and physicists; second, because the minimal divergence norm may be helpful in metaphysics against what she amusingly called “incredulous stare arguments” (i.e., arguments based on some sophisticated version of “are you f**ing kidding me?”).

Nina’s argument can be summarized in the following way:

P1: wave function realism (either of the monist or the fundamentalist type) violates the minimal divergence norm.

P2: we should accept the minimal divergence norm.

C: therefore, we should reject wave function realism.

The argument is valid, which means that the only way to reject it is to doubt one or the other of the two premises (i.e., to question its soundness). Accordingly, Nina proceeded to defend her premises. Before doing that, however, she cautiously added a few caveats, which I’m going to briefly examine here.

First, she explained that her focus is on ontologies compatible with Everett-type (so-called “many-worlds”) interpretations of quantum mechanics, but that the argument applies also (straightforwardly, she said) to Bohmian-type dynamics. Don’t ask, I can’t tell.

Second, she distinguished between her proposed minimal divergence norm and a similar, but more restrictive, no-divergence norm. The latter says that we should not endorse a scientific theory that says that the world is significantly different from the way it appears to us.

This is crucial, and of high interest to me. Basically, while the minimal divergence norm attempts to put a leash on scientific theories and keep them as close as empirically and theoretically possible to the manifest image, the no-divergence norm says that, no matter what, priority should go to the manifest image. The first one is, I think, a reasonable attempt to remind us that scientific theories are human constructions, not god’s-eye views of the world, and that one of their virtues is to make the world understandable to us. The second norm, however, is basically what flat-earthers and creationists happily support: no matter what science tells me, what I see is what I get. Clearly, the minimal divergence norm is a (debatable, for sure) reasonable constraint on science, while the no-divergence norm quickly degenerates into a rejection of science and possibly support for pseudoscience.

Nina’s third caveat was that she wasn’t going to propose any positive account of the ontology of the wave function, since that’s a much more complex issue, on which there simply isn’t any agreement, either among physicists or among philosophers.

Caveats duly put out there and set aside, Nina proceeded to defend her first premise, that wave function realism violates the minimal divergence norm. To do that, she identifies one possible alternative to wave function realism, mass-density ontology, according to which what exists fundamentally is a mass-density field in 3D space. (Just like wave function realism, mass-density ontology comes in two flavors, monism and fundamentalism, but the difference is irrelevant here.)

Nina claims that wave function realism diverges more from the manifest image than mass-density ontology, for instance because mass-density ontology at the least includes objects in 3D space, which wave function realism does not.

The general idea is that neither wave function realism nor mass-density ontology contradicts the manifest image (because they both somehow recover the appearance of everyday objects), but they both go beyond the manifest image. The difference is that wave function realism goes further beyond it than mass-density ontology does. We could say that it is a less parsimonious departure from the manifest image.

Nina then turns to her defense of the second premise, that we should accept the minimal divergence norm. This, for me, is the more crucial point, and one that has far wider applications than this particular issue in fundamental (meta)physics.

Her move is interesting, though certainly controversial. She claims that the minimal divergence norm is the chief, indeed the only, reason that keeps our theorizing from sliding into so-called radical metaphysical notions. Here are some examples of the latter:

* Solipsistic idealism, the notion that I don’t have a physical body and brain, and that all that exists is my mental states.

* Brain-in-a-vat hypothesis. My brain is floating in a vat, receiving sensorial inputs indirectly. The physical world is nothing like what it appears to be.

* Bostrom simulation hypothesis. The physical universe is nothing like physics describes it; it is, rather, a simulation in someone else’s computer.

* Boltzmann brain hypothesis. My brain formed spontaneously in an empty region of space, as a result of a number of coincidences. Again, the physical universe is nothing like what it appears to be.

At first I thought that Nina’s claim that these radical metaphysical hypotheses are incompatible with science was a bit too strong, and that it would have sufficed to say that they are in no way entailed by the current scientific worldview. But upon further reflection I think she is right. Notice the recurrence above of a specification along the lines of “… and the world is nothing like it appears to be.” If any of the radical metaphysical hypotheses were true (and it is possible that one of them is!), then it would not just be the manifest image that would be incorrect, but also the scientific one. When physicists talk about electrons, quarks, strings, and what not, they most certainly do mean that these things are physical components of fundamental aspects of our reality. Which would be false if any of the above scenarios actually held.

Further, Nina makes clear two additional points to be considered insofar as this discussion is concerned: i) while it is true that the radical metaphysical hypotheses can be designed so as to present a number of extra-empirical virtues (such as simplicity, elegance, etc.), this is irrelevant unless one also has a reasonable story to explain how those virtues are acquired by scientists and why they should be deployed in a way that favors the metaphysically radical scenarios; ii) her argument for the second premise goes through even if one limits the radical scenario to just the fundamental level, for instance by saying that the Bostrom simulation hypothesis claims that what exists fundamentally is a computer simulation, which is then capable of giving rise to a world of 3D objects.

Nina’s conclusion, which seems reasonable to me, is that “anyone who rejects the minimal divergence norm must either take seriously the radical metaphysical scenarios, or give up a plausible story about how they are ruled out.”

Obviously, there is plenty of room for disagreement with Nina’s argument and conclusions, though I find them quite plausible. Nevertheless, the reason this is very interesting, other than its application to the ontology of quantum mechanical concepts such as the wave function, is the broader issue I mentioned earlier: the difference (and tension) between Sellars’ manifest and scientific images of the world.

Indeed, I have been invited to contribute a chapter to a forthcoming book on the common sense tradition in philosophy, to be published by Oxford University Press. My chapter will be on the challenges posed by science to that tradition. As a scientist and philosopher, of course, I wholly realize that science has introduced and will continue to introduce notions that depart from “common sense,” or from the manifest image. But as I said above, I also think that science is a human-centered activity that never has achieved, nor ever will achieve, a god’s-eye view of things. Science, in other words, isn’t just in the neutral business of discovering how the world works; it is in the partially subjective business of facilitating human understanding of how the world works.

That is why I find Nina’s notion of the minimal divergence norm useful: we have to allow the scientific image to diverge from the manifest one, or we give up on science altogether. But we also want such divergence to be kept at a minimum, because otherwise we have no criteria to reject non- or even anti-scientific hypotheses, such as the radical metaphysical ones mentioned in Nina’s talk (and a number of others, I would add, like Max Tegmark’s mathematical universe). To give up on the norm of minimal divergence would basically give free rein to metaphysical speculation in science, which I’m pretty positive would not be a good idea.

218 thoughts on “The problem of wave function ontology”

  1. synred

    David D: Now that is interesting!

    I’ve been playing around with something similar, in which I add a bounded random walk in |amplitude|^2. If you do an unbiased walk, the probability of hitting a boundary (0 or 1) is the Born prob. This, of course, still assumes the Born prob. by doing the walk in |amplitude|^2, but each step is a unitary transform that can be described by a Hamiltonian. It would have to be very fast or occur in an extra time-like variable. It is non-local.

    I don’t have a mechanism (a ‘rotation’ does not work because the steps in |amplitude|^2 are not unbiased) for generating the walk or making it stick when it hits the boundaries. It is kind of between many worlds and collapse and contains just as many if not more ad-hoc assumptions. If I quantize the steps so that only steps of size 1/N occur, then the amplitude can go exactly to 0 for whichever state hits the boundary first. This is needed since the occupants of any state would be unaware of its norm; to kill them off you really have to get to zero.

    -Arthur
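
    A toy numerical check of the gambler’s-ruin fact this relies on (a generic Python sketch, not the actual stepping mechanism being proposed): an unbiased bounded walk started at a value p reaches the upper boundary with probability p.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def boundary_hit_fraction(p_start, n_levels=50, trials=2000):
        """Unbiased random walk on levels 0..n_levels (steps of size
        1/n_levels in |amplitude|^2), started at the level nearest p_start.
        Returns the fraction of trials that reach 1 before reaching 0."""
        start = round(p_start * n_levels)
        hits = 0
        for _ in range(trials):
            level = start
            while 0 < level < n_levels:
                level += 1 if rng.random() < 0.5 else -1
            hits += (level == n_levels)
        return hits / trials

    # Gambler's ruin: the probability of hitting the upper boundary equals
    # the starting value, i.e. the Born probability if the walk is done
    # in |amplitude|^2.
    print(boundary_hit_fraction(0.3))  # ~0.3
    ```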

  2. Coel

    Hi synred,

    Many worlds introduces very many worlds! Maybe even an uncountable infinity…

    True, but it is not multiplying information content. Indeed, MW is deterministic so that the proliferation of worlds can be computed from a previous state, and therefore it is not generating new information. Thus that aspect is not a problem from the viewpoint of parsimony.

    Collapse introduces a new physical principle, ‘collapse’, that does not follow from the Schrödinger equation?

    True, but MW is just as bad. MW also needs a new concept, namely “splitting”, and if you enquire into how and why that happens then MW gets rather vague. That’s why I don’t like MW: it doesn’t actually solve anything. You still have an incomplete theory. There is something we don’t understand about QM, and calling it “splitting” rather than “collapse” doesn’t really help.

  3. synred

    Classical logic fails

    Here I tend to agree with Robin. That particles can’t be in two places is not intrinsic to logic, but a property of particles as we define them, as they behave for all practical purposes. An electron is not a particle, but ‘something else’. It can pass through both holes. It can be distributed smoothly around an atom and not concentrated in one place. It can become concentrated in one place. It can’t be detected in two places. It’s weird but not illogical.

    Logic should still work if you ask sensible questions.

    I’ve been bothered in the collapse picture by the energy of a photon being suddenly concentrated.

  4. Philosopher Eric

    Quantum uncertainty (the Heisenberg UP) is not the same thing as quantum indeterminacy. One could still have an Uncertainty Principle in an entirely deterministic version of QM.

    Well yes Coel, not only do I realize this, but that’s exactly what I believe — I believe that Heisenberg gave us a wonderfully useful principle, which describes a QM that is in the end entirely deterministic.

    Also as DM mentioned, I only “believe” that this is the case, since by my definitions, reality would otherwise have a non-causal component and so be “magic.” Instead of me dabbling in metaphysics, it would seem that most physicists do so (apparently dancing around on Einstein’s grave!), by theorizing that reality must harbor a “natural uncertainty.” I’d have them at least become agnostic about this.

    By the way, thanks for emphasizing the circularity of our various reference books regarding “natural” and “non natural.” Perhaps it takes an outsider like yourself to make such critical observations. I’d hope for Massimo’s take on that one.

    DM,

    I don’t have time to look up Bell’s theorem right now, but from what you’ve described it does seem to conform quite well with my own position. We human “idiots” surely can’t reconcile hidden variables (which I suspect are the vast majority of variables) with experimental results, not if we’re ultimately “puppets on strings.” If everything is perfectly causal, then obviously thought experiments about “time travel” and such aren’t going to get us what we want.

    As you say, I realize that people find determinism very repugnant (and therefore they seem to have jumped at the metaphysical chance to ditch it when Heisenberg came along!). But is “human palatability” really what matters? Of course not! Thus we’re left with two basic alternatives. Either there is a noncausal aspect of reality (which may thus be defined as “magic”), or existence is perfectly determined by means of causality.

    (Furthermore a perfectly certain reality isn’t unpalatable to me at least. I’m “compatible” in the sense that we are free to the magnitude of our stupidity, and so I think extremely free in this regard.)

  5. synred

    What is forbidden by QM is having two electrons with the same spin in the same “place” (state). A single electron can be in a mixed state of spin-up and spin-down. Two electrons in the same mixed state in the same “place” is not allowed (by Pauli), but one in one mixture and the other in an orthogonal mixture is. That’s what makes chemistry work.

    If you ask a sensible QM question, logic works fine. Ask a silly question, you get a silly answer.
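
    A minimal sketch of the “one in one mixture, the other in an orthogonal mixture” point, writing the single-electron spin states as two-component vectors (illustrative only):

    ```python
    import numpy as np

    up = np.array([1, 0], dtype=complex)    # spin-up along z
    down = np.array([0, 1], dtype=complex)  # spin-down along z

    # One electron in an equal superposition of up and down...
    a = (up + down) / np.sqrt(2)
    # ...and a second electron in the orthogonal superposition.
    b = (up - down) / np.sqrt(2)

    # The two single-particle spin states are orthogonal (inner product 0),
    # so the pair can share the same spatial "place" without violating Pauli.
    print(np.vdot(a, b))  # 0j, i.e. orthogonal
    ```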

  6. synred

    True, but MW is just as bad. MW also needs a new concept, namely “splitting”, and if you inquire into how and why that happens then MW gets rather vague

    MW is no longer vague. Decoherence explains splitting purely through interaction with the ‘environment’. It’s really very elegant. It also has application to ‘collapse’, explaining why particular eigenstates are selected. It does not rule out either ‘many worlds’ or ‘collapse’ interpretations, but it was inspired by ‘many worlds.’

    The actual cases worked out are rather simple, but the principle seems sound. I still don’t find ‘many worlds’ convincing. It has very ugly implications.

    See, e.g., Zeh https://goo.gl/PRQf5d

  7. Disagreeable Me (@Disagreeable_I)

    Hi Eric,

    Superdeterminism is profoundly weirder than ordinary determinism. I’m quite happy to accept determinism but not superdeterminism. Superdeterminism implies, for instance, that you could predict what axis an experimenter is going to choose to measure spin on just by looking at the hidden variables. I don’t think it’s plausible that there is this deep connection between the hidden variables of electrons and the decisions made by experimenters. Almost nobody is willing to accept something that weird. Fundamental randomness is much simpler and less weird to boot.

    Look up Bell’s theorem and superdeterminism and come back to me.

  8. Robin Herbert

    What is the difference between determinism and superdeterminism?

    Under plain determinism there is already a fact of the matter about which axis an experimenter is going to measure spin on before she does so, whether we could know this fact or not.

    If determinism were true, the limit on our being able to predict anything at all would be purely practical: limits on what we can observe and calculate.

    For example I wonder if I could look at the state of the world now and calculate what I will do next Tuesday.

    Maybe next Tuesday I could try doing something different and see if I could.

    But if you examine the project more closely you find that it would not be possible to finish that calculation before next Tuesday because it would involve calculating the result of the current calculation.

  9. Robin Herbert

    If, in QM, a particle can be simultaneously spin up and spin down then this presents no more problem to classical logic than the fact that a Bilby can be simultaneously cute and furry.

    But, you say, “cute” and “furry” are not mutually exclusive properties. But, apparently, neither are spin up and spin down, so no problem.

    You might go the other way and say that spin up and spin down really are mutually exclusive properties. In which case a superposition of spin up and spin down is not the same as the particle being simultaneously spin up and spin down. The ket is not the particle.

    But if you want to insist that spin up and spin down are mutually exclusive properties and that, under QM a particle can be simultaneously spin up and spin down, then you would be saying that, under QM a particle can simultaneously exemplify two properties that it could not, under any circumstances, simultaneously exemplify.

    That would just be a deliberately perverse use of language, and you can do that in classical situations as well.

  10. Disagreeable Me (@Disagreeable_I)

    Hi Robin,

    What is the difference between determinism and superdeterminism?

    Superdeterminism is determinism with the addition of some pretty strange and stringent constraints on how the universe is set up. It’s a bit like the idea that the initial state of the universe was set up just so that Bell’s inequalities would never be violated, as if by coincidence.

    It’s not just determinism, anyway. On plain old determinism, there’s no reason to think that the hidden variables would correlate with the decisions of the experimenter who is going to measure them. Everything is still determined but there is no reason to expect to see such clear patterns of correlation in such ostensibly unrelated particulars. Superdeterminism is implausible because it’s hard to see how it could possibly work. It’s almost like the ultimate conspiracy theory, but the conspirators are all the particles in the universe.

  11. brodix

    The basic process of determination makes a lot more sense when you think of time as an effect of action, and that events have to occur in order to fully compute the total input.
    Determinism is based on the premise of all input being known, which assumes an absolute objectivity.
    The problem is that time is not some foundational block time, which can be viewed from that godlike perspective, but a dynamic of occurrence.
    There is no future out there, waiting for you. It emerges from the dynamic of occurrence.
    It is a construct of our mental function to project past events onto the future and assume a singular stream of events, from this point in time, but the light cone of output from any event will not encompass the light cone of input into any other event, no matter how closely they occur.

  12. Robin Herbert

    In “Quantum Mechanics: The Theoretical Minimum”, an otherwise excellent book, Leonard Susskind tries to show that classical logic breaks down under QM, but his argument is incoherent.

    He begins with two propositions:

    A: The z component of the spin is +1
    B: The x component of the spin is +1

    So that (A or B) is “The z component of the spin is +1 or the x component of the spin is +1″

    He then says to consider testing the proposition (A or B) by measuring first the z component and then the x component. He says that if the measurement of the z component gives +1 then we have found that (A or B) is true.

    Then he says to consider testing (B or A) on a particle prepared in the identical state, by testing the x component and then the z component, and that we might find that both B and A were false.

    From this he concludes that in QM the inclusive or is not symmetric, that (A or B) does not imply (B or A).

    But this makes no sense (and if you doubt that he is really saying this I invite you to get a preview of the book and read for yourself. I had to read it several times to confirm that he really was saying this).

    If we find that (A or B) is true then we have also found that (B or A) is also true, by the very definition of or.

    He is only getting different results because the measurement is changing the property of the particle.

    We could get just the same thing in classical physics with, say:

    A: The car is in the driveway
    B: The car is stationary

    So we test (A or B) by first testing whether the car is in the driveway and finding that it is; therefore (A or B) is true.

    Then I begin with the car prepared in exactly the same state as it was in the first test and test B first and find that the car is not stationary. Then I test A and find that the car is not in the driveway, so (B or A) is false.

    By Susskind’s logic I have discovered that inclusive or is not symmetric even in the world of classic physics.

    But of course it is nonsense; the symmetry of the inclusive or is not a function of which proposition you test first, nor an ontological claim about the way the world is. It is simply part of the definition. If I find that (A or B) is true then I have also found that (B or A) is true, by definition.

    And that goes for Susskind’s example as well as mine.
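
    A toy Python simulation of that point, with the preparation and measurement axes chosen purely for illustration (a generic projective-measurement sketch, not Susskind’s own setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Spin operators for a single spin-1/2 particle (Pauli matrices).
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    def measure(state, op):
        """Projective measurement of op: return (outcome, post-measurement state)."""
        vals, vecs = np.linalg.eigh(op)
        amps = vecs.conj().T @ state          # amplitudes in the eigenbasis
        probs = np.abs(amps) ** 2
        k = rng.choice(len(vals), p=probs / probs.sum())
        return vals[k], vecs[:, k]            # collapse onto the chosen eigenstate

    # Prepare a particle with x-spin = -1, so its z-spin is undetermined.
    prepared = np.array([1, -1], dtype=complex) / np.sqrt(2)

    # "Test (A or B)": measure z first, then x.
    z1, state = measure(prepared, sz)
    x1, _ = measure(state, sx)

    # "Test (B or A)" on an identically prepared particle: x first, then z.
    x2, state = measure(prepared, sx)
    z2, _ = measure(state, sz)

    print("z then x:", z1, x1)  # z comes out +1 or -1 at random
    print("x then z:", x2, z2)  # x is always -1 here; z then comes out at random
    # The different outcomes reflect the disturbance caused by the first
    # measurement, not any failure of the symmetry of "or".
    ```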

  13. Robin Herbert

    Hi DM,

    If plain old determinism was true and there were hidden variables then you would find a correlation between the states of the hidden variables and the choices of the experimenters.

  14. Disagreeable Me (@Disagreeable_I)

    Hi Robin,

    He is only getting different results because the measurement is changing the property of the particle.

    Yes, this is the key point. Or rather, the measurement is causing the property to take on a definite value, either by collapsing the wavefunction or by entangling the observer with the particle, depending on whether you think the wavefunction collapses or not.

    (Or, I guess, the measurement propagates backwards in time and sets the property of the particle. Or, on superdeterminism, the very decision to test B before A means the particle is not prepared in the same state, because the state of the particle is sensitive to such decisions).

  15. Robin Herbert

    In other words, superdeterminism only requires a) determinism is true and b) the “action at a distance” behaviour of entangled particles is explained by hidden variables.

    At least as I understand it.

  16. Robin Herbert

    Hi DM,

    “Where do you get this idea from?”

    I don’t remember where I read it, but it is basically, as I understand it, the only way left of keeping hidden variables alive.

  17. Disagreeable Me (@Disagreeable_I)

    Hi Robin,

    Superdeterminism is not more deterministic than determinism. It is the idea that not only is the universe determined, but that it is determined in a very particular way so as to yield just what we observe in quantum experiments.

    Hidden variables are not an explanation for action at a distance. Hidden variables are just the idea that quantities such as spin have definite values before being measured.

    There are many ways of reconciling determinism with quantum mechanics. The many worlds interpretation is one, which dispenses with hidden variables. Non-local interpretations such as Bohmian mechanics are another and can include hidden variables. Superdeterminism is the only one that posits a correlation between hidden variables and the decisions of the experimenters. That makes it implausible in the eyes of almost everyone.

  18. Robin Herbert

    Hi DM,

    I am sure I will be corrected if I am wrong but as I understand it, the hidden variables idea is that separated entangled particles are a little like a pair of shoes in a box that are separated without keeping track of which is the right and left shoe. One shoe is sent to Alice Springs, the other to Luton.

    If I am in Alice and check which shoe I have and find I have a left shoe then I know straight away that the shoe in Luton is a right shoe even if no one has looked at it.

    Is that the basic idea?

    I also understand that experiments using Bell’s theorem have ruled out that this situation applies to entangled particles, so if there are hidden variables, they can’t just be definite values the particles have before measurement.

    Unless, that is, they also correlate to the choices of the experimenter.

  19. Robin Herbert

    Suppose there were faster than light particles and we could receive a message before it was sent.

    So we set up a piece of equipment that has two states and we wait for the message. If the message says “The machine was set to state B” we set the machine to state A. If the message says “The machine was set to state A” we set it to state B. Then we send the message containing the true state of the machine.

    Some say that this is a paradox that shows that there cannot be faster than light particles. But others point out that there is no paradox at all, that, if there were FTL particles, this experiment could be set up and we would always find that the machine would malfunction or that the message would be garbled or perhaps we would find ourselves lying in the message about the state of the machine without being able to help it.

    This is, they point out, no more mysterious than trying to make 15 ball bearings fit into a box that only has enough room for 14 ball bearings. We would just be trying to do something physically impossible and failing.

    We would not be able to see what it was that was preventing our sending the true message, it would be some complex interaction of cause and effect of the physical stuff around us, but it would be no more mysterious than trying to fit 15 ball bearings into a box that could only hold 14 and failing.

    So “superdeterminism” would be no more mysterious than this. If the particles had some definite value before they were measured then this definite value could have some effect on the environment and, by a complex process of physical interaction, have an effect on what the experimenters’ choices are. Their choices would be determined by what is physically possible given the state of the universe at that time and facts about the way the universe is.

    And this is no different to any interaction in a deterministic universe. As I hold my finger above “OK” and “Cancel” it seems to me that touching either word is possible. I can see no reason why one of these options would be impossible. But if determinism is true then at least one of these is already impossible as I hold my finger above the screen.

    The reason would not be obvious, it would be some complex interaction in my physical environment. And the same would go for everything we do.

  20. synred

    He begins with two propositions:
    A: The z component of the spin is +1
    B: The x component of the spin is +1
    So that (A or B) is “The z component of the spin is +1 or the x component of the spin is +1″

    If he says this he is asking a silly question.

    If the state of the particle is |+1>_z then it is also the state (|+1>_x + |-1>_x)/sqrt(2). It’s the same state expressed in a different basis. If you measure x-spin then the probability of getting +1 is 50% and the probability of getting -1 is 50%. If you measure z-spin the probability of getting +1 is 100% and the probability of getting -1 is 0%. If you choose a basis not orthogonal to z and do your measurement along that direction, you can get all sorts of different probabilities. There’s no violation of logic.

    The appearance of logic violation only arises when you ask a non-QM question of a QM system.

    I’m surprised somebody like Susskind would say there is. Maybe I should ignore some of his weirder string ideas too …
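
    A minimal check of those numbers with a few inner products (illustrative Python, nothing more):

    ```python
    import numpy as np

    z_plus = np.array([1, 0], dtype=complex)                 # |+1>_z
    x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+1>_x in the z basis
    x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |-1>_x in the z basis

    # Same state |+1>_z, different measurement choices:
    print(abs(np.vdot(x_plus, z_plus)) ** 2)   # 0.5 -> x-spin = +1 with 50%
    print(abs(np.vdot(x_minus, z_plus)) ** 2)  # 0.5 -> x-spin = -1 with 50%
    print(abs(np.vdot(z_plus, z_plus)) ** 2)   # 1.0 -> z-spin = +1 with 100%
    ```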

  21. synred

    >If plain old determinism was true and there were hidden variables then you would find a correlation between the states of the hidden variables and the choices of the experimenters.

    I don’t think so! If you knew the hidden variables you could predict the results of your experiment. However, Bell’s inequalities can be violated because Bohmian mechanics is non-local.

  22. synred

    If I am in Alice and check which shoe I have and find I have a left shoe then I know straight away that the shoe in Luton is a right shoe even if no one has looked at it.

    That kind of correlation is perfectly classical. It will not violate Bell’s inequalities.

  23. Robin Herbert

    “That kind of correlation is perfectly classical. It will not violate Bell’s inequalities.”

    Which is, again as I understand it, why those experiments showed that this is not the situation with entangled particles

  24. davidlduffy

    DM and RH: “Quantum logic”: this term goes back to Birkhoff and von Neumann, but yes, it is only really a problem if you want a quantum realism that allows deductions to be made about objects and properties using classical logic.
    Friedberg and Hohenberg’s (2014) discussion of an ontology based on the consistent histories approach
    https://arxiv.org/abs/1405.1961v1
    suggests that von Neumann-type logic is equivalent to classical logic without the distributive law. Section III gives their version of the Susskind argument; maybe it is easier to understand.

    “Superdeterminism”: a defence involving consistent histories is
    https://arxiv.org/abs/1609.08148

  25. synred

    Which is, again as I understand it, why those experiments showed that this is not the situation with entangled particles

    Exactly.

    Bell’s calculation is classical. It determines the maximum correlation that can occur in a situation like you describe.

    An entangled state specifies, e.g., that the spins are opposite, w/o specifying what the spins are.

    The reality would be that two electrons with opposite spins are traveling to Alice Springs and wherever. The state might be (|+>|-> - |->|+>)/sqrt(2). That is the only reality there is while the electron excitations are in transit (the minus sign in between is there because electrons are fermions; for photons it would be a +). The state is a superposition of two excitation patterns. It makes perfect sense in QFT, which is closer to the wave picture than to the particle picture (though the excitations are not exactly particles or waves in the classic sense, but something else; mathematically they are well understood).

    In the ‘collapse’ picture one or the other is tossed when a measurement occurs; in many worlds both are kept, but the complexity of the measurement process makes it so the two ‘worlds’ are orthogonal and never influence each other again.

    This leads to correlations bigger than classically possible. That is what has been observed in the Bell experiments.

    It’s weird for sure from the human perspective.
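
    A quick numerical illustration of “correlations bigger than classically possible”, using the textbook singlet correlation E(a, b) = -cos(a - b) in the standard CHSH combination; this is a sketch of the quantum prediction, not of any experimental data:

    ```python
    import numpy as np

    def E(a, b):
        """QM prediction for the spin correlation of a singlet pair measured
        along directions at angles a and b."""
        return -np.cos(a - b)

    # Standard CHSH measurement angles (radians).
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # ~2.83 = 2*sqrt(2), above the bound of 2 that any local
                   # hidden-variable account (Bell/CHSH) must satisfy.
    ```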

  26. Robin Herbert

    “I don’t think so! If you knew the hidden variables you could predict the results of your experiment. However, Bell’s inequalities can be violated because Bohmian mechanics is non-local.”

    Even a non-local hidden variable could, in principle, be used to infer the spin of a particle without explicitly measuring it.

Comments are closed.