Why Trust a Theory? — part III

[Notes: what follows are only lightly edited notes taken while the meeting was in progress, so they are more likely than usual to contain typos and odd phrasing; also, apologies to my readers, but I may not be able to participate much in the hopefully interesting discussion at PlatoFootnote about this series of posts, as I'm trying to also get some sleep here in Munich… Below and throughout, comments in brackets are mine.]

This is the last installment of my coverage of the workshop on the current status of fundamental physical theory, organized by philosopher Richard Dawid at the Center for Mathematical Philosophy based at the Ludwig-Maximilians University in Munich, Germany. The conference was prompted by a high-profile, controversial position paper published in Nature by George Ellis and Joe Silk (who are co-organizers of the workshop), entitled "Defend the integrity of physics."

The first speaker of the day was one of the co-organizers, George Ellis, on "Limits in testing the Multiverse." The idea of a multiverse has received increased attention, and it has been linked to the Everett ("many worlds") interpretation of quantum mechanics. Martin Rees and Max Tegmark have been among its most prominent supporters. According to Brian Greene there are a whopping nine different types of multiverse. They can't all be true, since they conflict with each other. And of course maybe none of them is real, and there is only one universe.

The underlying impulse seems to be a philosophical principle along the lines of "everything that can happen does happen." The multiverse is also claimed to be a necessary outcome of chaotic inflation, and it is invoked as an explanation for life and human consciousness. The key point to keep in mind is that the domains being considered lie beyond the particle horizon that is actually observable to us. This means that no observational data whatever are available, and this situation will not change. To think otherwise is hubris on a vast scale.

Arguments in favor include the “slippery slope” [a fallacy, in favor?], where people say, well, there are likely galaxies beyond the observable limits, so… Right, but this assumes an extrapolation over huge scales, and crossing qualitatively different physical domains. Moreover, if the idea of extrapolation is taken seriously, it actually leads to the old idea of spatial homogeneity forever, known as the Cosmological Principle.

A second argument is that the multiverse is implied by physics, specifically by chaotic inflation. But, again, this is a case of extrapolating known to unknown physics. That extrapolation is unverified and unverifiable.

The problem, in a sense, isn't with extrapolation per se (which is a time-honored practice in science). It is about how far it is reasonable to extrapolate, or how far behind one has left the data (or established, tested theory).

Inflation is not yet a well defined theory, and the available data are predicted by some versions of inflation but not others. In particular, not all inflation is chaotic. The Encyclopedia Inflationaris considers a whopping 195 different models of inflation. [Again, underdetermination of theory by data rears its ugly head.] At the moment, the data do not actually prefer a universe containing bubbles.

Third argument: fine tuning. Here a multiverse is one of the possible explanations: an infinite set of universe domains allows all possibilities to occur, so somewhere things work out okay — though one ought to be able to demonstrate that the proper multiverse is a physical reality derivable from the theory, not just a conceptual possibility. In a sense the multiverse is used to make the highly improbable appear probable.

The problem is that this is a probability argument predicated on the existence of the multiverse. It is obviously inapplicable if there is only one universe. We can do many observations of that one object (the known universe), but we still have a sample size of one, as far as universes are concerned.

No value of the cosmological constant can prove whether a multiverse does or does not exist. At best the multiverse idea is simply consistent with an anthropically constrained universe, but that doesn’t get us much of theoretical value.

Tom Banks wrote a paper entitled "The top 10^500 reasons not to believe in the landscape," in which he argues that the string landscape is a fantasy. You may or may not agree with that paper, but it shows that the physics underlying the multiverse is anything but uncontroversial or established.

[Interestingly, throughout Ellis' talk, David Gross — a supporter of string theory — kept nodding vigorously in the first row… He made it clear that he doesn't believe in the multiverse.]

Now chaotic inflation could be disproved by observing that we live in a small universe. This is currently being tested, by checking whether we can see "all around" the universe. At the moment it doesn't appear to be so, but it is an open question. The other possibility concerns confirmation (rather than disproof), based on the idea of collisions among bubbles, which could leave observable traces if the nucleation rate is large relative to the rate of expansion. So far, nothing.

An additional issue concerns the claim of physically existing infinities, which the author says simply cannot be achieved. "The infinite is nowhere to be found in reality," according to David Hilbert. Talk of infinity is not scientifically testable. [Here Gross visibly reacted in the negative, and with obvious condescension…]

A second problem is posed by vacuum energy. Quantum Field Theory suggests a huge cosmological constant, discrepant with General Relativity if vacuum gravitates [hmm, I’m afraid I don’t know what the latter means].

The broad problem is that the multiverse makes few if any testable predictions, but it does explain everything. But successful scientific theories are those that make lots of testable predictions, not those that provide all-encompassing explanations.

Ellis argues that Smolin’s Darwinian cosmological natural selection is the best alternative on offer. [I don’t, see here] Still, it is incomplete in a number of ways. [Others during the discussion pointed out that it has recently failed a crucial test.]

There are also worse proposals, like the idea that the universe is a simulation, which raises far more questions than it actually answers. It confuses science fiction with science.

Talk of Popperazzi and falsifiability police is emotional talk and says something about just how thin the critics' argument really is. What data would allow supporters to abandon the multiverse idea? If the answer is none, then that's dogma, not science.

Ellis also criticized the use of Bayesianism to support non-empirical confirmation, since Bayes' theorem actually requires the addition of new data (not just new theory) to change the priors. [I actually think this is an excellent point.]
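[A minimal sketch of the point, in my own notation rather than anything from Ellis' slides: Bayes' theorem says

P(T|D) = P(D|T) x P(T) / P(D)

where T is the theory and D the data. The posterior P(T|D) can differ from the prior P(T) only through the likelihood term P(D|T), i.e. only if some new datum D actually comes in; a purely theoretical argument can at best change how one chooses the prior itself, which, if I understood him correctly, is exactly Ellis' complaint.]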

Next: Joseph Polchinski (in absentia, played by David Gross) on "String Theory to the Rescue." [A pre-emptive strike about this talk has been published here, based on a version of the paper that has already been uploaded to arXiv.] The talk addresses both string theory and the multiverse. Gross said at the outset that he agrees with the former but not the latter. [This was, of course, not at all warranted and rather discourteous to the missing speaker, but whatever.]

Planck length, time and mass can be considered universal constants of nature, to be used throughout the cosmos as natural units. They were arrived at in 1899, before both quantum mechanics and general relativity. The Planck length scale (10^-33 cm) is so far away from what current physics can probe that it directly affects the tools and strategies physicists have available to make progress.
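[For reference, and not from the slides, these are the standard definitions, built out of ħ, G and c: the Planck length is (ħG/c^3)^1/2 ≈ 1.6 x 10^-33 cm, the Planck time is (ħG/c^5)^1/2 ≈ 5.4 x 10^-44 s, and the Planck mass is (ħc/G)^1/2 ≈ 2.2 x 10^-5 g, equivalent to about 1.2 x 10^19 GeV, roughly fifteen orders of magnitude beyond the energies probed at the LHC.]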

Theory has occasionally been able to leap gaps of this type, so we should try, and we should be wary of defining science too rigidly. [Not of defining it too loosely?] It is strange to say, as Smolin does in one of his books, that it is unfortunate that string theorists are not like Einstein, meaning not philosophically savvy. [Here Gross chuckled, stating that he wouldn't know, since he doesn't read "those books."]

Solving the short distance problem is one of the successes of string theory. General relativity and quantum mechanics, when applied to short distances, yield infinities that are non-renormalizable, resulting in something called spacetime foam. This sort of thing already happened to Fermi's theory of the weak interaction, which gave physicists a clue to the discovery of the W, Z and Higgs bosons.

Another success of string theory: uniqueness of dynamics. String theory was discovered in an incomplete and approximate form, but it is a discovery of mathematical-physical structures that actually exist. [Mathematical Platonism?]

String success: physics from geometry. According to general relativity, gravity is the curvature of spacetime. So it is an attractive idea that the other interactions come from the geometry of spacetime as well. But GR has used up the spacetime that we know, so we need more spacetime. String theories require extra dimensions, plus branes, of a form nicely compatible with the rest of physics.

String success: duality between gauge fields and strings. This is related to the quest for equivalences between classical and quantum theories. Here the strings themselves (as well as general relativity, branes, and bulk spacetime) are emergent, and the best context in which to understand the theory is holography.

Another success of string theory: the relationship between black hole entropy and information. [See yesterday’s talk by Chris Wüthrich.]

Moving to the multiverse. If the physics that we see depends on the (compactified) geometry of spacetime, what determines that geometry? How many solutions does the 10D Einstein equation have that look minimally like our world? Estimates vary from 10^9 vacuum solutions, to combinatorially — with matter — anything between 10^500 and 10^272,000 [yep, you read that last number correctly…]. Einstein encountered a tiny version of the landscape, the radius of the Kaluza-Klein circle.

If there is a landscape, what determines its geometry in the new physics? Dynamics erases much of the initial conditions. Given a positive cosmological constant, we get expansion, then localized tunneling to a new vacuum, and repeat. So: GR + QM + landscape => multiverse. [Aahh…]

A multiverse is the price we pay for getting physics from geometry [too expensive, say some]. Yet there is evidence that we live in just such a multiverse. This is because all theories that predict a value of the cosmological constant (no supersymmetry, unbroken supersymmetry) get it wrong; the only options are theories where the constant is a free parameter in the dynamical equations, or the multiverse.

Of the various kinds of theories considered, only the multiverse predicts that observers see an unnaturally small (meaning, below the “natural” scale) cosmological constant. But these conditions do not require the constant to be zero, so a small nonzero value is predicted.

It is often said that the discovery of dark energy came as a surprise, except to cosmologists who had paid attention to the data without theoretical prejudice, and to those who had tried to solve the cosmological constant problem by conventional means, knew how hard it is, and were aware of Weinberg's prediction on the subject. The vacuum energy was a prediction of the multiverse. [I bet this claim is gonna be controversial.]

The section of the talk on the multiverse concluded with a "quasi-Bayesian" estimate of the likelihood that there is a multiverse: 94% — and this is a conservative argument. [Laughter in the audience, though I don't think this was intended as a joke…] The multiverse is on a similar footing as the Higgs, to which Polchinski attached a prior of 99.9%. [Even I can tell that this analogy is, shall we say, a stretch?]

The first speaker after coffee break was Elena Castellani on “Scientific Methodology: A View from Early String Theory.” Questions about scientific methodology are raised by dramatic scale changes in fundamental physics.

Traditionally, philosophers of science are concerned with the modalities followed in building theories, i.e. questions about discovery; they are also concerned with the modalities followed in assessing scientific theories, i.e. questions about justification. They are moreover concerned about the relationship between these modalities, since they are not sharply separated, as well as the influence of external aspects, such as sociological and economic ones.

The problematic aspects for the participants in this workshop concern assessment methodology, not theory building. We have assessment based on empirical support, which is problematic for scientific reasons, both technical and theoretical. Philosophers have little to contribute here, since these are scientific issues. But we also have "extra-empirical" support, which is problematic for philosophical reasons.

There are general meta-strategies for extra-empirical support: for instance Dawid’s three arguments discussed on the first day. There are also more specific internal criteria: consistency, unifying power, generality, simplicity, fertility, explanatory power, elegance, beauty, etc.

Another internal criterion is the convergence argument: the convergence of results obtained in alternative, independent ways, even beginning from different starting points. An example comes from early string theory, from the work by Veneziano (1968) to the first string revolution in 1984.

This is an interesting period because it illuminates the origin of ideas such as strings, duality, supersymmetry, extra dimensions, as well as of mathematical techniques that are basic ingredients in today’s fundamental physics. Historical case studies like this are “data” informing discussions in philosophy of science.

First phase (1968-1973): the so-called dual theory of strong interactions, which was falsified as such. Second phase (1974-1984): those features that were a drawback of the theory as a description of hadronic physics (spin-one and spin-two massless particles and extra dimensions) were taken to reveal the true nature of string theory, leading to the reinterpretation of the theory as a unified quantum theory of all fundamental interactions.

In the second period the theory was regarded as so beautiful and having such a compelling mathematical structure that it had to relate to the physical world. This was sufficient motivation to pursue it.

The focus of the author was on the first phase, in particular the string conjecture, which originated in 1969 from three independent authors, in order to arrive at a deeper understanding of the physics described by dual amplitudes. The starting point was Veneziano’s discovery in 1968 of the dual amplitude for the scattering of four mesons. This is a case of discovery, illustrative of both the rationale leading to apparently bold guesses and the kind of evidential support motivating a theory’s progress.

Early work arrived at the conclusion that the number of spacetime dimensions is 26 via three different routes (with a fourth one added later on, in 1981). It was eventually shown that d=26 is required for a well-behaved (ghost-free) Hilbert space. Of course, this theoretical beauty was achieved at the price of 22 extra dimensions. Nonetheless, these gradually became accepted, because what appeared to be a surprising convergence of different calculation procedures on the same result could be seen as a natural consequence of the theory in its full-fledged form.

Castellani concluded that this is the sort of non-empirical support that has kept string theory alive and well for all these years.

Last talk of the morning: Dieter Lüst on "Aspects of Quantum Gravity." Rather than asking whether we can trust a theory, we should ask how far we can go in theoretical physics. So far in the history of physics we have been able to probe shorter and shorter distances by accessing higher and higher energies. Extrapolation has always been a good tool, but does it work in the regime of quantum gravity?

The basic picture from quantum mechanics includes the wave-particle duality, Heisenberg’s uncertainty principle and phase space quantization. The other “column” of modern physics, of course, is general relativity. The problem is that the two seem to be mutually incompatible. QM and GR clash at short distances in a variety of ways, leading to infinities. Related questions concern the quantum structure of gravity, the quantum structure of spacetime, the emergence of geometry and gravity, and the storage of information in quantum gravity. Nobody has answers to these issues as yet, but string theory is the most promising framework. [It has been “promising” for quite some time now…]

Graviton scattering: in quantum gravity, the gravitational force is mediated by the exchange of a massless spin 2 particle, the graviton. Are gravitational waves going to be discovered soon, by LIGO? Meanwhile, one can use a corpuscular model of black holes, in which a black hole is treated as a Bose-Einstein condensate of N gravitons. [Sure, why not… This is one of a number of talks at the meeting that have unfortunately missed the mark. The whole idea of the workshop wasn't to provide a platform for string theorists to discuss details of string theory and where it may go. It was to focus on a meta-discussion, if you will, of string and multiverse theories vis-a-vis scientific methodology. Oh well.]

String theory, the big picture: particles are string excitations, the spin 2 graviton is always part of the string excitations, it includes the other forces of nature, among other things. We can develop a good understanding of black hole entropy in terms of string states. String theory provides a consistent framework. [There is a lot of emphasis on consistency, which of course is a mathematical/logic requirement, but fairly minimal as far as scientific theories go.]

Of course there is the problem of the landscape, currently estimated at 10^1500 [notice how different people give wildly diverging estimates of the exponent]. But according to Kane [see his controversial talk from yesterday] the theory still has predictive power.

And we are back from the lunch break, with Sabine Hossenfelder and her talk on “Lost in Math.” Theory assessment, non-empirical or not, relies on judgment, which in turn can be trusted only if it is objective. Problem is, cognitive science tells us that human judgment is typically biased. Scientists have of course always suffered from cognitive biases, but the problem becomes much more urgent when data is harder to come by. [I.e., scientists rationalize just as much as anyone else, and if they are free to do that without empirical constraints, then…]

Hossenfelder declared herself offended by an earlier snide remark by David Gross about bloggers. She said that, as one of the few professionals who blog about fundamental physics, she sees her job as that of "cleaning up" the mess often left behind by science journalists.

The problem with fundamental physics is that it takes a long time to come up with new experiments, which in turn has led to a focus on mathematical consistency. But consistency is clearly insufficient. [See my comment above.] There are infinitely many consistent sets of mathematical axioms; we need to pick the right one(s) by comparison with observation.

The mathematical universe of Max Tegmark is "pretty much useless" because it does not help us pick the right axioms. Multiverses are merely signs of underconstrained theories. They are a symptom of trying to explain everything with nothing. [Ouch!]

Physicists use many assumptions that are never explicitly stated, and they are not required by mathematical consistency. That’s what gets us “lost in math.”

Physics also features “hidden rules,” having to do with the relevance of simplicity, naturalness, elegance and beauty. These requirements get converted into technical statements and their status as assumptions is forgotten.

Beauty used to be a sign of godly influence (e.g., in Newton), then it became a bonus (Poincaré), then a guide (Hermann Weyl), and finally a mandate (Dirac). As physicists have become more successful they have also become convinced that they can understand nature by introspection. [Double ouch.]

[It strikes me that all this talk of equating beauty and truth is uncharacteristically Platonic…]

Beauty as a guide actually has a pretty bad track record. Many theories were considered beautiful and turned out to be wrong. Contrariwise, some successful theories were considered ugly when proposed.

What about naturalness? It has worked at times (e.g., top quark) but not others (e.g., cosmological constant, Higgs mass). Naturalness too is an aesthetic, not a mathematical principle. It’s empty without assuming a probability distribution.
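[A concrete illustration, mine rather than Hossenfelder's: naturalness roughly demands that dimensionless parameters, and ratios of physical scales, be of order one unless some symmetry explains why they are not. The observed cosmological constant, at roughly 10^-120 in Planck units (a number that also comes up in the comments below), and the Higgs mass, at about 10^-17 of the Planck mass, are the canonical "unnatural" values; but calling such values "improbable" already presupposes a probability distribution over the possible values, which is, as I understand it, exactly her point.]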

Any theory is infinitely fine tuned because we have to pick a few sets of consistent axioms out of infinitely many.

Simplicity: by itself it is an utterly empty principle. It only makes sense to require a theory to be as simple as possible given certain data. But simplicity acts in a powerful way through the expectation that fundamental theories should be simpler. This has manifestly not been the case, otherwise we should have stopped with the ancient Greeks.

Perception of simplicity depends on exposure: the more familiar you are with a theory, the simpler it seems. This is a phenomenon well known in the arts and music; it ought to be acknowledged in physics as well.

Beauty, naturalness and simplicity are human desires. They have a mixed score card, and we have no reason to think that nature conforms to them.

Practitioners come to believe that these criteria are not hypotheses, but rather consistency requirements. This leads to two problems: a) people forget that the criteria are hypotheses ("lost in math") and b) the selection of criteria is affected by cognitive and social bias – which then don't get corrected because they are not recognized as choices to begin with. This is what limits the space of theories that physicists explore in the first place, thus weakening the no-alternatives claim.

Scientific progress relies on objective expert judgment, which is hindered by social and cognitive biases, which need to be explicitly addressed in order to do good science. [Seems to me this was one of the best and most on target talks at the workshop.]

Next-to-last talk: Karim Thébault on "What can we learn from analogue experiments?" In 1975 Hawking derived a semi-classical result associating a radiative flux with the black hole event horizon. Direct testing of his prediction seemed and still seems impossible. However, in 1981, Unruh showed that Hawking's arguments can be applied to sonic horizons in fluids. In 2014, Steinhauer's group claimed to have created an analogue charged black hole within an atomic Bose-Einstein condensate.

Should we think of the results of analogue experiments as providing empirical evidence for a given notion, or as analogies?

Experimental arguments from analogy have a long history in science. For instance, in medicine, the fact that a given substance X has an effect on experimental animals similar to another substance Y has been used to infer that if X works in humans, then Y will too, with positive results.

Formally, this sort of argument by analogy is invalid. Analogies merely establish plausibility, not proof, giving us reasons to proceed with further research. The suggestion is that if arguments that can provide external validation are added to analogue experiments, then the latter can be considered as going beyond arguments by analogy and moving into the realm of actual experimental confirmation, the gold standard in science.

The author then developed an argument [much of it very, very technical] for the closeness between hydrodynamics and the original context of Hawking's calculations, where the role of the black hole event horizon is played by the effective acoustic horizon. As he put it, the black hole is replaced by a dumb hole. The general form of the argument can be modeled using Bayesian confirmation theory, which allows analogue experiments (under the specified conditions, which require the availability of external validation) to provide confirmation in the Bayesian sense of the term.
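[A gloss of mine, not anything on Thébault's slides, for readers unfamiliar with the jargon: in Bayesian confirmation theory a piece of evidence E "confirms" a hypothesis H simply when it raises its probability, i.e. when P(H|E) > P(H). The claim, as I understood it, is that an analogue experiment backed by independent arguments for external validation can supply an E of this kind for Hawking's hypothesis, even though no astrophysical black hole horizon is ever observed.]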

The last talk of the three-day workshop was by Georgi Dvali on "Secret quantum lives of black holes and dark energy." [Another speaker who wrote his slides out by hand! Is this a new trend among Slavic physicists? Not a good one, not a good one. But at least his accent was not as incomprehensible as Mukhanov's yesterday.] The author is concerned with how black holes process information [which of course is only marginally related to the actual topic of the workshop].

Classical black holes carry little information because they are featureless. Quantum mechanical black holes, however, do carry a huge Bekenstein entropy. [He was talking as if the two were different objects, which is true mathematically, though I thought this was a conference about physics.] But in the classical limit black holes carry infinite information, except that it takes an infinite amount of time to decode it.

[Unfortunately, the rest of the talk was a technical (obviously, theoretical) discussion of black holes physics, which was entirely irrelevant to the topic of the workshop, so I will not bore you with it.]

That’s really it, folks! It was a very interesting experience, not only because I learned quite a bit about physics and ongoing discussions among leading physicists, but because I got an invaluable first-person experience of the personalities and attitudes behind the controversy. You should keep an eye on the conference web site, where the videos of the talks will soon be published. The organizer, Richard Dawid, is also working on publication of the proceedings, either in book form (he is in conversations with Oxford Press, at the moment of this writing) or, less desirably in my mind, as a special issue of some journal. So stay tuned, there is much more to come, the string wars continue…

This update comes from Lee Smolin, concerning the alleged falsification of one of the predictions derived from his model of cosmological natural selection:

“Sabina was referring to this paper. From the abstract it claims the ‘measurement of a 2.01 +/- 0.04 solar mass pulsar.’ This is clearly not in contradiction with my prediction of an upper mass limit of 2 solar masses. For the reasoning and other checks of this prediction, See page 20 and reference 31 of Lee Smolin, ‘A perspective on the landscape problem,’ Invited contribution for a special issue of Foundations of Physics titled: Forty Years Of String Theory: Reflecting On the Foundations, DOI: 10.1007/s10701-012-9652-x arXiv:1202.3373. That reference 31 is: James M. Lattimer, M. Prakash, What a Two Solar Mass Neutron Star Really Means. 4. arXiv:1012.3208 , to appear in Gerry Brown’s Festschrift; Editor: Sabine Lee (World Scientific), which is by the people who did the nuclear physics that the 2 solar mass limit is based on. So I think its correct to say that this prediction is still standing up.”

78 thoughts on "Why Trust a Theory? — part III"

  1. Coel

    Morning Massimo,

    What data would allow supporters to abandon the multiverse idea? If the answer is none, then that's dogma, not science.

    What data would lead supporters to abandoning the idea that the Big Bang happened once and once only? Is that idea also “dogma”?


  2. Massimo Post author

    Coel,

    Simple: any data, any at all, that would actually provide positive clues to the existence of the multiverse. Any, really. Otherwise it’s science-informed metaphysics. And I mean that in the positive sense of the word.


  3. Philip Thrift

    The leap to ‘reify’ a model or theory (“Electrons are real”, but are they? e.g. There are no particles, there are only fields) seems to be a common reflex in a desire to want to know (or to say that one knows) what’s ‘true’.

    But perhaps all our models and theories are nothing more than lucky ‘successes’ of reverse engineering a noumenological reality. And that’s the extent to which we can ‘trust’ a theory.


  4. Coel

    Hi Massimo,

    Simple: any data, any at all, that would actually provide positive clues to the existence of the multiverse. Any, really.

    If you’re happy with giving that answer, then my answer to your question:

    “What data would allow supporters to abandon the multiverse idea?”

    Would be:

    Simple: any data, any at all, that would actually provide positive clues that there really is only one Big Bang. Any really!

    By the way, why doesn’t Weinberg’s prediction of a tiny but non-zero cosmological constant count as a “positive clue to the existence of the multiverse”? It was a verified prediction from a multiverse model, and so far we don’t have any better explanation.

    Now, I don’t regard that as *strong* evidence, only as weak evidence. But, given one Big Bang, if we start from the Bayesian priors for “… and only once” being no higher than for “multiple times” (and I see no reason at all for it being higher), then Weinberg’s successfully verified prediction budges things a tiny bit towards “multiple times”.


  5. Disagreeable Me (@Disagreeable_I)

    Hi Massimo,

    My basic problem with this argument is that what is likely true is too often conflated with what is scientific and falsifiable. The multiverse may not in all its incarnations be scientific, but that doesn’t mean it isn’t likely to be true. We may very much want an explanation that makes testable predictions, and by all means we should continue to look for one, but we shouldn’t delude ourselves into thinking that such an explanation must exist.

    (Some forms of the multiverse probably are scientific, especially many worlds and inflationary, since they arguably make predictions — quantum computation and potential collisions between inflationary bubbles).

    > They can’t be all true, since they conflict with each other.

    This doesn’t seem right to me. For instance, if the MUH is true, it would entail all other kinds of multiverses (as long as each is mathematically well-defined and internally consistent). On the other hand, what you or the speaker might be getting at is that some multiverses are incompatible in the sense that our particular universe couldn’t be part of two incompatible multiverse types at the same time.

    > What data would allow supporters to abandon the multiverse idea? If the answer is none, then that's dogma, not science.

    I don’t think so. It’s neither dogma nor science. There are philosophical reasons to believe in the multiverse (so it’s not dogma), and for this belief to be unfalsifiable in principle (so it’s not science), since we expect other universes to be completely cut off from us. There is nothing (short of a good argument I haven’t thought of) that could convince me the multiverse idea is false, but this is not dogma — I’m not denying or explaining away evidence. Rather there simply cannot be evidence to refute it because of the nature of the idea.

    That’s not to say that I disagree with Coel’s answer. I think he’s right, but I also happen to think that “any data, any at all, that would actually provide positive clues that there really is only one Big Bang. Any really!” cannot exist in principle.

    > Glad to hear that. So I can safely ignore it now.

    Umm, you seem to be saying that you can ignore the idea of the MUH because it is mere philosophy. That seems scientistic, and I know you are not scientistic yourself, so I must be misunderstanding you.

    > And I don’t even need to give too much credit to Max, since he’s just repeating in fancier language the principle of plenitude, which goes back to Plato

    I don’t much care if you give credit to Max or to Plato. It’s the idea I care about. But I’m not sure that the two ideas are quite the same. In addition to the idea of plenitude, Max is tying the whole thing directly to mathematical Platonism, which though it is named after Plato is not necessarily quite what he was going for with plenitude. Or is it? Did Plato actually argue that the world itself is a mathematical object? I don’t think so, since (if I understand correctly) he was arguing that the objects in the real world were just reflections of the world of ideal forms.


  6. Coel

    Hi DM,

    I think he’s right, but I also happen to think that “any data, any at all, that would actually provide positive clues that there really is only one Big Bang. Any really!” cannot exist in principle.

    That was actually my point! Those criticising the multiverse (one BB plus more-of-the-same) for being unfalsifiable are overlooking the fact that their single-BB belief (one BB plus “… and only once”) is equally unfalsifiable and thus equally dogmatic (or otherwise).

    From empirical evidence, the number of Big Bangs, N, is simply not known. No one has given any empirical evidence that favours N = 1 over N = 10^1000, and no-one has given any rational argument for preferring N = 1 over N = 10^1000 or any other number.

    Hence there is no reason for putting the Bayesian prior for N = 10^1000 any lower than that for N = 1.


  7. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    > Those criticising the multiverse (one BB plus more-of-the-same) for being unfalsifiable are overlooking the fact that their single-BB belief (one BB plus “… and only once”) is equally unfalsifiable

    Actually, I’m not so sure that’s true. There could in principle be evidence of other big bangs or other universes, which makes the belief in a single BB model quite falsifiable after all. But that doesn’t make it any more likely to be true!


  8. Coel

    Hi DM,

    … which makes the belief in a single BB model quite falsifiable after all.

    True. But someone can only declare that their belief in a single-BB model is falsifiable if they declare that they regard the multiverse as testable. Win either way! 🙂


  9. Arun Gupta (@macgupta123)

    Even if we had a working theory of everything, the problem posed by postulating something that exists outside this theory of everything and is empirically inaccessible will always be there; it certainly is always feasible mathematically speaking to write down some such thing. It boils down to an aesthetic principle then, how many hours of your life do you want to spend thinking about such things?


  10. ejwinner

    Massimo,

    “Beauty, naturalness and simplicity are human desires. They have a mixed score card, and we have no reason to think that nature conforms to them.”

    Well, I’m glad someone was able to say this at the conference. From your report, I would agree that this was the best presentation there. I look forward to reading it once published.

    The application of aesthetic terminology to mathematics or physics is understandable, and yet curious, even troubling. It's one thing to say of a theory, 'there's a beautiful simplicity to it.' It's quite another thing to demand that 'a good theory must be beautifully simple.' It's annoying (occasionally alarming) when we find physicists unable to make this distinction. (I suspect Hossenfelder's address fell on more than one pair of deaf ears.)

    The 'community of scientists,' as a community, will share many of the same problems as other communities (the community of political activists, say, or the community of Utica, New York, for that matter). Some of these problems will affect their work, as similar problems will affect the political organization of activist events, or the placement of children in schools in Utica, etc. Many scientists, however, are in serious denial of this – and this seems especially true in physics, although other heavily math-infused research, like that in AI, trends in the same direction. Could math be the problem? Does the kind of concentration of thought that goes into solving mathematical puzzles generate an illusion of detachment from issues that all we mortals share?

    Well, that’s an obtuse question, requiring greater thought. But I would suggest that many physicists should give greater attention to the sociology and history of science, which would help them to develop a greater epistemic humility, one would hope.


  11. Jim Baggott (@JimBaggott)

    Massimo,

    Thanks for posting these excellent summaries. They’re helpful even to people who (like me) were actually there!

    I came away thinking that the conference had been a helpful meeting of minds, a chance for philosophers to set the record straight on the use of falsifiability criteria and help get past the slogans. But it was also a missed opportunity. Ellis and Silk made points in their Nature article last year that were simply not addressed. Instead, the conference was broadly configured as a way to defend string theory against the charge that it is untestable and so unscientific. Despite Gordon Kane’s attempt to shoot the counsel for the defence in the foot with his absurd claims, I suspect the string theorists will be broadly satisfied with the outcome. Shifting attention to the multiverse was a clever ploy, as even astrology would look vaguely scientific in comparison (I missed the final panel session, after which a straw poll was taken – all remaining participants voted that the multiverse is nonsense).

    I agree with you that this was just an opening skirmish. The success of a series of best-selling popular science books, a continuing stream of popular science articles and television documentaries, and the award of prestigious “breakthrough” prizes valued at $3 million have all helped to create the impression in the public consciousness that string theory is a valid or even “true” description of nature. This is surely not a good thing.

    I was really struck by the consensus in the room that string theory is NOT confirmed or validated. In his opening presentation Gross explained that it’s actually not even a theory, of the kind that can be put into a single equation and printed on a t-shirt. It is rather a “framework”, a set of ideas, concepts and mathematical relationships. To become a theory it has to be set up in the right way. Gross admitted that we can test theories but it’s really hard to test a framework. The trouble is that string theorists don’t yet know how to set up the string framework in precisely the right way.

    Setting aside the question of whether they will ever be able to figure this out, it seems to me that it would be really refreshing to see one or two leading string theorists make an honest appraisal of the status of the theory in a couple of popular science vehicles, much as Gross did at this conference. I think this would really help to set the record straight. People have all sorts of different reasons for wanting to invest belief in stuff like this, but making the position on string theory clear would in my view restore some sense of integrity.


  12. Joe Polchinski

    You have clearly misquoted me on the very precise distinction that I made between the multiverse and the Higgs and the Big Bang. I was careful to write precisely, it would be useful if others would do the same.


  13. brodix

    I guess I’m completely off the reservation on all this, as I don’t see the logic in the MUH, or even one Big Bang.

    I would like to pose a question, even though it isn’t likely to be answered;

    How do we arrive at math in the first place, if not as a function of distilling order, form and regularity out of the world around us. The result is that it gives us a distinct, measurable framework to work with.

    Yet does that mean it is foundational to reality, or is it simply giving us those aspects which are orderly, stable and repeatable?

    What if order and structure is emergent? If your body were boiled down to its most solid elements, it would give you the skeleton, not the seed and broader context from which it sprang.

    With QM, we are told that nature is both wave and particle, but it does seem there is much more emphasis on particles than waves. It seems there is “particle physics” and waves are little more than a statistical effect. Yet we seem stumped by such anomalies as “spooky action at a distance” that might be far more amenable to a wave description of reality.

    Many years ago, I read an interview with one Carver Mead, one of the godfathers of the computer revolution and he made a similar argument; http://freespace.virgin.net/ch.thompson1/People/CarverMead.htm

    Yet it seems like this aspect has little hope of being considered for the foreseeable future, given where the community seems stuck at the moment.

    Just sit back for a moment and think about this. We are giving serious consideration to the possibility of multiple universes, having come to the conclusion that the entire universe exists as a singular unit, without any thought whatsoever about how units relate to context. I would note this paradigm could have only arisen in the object oriented western tradition, rather than the context oriented eastern tradition, where it is not all about boundaries, quanta, units, etc, but a network view, where connections are what matter and the units are emergent from it.

    So it has been interesting being on the sidelines of this conversation, but I guess I may as well wander back up into the seats and wait and see how long the game can be played, before the absurdities overwhelm it.

    The fuzziness is not going to go away, no matter how many decimal points you measure it, because those point particles are probably just the crests of small waves.


  14. Robin Herbert

    I have heard that even a multiverse might not be as simple as "many BBs": the Big Bang might turn out not to be a thing, but an artifact of the time and place of the observer, kind of like a rainbow.

    I am hoping to live long enough to see some significant development in all of this.


  15. Marko Vojinovic (@vvmarko)

    Massimo,

    "Another speaker who wrote his slides out by hand! Is this a new trend among Slavic physicists? Not a good one, not a good one. But at least his accent was not as incomprehensible as Mukhanov's yesterday."

    Hand-written slides are mostly a consequence of latest technology — writing slides on a tablet/touchscreen is much like writing on a blackboard, which a lot of physicists prefer over typing text or making more complicated presentations. Given the lack of a blackboard during the conference, hand-written slides are the next best thing for a blackboard-oriented scientist.

    And of course, this certainly has nothing to do with nationality.

    Regarding Mukhanov’s talk, I really feel sorry that you couldn’t follow it, because it was a great lecture! 🙂 It was actually so good that I feel motivated to make a full transcript of it, once the video comes online. I’ll share as soon as I do it.

    Best, 🙂
    Marko


  16. brodix

    Robin,

    Here is an idea to consider, about the nature of space; For all their mathematical universe mysticism, three spatial dimensions are really based on the xyz coordinate system. Now either you specify the particular coordinates, or it is just a nebulous concept, but if you do specify them, then it is a specific frame and frames can overlap. Much as each person is the zero point on their own longitude, latitude and altitude. So each of us is the center of our own three dimensional frame, which is the current position of our life’s narrative, i.e. time vector/dimension. Keeping in mind that one dimension of space is also a vector, not two of area, or three of volume. So basically each of us is the center of our own view of the universe. Given that, with enough math and computational power, we could theoretically construct a self-centric cosmology, in which all the rest of the universe moves about us, since in reality, that is how we do experience it. Now as we move through space and our narrative evolves, this doesn’t mean Titans are actually pushing the entire universe around, as a physical extrapolation from this mathematical model. The math is a model, a useful and very informative map, but to then assume it must be representative of some deeper foundation, is every bit as logical as assuming those Titans are our personal gods.


  17. Lorenzo Di Pietro

    Your link about holography failing an empirical test made me laugh out loud and makes it completely clear that you do not have the faintest hint of what holography is (at least in the context that Polchinski is referring to)


  18. Philosopher Eric

    Massimo I was very impressed, that you were so impressed, by Sabine Hossenfelder’s “Lost in Math” discussion. Here we have a notable physicist who is quite troubled by her field’s progressive reliance upon “math over evidence.” I get the impression that she wasn’t simply asking her colleagues in physics for help in this regard, but certainly philosophers as well. While the scientific community does have a vast array of accepted understandings at their disposal, however, philosophers (and even in a purely critical capacity) seem not to. Thus if asked, “How can you help us?” I perceive an answer of, “We’ll let you know, once we’re able to develop such a consensus.” Of course given philosophy’s history, many seem to believe that there will never be any.

    There are two main concepts which I believe that philosophers as a whole must sort out:

    (I). The nature of definition. My own advice here would be for the term “is” to be limited severely in this capacity. Instead of asking “What is time?” physicists would then ask, “What’s a useful definition for the term “time”?” Thus it would always be the obligation of a reader to accept a writer’s various definitions (since none are “true”) in the quest to better decipher that person’s ideas.

    (II). The nature of conscious understanding. As I see it, there is only one method by which the conscious entity (including horses, birds, lizards and so on) has to figure anything out. It takes what it thinks it knows (evidence), and then uses this to assess various ideas that it’s not so sure about (theory). As evidence continues to stay consistent with a given theory, it does tend to become accepted, but can never be “proven.”

    With these two principles finally at their disposal, I'd love to see what the physics community (and academia in general) could do!


  19. Coel

    Hi Philip Thrift,

    I had thought that multiverse theories were still in the running for explaining dark matter and dark energy.

    It is true that the only suggestion so far (sensible or not) for the very low value of the cosmological constant is the multiverse explanation (and indeed Weinberg used this idea to predict the small but non-zero cosmological constant before that was observationally known). In Planck units the observed value is about 10^-120, whereas most attempts to derive it would put it at strictly zero or of-order one. Of course neither dark energy nor dark matter are understood, so such arguments are only weak indicators.


  20. Sabine Hossenfelder (@skdh)

    Hi Eric,

    An interesting suggestion that I will have to think about. Physicists rarely if ever even write down the assumptions that they use. They typically leave this to mathematical physicists (who then write textbooks that physicists never read).

    I would say there are two different types of axioms that a theory needs. One type are purely mathematical statements. These I think are quite straight-forward. The other type is the identification of mathematical quantities with physically real observables. A physical theory needs both, the mathematical structure and a procedure to identify (parts of) this structure with reality.

    The Munich workshop made clear to me that it would be really helpful if theoretical physicists had a clearer way to capture and track the assumptions that they use. As I indicated in my talk, all too often some assumptions get "lost in math" and then their status as hypotheses is forgotten. As a result, I am afraid that many options are being prematurely discarded and never explored.

    Of course it is impossible to know, but I am afraid this could be why progress in some areas in theoretical physics has slowed down: the less experimental guidance there is, the more important it is that theorists are very conscientious about what they are doing. And there isn’t presently a culture of doing this.

    You are right that some help from philosophers would be very useful here. Best,

    Sabine

    PS: I should probably add that I don't use the word "theory" the same way that David Gross suggested in his talk. I would call "theory" what he called "framework", "model" what he called "theory", and "phenomenological model" what he called "model". I think that my use of the words is much closer to how they are being used in practice. (Quantum field theory, string theory, but standard model.) I summarized this here

    http://backreaction.blogspot.com/2014/07/what-is-theory-what-is-model.html

    I’m not religious about this – after all it’s just words, but again I think clarity matters.


  21. brodix

    Philip,

    If you are willing to consider alternatives, here is an explanation for how multi spectrum light quanta will cause redshift;

    Click to access 2008CChristov_WaveMotion_45_154_EvolutionWavePackets.pdf

    And here is an experiment showing the loading theory of light, for how such light quanta can come to be;

    http://fqxi.org/community/forum/topic/1344

    In which case, redshift would be an optical effect and we should be looking for reasons why it goes parabolic, rather than explanations for why the entire universe is expanding at either increasing rates with distance from us, or decreasing rates, if one starts with the edge of the visible universe and moves forward in time to when this light reaches us.
    That is, unless you are ideologically committed to big bang(s), multiverses, et al, in which case, I don’t mean to bother you.


  22. Massimo Post author

    Coel,

    you are seriously asking for data that would convince people that there has been a single Big Bang. That is, as you have been told a number of times by a number of people, simply the wrong question. We have evidence of one Big Bang. We don't have any evidence whatsoever of anything before or in parallel. Period, end of story. For now, until and unless new data that say otherwise enter the picture.

    I understand your argument based on the idea that nothing in nature happens only once. I simply find it utterly unconvincing, and so do a lot of cosmologists, which ought to at least give you pause.

    As for Weinberg's famous prediction, apparently things are a bit more complicated. I had heard this before, but people repeated it at the Munich workshop: a number of researchers do not accept that prediction as necessitating a multiverse, and so nobody I have encountered counts it as a prediction that raises the multiverse priors significantly.

    I find it funny that you disagree about what people I have talked to in Munich do or do not believe. Were you actually there, a fly on the wall? Then how can you be so confident that what I reported is not what they believe? I'm curious.

    Your suggestion that a "vanilla flavor" multiverse would have exactly the same laws and constants as ours is something I've never heard anyone suggest. The whole *point* of the multiverse is that it allows for a physicist's version of the principle of plenitude, and that it solves the so-called problem of fine tuning to boot. A "vanilla" version would do neither, and on top of that you would now have an infinity of fine-tuning problems to solve! Talk about going beyond the data and lowering one's priors!

    You say that the vanilla version is favored by many cosmologists. As I said, I never heard of it, but – more importantly – nobody discussed it in Munich. Can you give me evidence that “many cosmologists” believe what you propose?

    Origin of life: I didn’t argue that it never originated anywhere else. I simply gave you an example of what you asked: a case of a natural phenomenon that, as far as we know, occurred only once. We don’t know if it occurred anywhere else, at any other time. Possible, maybe even likely, but to put it forth as if that were a scientifically inevitable fact would be, well, unscientific.

    DM,

    It is certainly possible that the multiverse is true, but unless we somehow gain empirical access to it, I see no reason whatsoever why one should believe it. Again, isn't that what distinguishes science from, say, metaphysics?

    Your objection to Ellis’ contention that some multiverses contradict others is, I think, untenable. First off, the multiverse is not at all the same thing as the MUH. Second, there are different categories of infinities, so even if, say, multiverse theory-1 predicts an infinite number of pocket universes these may not be the same infinity predicted by multiverse theory-2. At any rate, I take it Ellis knows what he’s talking about, and nobody contradicted him in Munich on that point.

    As for the word "dogma," I didn't use it, Ellis did. You are right, it may be neither science nor dogma, it may be metaphysics. But since Ellis was very explicitly talking within a scientific context, I think he is right that to assert something without data within that context is dogmatic.

    Nice try with the scientistic judo move. But no, again, we are talking science. Within a scientific context I can ignore the MUH. As metaphysics I find it fascinating, but I also entertain possible worlds scenarios, alternative histories, etc. I just don't entertain them qua actual worlds or actual histories.

    You may not care about the genealogy of ideas, but I do. I think it is important to own up to the fact that others have proposed what one is suggesting and give credit where it is due. To his own credit, Tegmark does mention Pythagoras in his book, but I’m not sure he fully appreciates just how long and varied the genealogy of his ideas actually is.

    By the way, glad you two are having so much fun!


  23. SocraticGadfly

    Hasn’t the idea that physics allegedly leans too much on math, not enough on experiment, etc., been around for a century or so, at least to some degree?

    That said, this ties to public policy and citizen literacy in science, does it not? Expecting John/Jane Doe, or the equivalents in other countries, to seemingly “slog through” some semi-challenging science may be tough enough; expecting them to do that with advanced mathematics is probably the point where you lose them.

