[Notes: what follows are only lightly edited notes taken while the meeting was in progress, so they are more likely than usual to contain typos and odd phrasing; also, apologies to my readers, but I may not be able to participate much in the hopefully interesting discussion at PlatoFootnote about this series of posts, as I’m trying to also get some sleep here in Munich… Below and throughout, comments in brackets are mine.]
This post continues my coverage of the workshop on the current status of fundamental physical theory, organized by philosopher Richard Dawid at the Center for Mathematical Philosophy of the Ludwig-Maximilians University in Munich, Germany. The workshop was prompted by a high profile, controversial position paper published in Nature by George Ellis and Joe Silk (who are among its co-organizers), entitled “Defend the integrity of physics.”
The first speaker of the day was Bjorn Malte Schafer, on “Dark gravity, dark matter, and dark statistics.” Physical cosmology has three main building blocks: gravity, fluid mechanics, and statistics. Gravity is the most important force at cosmological scales. Fluid mechanics is important because cosmology is concerned with the formation of structures, which are then described statistically. Gravity is “given” by general relativity.
Schafer pointed out that it is impossible to investigate different aspects of cosmology — different parameters describing physical models of cosmological structures — independently, because measurements usually combine them all. This is very different from what happens in particle physics.
Observations in cosmology are made along the past light cone, focusing on the differential acceleration of objects moving along a geodesic relative to our geodesic, with general relativity determining the dynamics.
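[For context, a standard textbook gloss of mine rather than anything on the speaker’s slides: the “differential acceleration” of neighboring free-falling objects is governed by the geodesic deviation equation, in which the Riemann curvature tensor sets the relative acceleration of the separation vector between two nearby geodesics:]

```latex
% Geodesic deviation equation (standard GR; my gloss, not from the talk):
% the separation vector \xi^\mu between nearby geodesics with
% four-velocity u^\mu is accelerated by spacetime curvature.
\frac{D^2 \xi^\mu}{d\tau^2} = -\,R^{\mu}{}_{\nu\rho\sigma}\, u^\nu\, \xi^\rho\, u^\sigma
```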
Known things: the Hubble constant, curvature, acceleration, isotropy of the metric, thermal history of the universe. Not so well known things: homogeneity. Interpreted things: matter density and cosmological constant, the latter “preferred” over dark energy [yeah, not sure what that means either…].
Structures are believed to be generated by inflation in the early universe. What we know is that the Gaussian assumption is well founded and that the inflationary model “works.” Not so well known are the direct individual measurements of the slow-roll parameters and the amplitude of gravitational waves. The effective statistical description of nonlinear structures is completely unknown.
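[Since the talk presupposed the jargon, here are the textbook definitions of the slow-roll parameters mentioned above (my addition, not Schafer’s slide). For a single inflaton field with potential V(φ), inflation requires both parameters to be much smaller than one:]

```latex
% Slow-roll parameters for an inflaton potential V(\phi)
% (textbook definitions, added for reference; M_Pl is the
% reduced Planck mass, primes are derivatives w.r.t. \phi):
\epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\left(\frac{V'}{V}\right)^{\!2},
\qquad
\eta = M_{\mathrm{Pl}}^2\,\frac{V''}{V}
```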
We do know a number of things about cosmological structure, including the universality of haloes and the scaling laws for simulation, as well as the influence of dark energy. We are not as confident about the stability of structures and the relation between luminous and dark matter.
All measurements in cosmology are statistical, simple models are preferred, and aesthetics plays an important role. Apparently, Bayesian approaches are all the rage. Even so, without making assumptions about the functional form of the parameters to be estimated, the relevant quantities are difficult to evaluate. Two different models are compared by looking at the ratio of their (marginal) likelihoods [which is standard also in other disciplines, such as quantitative ecology]. Simple models end up being preferred because they have more “likelihood within the prior.” The result, of course, is a tradeoff between models that describe the data better but are more complicated, and models that are simpler but describe the data less well.
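[To unpack the “likelihood within the prior” remark (my gloss, using the standard Bayesian machinery rather than anything from the talk): two models are compared via the ratio of their evidences, that is, their likelihoods marginalized over the priors. A complicated model spreads its prior over a large parameter volume and thereby dilutes its evidence; this is the Occam penalty that favors simple models when both fit the data comparably well:]

```latex
% Bayes factor comparing models M_1 and M_2 given data D
% (standard formalism, added for reference; the integrals
% are the marginal likelihoods, or "evidences"):
K = \frac{p(D \mid M_1)}{p(D \mid M_2)}
  = \frac{\int p(D \mid \theta_1, M_1)\,\pi(\theta_1 \mid M_1)\,d\theta_1}
         {\int p(D \mid \theta_2, M_2)\,\pi(\theta_2 \mid M_2)\,d\theta_2}
```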
There are statistical limits to cosmological modeling. Algorithms need to be able to deal with very large quantities of data, and the available sampling methods work best with unimodal distributions, even though we know that we are actually dealing with multimodal ones (see the toy sketch below). Moreover, it is increasingly difficult to develop accurate models, because of the presence of many second-order effects. The author claimed that within a few years we will have observed pretty much everything there is to observe within the light cone, which means that cosmologists will literally run out of new data! [That’s going to be interesting: the end of cosmology?]
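[Here is the toy sketch promised above, entirely mine and not from the talk, of why multimodality bites: a plain random-walk Metropolis sampler started in one mode of a bimodal posterior can fail to ever visit the other mode, silently biasing every inference drawn from the chain:]

```python
# Toy illustration (mine, not from the talk): a random-walk Metropolis
# sampler can get stuck in one mode of a bimodal posterior.
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Bimodal target: equal-weight mixture of two unit-width
    # Gaussians centered at -5 and +5.
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

x, step, samples = -5.0, 0.5, []
for _ in range(20_000):
    proposal = x + step * rng.normal()
    # Metropolis acceptance test in log space.
    if np.log(rng.uniform()) < log_post(proposal) - log_post(x):
        x = proposal
    samples.append(x)

# The chain essentially never crosses the low-probability barrier at 0,
# so the mode at +5 goes unsampled and the posterior mean is badly off.
print("fraction of samples in the +5 mode:", np.mean(np.array(samples) > 0))
```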
The next talk was by Chris Smeenk on “Gaining Access.” [A bit of a cryptic title, isn’t it?] The topic was the use of theory to gain access to inaccessible quantities, through proxy data. The author began by reminding us that geologists seem to have a pretty good idea of the structure of the interior of the earth, even though we obviously do not have direct access to it. [Well, similarly, we don’t see electrons, and yet we have very good reasons to think they exist.]
There are different senses of the word “theory”: frameworks specifying the kinematics and the structure of state space; qualitative hypotheses that provide general explanations of phenomena; and mathematical equations expressing law-like relationships among variables. He focused on the last of these for the remainder of the talk.
Theories in this sense amplify knowledge based on relatively accessible data, and they are necessary to interpret data; moreover, the best evidence for a theory develops only through the ongoing use of the theory itself.
Theoreticians often face the circularity challenge: a given theory is being tested and, at the same time, used to interpret the data. It could be, for instance, that the theory is so flexible that the whole exercise reduces to curve fitting. From a logical perspective, how is it that further use of a theory lends support to the initial use of that theory?
There are historical examples of effective responses to the circularity challenge. However, does the inflationary paradigm in cosmology meet the challenge?
A good historical example concerns the atomic theory (“atomism”) and its relation to Avogadro’s number. The kinetic theory of molecules — and the measurements derived from it — both agreed with the atomic theory and put strict constraints on the estimates of Avogadro’s number. The independence of a number of pertinent measurements made it unlikely that the agreement with the theory was due to some inherent flexibility of the latter.
The author presented a second “good oldie” example, based on the measurement of the orbit of Uranus. Both cases featured robust physical sources for theories and data, which were independently confirmed. The response to the circularity challenge in these cases is that we can either accept the theory or admit to an enormously implausible coincidence. Such theories then become permanently accepted, on penalty of having to unravel the whole edifice of reasoning that led to them.
What about inflation? How do we distinguish between the theory merely accommodating the data vs identifying correct cosmological features? Smeenk at this point proceeded to provide a quick reminder of what inflation theory is about and how it works.
The idea is that there are both observational constraints and (somewhat weaker) constraints coming from particle physics. There are inflationary models meeting all of the constraints from the most recent data. Still, unlike in the historical cases discussed before, Smeenk claimed that there is much less independence among the measurements where inflation is concerned. There is also a problem caused by the lack of a canonical model providing an independent feature of the universe against which to test inflation (something like, say, the observational discovery of Neptune, which validated theories about the anomalous orbit of Uranus).
An additional issue specific to inflation is that we do not know how to best reason about initial conditions, in this case pre-inflationary conditions (again, in contrast to the “good oldie” cases): what was the state of the inflaton field prior to inflation? One response has been to invoke “eternal” inflation, which leads to the formation of “pocket” universes with different low energy physics. This depends on very speculative ideas, which are meant to respond to worries about the so-called fine tuning problem. The author suggested that this approach is compatible with many explanations other than inflation, so that eternal inflation — ironically — undermines the case for inflation in our universe. Eternal inflation is not an exact, mathematical theory; at best it is a framework.
After coffee break the first talk was by Gordon Kane on “String/M-Theories about our world are testable in the traditional physics way.” [Note that a number of attendees had expressed skepticism about Kane’s talk even before it was delivered, since apparently he has a habit of shifting the goal posts for what counts as testable predictions. For more, see here.]
Kane began by saying that in order to test a 10D theory in a 4D world you must “compactify” your experimental approaches as well. String/M theory is a very powerful framework to get at a comprehensive underlying theory that incorporates the Standard Model and goes beyond it.
He made the point that one doesn’t need to “be there” in order to test theories — for example no one witnessed the Big Bang or the extinction of the dinosaurs. [Yes, that’s a basic anti-creationist argument, I don’t think it’s needed with this audience.]
All “superpartner” predictions made in the past were based on “naturalness,” not on a theory, so it is not surprising they failed. Additionally, many string theorists don’t know how to “compactify” their approaches for 4D testing, so one shouldn’t be paying too much attention to them. Much of what is written by people like Woit and Smolin is misleading and string theorists don’t give it much thought. [Well, I think some people would beg to differ here, seems like far too cavalier a dismissal.] String/M theory is too important to be left to string theorists, since apparently they are not much bothered by the issue of testability [this was a rather cryptic remark, which was not made any clearer by the remainder of the talk].
According to Kane there is a well established procedure to compactify string theory, but the results do not yet allow one to make precise calculations. However, all that is needed to make a theory testable is one falsifiable prediction. [Well, no, that’s only if you are more Popperian than Popper and believe in crucial experiments, an idea that Duhem should have put to rest, like, a century ago.]
A compactified string theory is analogous to the Lagrangian of a system. Now, all tests of theories in physics have depended on assumptions, from Galileo on [yes, but I don’t think anyone has been questioning that]. Interestingly, though, the author said that a general test of string theory — of the kind you have for field theory — is unlikely, yet he insisted that the compactified version is testable. Gravity, for instance, provides pertinent evidence. [??]
String/M theory gets us not only a quantum theory of gravity, but an entirely new physics, for instance about “moduli fields,” which describe the sizes, shapes and metrics of small manifolds. Moduli dominated the energy density of the universe after inflation ended, and they decayed into dark matter before nucleosynthesis.
Kane provided an example of his compactification approach, on G2 manifolds. Work on this began in 1995 and has led to the establishment of a “powerful, rather complete” framework. Compactified M-theory on a manifold with G2 holonomy in the fluxless sector is well motivated and technically robust. [Okay, okay, I have no idea what this means…] This led to calculating the supersymmetry soft-breaking Lagrangian, which in turn led to a prediction of the mass of the Higgs boson. [Boy, I would love some independent opinion on these claims.]
[During the talk itself Kane was strongly challenged on the extraordinary assertion that he was able to predict the mass of the Higgs on the basis of M-theory. Even by David Gross, who is strongly sympathetic to string theory.]
Kane resumed his talk by saying that predictions of the mass of the gluino, wino, and bino are generic but robust, as is “clear to any knowledgeable person who goes through the derivation.” These should not have been seen at the LHC run 1, but should be observed in run 2. [I guess we’ll see soon, won’t we?]
Kane admitted that there are many, many solutions to string theory [the infamous 10^500 landscape], but argued that many are not “populated,” which means that the landscape issue might not be an obstacle to explaining the existence of our world on the basis of the theory.
The talk ended with a list of (alleged) issues in fundamental physics addressed by M-theory. The interesting bit was a list of those that aren’t, which include why we have three large dimensions, and why there is a universe to begin with. [Oh well, I guess we can leave those for another time…]
The Kane presentation generated quite a bit of controversy at the conference and online. I reproduce here, with permission, a bit from Peter Woit (a well known critic of string theory) for context:
“From reports I’ve heard about the conference today, it seems that Gordon Kane’s claims of a string theory prediction of the gluino mass around 1.5 TeV were not accompanied by any acknowledgement that he has been making exactly this kind of claim for years. For example, back in 1997 the gluino mass was about 250 GeV (see here) and before LHC Run 1 data arrived it was “well below a TeV” (see here and here). I’ve documented (here) the edits made to Kane’s 2000 popular supersymmetry book to remove all falsified predictions and replace them with new ones, with no acknowledgement this was done.”
Next: Joseph Silk (one of the co-authors of the provocative Nature article that prompted the workshop), on “The Limits of Cosmology, Post-Planck.” Some theories are intrinsically untestable, some are generically untestable but testable “if the dice roll favorably,” and yet others are generically testable, but only in the remote future.
The theory of inflation dates to 1980. According to Andrei Linde, a co-founder of the theory, the inflationary multiverse helps us understand all sorts of facts about why our universe is the way it is — including why the universe does not rotate, or why parallel lines do not converge. [Wow.]
There are several experiments ongoing, and more proposed for the next few years, that will be looking for signatures of inflation. The problem is that there are reasons to believe that signals based on gravitational waves are likely to be so confounded by other factors (like foreground interstellar dust) as to be forever undetectable.
What Silk thinks might actually be testable are predictions that inflation makes about the deep structure of the cosmos. The famous 3K background radiation is almost perfectly uniform; with the Planck satellite we can see the mildly granular universe at the level of 1 part in 10^5. If we want to test inflation we need to do much better.
The only robust prediction of inflation is non-gaussianity, which however is very small: we are talking about a required precision of the order of 2×10^-6, while right now we are at 0.01. Can we get there? Yes, if we focus on the 21cm hyperfine transition of neutral hydrogen from the so-called “dark ages” of the universe (before structures started forming). Redshifted to us, this corresponds to a frequency of about 30MHz, which means that in order to do unimpeded experiments we would have to go to the far side of the Moon. [Damn interference from cell phones!] Silk actually laid out some details of the envisioned experiment, which would include 10^6 dipoles scattered on the lunar surface. He thinks that could be done by 2050.
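[The 30MHz figure is just redshift arithmetic (my back-of-the-envelope, consistent with what Silk described): the 21cm line has a rest-frame frequency of about 1420MHz, and light emitted during the dark ages, at a redshift z of roughly 46, arrives stretched by a factor of 1+z:]

```latex
% Redshifted 21 cm line from the dark ages (back-of-the-envelope,
% my numbers; z ~ 46 is a representative dark-ages redshift):
\nu_{\mathrm{obs}} = \frac{\nu_{\mathrm{rest}}}{1+z}
 \approx \frac{1420\ \mathrm{MHz}}{47} \approx 30\ \mathrm{MHz}
```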
Next: do we need the multiverse? The multiverse is motivated by the need to explain why dark energy is so small. There are strong opinions in opposite camps here, and we are reminded of what Lev Landau famously said of cosmologists: “Often in error, never in doubt.”
Silk thinks there are both astrophysical and particle-physics explanations of dark energy which do not require the invocation of an inflationary multiverse. The former have to do with interpretations of supernova data that invoke spatial heterogeneities and do not need dark energy; the latter would rely on string theory being able to narrow down the populated space of its landscape to a small number of alternatives [which, at the moment, doesn’t appear likely].
The afternoon session resumed with Fernando Quevedo on “Achievements and Challenges for String Phenomenology/Cosmology.” [Yet another string theorist. I must say, there does seem to be a stacking of them at this conference, and no experimentalists have been invited either]. The talk began with a quick summary of some of the major implications of relativity and quantum mechanics, which set the stage for string theory. In order to get specific, quantitative (as opposed to generic, qualitative) predictions you have to move from a framework to a specific theory, like the Standard Model.
The SM is arguably the greatest theoretical achievement of the past 70 years. It is simple, and it describes several phases of gauge theories. However, it is not complete, as it does not include baryogenesis, dark matter and gravity. It is “ugly,” since it comprises a lot of particles and relations that are unaccounted for. This means that there are a number of open questions concerning gravity, some 20 parameters to be explained, the number of dimensions, and so on.
The long term plan of string “phenomenology” (as opposed to “noumenology” [ah!]) is to find a scenario that satisfies all particle physics and cosmological observations and hopefully leads to measurable predictions. Meanwhile, the theory makes generic “predictions,” including extra dimensions, supersymmetry (not necessarily low-energy), branes, and so forth. Oh, string theory also predicts gravity! The theory further makes generic predictions in its 4D compactified version, including antisymmetric tensors (like branes) and no global symmetries.
So, even according to Quevedo the string “framework” makes very few predictions. To make progress one has to construct realistic models and try to extract properties of classes of models. This would not test the theory itself, but only specific scenarios within the theory.
The challenges for string models include: gauge and matter structure of the SM, hierarchy of scales and masses, hierarchy of gauge couplings, stable proton and baryogenesis, dark matter, dark radiation, and dark energy, among others. Failing even one of these would doom the theory.
The “landscape” is a problem, which according to Quevedo should be used to solve physical issues like the fine tuning one. And so far the theory has not been able to deal with the empirically demonstrated existence of a non-zero cosmological constant. The author concluded that string theory has been making “continuous, a-cumulative” progress [interesting notion, right?]. People complain about there being too many string models, but he thinks there are too few, since none of them so far is sufficiently realistic with respect to the actual universe.
Next we had Chris Wüthrich on “Considering the role of information theory in fundamental physics.” There is something that seems to be universally agreed upon within the string wars: there are black holes and they have certain thermodynamic characteristics (following the Bekenstein-Hawking formula). As a philosopher, Wüthrich is skeptical of universal agreement on anything…
The robustness of this consensus is surprising since there is no empirical confirmation of Bekenstein entropy. So do people have convincing non-empirical reasons to accept Bekenstein’s formula about black holes as thermodynamic objects?
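[For reference, here is the formula at stake, in its standard form (my addition): the Bekenstein-Hawking entropy of a black hole is proportional to the area A of its event horizon.]

```latex
% Bekenstein-Hawking entropy (standard formula, added for reference):
S_{\mathrm{BH}} = \frac{k_B\, c^3}{4\,\hbar\, G}\, A
```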
The original motivation that spurred Bekenstein to develop a thermodynamics of black holes is that otherwise these objects would violate the second law, so he was looking for a generalized version of the law: “The introduction of a black hole entropy is necessitated by this process,” as Bekenstein put it in 1973.
Now, the second law is not itself exceptionless, since it is a statistical principle. Still, a few years before Bekenstein’s work, Hawking had demonstrated the area theorem, which states that under certain conditions the surface area of the future event horizon never decreases with time. Notice that this is not a statistical statement.
Bekenstein developed an analogy between the surface area of a black hole and entropy, which means that Hawking’s area theorem could be deployed to give an account of black hole thermodynamics.
It also turns out that the general formula for entropy (Gibbs’) has the same structure as Shannon’s formula for information/entropy. However, Callender has pointed out that the two have different interpretations, because in Shannon entropy the probabilities are based on our degrees of belief, whereas in statistical mechanics the probabilities are based on objective features of the physics.
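[The structural parallel is easy to display (textbook forms, added by me for comparison); note that, up to constants, the two expressions are the same functional, so all the interpretive weight falls on what the probabilities p_i are taken to mean:]

```latex
% Gibbs entropy (statistical mechanics) vs. Shannon entropy
% (information theory); textbook forms, added for comparison:
S_{\mathrm{Gibbs}} = -\,k_B \sum_i p_i \ln p_i,
\qquad
H_{\mathrm{Shannon}} = -\sum_i p_i \log_2 p_i
```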
Nonetheless, Bekenstein explicitly discussed black hole physics from the point of view of information theory. He then introduced the concept of black hole entropy as a measure of the inaccessibility of information inside the black hole.
But now we are conflating epistemology and ontology. Information — according to Shannon himself — is an inadmissible concept outside of a communication system (which doesn’t have to be between people, but it does entail a sufficiently complex system). So we now have a category mistake!
If these worries are correct, then the analogy between the area of a black hole and its entropy fails, and the one thing that all participants in the string wars agree on turns out to be something they really shouldn’t. [I don’t know, of course, whether the author’s argument is correct, though it smells that way. Regardless, his presentation was a delightful example of philosophical reasoning, and an instance of how philosophy can contribute to science, to boot!]
The conclusion shouldn’t be that black holes are not thermodynamic objects, but it does suggest that the original analogical argument advanced by Bekenstein does not hold. There are thought experiments proposed subsequently that do support the conclusion that black holes are thermodynamic objects. But remember: so far, there is no empirical confirmation. And at the end of the day, it is only the latter that will settle the issue.
Speaking of the end of the day, the last talk was by Viatcheslav Mukhanov on “Is the Quantum Origin of Galaxies falsifiable?” I wish I could report on it, but an awful combination of the author’s unbelievably thick Russian accent and handwritten slides made it close to incomprehensible for me. Oh well, good night and good luck! Last installment of this series, hopefully tomorrow.