[Notes: what follows are only lightly edited — and long! — notes taken while the meeting was in progress, so they are more likely than usual to contain typos and odd phrasing; also, apologies to my readers, but I may not be able to participate much in the hopefully interesting discussion at PlatoFootnote about this series of posts, as I’m trying to also get some sleep here in Munich… Below and throughout, comments in brackets are mine.]
Well, here we are, the much anticipated workshop on the current status of fundamental physical theory, organized by philosopher Richard Dawid at the Center for Mathematical Philosophy based at the Ludwig-Maximilians University in Munich, Germany.
Why am I here, given that I’m neither a physicist nor a philosopher of physics? I’ll tell you when we get to my talk below. The conference was prompted by a high-profile, controversial position paper published in Nature by George Ellis and Joe Silk (who are co-organizers of the workshop), entitled “Defend the integrity of physics.”
This is somewhat of a big deal: some high caliber physicists and philosophers are here, and I got calls from journalists interested in covering the workshop, including from Nature and Scientific American. The basic idea is to look not just at the science, but at the philosophy underlying discussions on the usefulness (or lack thereof) of string theory, multiverse theory, and similar speculative ideas that have been debated within the physics community over the last several years.
So, let us get started with David Gross, talking on “What is a theory?” Gross began by noticing that philosophy and physics have, ahem, “grown apart” over the years — citing the now classic quote by Richard Feynman about philosophy, birds, and ornithology. Gross himself said, however, that he envies the pioneers of quantum mechanics and relativity, who were well versed in philosophy, and he still thinks there is much the two fields can say to each other.
Contra popular misunderstanding among the American public (as in “it’s just a theory”), theory of course plays a number of crucial roles in the workings of science. It aids in quantifying and modeling observations, suggesting new experiments and observations, predicting new phenomena, modeling and calculating the expected background to new experiments, and finally unifying disparate phenomena.
For Gross, experiments are “usually evidently real,” while theory must await experimental confirmation — which is why Nobel prizes for theory are given much later than those for experimental results. Another difference: experiments are expensive, theory is cheap… The scientific method is “undeniably” based on the thesis that the final authority as to scientific truth is observation and experiment.
Gross proposed to distinguish among frameworks, theories, and models. Classical mechanics, quantum mechanics and string “theory” are not theories, but rather frameworks. Theories are something like Newton’s or Einstein’s theory of gravity, or the unfortunately named Standard “Model.” Theories can be tested, frameworks not so much. Models include the BCS model of superconductivity, or BSM (Beyond Standard Model) models.
According to Gross, quantum mechanics, for instance, cannot really be tested directly. But the Standard “Model” can. Also, theories (at least in physics) are compact enough that they can apparently be written on a t-shirt. Not so for frameworks.
The crucial issue is strategy, not ideology: we shouldn’t be discussing what science is or is not — since the scientific method itself evolves continuously — but rather what works and what doesn’t. One of the reasons for this meeting is that physics makes predictions that lie far beyond the current limits of experimentation, around the energy level at which all forces, including gravity, unify (10^28 eV, the Planck scale).
According to Gross, since physical phenomena scale as log(energy), physicists can extrapolate theory to very high energies. Unfortunately, experiments scale as energy^2, which means that they cannot easily be extrapolated to very high energies. This is just a fact of nature. [And, perhaps, a fundamental limitation on human epistemic access to nature?]
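Gross’s asymmetry can be made vivid with a back-of-the-envelope sketch. All numbers here are my own toy choices, not his: I take the LHC at roughly 10^13 eV and the Planck scale at the 10^28 eV quoted in the talk, and read “scale as” literally as proportionality.

```python
import math

# Toy illustration of the scaling asymmetry Gross described (my own
# numbers, not his): theoretical extrapolation "effort" grows like
# log(E), while the cost of probing energy E experimentally grows
# roughly like E**2.
lhc_energy = 1e13      # ~10 TeV, order of magnitude of the LHC (in eV)
planck_energy = 1e28   # Planck scale quoted in the talk (in eV)

# Going from the LHC to the Planck scale multiplies the energy by:
energy_ratio = planck_energy / lhc_energy  # 10^15

# Theory's reach grows only logarithmically with energy...
theory_factor = math.log10(planck_energy) / math.log10(lhc_energy)  # ~2.15

# ...while experimental cost grows quadratically in energy.
cost_factor = energy_ratio ** 2  # ~10^30

print(f"energy ratio: {energy_ratio:.0e}")
print(f"theory effort factor (log scaling): {theory_factor:.2f}")
print(f"experiment cost factor (E^2 scaling): {cost_factor:.0e}")
```

On these (invented) numbers, theory pays roughly a factor-of-two price to reach the Planck scale, while an experiment’s cost multiplies by thirty orders of magnitude, which is the whole point of Gross’s “fact of nature.”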
Theorists may give up, or they may play with extrapolation, or toy models (i.e., thought experiments). They could also adopt strategies from other fields, like mathematics, where beauty is a criterion for success. [Uhm, that’s pretty dangerous territory…]
String “theory” (framework, really) started about 47 years ago out of a data-fitting problem, and it didn’t even feature strings at the time, as it initially was a theory of the behavior of gauge mesons. Then people immediately realized that string theory “must” contain gravity, and it is this sort of unexpected theoretical consequence that has kept the excitement going.
But string theory isn’t really a theory — you can’t write its equations on a t-shirt, after all! String theory and quantum field theory are, according to Gross, part of a larger framework. The problem is that we don’t have any idea of how the framework in question picks the Standard Model, hence connecting to experimentally based physics. Here, Gross also thinks, is perhaps where philosophers — who are used to thinking carefully — may help.
Possible avenues of empirical contact include: post-diction or prediction of observations using string inspired models; discovery of actual cosmic (macroscopic) strings; discovering phenomena that don’t actually fit with the string framework.
Gross thinks that the “no alternatives argument” introduced by Dawid (see below) is a powerful one in favor of string theory, as it is not easy to change a framework. [I guess Kuhn would call these paradigms?] There is also an “unexpected explanatory power” argument, based on the fact that the string framework connects elegantly with a number of other known notions in physics. Finally, there is the “meta-inductive argument,” according to which past speculative but strongly supported (by the physics community) theories have turned out to be correct. As I said, these are the arguments actually advanced by the conference organizer, Richard Dawid, so I’ll return to them shortly below.
[Some of the testiness underlying the conference became briefly evident during an exchange between the next speaker, Rovelli, and Gross. Rovelli pointed out that Gross had gone way over time, and asked whether the organizers would take time out of his talk or from dinner. To which Gross replied that he would surely interrupt Rovelli’s talk too. And he did, somewhat rudely I must say, follow up on his threat. Ouch.]
Next was Carlo Rovelli, on “Non-empirical confirmation: just a cover-up for the failures of string theory?” He started out by saying that Dawid makes good points in his paper on non-empirical confirmation, about the analyzability of the context of discovery, for instance. But also that Dawid confuses the context of discovery with the context of validation. [Smells of Popper here.] He also confuses descriptive and normative philosophy of science, misreading the history of string theory, since the latter has failed by the lights of its own stated criteria for validation.
Rovelli traced the distinction between context of discovery and of validation to Reichenbach (1938). The time between the two can be very long, as for instance between the publication of Copernicus and Galileo’s books on the structure of the solar system (1543 vs 1610).
[Rovelli has a good general point, though I must also signal that modern philosophers of science do not make a sharp distinction between the two contexts, as discovery and validation are continuously interacting processes.]
Theoretical work is guided by preliminary appraisal, i.e. weak evaluation. This aids the decision of whether to take the theory seriously enough to develop it and test it further. Dawid’s criteria are good as preliminary, weak evaluations of string theory. But they fail as validations.
Rovelli, amusingly, quotes Lakatos [an influential student of Popper]: “It is no success of Newtonian theory that stones, when dropped, fall towards the Earth.” The speaker then listed a good number of past theories that seemed very promising, and yet turned out to be very clearly wrong.
String theory itself set out validation criteria early on in its history: computing the parameters of the Standard Model from first principles, for instance, or deriving the existence of three families (of particles) from first principles, or predicting the sign of the cosmological constant, or predicting new particles to be discovered at LHC energy levels, or low energy supersymmetry, and so forth. According to Rovelli, all of these failed, turning string theorists into the mythical fox who argued she didn’t really like the grapes, once it was clear that she couldn’t reach them… [Nice classical reference to Aesop!]
There are alternatives, like loop quantum gravity (which actually can be written on a t-shirt!). For Rovelli, loop quantum gravity is just as unvalidated as string theory, but at least its existence rebuts the oft-made claim that string theory is the only game in town. He also cautioned about confusing “tenure in major universities with consensus of the scientific community,” pointing out that Gross won the Nobel and has a position at a major university because of his non-string work.
Rovelli concluded by pointing out that claiming that a theory is valid even though no experiment has confirmed it destroys the confidence that society has in science, and it also misleads young scientists into embracing sterile research programs. [Lakatos would have called them “degenerate.”]
After the coffee break it was Dawid’s turn with “Non-empirical Confirmation.” Though his focus is on string theory, his ideas are applicable to other frameworks as well. He began by acknowledging that the theory hasn’t found empirical confirmation for quite some time, and moreover that this state of affairs may continue long into the future. Nevertheless, a number of physicists have great confidence in it. [Though, as has already been made rather blatantly obvious, not everyone…]
Philosophical considerations cannot by themselves settle the status of string theory as a scientific theory, but part of the discussion has to do with the way science works, which is very much a philosophical issue.
The canonical view of science is that scientific theories must make testable predictions, and that only such predictions are capable of confirming (or disconfirming) the theory. Without empirical confirmation, a theory remains speculative.
One reason for the canonical view was the desire for a simple demarcation criterion separating science from non-science [I’m not sure I buy this: one can empirically disconfirm astrology, for instance]. But if we think of experimental confirmation as increasing the probability of a theory’s viability, then we can adopt a broader Bayesian model of confirmation. We can then distinguish between standard empirical confirmation as we know it, by data predicted by the theory, and non-empirical confirmation based on observations of a kind the theory could not be expected to predict, because they are not in the theory’s original domain.
Dawid then went briefly over his three arguments: no alternatives, meta-induction, unexpected explanatory power (see notes from Gross’ talk above). He relates his arguments to the well known problem of underdetermination of theories by the data. Local underdetermination occurs when the data allow for a spectrum of scientific theories with conflicting empirical predictions for a specified set of experiments.
A suggestion is that if there were an infinite number of alternatives to string theory, it would be surprising that scientists haven’t found any. So we have to conclude that either there are no alternatives, or at worst there are a few. [Hmm, I find this rather questionable, and what about the fact that string theory itself comes in 10^500 flavors? Are those genuine alternatives? Why, or why not?] Non-empirical confirmation is more plausible if underdetermination is strongly limited.
Dawid formulated his ideas formally, within a Bayesian epistemological framework, arguing that non-empirical confirmation does support string theory. The meta-inductive argument, moreover, helps us discriminate between a genuine lack of alternatives and scientists’ lack of ingenuity. The argument tells us that scientists have, in the past, been successful in developing good theories, so there is no reason why they should have suddenly become incompetent now. [Again, there are problems here: for instance, it may simply be that fundamental physicists have reached the limits of human cognitive abilities to understand the universe, something that — seems to me — the meta-inductive argument has no way to assess.]
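To convey the flavor of the Bayesian framing, here is a deliberately toy caricature of the “no alternatives” argument. All the numbers are invented by me for illustration, and Dawid’s actual formalization is more careful; the only point is the structure of the update.

```python
# Toy Bayesian caricature of Dawid's "no alternatives" argument.
# All probabilities below are invented for illustration only.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' theorem for a binary hypothesis H."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# H: "the theory is viable". E: "despite extensive search, no
# alternative has been found" -- an observation the theory itself does
# not predict, hence "non-empirical" evidence in Dawid's sense.
prior = 0.2            # initial credence in viability (invented)
p_e_given_h = 0.6      # E is more likely if H is true (invented)
p_e_given_not_h = 0.2  # E is less likely if rivals abound (invented)

posterior = bayes_update(prior, p_e_given_h, p_e_given_not_h)
print(f"credence rises from {prior} to {posterior:.2f}")
```

The sketch shows only that an observation a theory does not itself predict (the failure to find rivals) can still raise one’s credence in it, provided that observation is more probable if the theory is viable than if it is not.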
But is meta-induction applicable to string theory? Dawid thinks his third argument, the one about unexpected inter-connections, tells us that the theory is “under control,” so to speak, i.e., it isn’t just speculation gone wild. [Yes, but again, what about the 10^500 versions? That seems out of control…]
Dawid concluded by admitting that non-empirical confirmation is certainly more cumbersome and less conclusive than empirical confirmation, but that it nonetheless plays a crucial role in current advancements in fundamental physics. He also suggested that this framework has been deployed before, for instance in the high level of confidence that physicists had in the existence of the Higgs boson, far in advance of its discovery [though my understanding is that there was strong, empirical, indirect evidence for the Higgs way before its existence was confirmed directly].
The speaker stressed that it is a mistake to call non-empirical confirmation “post-empirical science,” as some other authors (and the press) have done. It isn’t post-empirical because it relies on ongoing empirical confirmation. Also, a crucial component is the meta-inductive argument, which itself depends on (past) empirical confirmation. [This is interesting, both substantively and in terms of public relations — I don’t mean the term in a derogatory manner! Whether it is convincing or not remains to be assessed by other philosophers of science, of course.] At the same time, however, Dawid also acknowledged that there is indeed a risk of scientists running amok with speculation while gesturing at non-empirical confirmation. But at least his analysis brings somewhat hidden assumptions and arguments about scientific methodology into the open.
My talk was on “Theory in fundamental physics: the view from the outside,” where I focused on the very public, and often nasty, debate among physicists, drawing analogies with similar debates in evolutionary biology, and discussing implications for public trust in science. You can find the full set of slides here.
The afternoon began with Radin Dardashti on “Physics without Experiments?” Cosmic inflation, string theory, and supersymmetry have all been around for a while, and all lack decisive empirical support. On what grounds, then, if at all, are scientists epistemically justified in trusting these kinds of theories?
Dardashti focused on the limits to scientific theorizing that result from underdetermination. If we want to go non-empirical, then physics should change, with much more emphasis on justifying assumptions and working out the math.
Scientific method provides the argumentative strategies that scientists can deploy. [I find all this talk about “scientific method” interesting, given that the consensus among philosophers of science seems to be that there is no such thing.]
Aristotle, Bacon, Newton, Mill et al. supported the idea that science works from induction, with theories generated from observations. Then the early 20th century revolution in physics came, and the focus changed from inferring theories from data to the consideration of the consequences of theories, using such consequences to assess the theories themselves. In a sense, theory became prior. And now, at the beginning of the 21st century the domain of theory actually falls beyond the range of empirical data.
Consider the development of a theory T, which is based on data, and then of T’, which is not directly based on data, but is an extension of T. We can invoke Laudan’s “problem-oriented” approach to scientific theorizing, according to which theories are in the business of addressing both empirical and conceptual problems. Empirical problems are like those posed by dark matter, or baryon asymmetry in the universe. Conceptual problems include things like unification. The question to be addressed is what sort of constraints can be put on scientific theorizing.
One crucial bit is that the number of conceivable alternatives at play depends on the constraints one imposes on the theory being sought. Some of these constraints are better justified than others, and one can always set constraints so that there is only “one game in town” [as perhaps Dawid is doing with one of his three arguments seen above].
Unlike empirical problems, which are clearly defined, conceptual problems are on shakier ground. That’s because applying the wrong constraints may unjustifiably rule out legitimate alternatives. Also, what is or is not a “problem” depends on the theory and research tradition in which the theory developed. Remember, for instance, that string theorists come from quantum mechanics, while loop quantum gravity theorists come from general relativity. [I find this observation, repeated by other speakers, an interesting reading key to the whole controversy.]
What can we learn from history, in this context? In the 1960s people tried to non-trivially combine space-time symmetries, motivated by unification and other goals. The result was a number of “no-go” theorems, closing down various avenues for further theoretical research. But this was the result of the deployment of specific mathematical structures, and when people attacked the problem from different starting (mathematical) assumptions they found more promising routes to follow. This was an example of strong constraints put on a theory which turned out to be unjustified. There are other examples, pointing to the fact that an explicit list of the physical assumptions that go into developing a theory is in turn underpinned by an often unstated, and less empirically accessible, mathematical structure. At times it is the loosening or altering of that mathematical structure that makes forward movement possible where it initially appeared impossible.
The next-to-last speaker of the day was Helge Kragh on “Fundamental theories and epistemic shifts: Can history of science serve as a guide?” There used to be such a thing as The Journal of Speculative Physics (early 19th century), which was based on the idea that the romantic genius’ intuitions should replace experiments and observations in physics. It didn’t work out that way… This relates to the demarcation problem, since we wouldn’t recognize what the editors of that journal were trying to do as “science.”
History of science has a role to play in this debate, as it connects to philosophy of science, which is in turn more directly connected to science itself. [I like this model, though I would add sociology of science somewhere in there.] Indeed, the way to test philosophical theories of science is by means of cases drawn from the history of science. Something like this was attempted (without much success) in a classic series of studies by Donovan, Laudan & Laudan (1986-92). The problem is that science grows in all sorts of directions, so that historical cases are only partially representative or informative of present ones. Kragh has collected in a volume what he calls cases of “higher speculations,” grand theories and revolutions in physics and cosmology that eventually failed.
Interestingly, some physicists have been just as skeptical of drawing lessons from the history of their discipline as they have been of learning from philosophy. [What’s the matter with these people?] But Kragh claims that one simply cannot avoid learning from history; it’s just that often such learning takes place the hard way…
The classic example is Kuhn’s work and his interest not in demarcation, but rather on why scientists work on one major theory rather than another at any given time — a problem arguably closer to the subject matter of this workshop than the Popper-inspired issue of demarcation.
“The beautiful vortex theory of atoms is a theory which one almost dare to say that it deserves to be true” (1883). That theory had a number of similarities to string theory, but can we use the failure of the first one to say something about the prospects of the latter? And how about this? “It is a good rule not to put overmuch confidence in observational results until they have been confirmed by theory” (Eddington, 1935). The point being that “mathematical rationalism” was very strong in the 1930s and ‘40s. [Interesting term, “mathematical rationalism”…]
In 1954 the British Journal for the Philosophy of Science published a paper entitled “Is physical cosmology a science?,” reflecting then ongoing discussions on rationalism vs empiricism in cosmology. [Sometimes I think all science, not just philosophy, is a footnote to Plato, and Aristotle, respectively the first great rationalist and empiricist in the history of Western thought.] Bondi, also in 1954, wrote that experimental disproof is a non negotiable aspect of what it means to do science.
The problem is that “testability” itself is a flexible notion. Does it have to be now, or is testability “in principle” acceptable [and until when?]? Should tests be precise, or can they be approximate [and how much?]? Do tests have to be empirical, or can they be mathematical? How do we weigh the importance of novel phenomena vs that of post-dictions?
There are not just epistemic questions (like those above) but also ontological ones. What does it mean to say that something exists? Should we ask nature, or will equations do? Susskind, for instance, argued that for a physicist theoretical existence is enough [!!]. This is what philosophers call the principle of plenitude, which goes back to Leibniz. It is worth noting that it is philosophically highly suspect. “Anything which is not prohibited is compulsory” (E. Sudarshan, 1972).
According to Kragh one should not cultivate history of science for the sake of science, but the former can occasionally be useful to the latter. [I have made precisely the same point about philosophy of science, and I think, again, that this can be said also for sociology of science.] Kragh also made some remarks on how much Popper is mischaracterized and oversimplified by scientists, very similar to the ones I had previously made in my own presentation.
Finally, we had Peter Achinstein on “Scientific Speculation.” [Oh crap, no slides, a good old-fashioned reading-your-own-paper thingy!] Lord Kelvin gave a series of lectures at Johns Hopkins about the aether. He said that we knew more about the aether than about water or air, taking the aether as fact because he could model it. Turns out he was [spectacularly] wrong. Pierre Duhem wrote a scathing critique of Kelvin’s work on the aether based on its lack of empirical support.
Question: is speculating legitimate in science, and if so, under what constraints? Relatedly, what counts as proof or confirmation? There was a famous debate about these matters in the middle of the 19th century, between William Whewell and John Stuart Mill. Whewell’s position was roughly that if a theory can explain facts that were not instrumental in developing it (his famous “consilience”), that counts as confirmation of the theory. He also thought that if a theory is beautiful and simple, and explains lots of facts, it is likely true. Mill was having none of that, bringing up the aether as an example of something we had no empirical reason to believe in, and therefore shouldn’t.
According to Newton (and Descartes), hypotheses have no role in natural philosophy — though Newton violated his own dictum time and again. That’s the strong view of the “don’t speculate” camp. A more moderate view is associated with hypothetico-deductivism, a la Popper: you introduce hypotheses without constraints, but without confirmation they do not make for sound science. The third (immoderate) view is that you speculate like mad, no strings (ah!) attached, a la Feyerabend. Achinstein thinks all these views are wrong, because they offer no constraints on the degree of speculation.
Consider the example of Maxwell’s kinetic theory of gases. His initial assumption was that gases are composed of spherical molecules that move at uniform velocities until they collide, and that the collisions are perfectly elastic. No justification for this account was initially provided. Maxwell argued that mechanical theories had worked so well in the past that he was curious to see whether they could be applied to the kinetics of gases. The actual testing of Maxwell’s “speculations” came more than half a century later, but apparently he was unfazed by Newton’s dictum. The constraint he did accept was that the theory had to be developed “mechanically,” meaning that it had to be based on bodies in motion and the laws that regulate their behavior.
Achinstein said that Maxwell provided a pragmatic defense of his assumptions, along the lines of “let’s see how far we can get with this.” And he also gave an inductive argument for plausibility, based on how well “mechanical” explanations had worked out up to that point in physics. None of this amounts to a proof or a confirmation, of course. [All of this strikes me as reasonable, as far as it goes. The problem with “pragmatic” defenses of research programs is that they tend to neglect that research costs money. Even theoretical research, as one has to hire faculty, pay for salaries, train students, and so forth. So perhaps the “pragmatic” approach should incorporate financial as well as epistemic considerations?]
Achinstein, for all his support for pragmatic speculation, is worried by the idea of non-empirical confirmation. Scientists should use non-empirical considerations in order to argue for working on a theory, but definitely not as a substitute for empirical support of it.
The day ended with a 1.5 hour long panel discussion featuring all the day’s speakers. I did not take notes about that (partly because I was one of the panelists), but here are my own opening remarks for the panel:
I will address something that has come up a few times today, concerning the relationship between physicists and philosophers, what I shall call the “who is watching the watchmen? problem.”
I don’t think the idea is that philosophers are the (self-appointed) guardians of good science. Rather, the suggestion is that philosophers are in the business of studying how science works, logically and epistemically, from the outside. They can afford to take a broader perspective on the discipline because they don’t have to write (as many) grant proposals or do experiments; they can read widely and think whenever they aren’t teaching. Trust me, I know, since I’ve done both jobs full time.
Philosophy of science is self-justifying, just like history of science, or sociology of science. Their business is not to help science solve scientific problems. Those scientists who dismiss philosophy because it doesn’t help solve scientific problems hold fundamental misconceptions about what philosophy is about, and most of them, I wager, have never actually read a technical paper in philosophy.
That said, philosophers can offer interesting complementary perspectives on how science works (and, again, that goes for historians and sociologists as well).
Now, then, what sense can we make of phrases like one that has generated discussion earlier today: “science is too important to be left to scientists themselves”?
We live in a (more or less) democratic society, where the public pays for our intellectual luxury (because that’s what it is, make no mistake about it). So society has a right to ask for public discussions of how science works, and these discussions are most productive — I think — if a number of players are invited to the high table: scientists (obviously), philosophers, historians, sociologists, and even policy makers, who after all are the ones who hold the purse strings and are (somewhat) accountable to the public.
And that was it for the first day, folks! See ya tomorrow…