Why Trust a Theory? — part III

Munich [Notes: what follows are only lightly edited notes taken while the meeting was in progress, so they are more likely than usual to contain typos and odd phrasing; also, apologies to my readers, but I may not be able to participate much in the hopefully interesting discussion at PlatoFootnote about this series of posts, as I’m trying to also get some sleep here in Munich… Below and throughout, comments in brackets are mine.]

This is the last installment of my coverage of the workshop on the current status of fundamental physical theory, organized by philosopher Richard Dawid at the Center for Mathematical Philosophy based at the Ludwig-Maximilians University in Munich, Germany. The conference was prompted by a high-profile, controversial position paper published in Nature by George Ellis and Joe Silk (who are co-organizers of the workshop), entitled “Defend the integrity of physics.”

The first speaker of the day was one of the co-organizers, George Ellis, on “Limits in testing the Multiverse.” The idea of a multiverse has received increased attention, and it has been linked to the Everett (“many worlds”) interpretation of quantum mechanics. Martin Rees and Max Tegmark have been among its most popular supporters. According to Brian Greene there are a whopping nine different types of multiverse. They can’t all be true, since they conflict with each other. And of course maybe none of them is real, and there is only one universe.

The underlying impulse seems to be a philosophical principle along the lines of “everything that can happen does happen.” The multiverse is also claimed to be a necessary outcome of chaotic inflation, and it is invoked as an explanation for life and human consciousness. The key point to keep in mind is that the domains being considered lie beyond the particle horizon that is actually observable to us. This means that no observational data whatever are available, and this situation will not change. To think otherwise is hubris on a vast scale.

Arguments in favor include the “slippery slope” [a fallacy, in favor?], where people say, well, there are likely galaxies beyond the observable limits, so… Right, but this assumes an extrapolation over huge scales, and crossing qualitatively different physical domains. Moreover, if the idea of extrapolation is taken seriously, it actually leads to the old idea of spatial homogeneity forever, known as the Cosmological Principle.

A second argument is that the multiverse is implied by physics, specifically by chaotic inflation. But, again, this is a case of extrapolating known to unknown physics. That extrapolation is unverified and unverifiable.

The problem, in a sense, isn’t with extrapolation per se (which is a time-honored practice in science). It is about how far it is reasonable to extrapolate, or how far behind one has left the data (or established, tested theory).

Inflation is not yet a well defined theory, and the available data are implied by some versions of inflation but not others. In particular, not all inflation is chaotic. The Encyclopedia Inflationaris considers a whopping 195 different models of inflation. [Again, underdetermination of theory by data rears its ugly head.] At the moment, the data do not actually prefer a universe containing bubbles.

Third argument: fine tuning. Here a multiverse is one of the possible explanations: an infinite set of universe domains allows all possibilities to occur, so somewhere things work out okay — though one ought to be able to demonstrate that the proper multiverse is a physical reality derivable from the theory, not just a conceptual possibility. In a sense the multiverse is used to make the highly improbable appear probable.

The problem is that this is a probability argument predicated on the existence of the multiverse. It is obviously inapplicable if there is only one universe. We can do many observations of that one object (the known universe), but we still have a sample size of one, as far as universes are concerned.

No value of the cosmological constant can prove whether a multiverse does or does not exist. At best the multiverse idea is simply consistent with an anthropically constrained universe, but that doesn’t get us much of theoretical value.

Tom Banks wrote a paper entitled “The top 10^500 reasons not to believe in the landscape,” in which he says that the string landscape is a fantasy. You may or may not agree with that paper, but it shows that the physics underlying the multiverse is simply not uncontroversial or established.

[Interestingly, throughout Ellis’ talk, David Gross — a supporter of string theory — kept nodding vigorously in the first row… He made it clear that he doesn’t believe in the multiverse.]

Now chaotic inflation could be disproved by observing that we live in a small universe. This is currently being tested, by checking whether we can see “all around” the universe. At the moment it doesn’t appear to be so, but it is an open question. The other possibility is about confirmation (rather than disproof), based on the idea of collisions among bubbles, which would occur if the nucleation rate is large relative to the rate of expansion. So far, nothing.

An additional issue concerns the claim of physically existing infinities, which Ellis says simply cannot be physically realized. “The infinite is nowhere to be found in reality,” according to David Hilbert. Talk of infinity is not scientifically testable. [Here Gross visibly reacted in the negative, and with obvious condescension…]

A second problem is posed by vacuum energy. Quantum Field Theory suggests a huge cosmological constant, discrepant with General Relativity if the vacuum gravitates [hmm, I’m afraid I don’t know what the latter means].

The broad problem is that the multiverse makes few if any testable predictions, but it does explain everything. But successful scientific theories are those that make lots of testable predictions, not those that provide all-encompassing explanations.

Ellis argues that Smolin’s Darwinian cosmological natural selection is the best alternative on offer. [I don’t, see here] Still, it is incomplete in a number of ways. [Others during the discussion pointed out that it has recently failed a crucial test.]

There are also worse proposals, like the idea that the universe is a simulation, which raises far more questions than it actually answers. It confuses science fiction with science.

Talk of Popperazzi and falsifiability police is emotional talk, and says something about just how thin the critics’ argument really is. What data would allow supporters to abandon the multiverse idea? If the answer is none, then that’s dogma, not science.

Ellis also criticized the use of Bayesianism to support non-empirical confirmation, since Bayes’ theorem actually requires the addition of new data (not just new theory) to update the priors. [I actually think this is an excellent point.]
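[To spell out Ellis’ point in symbols (my gloss, not a slide from the talk): Bayes’ theorem changes our degree of belief in a theory T only through the likelihood of some new evidence E.]

```latex
% Bayes' theorem: the posterior for a theory T given evidence E
P(T \mid E) = \frac{P(E \mid T)\,P(T)}{P(E)}
% With no new empirical input E there is nothing for the likelihood ratio
% P(E|T)/P(E) to act on, and the "posterior" simply remains the prior P(T).
```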

Next: Joseph Polchinski (in absentia, played by David Gross) on “String Theory to the Rescue.” [A pre-emptive strike about this talk has been published here, based on a version of the paper that has already been uploaded to arXiv.] The talk addresses both string theory and the multiverse. Gross said at the outset that he agrees with the former but not the latter. [This was, of course, not at all warranted and rather discourteous to the missing speaker, but whatever.]

Planck length, time and mass can be considered universal constants of nature, to be used throughout the cosmos as natural units. They were arrived at in 1899, before both quantum mechanics and general relativity. The Planck length scale (10^-33 cm) is so far away from what current physics can probe that it directly affects the tools and strategies physicists have available to make progress.
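[For reference, these are the standard definitions (textbook values, not from the talk), built out of G, ħ and c alone:]

```latex
% Planck units, combining G, hbar and c (values rounded):
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-33}\ \mathrm{cm}, \qquad
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \mathrm{s}, \qquad
m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.2 \times 10^{-5}\ \mathrm{g}.
```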

Theory has occasionally been able to leap gaps of this type, so we should try, and we should be wary of defining science too rigidly. [Not of defining it too loosely?] It is strange to say, as Smolin does in one of his books, that it is unfortunate that string theorists are not like Einstein, meaning not philosophically savvy. [Here Gross chuckled, stating that he wouldn’t know, since he doesn’t read “those books.”]

The short distance problem is one of the successes of string theory. General relativity and quantum mechanics, when applied to short distances, yield infinities that are non-renormalizable, resulting in something called spacetime foam. This sort of thing already happened to Fermi’s theory of the weak interaction, which gave physicists a clue to the discovery of the W, Z and Higgs bosons.
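[To unpack the Fermi analogy a bit (standard textbook reasoning, my addition rather than the talk’s): the four-fermion coupling has negative mass dimension, so amplitudes grow with energy and the theory itself announces the scale at which new physics must appear.]

```latex
% Fermi's four-fermion coupling has dimension (mass)^{-2}:
G_F \approx 1.17 \times 10^{-5}\ \mathrm{GeV}^{-2},
% so a typical weak-interaction amplitude grows with energy as
\mathcal{A} \sim G_F E^2,
% and unitarity breaks down around E \sim G_F^{-1/2} \approx 300\ \mathrm{GeV},
% roughly where the W and Z bosons and electroweak symmetry breaking indeed show up.
```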

Another success of string theory: uniqueness of dynamics. String theory was discovered in an incomplete and approximate form, but it is a discovery of mathematical-physical structures that actually exist. [Mathematical Platonism?]

String success: physics from geometry. In general relativity, gravity is the curvature of spacetime. So it is an attractive idea that the other interactions come from the geometry of spacetime as well. But GR has used up the spacetime that we know, so we need more spacetime. String theories require extra dimensions, plus branes, of a form nicely compatible with the rest of physics.

String success: duality between gauge fields and strings. This is related to the quest for equivalences between classical and quantum theories. Here the strings themselves (as well as general relativity, branes, and bulk spacetime) are emergent, and the best context in which to understand the theory is holography.

Another success of string theory: the relationship between black hole entropy and information. [See yesterday’s talk by Chris Wüthrich.]

Moving to the multiverse. If the physics that we see depends on the (compactified) geometry of spacetime, what determines that geometry? How many solutions does the 10D Einstein equation have that look minimally like our world? Estimates vary from 10^9 vacuum solutions, to combinatorially — with matter — anything between 10^500 and 10^272,000 [yep, you read that last number correctly…]. Einstein encountered a tiny version of the landscape, the radius of the Kaluza-Klein circle.

If there is a landscape, what determines its geometry in the new physics? Dynamics erases much of the initial conditions. Given a positive cosmological constant, we get expansion, then localized tunneling to a new vacuum, and repeat. So: GR + QM + landscape => multiverse. [Aahh…]

A multiverse is the price we pay for getting physics from geometry [too expensive, say some]. Yet there is evidence that we live in just such a universe. This is because all theories that predict a value of the cosmological constant (no supersymmetry, unbroken supersymmetry) get it wrong; the only options are theories where the constant is a free parameter in the dynamical equations, or the multiverse.

Of the various kinds of theories considered, only the multiverse predicts that observers see an unnaturally small (meaning, below the “natural” scale) cosmological constant. But these conditions do not require the constant to be zero, so a small nonzero value is predicted.

It is often said that the discovery of dark energy came as a surprise, except to cosmologists who had paid attention to the data without theoretical prejudice, and to those who had tried to solve the constant problem by conventional means, knew how hard it is, and were aware of Weinberg’s prediction on the subject. The vacuum energy was a prediction of the multiverse. [I bet this claim is gonna be controversial.]

The section of the talk on the multiverse concluded with a “quasi-Bayesian” estimate of the likelihood that there is a multiverse: 94% — and this is meant as a conservative estimate. [Laughter in the audience, though I don’t think this was intended as a joke…] The multiverse is on a similar footing to the Higgs, to which Polchinski attached a prior of 99.9%. [Even I can tell that this analogy is, shall we say, a stretch?]
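[I have not seen the detailed numbers behind the 94% figure, but the flavor of a “quasi-Bayesian” estimate of this kind can be illustrated with a toy calculation; the prior and the Bayes factors below are invented for illustration and are not Polchinski’s.]

```python
# Toy "quasi-Bayesian" combination of several lines of support for a hypothesis.
# The prior and the Bayes factors are illustrative placeholders only.

def posterior(prior: float, bayes_factors: list[float]) -> float:
    """Update prior odds by a product of Bayes factors; return the posterior probability."""
    odds = prior / (1.0 - prior)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1.0 + odds)

# A skeptical 10% prior plus three moderately favorable considerations
# already lands in the mid-90s:
print(round(posterior(0.10, [3.0, 5.0, 10.0]), 2))  # 0.94
```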

The first speaker after coffee break was Elena Castellani on “Scientific Methodology: A View from Early String Theory.” Questions about scientific methodology are raised by dramatic scale changes in fundamental physics.

Traditionally, philosophers of science are concerned with the modalities followed in building theories, i.e. questions about discovery; they are also concerned with the modalities followed in assessing scientific theories, i.e. questions about justification. They are moreover concerned about the relationship between these modalities, since they are not sharply separated, as well as the influence of external aspects, such as sociological and economic ones.

The problematic aspects for the participants in this workshop concern assessment methodology, not theory building. We have assessment based on empirical support, which is problematic for scientific reasons, both technical and theoretical. Philosophers have little to contribute here, since these are scientific issues. But we also have “extra-empirical” support, which is problematic for philosophical reasons.

There are general meta-strategies for extra-empirical support: for instance Dawid’s three arguments discussed on the first day. There are also more specific internal criteria: consistency, unifying power, generality, simplicity, fertility, explanatory power, elegance, beauty, etc.

Another internal criterion is the convergence argument: the convergence of results obtained in alternative, independent ways, even beginning from different starting points. An example comes from early string theory, from the work by Veneziano (1968) to the first string revolution in 1984.

This is an interesting period because it illuminates the origin of ideas such as strings, duality, supersymmetry, extra dimensions, as well as of mathematical techniques that are basic ingredients in today’s fundamental physics. Historical case studies like this are “data” informing discussions in philosophy of science.

First phase (1968-1973): the so-called dual theory of strong interactions, which was falsified as such. Second phase (1974-1984): those features that were a drawback of the theory as a description of hadronic physics (spin-one and spin-two massless particles and extra dimensions) were taken to reveal the true nature of string theory, leading to the reinterpretation of the theory as a unified quantum theory of all fundamental interactions.

In the second period the theory was regarded as so beautiful and having such a compelling mathematical structure that it had to relate to the physical world. This was sufficient motivation to pursue it.

Castellani’s focus was on the first phase, in particular the string conjecture, which originated in 1969 from three independent authors, in order to arrive at a deeper understanding of the physics described by dual amplitudes. The starting point was Veneziano’s discovery in 1968 of the dual amplitude for the scattering of four mesons. This is a case of discovery, illustrative both of the rationale leading to apparently bold guesses and of the kind of evidential support motivating a theory’s progress.

Early work arrived at the conclusion that the number of spacetime dimensions is 26 via three different routes (with a fourth one added later on, in 1981). It was eventually shown that d=26 is required for a consistent, ghost-free Hilbert space. Of course, this theoretical elegance was achieved at the price of 22 extra dimensions. Nonetheless, the extra dimensions gradually became accepted, because what had appeared to be a surprising convergence of different calculation procedures on the same result could be seen as a natural consequence of the theory in its full-fledged form.
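[One of those routes can be sketched compactly; this is standard textbook material, my summary rather than Castellani’s slide. The zero-point energy of the d-2 transverse oscillators is regularized using ζ(-1) = -1/12, and demanding that the first excited, spin-one state of the open bosonic string be massless fixes the dimension:]

```latex
% Zeta-regularized zero-point energy of the d-2 transverse oscillators:
\frac{d-2}{2}\sum_{n=1}^{\infty} n \;\longrightarrow\; \frac{d-2}{2}\,\zeta(-1) = -\frac{d-2}{24},
% so the first excited open-string state has
\alpha' m^2 = 1 - \frac{d-2}{24},
% and requiring this spin-one state to be massless (as consistency demands) gives d = 26.
```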

Castellani concluded that this is the sort of non-empirical support that has kept string theory alive and well for all these years.

Last talk of the morning: Dieter Lüst on “Aspects of Quantum Gravity.” Rather than asking whether we can trust a theory, we should ask how far we can go in theoretical physics. So far in the history of physics we have been able to probe shorter and shorter distances by accessing higher and higher energies. Extrapolation has always been a good tool, but does it work in the regime of quantum gravity?

The basic picture from quantum mechanics includes the wave-particle duality, Heisenberg’s uncertainty principle and phase space quantization. The other “column” of modern physics, of course, is general relativity. The problem is that the two seem to be mutually incompatible. QM and GR clash at short distances in a variety of ways, leading to infinities. Related questions concern the quantum structure of gravity, the quantum structure of spacetime, the emergence of geometry and gravity, and the storage of information in quantum gravity. Nobody has answers to these issues as yet, but string theory is the most promising framework. [It has been “promising” for quite some time now…]

Graviton scattering in quantum gravity: in quantum gravity, the gravitational force is mediated by the exchange of a massless spin 2 particle, the graviton. Are gravitational waves going to be discovered soon, by LIGO? Meanwhile, one can use a corpuscular model of black holes, which is equivalent to a Bose-Einstein condensate of N gravitons. [Sure, why not… This is one of a number of talks at the meeting that have unfortunately missed the mark. The whole idea of the workshop wasn’t to provide a platform to string theorists to discuss details of string theory and where it may go. It was to focus on a meta-discussion, if you will, of string and multiverse theories vis-a-vis scientific methodology. Oh well.]

String theory, the big picture: particles are string excitations, the spin 2 graviton is always part of the string excitations, it includes the other forces of nature, among other things. We can develop a good understanding of black hole entropy in terms of string states. String theory provides a consistent framework. [There is a lot of emphasis on consistency, which of course is a mathematical/logic requirement, but fairly minimal as far as scientific theories go.]

Of course there is the problem of the landscape, currently estimated at 10^1500 [notice how different people give wildly diverging estimates of the exponent]. But according to Kane [see his controversial talk from yesterday] the theory still has predictive power.

And we are back from the lunch break, with Sabine Hossenfelder and her talk on “Lost in Math.” Theory assessment, non-empirical or not, relies on judgment, which in turn can be trusted only if it is objective. Problem is, cognitive science tells us that human judgment is typically biased. Scientists have of course always suffered from cognitive biases, but the problem becomes much more urgent when data is harder to come by. [I.e., scientists rationalize just as much as anyone else, and if they are free to do that without empirical constraints, then…]

Hossenfelder declared herself offended by an earlier snide remark by David Gross about bloggers. She said that, as one of the few professionals who blog about fundamental physics, she sees her job as actually being that of “cleaning up” the mess often left behind by science journalists.

The problem with fundamental physics is that it takes a long time to come up with new experiments, which in turn has led to a focus on mathematical consistency. But consistency is clearly insufficient. [See my comment above.] There are infinitely many consistent sets of mathematical axioms; we need to pick the right one(s) by comparison with observation.

The mathematical universe of Max Tegmark is “pretty much useless” because it does not help us to pick the right axioms. Multiverses are merely signs of under constrained theories. They are a symptom of trying to explain everything with nothing. [Ouch!]

Physicists use many assumptions that are never explicitly stated, and they are not required by mathematical consistency. That’s what gets us “lost in math.”

Physics also features “hidden rules,” having to do with the relevance of simplicity, naturalness, elegance and beauty. These requirements get converted into technical statements and their status as assumptions is forgotten.

Beauty used to be a sign of godly influence (e.g., in Newton), then it became a bonus (Poincaré), then a guide (Hermann Weyl), and finally a mandate (Dirac). As physicists have become more successful they have also become convinced that they can understand nature by introspection. [Double ouch.]

[It strikes me like all this talk of equating beauty and truth is uncharacteristically Platonic…]

Beauty as a guide actually has a pretty bad track record. Many theories were considered beautiful and turned out to be wrong. Contrariwise, some successful theories were considered ugly when proposed.

What about naturalness? It has worked at times (e.g., top quark) but not others (e.g., cosmological constant, Higgs mass). Naturalness too is an aesthetic, not a mathematical principle. It’s empty without assuming a probability distribution.
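[To make the “empty without a probability distribution” point concrete: naturalness is usually converted into a technical statement along the lines of the standard Barbieri-Giudice sensitivity measure, and the choice of measure is exactly the hidden assumption being pointed at here.]

```latex
% A common fine-tuning measure: the logarithmic sensitivity of an observable O
% (say the Higgs mass) to a fundamental parameter p,
\Delta(p) = \left|\frac{\partial \ln O}{\partial \ln p}\right|,
% with "natural" usually taken to mean Delta of order one. But calling a large Delta
% "improbable" presupposes a probability distribution over the parameters p,
% which is an assumption about the theory, not a consequence of it.
```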

Any theory is infinitely fine tuned because we have to pick a few sets of consistent axioms out of infinitely many.

Simplicity: by itself it is an utterly empty principle. It only makes sense to require a theory to be as simple as possible given certain data. But simplicity acts in a powerful way through the expectation that fundamental theories should be simpler. This has manifestly not been the case; otherwise we would have stopped with the ancient Greeks.

Perception of simplicity depends on exposure: the more familiar you are with a theory, the simpler it seems. This is a phenomenon well known in the arts and in music, and it ought to be acknowledged in physics as well.

Beauty, naturalness and simplicity are human desires. They have a mixed score card, and we have no reason to think that nature conforms to them.

Practitioners come to believe that these criteria are not hypotheses, but rather consistency requirements. This leads to two problems: a) people forget that the criteria are hypotheses (“lost in math”) and b) the selection of criteria is affected by cognitive and social bias – which then don’t get corrected because they are not recognized as choices to begin with. This is what limits the space of theories that physicists explore in the first place, thus weakening the no-alternatives claim.

Scientific progress relies on objective expert judgment, which is hindered by social and cognitive biases, which need to be explicitly addressed in order to do good science. [Seems to me this was one of the best and most on target talks at the workshop.]

Next to the last talk: Karim Thebault on “What can we learn from analogue experiments?” In 1975 Hawking derived a semi-classical result associating a radiative flux to the black hole event horizon. Direct testing of his prediction seemed and still seems impossible. However, in 1981, Unruh showed that Hawking’s arguments can be applied to sonic horizons in fluids. In 2014, Steinhauer’s group claimed to have created an analogue charged black hole within an atomic Bose-Einstein condensate.
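[For orientation, and not from the talk: Hawking’s result assigns a black hole a temperature set by its surface gravity, and Unruh’s 1981 observation was that a fluid flow that goes supersonic has a formally analogous “surface gravity” at the sonic horizon.]

```latex
% Hawking temperature of a Schwarzschild black hole of mass M:
T_H = \frac{\hbar c^3}{8\pi G M k_B},
% Schematic acoustic analogue: for a flow of velocity v(x) crossing the sound speed c_s,
% the sonic horizon emits thermal phonons at roughly
T_{\mathrm{sonic}} \sim \frac{\hbar}{2\pi k_B}\left.\frac{\partial(c_s - v)}{\partial x}\right|_{\mathrm{horizon}},
% i.e. the velocity gradient at the horizon plays the role of the surface gravity.
```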

Should we think of the results of analogue experiments as providing empirical evidence for a given notion, or as analogies?

Experimental arguments from analogy have a long history in science. For instance, in medicine, the fact that a given substance X has an effect on experimental animals similar to another substance Y has been used to infer that if X works in humans, then Y will too, with positive results.

Formally, this sort of argument by analogy is invalid. Analogies merely establish plausibility, not proof, giving us reasons to proceed with further research. The suggestion is that if arguments that can provide external validation are added to analogue experiments, then the latter can be considered as going beyond arguments by analogy and as moving into the realm of actual experimental confirmation, the gold standard in science.

The author then developed an argument [much of it very, very technical] for the closeness between hydrodynamics and the original context of Hawking’s calculations, where the role of the black hole event horizon is played by the effective acoustic horizon. As he put it, the black hole is replaced by a dumb hole. The general form of the argument can be modeled using Bayesian confirmation theory, which leads to the use of analogue experiments (under the specified conditions, which require the availability of external validation) to provide confirmation in the Bayesian sense of the term.
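[As a toy illustration of the structure of that Bayesian argument (my own sketch, not the speaker’s model): the analogue observation E bears on the black-hole hypothesis H only by way of a “universality” claim U that the two systems share the relevant physics, and it is precisely that link that the external-validation arguments are supposed to support.]

```python
# Toy Bayesian model of confirmation via an analogue experiment.
# All probabilities below are illustrative placeholders, not numbers from the talk.
# H: astrophysical black holes Hawking-radiate.
# E: analogue (sonic) Hawking radiation is observed in the lab.
# U: the Hawking derivation is robust across systems sharing the relevant horizon structure.

def posterior_H_given_E(p_U, p_E_given_U, p_E_given_notU, p_H_given_U, p_H_given_notU):
    """P(H | E), assuming H and E are conditionally independent given U."""
    p_E = p_U * p_E_given_U + (1 - p_U) * p_E_given_notU   # marginal likelihood of E
    p_U_given_E = p_U * p_E_given_U / p_E                   # Bayes update on the universality claim
    return p_U_given_E * p_H_given_U + (1 - p_U_given_E) * p_H_given_notU

prior_H = 0.5 * 0.95 + 0.5 * 0.50   # P(H) before the analogue result: 0.725
post_H = posterior_H_given_E(0.5, 0.9, 0.3, 0.95, 0.50)
print(prior_H, round(post_H, 3))    # 0.725 -> 0.838: E confirms H, but only via U
```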

The last talk of the three-day workshop was by Georgi Dvali on “Secret quantum lives of black holes and dark energy.” [Another speaker who wrote his slides out by hand! Is this a new trend among Slavic physicists? Not a good one, not a good one. But at least his accent was not as incomprehensible as Mukhanov’s yesterday.] The author is concerned with how black holes process information [which of course is only marginally related to the actual topic of the workshop].

Classical black holes carry little information because they are featureless. Quantum mechanical black holes, however, do carry a huge Bekenstein entropy. [He was talking as if the two were different objects, which is true mathematically, though I thought this was a conference about physics.] But in the classical limit black holes carry infinite information, except that it takes an infinite amount of time to decode it.
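[The quantitative statement behind “huge Bekenstein entropy,” added for context: the entropy scales with the horizon area in Planck units, which for astrophysical black holes is an enormous number.]

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A:
S_{BH} = \frac{k_B A}{4\,\ell_P^2} = \frac{k_B c^3 A}{4 G \hbar},
% of order 10^{77} k_B for a solar-mass black hole, vastly more than the entropy
% of the star that collapsed to form it.
```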

[Unfortunately, the rest of the talk was a technical (obviously, theoretical) discussion of black holes physics, which was entirely irrelevant to the topic of the workshop, so I will not bore you with it.]

That’s really it, folks! It was a very interesting experience, not only because I learned quite a bit about physics and ongoing discussions among leading physicists, but because I got an invaluable first-person experience of the personalities and attitudes behind the controversy. You should keep an eye on the conference web site, where the videos of the talks will soon be published. The organizer, Richard Dawid, is also working on publication of the proceedings, either in book form (he is in conversations with Oxford Press, at the moment of this writing) or, less desirably in my mind, as a special issue of some journal. So stay tuned, there is much more to come, the string wars continue…

An update from Lee Smolin, concerning the alleged falsification of one of the predictions derived from his model of cosmological natural selection:

“Sabina was referring to this paper. From the abstract it claims the ‘measurement of a 2.01 +/- 0.04 solar mass pulsar.’ This is clearly not in contradiction with my prediction of an upper mass limit of 2 solar masses. For the reasoning and other checks of this prediction, See page 20 and reference 31 of Lee Smolin, ‘A perspective on the landscape problem,’ Invited contribution for a special issue of Foundations of Physics titled: Forty Years Of String Theory: Reflecting On the Foundations, DOI: 10.1007/s10701-012-9652-x arXiv:1202.3373. That reference 31 is: James M. Lattimer, M. Prakash, What a Two Solar Mass Neutron Star Really Means. 4. arXiv:1012.3208 , to appear in Gerry Brown’s Festschrift; Editor: Sabine Lee (World Scientific), which is by the people who did the nuclear physics that the 2 solar mass limit is based on. So I think its correct to say that this prediction is still standing up.”

78 thoughts on “Why Trust a Theory? — part III”

  1. Philosopher Eric

    Hi Robin,

    Philip is plenty capable of answering you in his own way, but since I do hold the same position, consider my own explanation for why matter does precede mathematics.

    (Practicing what I preach) I define mathematics as a language, and so conceptually no different from English. Thus without the physical, presumably there will be no one to speak the language of mathematics, and so presumably there will be no mathematics. Observe that even a true Platonist might agree, if made aware of my definition for the humanly fabricated term of “mathematics.”

    Now if you’re asking if our “pi” would be different in some weird physical realm, I have no reason to think this, but then what do I know?

  2. Daniel Kaufman

    Philosopher Eric wrote:

    I define mathematics as a language, and so conceptually no different from English.

    ———————————————————————

    Formal languages are almost entirely — and conceptually — different from natural languages like English.

  3. brodix

    I would think the relevant point about whether circles being constant makes them fundamental, isn’t their regularity, but whether they are explained by more fundamental precepts and relations.

    That the same cause yields the same effect is a more basic function than the forms emerging from it.

    I would explain circles/spheres as the most efficient form of area/volume, to perimeter/surface area. Consequently you see many more circles than squares in nature, because squares and cubes only emerge from the efficient division of area/volume. Consequently you see many more such cornered forms in human constructs.

  4. Philosopher Eric

    Hi Daniel,

    Yes “natural languages” (defined as what the human evolved to speak) are surely very different from “formal languages” (defined as what we’ve recently acquired). But I don’t see how this changes whether or not the material must exist (as I believe) in order for a language to exist, or at least as I’m defining the terms. Now if someone asks if what we call “a circle,” or “two,” can exist as such, in a realm in which no one speaks any languages, then I agree they might also exist in those realms!

    Furthermore as I see things, a language such as English will be many orders more advanced than the language such as mathematics, which can be obscured from us given that we’ve evolved to speak something which is quite advanced, though not the various primitive languages which we’ve recently developed. Observe that there is nothing which can be denoted in the language of mathematics, which cannot also be denoted in the language of English.

  5. Daniel Kaufman

    Philip:

    Natural languages may just be complex (perhaps even paraconsistent) “formal” languages.

    —————————————————————————–

    Afraid not.

I’m not saying that no one thinks this, but it’s clearly wrong. Meaning in natural languages is a function of use, not of compositional meaning or truth theories. And so much of natural language is non-truth-functional, effecting communication via the illocutionary and perlocutionary force of speech acts.

    Chuck those formal semantics texts. If you want to understand anything about natural languages, read the later Wittgenstein, J.L. Austin, and H.P. Grice.

  6. Daniel Kaufman

    Philosopher Eric:

    The problem is, you can’t just define the terms anyway you want. There is a substantial literature on this and thus, a substantial conversation already going on.

    And I’m afraid I just don’t see what “material” has to do with any of it.

  7. brodix

    Daniel,

Might it be argued that natural languages evolved for the purpose of dealing with the everyday totality of existence, while formal languages are expressly logical constructs, derived from our experiences of reality? In essence, natural languages deal with the “noise” of experience, while formal languages attempt to extract the “signal” from it?

    So while Philip might be wrong, in the sense that natural languages do not necessarily have an inherent formality, Eric might be right in that formal languages are necessarily a subset of what natural languages can describe.

    ?

  8. Philosopher Eric

    Daniel,

You know quite well that I respect you. I myself happen to be quite uneducated in the field of philosophy, whereas you have devoted your very life to it. What you are saying here, however, suggests that some sort of misunderstanding must have occurred. Are you saying that you personally believe that English, Spanish, Mathematics, and so on, exist in and of themselves prior to the “material” humans who use and, I presume, developed these languages? Are you saying that humans such as myself do not arbitrarily define the English term “open,” for example, in whatever way we like?

  9. brodix

    To add an interesting but likely controversial analogy, one might compare formal languages to the elites of a society, in that they are a subset of it. The signal.

    While the formal rules governing that society do not fully extend to every aspect of it. The noise.

    In fact it is more common that the elites become corrupted, than that the entirety of the organic community can ever be fully rule bound. Aka; funny math.

    So that while seemingly small conveniences work quite usefully to begin with, then when taken for granted, end up creating enormous distortions.

    For instance, the idea of a dimensionless point as an ideal of location, when in fact location is a function of space and time, so that by eliminating all dimensionality, it becomes a multiple of zero and thus non-existent, not a perfect/ideal form.

    Now we have an entire universe springing from this ideal of location and the scientistic elites are duty bound to support it, because the rules they swear by allow it.

  10. Daniel Kaufman

Robin: the point is that one can give strict definitions for the concepts of geometry, in terms of lists of necessary and sufficient conditions. This is not true of many — if not most — natural language terms.

  11. Philip Thrift

    Daniel: “If you want to understand anything about natural languages, read the later Wittgenstein, J.L. Austin, and H.P. Grice.”

That may be useful, but I think today it is more useful to read what the natural language research teams at Google, Microsoft, and Facebook are writing, for example, and what’s presented at the Logic and Engineering of Natural Language Semantics conferences, Computational Linguistics / Natural Language Processing conferences, etc.

  12. Philosopher Eric

    Yes Robin, circles surely don’t appear in any of our languages whatsoever. They do however seem to appear for the conscious entity at times. For this reason we can expect something such as the human to try to describe its associated perceptions by means of its various languages. Thus “the circle” does seem to occur.
