Yet another frustrating conversation: why talking science to “skeptics” is a hopeless endeavor

Some time ago I related a frustrating conversation I had with one of my relatives, an intelligent and educated person who nonetheless holds what I consider hardly rational views, not just in politics (where there is usually ample room for disagreement), but also about conspiracy theories and, more broadly, the nature of the world. Recently, I’ve done it again, this time spending days, on and off, in a conversation via social media with a person I’ve never met and likely never will. Let me tell you what I learned from it.

First, a disclaimer: I usually do not engage in one-on-one debates, whether via email or on social media, simply because not only do they tend to be fruitless, but they are also incredibly time consuming. And the older I get, the more jealous I am of my time. This story, therefore, is to be considered a rare exception, not an encouragement to send me private messages in an attempt to repeat the experience. That’s why I have two blogs (this one and howtobeastoic.org): so that we can have fruitful public discussions that may benefit a number of people.

The range of topics of this new episode was much narrower than that of the preceding one, and also far closer to my own areas of expertise: evolutionary biology and philosophy of science. I felt, therefore, like I really knew what I was talking about, providing not just a reasonably intelligent and somewhat informed opinion (as, say, during informal discussions on economics, or politics), but an expert one, based on 35 years (shit!) of studying the subject matter at a professional level.

It didn’t help. Not in the least. My interlocutor — let’s call her Curiosa — is an intelligent woman who has read a lot of stuff on evolution in particular, and science more generally. She has also read several of my blog posts, watched some of my debates, and even bought one of my books on evolution. She discovered me by way of reading Michael Denton’s Evolution: A Theory in Crisis, which cites me several times as a reluctant critic of evolutionary theory, i.e., one of those people who know that there is something seriously wrong with “Darwinism,” and yet somehow can’t let go of the orthodoxy and embrace the revolution.

My actual position is easy to check online, in several places. For instance in these two recent blog posts for the Extended Evolutionary Synthesis initiative. In a nutshell: evolutionary theory has evolved by way of several episodes beginning from 1859 (original Darwinism) to the 1930s and ’40s (the Evolutionary Synthesis) through current times (the Extended Synthesis), and it will likely continue to do so. There is nothing wrong with Darwin’s original twin ideas of natural selection and common descent, but we have added a number of other areas of inquiry, explanatory concepts, and of course empirical results over the intervening century and a half. End of story.

Not according to Curiosa. She explained to me that Darwinism is a “reductionist” theory, apparently meaning something really bad by that term. I explained that reductionism is a successful strategy throughout the sciences, and that when it is well done (i.e., it’s not what Dan Dennett characterized as “greedy” reductionism), it is pretty much the only game in town to advance our knowledge of the world.

But, countered Curiosa, how do you then explain the bacterial flagellum? This was obviously a reference to the infamous Darwin’s Black Box by intelligent design creationist Michael Behe. You know, Behe is a scientist! With a PhD!! Working at a legitimate university!!! How do you explain that, Prof. Pigliucci?

Simple, I said, you will always find legitimate academics who will position themselves outside of the mainstream. It actually is a healthy aspect of the social enterprise we call science. Occasionally, some of these people go way outside of the consensus opinion, into territory that is highly questionable, or even downright pseudoscientific. They may do it for a number of reasons, from the fact that they consider themselves rebels and mavericks to their tendency to put their (usually religious, but sometimes political) ideology ahead of reason and evidence. As in fact is the case for Behe, a fervent Catholic who simply can’t wrap his mind around the conclusion that life originated and differentiated by purely natural means, no gods required.

Ah!, continued Curiosa, if that’s the case, how come there is so much disagreement among scientists about evolution, and even the origin of life? Well, I replied, let’s begin by separating those two:

To begin with, there is no such thing as widespread disagreement about “Darwinism” among evolutionary biologists. Pretty much all professionals I know accept the idea, and the disagreement is over the shape of the current theory, just like physicists disagree on the cutting edge of their discipline, not about Newton, or even Einstein.

Moreover, the reason there are indeed so many theories about the origin of life, and truly no consensus, is because we just don’t have enough information left for us to zero in on one or a small subset of hypotheses. The historical traces of those events are, unfortunately, forever erased. We don’t have, and likely never will have, fossils documenting what happened at the onset of life, which means that our ideas about it will remain speculative. Indeed, even should we one day be able to recreate life from scratch in a laboratory, we will have no guarantee that the path we followed under controlled conditions was the one historically followed by nature on our planet. But so what? Science never promised to answer every question, it only promised to do its best. Sometimes its best is not good enough, and the wise thing is to accept human epistemic limitations and move on.

Not at all satisfied, Curiosa shifted topic again: didn’t you hear of Roger Penrose’s quantum mechanical explanation of consciousness? Doesn’t that imply that consciousness is everywhere, that it is a holistic property of the universe?

Hmm, I said, with all due respect to Sir Roger, I doubt physicists have a clue about consciousness, which so far as I can see is a biological phenomenon, whose explanation is hence best left to biologists. Besides, I told her, beware of any “explanation” that invokes quantum mechanics for anything that is not a quantum-level phenomenon, even when done by an actual credentialed physicist like Penrose. At any rate, I concluded, even if Penrose is right, what does that have to do with Darwinism and its alleged failures?

I think you get the idea, so I won’t bore you with additional examples of the many increasingly frustrating and downright useless exchanges between Curiosa and me, which continued until I politely pointed out that we were going in circles and that perhaps it was time to call it a day.

What did I learn from this exchange? A number of things, none of them boding too well for the advancement of rational discourse and public understanding of science.

First, let me remind you that Curiosa is a smart, well read, and genuinely curious person. She ain’t no country bumpkin, so to speak.

Second, precisely because she reads widely, she can’t help putting what I write — or what truly eminent evolutionary biologists, like Stephen Jay Gould, write — on the same level as the sort of fluff that comes out of the Behes and the Dentons of the world. She simply has no way to discriminate, since all these people have PhDs, and they all have affiliations with reputable universities.

Third, while we always assume that knowledge is an unqualified good, it turns out that a bit of knowledge may do more harm than complete ignorance. When someone as intelligent as Curiosa thinks she understands enough to draw conclusions, she will not hesitate to do so, rejecting expert opinion outright. When this has to do with the status of evolutionary theory, not much harm is done. But when it has to do with, say, climate change, or the safety of vaccines, that’s an altogether different, and far more dire, story.

Fourth, Curiosa has fallen for the well-known technique of casting doubt on mainstream science, enough doubt that people cannot genuinely make up their minds about what is going on. This was the deliberate strategy of the tobacco industry in its absurd (and, for many people, lethal) denial of a link between smoking and cancer, so well encapsulated in the book and documentary Merchants of Doubt. The same approach has since been used to sow doubts about climate change, vaccines, and so forth. And of course it has also been the main strategy behind the so-called intelligent design movement.

Fifth, and rather ironically, Curiosa has absorbed and internalized the vocabulary of skeptical (i.e., pro-science) organizations, accusing me and others of engaging in all sorts of logical fallacies, a convenient shortcut that saves her the trouble of actually engaging with my arguments. When I pointed out — reasonably, it seemed to me — that Discovery Institute Fellow Jonathan Wells is a member of the Church of Reverend Moon, and that his antipathy toward evolution is entirely ideological in nature, I of course “committed” an ad hominem. When I pointed to plenty of reliable sources on evolutionary theory, I was engaging in confirmation bias. And so on.

Lastly, Curiosa’s spirited discussion with me was very clearly fueled by her pride in taking on Big Science and its Orthodoxy, in favor of open mindedness and revolution. She saw herself as David, and I was the Goliath to be slain.

There is nothing I or anyone else can do for the Curiosas of the world. If, and it’s a big if, they ever manage to get their heads clear about what is and is not legitimate science, they will have to do it on their own, painfully and slowly. The resources are out there, easily at their disposal. But they have no psychological incentive to do so.

What can, and ought to, be done instead is to act at two levels: i) engage in public outreach aimed at those who are not yet as far gone as Curiosa, hoping to retain them and even strengthen their resolve to support sound science; and ii) do a far better job than we do now with the next generation. It is children we should target, just as, not at all coincidentally, creationists do: they write lots and lots of books for children. But there is little incentive for scientists and science popularizers to do the same, because children’s literature is seen as somehow inferior to that aimed at adults (even though it is arguably harder to pull off), and because we won’t see the results for decades.

Science, and reason in general, thus remains — in the beautiful metaphor proposed by Carl Sagan — like a candle in the dark. Our urgent job is for it not to be snuffed out by the forces of darkness.

234 thoughts on “Yet another frustrating conversation: why talking science to “skeptics” is a hopeless endeavor”

  1. Massimo Post author

    Brodix,

    I don’t think it would have been a good idea to invite Curiosa to this forum. Given my experience with her, it would have turned into a frustrating exercise for everyone involved. Besides, it is rather awkward to be invited, however anonymously, to discuss an essay that lays out just how non-rational someone can be.

    John,

    I taught a graduate seminar on Hume, and took graduate courses on both Hume and Kant (the latter taught by one of the top Kant scholars). I assure you that Hume is no sophomoric anything. Famously, even Kant tipped his hat to him, by saying that Hume awoke him from his dogmatic slumber.

    Astro,

    I have no idea what you mean when you say that light is “deeply related” to the idea of consciousness. And the observer effect is misnamed, since the “observer” is any physical apparatus interacting with a system, not an actual human observer.

    But yes, I agree that knowledge is morally neutral, broadly speaking. Though the pursuit of certain kinds of knowledge isn’t: I don’t see how developing nuclear weapons is a morally neutral enterprise.


  2. Daniel Kaufman

    DM: I think that supervenience is such a weak relation that it doesn’t have much by way of implications for the “simulation” of social reality / activity, steeped as it is in intentionality.


  3. Massimo Post author

    Christopher,

    Slow down with the accusations of hypocrisy my friend. I have never argued that virtue is second to power. That’s not even an accurate rendition of Larry Becker’s view, and it is most definitely not mine. So the rest of your comment is a non sequitur.


  4. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    I don’t see how having an example of a Godel sentence under one particular axiomatic system helps Penrose.

    It doesn’t. The example was provided because you asked for it, and because it interested me to see what it looked like. That’s all. All Penrose needs is Gödel’s theorems, which show that it is possible to construct such a sentence.

    He needs an example of a sentence that is known to be true but is unprovable by any axiomatic system

    Perhaps, but he doesn’t see it that way. What he thinks he needs is a way to construct such a sentence for any axiomatic system.

    Surely there can be no such example

    Correct.

    So I don’t see how Penrose can support his claim that mathematicians are doing things that no deterministic hardware could do.

    Because any deterministic hardware system performing a computation can be construed as a formal axiomatic system. This point is rarely challenged by those who seem to understand the issues, I think, so let’s accept that. Anyway, if a computer is equivalent to a formal axiomatic system, then for any computer we can in principle construct a Gödel sentence which we can see is true but which is unprovable by that computer. So, the thinking goes, for any given computer a human mathematician can do something it cannot. This would presumably include any computer that was doing the same job as that human mathematician’s brain. Therefore, the human mathematician’s brain cannot be doing a job that could be done by a computer.

    Now, I disagree with this argument because it would only work if we knew that the deterministic hardware doing the job of the human mathematician was consistent, and we don’t know that human brains are consistent (and indeed they are often not).

    Also I’m not really convinced that a human brain could ever produce a Gödel sentence of its own algorithm — it would take more resources than the human brain has available. It’s like trying to fit a large box in a small box. You saw how cumbersome the Gödel sentence of the PA axioms was — imagine a Gödel sentence of a human brain! To produce such a Gödel sentence, the human would need access to external aids, paper, pens, calculators, even computers. But then the system is no longer the human alone but a much bigger extended system, and so the Gödel sentence we would need is actually the Gödel sentence for that system — which would presumably necessitate taking in even more resources yet again. And so on ad infinitum. So I don’t think a human can produce a Gödel sentence corresponding to that human’s own information processing, and so a human can no more step outside its own limitations than a computer can.

    So we’re agreed that the argument doesn’t seem to work. And that seems to be the consensus among experts. But my point stands — it’s a much better argument than the caricature you initially offered, even though it’s wrong.


  5. Disagreeable Me (@Disagreeable_I)

    Hi Dan,

    DM: I think that supervenience is such a weak relation that it doesn’t have much by way of implications for the “simulation” of social reality / activity, steeped as it is in intentionality.

    Again, I said simulated, not reproduced. I’m not talking about reproducing intentionality. I recognise that any such claims (though I may hold them) are controversial. So a simulated you having a conversation with simulated students might have no intentionality and no consciousness, but it would still have every appearance of intentionality and consciousness, in that it would be producing much the same sort of utterances as you did, even if it did not truly understand what it was saying.

    If you don’t think this is the case, then what do you think would happen if we could scan every molecule that made you up and made your students up (along with the air and furniture and so on in the room) and simulated the low level physics taking place? What would we see in our simulation? Would simulated you just sit there dumbly? Would simulated you keel over and die? Or just collapse into a puddle of amino acids, lipids and carbohydrates? Or would simulated you, as I believe, appear to continue the discussion just as if simulated you were real?


  6. Daniel Kaufman

    If you don’t think this is the case, then what do you think would happen if we could scan every molecule that made you up and made your students up (along with the air and furniture and so on in the room) and simulated the low level physics taking place?

    I don’t think this is a meaningful question. It’s Star Trek. And that’s a problem I have with a lot of these sorts of conversations.


  7. Coel

    Hi DM,

    Anyway, if a computer is equivalent to a formal axiomatic system, then for any computer we can in principle construct a Gödel sentence which we can see is true but which is unprovable by that computer.

    Agreed so far.

    So, the thinking goes, for **any** **given** computer a human mathematician can do something it cannot.

    Agreed again. For any deterministic computer A, operating on formal axiomatic system A, there is a Godel sentence G(A). However, a deterministic computer B, operating axiomatic system B, could do things that A could not and so might be able to prove G(A).

    This would presumably include any computer that was doing the same job as that human mathematician’s brain.

    But this doesn’t follow!

    If we have: computer-A cannot prove G(A) but human can. Then either:

    1) Human is deterministic computer-A plus added quantum indeterminacy. Or:

    2) Human is deterministic computer-B.

    Penrose needs to exclude (2) before he can argue (1). But that seems to be a complete leap; how does he justify it?

    As I see it he’s given no argument at all against (2). He needs to argue that there is no possible computer-B which can prove G(A). But that’s surely false, since I can simply define B as [A + axiom{G(A)}].


  8. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    But this doesn’t follow!

    I think it does. It’s a kind of reductio ad absurdum or proof by contradiction.

    It starts by assuming it is possible to program a computer to implement the same information processing abilities as the mind of a human mathematician.

    Let’s call this computer, which by our starting premise is functionally identical to a human mathematician, computer A.

    Now let that human mathematician produce G(A).

    Since the human mathematician has produced G(A), then the human mathematician can prove that G(A) is true.

    But, since G(A) is a Gödel sentence of computer A, computer A cannot prove G(A) is true. Hence computer A and the human mathematician do not possess the same information processing abilities.

    But we started by assuming they did, so we have a contradiction. Our starting assumption cannot ever be true — there cannot ever be a computer that can implement the same information processing abilities as the mind of a human mathematician.

    So it doesn’t do to assume that the human actually corresponds to deterministic computer B. In that case you would just have the human produce G(B) instead.


  9. Daniel Kaufman

    DM: Your question also presumes that every element of physical reality is translatable into “information.” Otherwise “scanning” as you use it is meaningless. But as Massimo and I have discussed over the course of several different dialogues, this is not the case.


  10. Disagreeable Me (@Disagreeable_I)

    Hi Dan,

    I don’t think this is a meaningful question. It’s Star Trek. And that’s a problem I have with a lot of these sorts of conversations.

    I think concentrating on how outlandish this is misses the point. The original point from Penrose is that no matter what you do, how complex or clever the system you build, you can’t build something as smart as a human mathematician out of deterministic, mechanical, computable parts — even if we do go to unrealistic ridiculous outlandish Star Trek extremes. Penrose’s point is even if you allow something so ridiculous for the sake of argument, you’re still not going to be able to simulate human intelligence.

    Not only is the computational AI project doomed to fail if Penrose is right; it also means that we need something more than a mechanical, deterministic account of the operation of neurons. Humans could not be as smart as we are without something indeterministic and uncomputable in physics, unless we want to bring in the supernatural. At the moment, that seems to mean that Quantum Mechanics must play a role, as there isn’t really any room for uncomputable physics anywhere else.


  11. Daniel Kaufman

    DM:

    I don’t believe quantum mechanics has anything interesting to tell us about consciousness, intentionality or thought.

    The point isn’t outlandishness. The point is that the question, as you posed it, is meaningless. Not in the sense of being syntactically ill-formed, but informally meaningless.

    The AI project is doomed to fail for many reasons, the ones you’re fixating on being the least interesting.


  12. Disagreeable Me (@Disagreeable_I)

    Hi Dan,

    DM: Your question also presumes that every element of physical reality is translatable into “information.” Otherwise “scanning” as you use it is meaningless.

    My question doesn’t presume this. Scanning is just capturing whatever is translatable into information. Whatever is left out is what is needed for reproduction, not for simulation. A simulation is just a reproduction of whatever it is that can be captured as information, without reproducing everything else (properties such as actual mass, temperature, intentionality, etc).


  13. richardwein

    Hi DM. I dropped in for an occasional visit, and can’t resist posting. I’m responding to you because I think you’re the best informed on the subject of Penrose, but I think you’re missing the central problem with Penrose’s argument.

    You wrote:

    Because any deterministic hardware system performing a computation can be construed as a formal axiomatic system. This point is rarely challenged by those who seem to understand the issues, I think, so let’s accept that. Anyway, if a computer is equivalent to a formal axiomatic system, then for any computer we can in principle construct a Gödel sentence which we can see is true but which is unprovable by that computer.

    I’m not sure whether you were speaking for yourself at that point, but in my view the argument has already gone wrong. You’ve jumped from “a formal axiomatic system” to “a formal axiomatic system [in which Gödel sentences exist]”. The formal specification of a computation can be construed as a “formal axiomatic system”, but it doesn’t follow that it’s the sort of formal system that is the subject of Gödel’s theorem. I won’t say “one which is expressed in terms of Peano’s axioms”, as you’ve put it, because I’m not quite sure that Peano’s axioms are the relevant ones. But the formal specification of a computation need not express Peano’s axioms or the axioms of any mathematical system at all. Not all formal axiomatic systems are mathematical formal axiomatic systems.

    I agree that any deterministic system can be represented (at least approximately) by a formal system. I would add non-deterministic systems too. One example of such a formal system could be a specification of a computer system simulating the brain of a mathematician at a very detailed level. (Of course, I’m speaking in principle, not commenting on whether this will ever be a practical possibility.) Such a system could be as capable of producing mathematical theorems and proofs as a human mathematician could be.

    HOWEVER, it doesn’t follow that the formal system in question must be one based on mathematical axioms. It’s important not to conflate the utterances of the physical system with formulas of the formal system. The utterances need not be formulas of the formal system! The utterances are not (necessarily) being produced by deduction. After all, suppose the mathematician says, “I’m hungry!”. We wouldn’t assume that that utterance has been deduced from a set of axioms. But, because mathematical theorems sometimes are deduced from axioms, we may be too quick to jump to the conclusion that any system that utters mathematical theorems has deduced them from axioms! That, I think, is where Penrose goes wrong.

    What do you think are the “axioms” and “theorems” of the formal system that specifies our computer simulation? Perhaps it will help if I quote something I wrote once before:

    It’s true that we can specify an algorithm in the format of a mathematical formal system, but doing so is of little benefit to these discussions. To see that, I’ll proceed by sketching such a formal system.

    Let’s say we want to model the process of program execution at the machine code level. Let the well-formed formulas of the system be strings of binary digits representing the possible states of the computer’s memory. We’ll need to include the processor’s internal registers. For example we might let the first 64 bits of each formula correspond to the processor’s program counter, which points to the next instruction to be executed. Then our single “axiom” will correspond to the initial state of the computer, with our program and starting data in memory. Our “theorems” will correspond to subsequent states of the computer, after the execution of each instruction. Our single “rule of inference” will tell us how to execute one instruction, whichever one is currently pointed to by the program counter. This single rule could be broken down into sub-rules, one for each different instruction in the instruction set. But I call it one rule in order to emphasise that there is no choice of rules to be applied, as there is in the case of a mathematical formal system. In the mathematical formal system, it’s open to the mathematician to decide which rule to apply to produce the next theorem, and there are many possible theorems he could produce. That’s why we can’t think of the mathematical system as specifying an algorithm. But in the case of program execution it’s more natural to think in terms of an algorithm than of a set of rules.

    https://barbedsextant.wordpress.com/2015/10/14/searles-argument-from-syntax-and-semantics/

    The only “axiom” of such a computational system is the one that specifies the initial state of the computer’s memory. That’s not a mathematical axiom. To put it another way, the formal specification of a computer system might be considered a “formal system” in a broad sense, but it’s not a mathematical formal system, and Godel’s theorem can’t be applied to it.
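    The contrast drawn above can be made concrete with a toy sketch (a minimal illustration in Python, my own invention rather than code from the linked post; the miniature two-field machine and the names `step` and `theorems` are made up for the example):

```python
# Toy version of the "program execution as formal system" sketch above.
# The single "axiom" is the initial machine state; the single "rule of
# inference" is one step of execution; the "theorems" are exactly the
# states reachable from the axiom. (All names here are illustrative.)

from typing import List, Tuple

State = Tuple[int, int]  # (program counter, accumulator) of a tiny machine


def step(state: State) -> State:
    """The one rule of inference: execute the instruction at the counter.

    This machine's fixed 'program' increments the accumulator three
    times, then halts.
    """
    pc, acc = state
    if pc >= 3:  # halted: the rule no longer changes the state
        return state
    return (pc + 1, acc + 1)


def theorems(axiom: State, n_steps: int) -> List[State]:
    """Enumerate the 'theorems' derivable from the axiom in n_steps."""
    derived = [axiom]
    state = axiom
    for _ in range(n_steps):
        state = step(state)
        derived.append(state)
    return derived


# Unlike a mathematical formal system, there is never a *choice* of
# rules: each state has exactly one successor, which is why it is more
# natural to call this an algorithm than a deductive system.
```

    Running `theorems((0, 0), 4)` enumerates five successive machine states (the last two identical, because the machine has halted). Nothing in this “formal system” mentions Peano’s axioms, which is exactly the point: it is a formal system only in the broad sense, not a mathematical one to which Gödel’s theorem applies.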


  14. Michael Fugate

    John, assertions are not arguments. Have you ever tried to put together your reasons for believing intelligent design is an explanation for anything? It might be a useful exercise to make the attempt. The devil is in the details – proposing intelligence creates many more unanswered questions than it answers.

    As a start, why not paraphrase both Kant’s and Hume’s arguments, I would be interested in what you come up with.


  15. synred

    Wells is a member of the Church of Reverend Moon, and that his antipathy toward evolution is entirely ideological in nature, I of course “committed” an ad hominem.

    As I understand it, an ‘ad hominem’ attack is a logical fallacy, i.e., just because someone belongs to a crackpot religion does not mean that everything they say is wrong, and thus it is not allowed as an argument in formal debate.

    In less formal situations it tends to be a pretty good guide, but may be wrong on occasion.


  16. SocraticGadfly

    The “ad hominem” is to a degree a reverse of the “appeal to authority.” When it’s an expert’s actual field, an appeal to authority is fine. Ditto on the flip side — pointing out something from a person’s background that is relevant to the issue at hand, including a relevant bias, is fine.


  17. Coel

    Hi DM,

    It starts by assuming [Premise 1] it is possible to program a computer to implement the same information processing abilities as the mind of a human mathematician.

    OK.

    (2) Let’s call this computer, which by our starting premise is functionally identical to a human mathematician, computer A.

    OK.

    (3) Now let that human mathematician produce G(A). Since the human mathematician has produced G(A), then the human mathematician can prove that G(A) is true.

    But — as you agree — this contradicts Premise 1. Since G(A) cannot be proven correct by comp-A, and since — by Premise 1 — the human is functionally equivalent to comp-A, then the human cannot prove G(A).

    Our starting assumption cannot ever be true — there cannot ever be a computer that can implement the same information processing abilities as the mind of a human mathematician.

    Well not so. All this argument shows is that we can’t have both (1) and (3). So, instead of not having (1), we can simply not have (3). I don’t see how this argument ever shows that Premise 1 cannot be true.


  18. Disagreeable Me (@Disagreeable_I)

    Hi Richard,

    I’m not really good enough on this stuff to be able to comment sensibly on your argument. All I know is this is not the kind of challenge experts usually seem to mount. It is usually deemed fair game to consider a computer to be equivalent to a mathematical axiomatic formal system. I’m willing to grant that, because it seems very plausible to me that for every piece of software there ought to be true statements that that software could never prove. It’s quite a similar idea to the halting problem. It’s also plausible to me that there should be a way to construct such a statement given knowledge of how that software works (access to the source code, essentially).

    Furthermore, we can do so with humans. “Richard Wein can never prove this statement to be true”. I know that sentence is true. You cannot possibly know it is true. It’s just a variant on the liar paradox. So humans are just as vulnerable to this sort of thing as computers. So I think we should accept that computers are likely as vulnerable to Gödelian tricks as Peano arithmetic, but we should see that we are no different and therefore this doesn’t mean computers are more limited than we are. We still have a puzzle to sort out, but it’s the same sort of puzzle we have to sort out when we think about how I can see a sentence is true, how you can presumably appreciate that I can see it is true, and yet you can also see that you can never consistently see it to be true yourself.

    You ask about utterances. I would guess that the states of the system as it produces those utterances are equivalent to formulas of that system.

    After all, suppose the mathematician says, “I’m hungry!”. We wouldn’t assume that that utterance has been deduced from a set of axioms.

    Well, no, but the state of the mathematician’s mind that leads her to say “I’m hungry!” can be regarded as a true statement in the corresponding axiomatic system — it is a state that can be reached by applying the rules of the system (which I guess are axioms) to an initial state (which I guess is also an axiom). Whatever the mathematician does corresponds to a proof, and whatever the mathematician never does corresponds to something that cannot be proved within the system. So even the statement “2+2=5”, if uttered by the system, is a true proof in the sense that it is truly a state that can be reached by the system, given its initial state and the rules that govern its evolution. We only have Gödelian problems once we assume that humans are reliable in the sense that their utterances are also true in the ordinary sense, which is where my point comes in that we don’t know that humans are consistent.

    Like

  19. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    Well not so. All this argument shows is that we can’t have both (1) and (3).

    Penrose would argue in defense of (3), saying that (3) must be true given Gödel’s theorems. You are right that this is the crux of the issue.

    So, by denying (3), you are denying either that a human could produce a Gödel sentence which corresponds to that human’s own mind (which is what I was saying when I said it would take more resources than a human mind has available), or you are denying that a human could know that such a sentence was true (which would be the case if we didn’t presume to know the human is consistent).

    So this is more or less what I was saying.

    Liked by 1 person

  20. ejwinner

    Interesting discussion. And both interesting – and somewhat annoying – that we have in the comment section examples of Curiosa-like thought. Particularly DM’s trenchant defense of a quantum ontology that he says doesn’t even convince him, and his effort to drag in Simulation Hypothesis metaphysics. It’s as if no amount of inquiry into the verifiability or falsifiability of such claims – which are wholly lacking – can dissuade him from coming up with one rationalization after another for them, no matter how distinct from – and irrelevant to – the world that we can experience or properly study.

    It’s as if there’s some sort of weird anxiety that if we don’t take the unjustifiable into account along with what can be found to be justified, some terrible doom will befall human intellect.

    But the fact is that scientists and philosophers, in their separate fields, with their diverse methods of inquiry, must at some point say of a particular question, ‘this isn’t working; this is counter-productive; this is just a dead end.’

    There is no way to demonstrate Intelligent Design; and even if it were true, there is nothing scientifically that could be done with it.

    There is no way to justify the Simulation Hypothesis, but even if it were true, there would be nothing philosophically to be done with it.

    The problem with any faith-based metaphysics is that they have no applicability to the world, beyond enhancing certain feelings on the part of the faithful.

    Liked by 2 people

  21. Coel

    Hi DM,

    So, by denying (3), you are denying either that a human could produce a Gödel sentence which corresponds to that human’s own mind …

    Yes, that. The assertion from Penrose is that a human mind can produce the Godel sentence G(human). In asserting that, he is just assuming everything he is trying to argue for.

    Liked by 1 person

  22. synred

    Agreed again. For any deterministic computer A, operating on formal axiomatic system A, there is a Godel sentence G(A). However, a deterministic computer B, operating axiomatic system B, could do things that A could not and so might be able to prove G(A).

    This makes no sense. What does same Godel sentence in a different Axiomatic system mean?

    The same set of symbols? The same meaning (that it is itself false)? In which case, is there more than one Godel sentence?

    I’m trying to read Godel’s Proof (NYU Press, Kindle edition, locations 575–576), but have gotten stuck on the example of a definition of an irrational number in terms of logic and integers. I’m not sure such a non-technical discussion can lead to anything conclusive.

    Like

  23. Michael Fugate

    Coel,
    There are a number of ways of telling. One comes from things like the laryngeal nerve of the giraffe, which follows a path that no intelligent designer would choose. The path only makes sense as the contingent product of an un-intelligent process.

    This presumes that you know who the intelligent designer is and how it would design something. Precisely the things we don’t know.

    Like

  24. Aaron Michaux

    Never underestimate the power of your speech; you just do not know whether your discussion will have an effect on Curiosa at some point in the future. I’d argue that the basic cognitive mechanisms affect us all, no matter how expert. We will always have our thoughts subconsciously manipulated by our group-identity commitments, and be none the wiser to it.

    Liked by 3 people

  25. Coel

    Hi synred,

    This makes no sense. What does same Godel sentence in a different Axiomatic system mean?

    A statement that is a Godel sentence within system-A, G(A), is not necessarily a Godel sentence within some other system B, and thus might be provable within B.

    For example, if B is simply A with the addition of G(A) as an axiom, then G(A) is provable within B.
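In standard notation (my gloss, not from the thread), the construction Coel describes looks like this:

```latex
% Goedel sentence of system A: "I am not provable in A"
G_A \;\leftrightarrow\; \neg\,\mathrm{Prov}_A(\ulcorner G_A \urcorner)

% Extend A by adopting G_A as a new axiom:
B \;=\; A \cup \{\, G_A \,\}
\quad\Rightarrow\quad
\mathrm{Prov}_B(\ulcorner G_A \urcorner) \ \text{holds trivially}

% But B then has its own Goedel sentence, unprovable in B:
G_B \;\leftrightarrow\; \neg\,\mathrm{Prov}_B(\ulcorner G_B \urcorner)
```

The self-reference in G(A) is tied to the provability predicate of A specifically, which is why the same string poses no obstacle to the enlarged system B.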

    Liked by 1 person

  26. Daniel Kaufman

    DM wrote:

    “It’s meaningful to me.”

    Well, that’s good, at least.

    There is of course, the further problem of what counts as “information,” which we won’t agree on either.

    Like

  27. synred

    For example, if B is simply A with the addition of G(A) as an axiom, then G(A) is provable within B.

    Well sure. Then there is another Godel statement in A+G(A), etc.

    So you can prove G(A) can’t be proved in A by assuming G(A) can’t be proved in A. That does not seem very interesting, compared to the point that there is always another such statement (induction).

    I don’t think G(A) means anything outside the context of A. Unless B contains A, G(A) means nothing in the context of B.

    Godel’s theorem seems pretty algorithmic to me; it seems a computer could generate it. From the physics point of view I have trouble deciding whether it matters or not.
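The tower of systems synred describes — A, A+G(A), A+G(A)+G(A+G(A)), and so on — can be sketched as a toy Python model. The names here are pure labels (no real arithmetization of provability is attempted), and "PA" is just a stand-in string:

```python
# Toy model of the tower: each system, extended with its own
# "Goedel sentence" as a new axiom, yields a new system with a
# fresh unprovable sentence. A system is just a frozenset of
# axiom labels; G(...) is a name, not a real Goedel numbering.

def godel_sentence(system: frozenset) -> str:
    """Name a sentence that, by stipulation, is not an axiom of `system`."""
    return "G(" + "+".join(sorted(system)) + ")"

def extend(system: frozenset) -> frozenset:
    """Form B = A + G(A)."""
    return system | {godel_sentence(system)}

A = frozenset({"PA"})   # stand-in for Peano arithmetic
tower = [A]
for _ in range(3):
    tower.append(extend(tower[-1]))

for system in tower:
    g = godel_sentence(system)
    assert g not in system        # G(A) is never an axiom of A itself...
    assert g in extend(system)    # ...but it is an axiom of A + G(A)
```

The loop never terminates in principle: every extension manufactures a new sentence outside itself, which is the inductive point synred raises.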

    Like

  28. couvent2104

    Massimo,

    If you suggest books for a Plato Footnote Book Club, may I suggest avoiding books that lead to discussions about Gödel?
    Don’t misunderstand me: Gödel is very interesting, just like QM and entropy and so on, but I rarely get the impression that anything fruitful comes out of these discussions. Just a personal opinion, of course.

    Liked by 5 people

Comments are closed.