As readers of this blog, of my books, and of pretty much everything else I’ve written so far know, I value rational discourse and (still) believe it to be the only way forward open to humanity. But boy it can get frustrating, sometimes! One such example occurred recently, during an increasingly surreal discussion I had with one of my relatives — about politics, pseudoscience (specifically, the non-existent connection between vaccines and autism), conspiracy theories (9/11), and much, much more.
Of course, I should have known better than to start such a discussion, especially with a relative who I knew subscribed to all those notions. Blame it on the nice bottle of Aglianico wine we had been sharing during the evening.
Anyway, the pattern was the expected one on such occasions: denial of relevant expertise (you know, they thought Galileo was crazy too!), while at the same time vigorously — and apparently entirely oblivious to the patent contradiction — calling on someone else’s doubtful expertise (the guy is an engineer! No, he isn’t…). There was also continuous side-tracking by bringing up irrelevant or unconnected points (in informal logic, that’s a red herring), as well as pleas to go “beyond logic,” whatever that means. You know, the usual fun.
And then another one of my relatives, present at the discussion and very much amused by it, hit the damn nail right on the head. He explained that my interlocutor was simply confusing probability with possibility. I stopped dead in my tracks, considered the suggestion, and had a Eureka! moment.
Yes, that was indeed exactly what was happening. Pretty much all of her arguments were along the lines of “well, it is possible that…” or “but you can’t exclude the possibility that…” And of course she was right. I explained, however, that this was a Pyrrhic victory. Yes, it is true that for most things (in fact, for any statement that is not mathematical or purely logical) there is always the possibility that one is wrong. But usually we don’t make decisions based on possibilities; we use the much more refined tool of probabilities (estimated to the best of our abilities).
I tried to make the point by drawing two diagrams, like this:
The graphs illustrate two hypothetical probability distributions for a set of events, with the probability estimate on the vertical axis and the type of event on the horizontal one. The top diagram represents my relative’s view of the world: she is acting as if all events had equal probability. Not literally, because she does understand that some outcomes are more likely than others; but in practice, since she considers mere logical possibilities, however remote they may be in reality, to be worthy of the same amount of attention as outcomes that are much more likely to occur. The lower diagram shows how the real world actually behaves. Some ranges of outcomes have much higher probabilities than others, and the resulting distribution (which doesn’t have to take the shape I drew, obviously) is far from flat.
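The contrast between the two diagrams can be made concrete with a toy calculation. The sketch below is purely illustrative: the numbers are invented, and a binomial distribution simply stands in for the peaked “real world” curve I drew.

```python
# Toy illustration of the two diagrams: a flat distribution that treats
# every outcome alike, versus a peaked one that concentrates its mass.
# All numbers here are invented purely for illustration.
from math import comb

n = 20  # outcomes 0..20
flat = [1 / (n + 1)] * (n + 1)
# A binomial(20, 0.5) stands in for the peaked "real world" curve
peaked = [comb(n, k) * 0.5**n for k in range(n + 1)]

# A fringe outcome (k = 0) is logically "possible" under both views...
print(flat[0])    # ≈ 0.048
print(peaked[0])  # ≈ 9.5e-07
# ...but only the flat view makes it worth the same attention as k = 10
```

Under the flat view the fringe outcome commands as much attention as any other; under the peaked one it is possible but roughly fifty thousand times less probable.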
If you smell Bayesianism coming, you are right. I resumed our discussion with David Hume’s famous statement in Of Miracles (part of An Enquiry Concerning Human Understanding), to the effect that a reasonable person proportions her beliefs to the evidence, a statement later made famous by Carl Sagan within the context of discussions of pseudoscience: extraordinary claims require extraordinary evidence.
Then I moved to the basics of Bayes’ theorem, explaining the famous equation, P(A|B) = P(B|A) × P(A) / P(B) (see top image). It says that the probability of a theory A, given the available evidence B, is proportional to two factors (normalized by the denominator in the equation, so that it comes out as a number between 0 and 1): the probability of observing evidence B IF hypothesis A were true, multiplied by the probability that A is true based on initial considerations (the so-called “priors”).
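The theorem is a one-liner in code. Here is a minimal sketch, with made-up numbers (these are not real statistics about anything), where the denominator P(B) is expanded via the law of total probability:

```python
# A minimal sketch of Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# The example numbers below are invented purely for illustration.

def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B), expanding the denominator as
    P(B) = P(B|A)P(A) + P(B|not A)P(not A)."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / p_b

# A hypothesis with a 1% prior, and evidence 20x more likely if it is true:
print(posterior(0.01, 0.8, 0.04))  # ≈ 0.168
```

Note that even evidence twenty times more likely under the hypothesis only lifts a 1% prior to about 17%: a numerical rendering of “extraordinary claims require extraordinary evidence.”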
The beauty of Bayes’ theorem is that it keeps updating itself in a recursive fashion, as new evidence becomes available. The left side of the equation is called the posterior probability, and it is obtained — conceptually speaking — by updating the priors in proportion to the newly available evidence. While there is much discussion about subjective vs objective Bayesianism (with reference to whether one should use only objective priors or whether subjective ones are acceptable), the nice thing is that one can show that no matter what the initial priors are, after a sufficient number of iterations the posteriors converge toward the true value.
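That convergence can be watched happening. In this hedged sketch (the coin, its bias, and the flip data are all invented for illustration), two observers start with wildly different priors on the hypothesis “the coin is biased toward heads,” and each one’s posterior becomes the prior for the next flip:

```python
# A sketch of recursive Bayesian updating: two observers with very
# different priors converge as the same evidence accumulates.
# The coin hypothesis and the flip data are invented for illustration.

def update(prior, heads, p_if_biased=0.7, p_if_fair=0.5):
    """One Bayesian update; the posterior becomes the next prior."""
    like_h = p_if_biased if heads else 1 - p_if_biased
    like_f = p_if_fair if heads else 1 - p_if_fair
    return like_h * prior / (like_h * prior + like_f * (1 - prior))

# 200 flips with 70% heads, just what the biased coin predicts:
flips = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0] * 20

skeptic, believer = 0.01, 0.99  # wildly different starting priors
for flip in flips:
    skeptic = update(skeptic, flip)
    believer = update(believer, flip)

print(round(skeptic, 3), round(believer, 3))  # both end up close to 1
```

After enough flips the two posteriors are nearly indistinguishable, which is the usual response to the worry about subjective priors: given sufficient evidence, where you started matters less and less.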
This makes Bayes’ theorem a formidable tool in decision making procedures, and more generally a good guide toward rational assessment of pretty much everything. It is true that some people have gone so far as to make Bayesianism into an all-encompassing and rather strict ideology, almost worshiping the damn thing (I’m looking at you, Less Wrong), but that’s going too far. It is simply a very convenient tool in a lot of practical situations (especially when we have access to objective priors) as well as a good metaphor for how to go about assessing beliefs (as Hume advises, in proportion to the — ever changing — evidence).
I concluded my explanation to my relative by suggesting that when we make an assessment of any given notion we are basically placing a bet. Given the best understanding I have of the vaccine-autism controversy, for instance, I bet (heavily) that vaccines do not, actually, cause autism. Do I know this for certain? No, because it isn’t an a priori truth of mathematics or logic. Is it possible that vaccines do cause autism? Yes, that scenario does not involve a logical contradiction, so it is possible. But those are the wrong questions. The right question is: is it likely, on the basis of the available evidence? If you had to bet (with money, or with the health of your kids), which way should you bet? But by that time we had run out of Aglianico, and the evening was coming to a close.
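The betting intuition has a standard formal rendering: expected value. The sketch below uses an invented, purely illustrative probability (it is not a real epidemiological figure) just to show how the arithmetic of the bet works:

```python
# A sketch of the betting intuition via expected value.
# The probability used below is invented purely for illustration.

def expected_value(p_win, payoff_win, payoff_lose):
    """Expected payoff of a bet that wins with probability p_win."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# Suppose the evidence leaves only a vanishingly small probability
# for a claim (a made-up number, for illustration only):
p_claim = 1e-6

# Betting against the claim wins 1 if right, loses 1 if wrong:
print(expected_value(1 - p_claim, 1, -1))  # ≈ +1, a near-certain win
```

A mere logical possibility corresponds to a tiny p, and the expected value tells you, quantitatively, which way a reasonable person should bet.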