I have recently published a paper, co-authored with my collaborators Stefaan Blancke and Maarten Boudry, entitled “Why Do Irrational Beliefs Mimic Science? The Cultural Evolution of Pseudoscience,” which I think readers of this blog will find interesting.
In the paper, we develop and extend an epidemiological framework to map the factors that explain the form and the popularity of irrational beliefs in scientific garb. These factors include the exploitation of epistemic vigilance, the misunderstanding of the authority of science, the use of the honorific title of “science” as an explicit argument for belief, and the phenomenon of epistemic negligence. We conclude by integrating the various factors in an epidemiological framework and thus provide a comprehensive cultural evolutionary account of science mimicry.
Faking and imitating science, exploiting its cultural and epistemic authority, evidently constitutes a profitable strategy. People are somehow duped into believing that pseudoscience is the real thing, or at least a worthy alternative to actual science. But why are people so easily misled?
In previous papers, we have explained the popularity, persistence, and typical features of weird beliefs in terms of cognitive and cultural evolutionary processes, in which ideas and beliefs adapt both to particular susceptibilities of the human mind and to the need to withstand falsification and criticism (see, for instance, here). In the new paper, we investigate why some of these beliefs or belief systems, such as creationism, homeopathy, and astrology, assume the cloak of science or pretend to be on an equal footing with science.
The first concept we zoom in on is that of “epistemic vigilance.” In a landmark paper published in 2010, Sperber and colleagues explored a human capacity for what they call epistemic vigilance. They argue that such vigilance can be targeted both at the person who is communicating (the source) and at the information itself (the content). As to the source, one can rely on two criteria: honesty and competence. The first criterion deals with the intentions of the informant, the second with whether he or she is capable of providing correct information. An informant can be dishonest, incompetent, both, or neither, and only in the last case should we trust him or her. Hence, it is important that we be able to detect and identify reliable sources, since the opposite of epistemic vigilance is not trust, but blind trust.
If we apply these considerations in the context of expertise, we see that human cognition is clearly equipped with the mechanisms to discriminate, at least in principle, between genuine and false experts. We have the ability to check — within limits — whether an expert is reliable, and whether the information he or she provides is consistent and coherent with our background beliefs.
However, in the case of science and pseudoscience, things tend to be a bit more complicated. Scientific beliefs are often too difficult for lay people to comprehend, which makes content evaluation impossible. This leads people to accept, or reject, scientific concepts mainly on the basis of trust. Now, deferring to experts is not necessarily a bad thing; indeed, it would be impossible to navigate everyday life without such trust (think of mundane actions like going to a dentist, or to a car mechanic). Even without the possibility of content evaluation, one can be epistemically vigilant (e.g., by checking the credentials of said dentist or car mechanic). One just needs to discriminate between real and false experts, and this latter issue is the target of the next section of our paper.
In 2001, Goldman argued that, even though novices or lay people do not have epistemic access to a particular domain of knowledge, they can rely on five sources of evidence to find out which experts they can trust.
Firstly, one can check the arguments that experts bring to the discussion. Lay people may not be able to grasp the arguments directly, but they can check for what Goldman calls “dialectical superiority.” This does not simply mean that one looks for the best debater — although debating skills can certainly add to the impression that one is an expert — but that one keeps track of the extent to which an alleged expert is capable of debunking or rebutting the opponent’s claims.
Secondly, a novice can check whether and to what extent other experts in that field support a given (alleged) expert’s propositions. It will be more reasonable to follow an expert’s opinion if it is in line with the consensus.
Thirdly, lay people can distinguish between experts on the basis of meta-expertise, in the form of credentials such as diplomas and work experience. For example, an expert with a PhD in a relevant field can, ceteris paribus, be considered more reliable than an amateur.
Fourthly, a novice can check for biases and interests that may affect an expert’s judgment. If an expert has a stake in defending a particular position, this will raise the suspicion that he is not interested in providing correct information, which undermines his credibility. Of course, nobody is free of biases, and that goes for scientists too. Hence the question is not whether there is bias (there always is), but how much, where it comes from, and how one can become aware of it and correct it.
Fifthly, a novice can assess an expert’s past track record. The more often an expert has been right in the past, the more he has demonstrated that he does indeed have access to some domain of expertise, and the more likely he is to be right again in the future.
These five sources of evidence can help novices tell genuine experts from false ones, even if they do not understand the substance of the arguments. The problem, of course, is that purveyors of pseudoscience put considerable effort into mimicking each of these tell-tale signs. Take so-called “scientific” creationism, for instance. For decades, creationists of all stripes have been inviting evolutionary biologists to take part in public debates. In the 1930s, creationist Harry Rimmer seized on every occasion to engage scientists in public debates before large crowds and humiliate them. Later, in the 1960s and 1970s, young-earth creationist Duane Gish built himself a reputation out of debating evolutionary scientists (including yours truly, five times).
The creationist strategy has clearly paid off. In a debate setting, a creationist with good rhetorical skills can demonstrate his “dialectical superiority” over less prepared scientists (who tend to underestimate their creationist opponents). Creationists also make a habit of pointing out that many scientists support their cause, and they tout their academic titles and credentials, even when these do not apply to the field they pretend to be experts in (as is very frequently the case). Moreover, creationists often publicly accuse evolutionary scientists of harboring a hidden political agenda, while presenting themselves as unbiased seekers of truth (even though, of course, they are anything but). And, finally, creationists boast of having excellent explanations for biological phenomena. In sum, pseudoscientists make it hard for novices to identify genuine experts.
The central section of our paper attempts to answer in more detail the question of why pseudoscience is still around, despite the cognitive mechanisms available to human beings to spot bullshit, so to speak. I will not attempt to summarize it here (feel free to download the full paper, linked above, if you are interested), but will only mention that we cover the following topics: error-prone mechanisms and heuristics, the exploitation of epistemic vigilance, the conception of science as an argument, the phenomenon of epistemic negligence, and the existence of what we call “stabilizing factors” (such as confirmation bias and anti-expertise attitudes).
Given all the above, why is cultural mimicry of science so successful? Irrational beliefs become more relevant by dressing up as science, in the sense that they are more likely to grab people’s attention, to be remembered and cognitively processed, and to be disseminated. In the paper we identify and discuss several factors that we think affect the relevance of pseudoscience. Here is a summary of the specific contribution of each factor in this process of cultural mimicry, within the larger framework of cultural epidemiology.
The reason why pseudoscience exists is not that people are stupid or ready to believe anything they are told. In fact, humans have a suite of mental mechanisms that enable them to filter the information they receive from others. They can assess the quality of the source by checking for competence and honesty, and the quality of the content by checking for consistency and coherence. But in the case of science, things become a bit more complicated. Because people tend to be epistemically vigilant and critical about sources of information, irrational beliefs need to pretend to originate from a source that people deem trustworthy, i.e., science. However, people do not fully understand or appreciate the epistemic authority of science. They ascribe authority to science either on the basis of its effects or simply because of its reputation. The resulting confusion makes it easier for irrational beliefs to mimic science. In sum, the mechanisms of epistemic vigilance, an environment in which science is regarded as an epistemic authority, and a public that lacks an understanding of that authority together constitute sufficient conditions for pseudoscience to emerge.
This deception, we show in the paper, largely occurs by persuasion. Purveyors of pseudoscience explicitly use scientific publications, language, and typical features such as graphs and formulas to convince people that they are dealing with genuinely scientific, and thus reliable, information. Reason-based choice explains why people tend to prefer irrational beliefs that mimic science over non-mimicking ones: the former provide them with at least one additional argument, i.e., an extra reason for belief.
The relevance, and ensuing pervasiveness, of pseudoscientific beliefs, however, is not only a matter of argumentation, but also of motivation. People are not interested in impartial truth, but in finding and supporting beliefs that make intuitive sense. This lack of concern for truth is exacerbated by “epistemic negligence”: people do not invest the time and energy needed to understand and sustain highly counterintuitive scientific concepts that are of little practical use to them. The resulting shaky notions of scientific concepts come closer to pseudoscience than to science, so that the difference between the two starts to blur. Consequently, psychological factors such as confirmation bias and anti-expertise attitudes allow pseudoscientific beliefs to stabilize.
To summarize, pseudoscientific concepts are pervasive: (1) because posing as science works as a tool of persuasion, and (2) because people lack the motivation to correct their intuitive beliefs, but instead seek to confirm them and, simultaneously, distrust genuine scientific expertise.
At the level of individual chains of communication, a complex picture emerges. In any given chain of transmission, each of the factors discussed above can be present to varying degrees, and several factors, or perhaps even all of them, might be involved at the same time. For instance, purveyors of pseudoscience try to persuade their audience by using science as an argument, while members of the audience readily accept pseudoscientific claims because these tend to corroborate their intuitive, pre-scientific beliefs. Zooming out, however, chains of transmission, which may differ in the factors involved, will, when instantiated a sufficiently large number of times, make irrational beliefs converge around particular cultural attractors, namely irrational beliefs that mimic science. As such, the various factors that affect the micro-level processes in which pseudoscientific beliefs become relevant result in a relatively stable cultural evolutionary process through which irrational beliefs turn into pseudoscience.