Julian Baggini’s The Edge of Reason, of which we have so far examined the first five chapters, ends its second part with a discussion of the distinguishing characteristics of objective rational discourse. He begins by suggesting that the problem with the classic (Platonic, really) view of reason is that it treats reason as a heteronomous entity, something coming from the outside, imposed on us by eternal laws of logic. Instead, human reason is, well, human, i.e., autonomous, shaped from the inside, shaped by the characteristics and limitations of what it means to be human in the first place.
That said, Julian immediately qualifies, reason does have a component of heteronomy, in that it cannot simply be a self-serving instrument entirely detached from how the world actually is, but rather has to account for the brute facts of external reality. Reason, he says, is nothing if it doesn’t aspire to objectivity, and this brings him to propose a definition of rational argument: “the giving of objective reasons for belief.”
However, if you recall our previous discussions of Baggini’s book, you will immediately notice a tension here: he has been arguing for a somewhat deflated, human view of reason, and now he’s going to ask for objectivity? Well, yes, but he puts forth a deflated view of objectivity itself, one that he derives from the philosopher Thomas Nagel.
Nagel wrote a famous book, back in 1986, in which he argued that objectivity is often conceived in terms that he summarized with the oxymoronic phrase “the view from nowhere.” Science in particular aspires to such a view, which both Nagel and Baggini see as hopelessly misguided.
Julian’s intriguing example of an attempt to achieve a view from nowhere is the famous plaques that were put onboard the two Pioneer spacecraft launched in the early ‘70s, and which are now outside the confines of the solar system (see top figure). The plaques were designed by astronomer Carl Sagan as a symbolic attempt to communicate with possible alien beings. (Symbolic because there is pretty much no chance in hell that the Pioneers will ever actually reach another habitable world, given their speed and their cosmic trajectories.)
The plaques convey stylized graphic information about the solar system and humanity itself, attempting to provide reading keys for the aliens, for instance by representing the position of the solar system with reference to fourteen nearby pulsars, as well as by drawing the symbol of the hydrogen atom, the most abundant element in the universe.
But for all that, the plaque is going to be very difficult to decipher for a non-human intelligence. For one thing, of course, it assumes that the aliens are capable of perceiving visual information, which is far from obviously true. Notice also the use of an arrow to indicate the trajectory of the Pioneer probe itself, as well as the male human being drawn with a raised hand and an open palm, in the universal sign of greeting. Universal, of course, for human beings. Just like the arrow indicates direction for us, because our prehistory included hunters who used arrows to catch their prey. There is no reason whatsoever to imagine that an alien culture would recognize either one of these human graphic conventions. One of the reasons it will be difficult to communicate with an alien intelligence is precisely that no species truly speaks a universal language. There is no view from nowhere.
As Julian summarizes the problem: “Even if the truths we latch on to are indeed objective, they are always framed within our human ways of understanding, by our language and our senses. … Truth has to be seen from some perspective or other, even if it is in itself purely objective.”
Nagel’s view of knowledge is nuanced. There is a continuum from the entirely subjective (I know that I like chocolate) to the purely objective (1+1=2), with much of interest to us lying somewhere in the middle. The more our knowledge of things is linked to our particular framework, the more it depends on the idiosyncrasies of human senses and reasoning, the more it veers towards the subjective, however much it aspires to reach the other end of the spectrum.
Baggini points out that it is a mistake to confuse objectivity with truth. I can report a subjective fact in a truthful manner (it is true that I like chocolate), and I can state an objective fact untruthfully (the distance between the Sun and the Earth is 300 million kilometers — which it isn’t). That’s why talk of “objective truth” is not redundant.
Julian goes on to say that “rationality and objectivity are usually seen as natural bedfellows. My suggestion is that their link is more intimate than this. To offer a rational argument just is to provide objective reasons for belief, reasons which can include both evidence and argumentative moves.” Notice the inclusion of argumentative moves, not just evidence. The “facts” never speak for themselves, they need to be framed within a given argument, and that argument can be rational, and yet fail.
Which brings him to discuss what he sees as the five characteristics of objective reasons and arguments: comprehensibility, assessability, defeasibility, interest-neutrality, and being compelling.
Let’s begin with comprehensibility. The quintessential example of subjective judgment is art: I can tell you that I like, say, Van Gogh’s Starry Night, and you can reply that you don’t. But we can then move from simple statements of likes and dislikes to a discussion of aesthetics if I begin to tell you why I like the painting, describing its vivid colors and evocative shapes, and so forth. My argument may not convince you, but it may prompt you to take a second look. Importantly, by giving you reasons for my subjective aesthetic judgment I have made a (small) move toward the objective end of Nagel’s spectrum, because I have made comprehensible to you my reasons for liking Starry Night.
For Julian, “an argument that is in principle comprehensible by any rational agent is more rational than one that is comprehensible only by certain types of rational agent,” even though any given argument may not be comprehensible by just any rational agent. For instance, in the case of art, it helps to know something about its history to better appreciate how to situate certain paintings that may at first appear strange and puzzling.
For something to be objective, though, it has also to be assessable. If others do not have any way to judge the truth of your assertions, then those assertions are hopelessly stuck right at the subjective end of Nagel’s spectrum.
Baggini notes that “whenever people appeal to inner convictions, esoteric revelations or diktats from authorities they are evading objective scrutiny by keeping key elements of their justification hidden.” But he acknowledges a kind of circularity in his reasoning: he invokes the concept of a rational agent in explaining what rationality is. Yet this sort of circularity is not vicious, since any definition turns out to be, if pushed far enough, circular. He is just saying that “a rational agent is one who can understand and assess objective arguments, and an objective argument is rational if it can be understood and assessed. These terms all hang together.”
After a brief discussion of several attempts to make sense of rational argument, from David Hume’s distinction between “matters of facts and relations of ideas,” to the logical positivists’ verifiability criterion, to Popper’s falsificationism, Julian says: “if an idea is too vague it will be dismissed as woolly and hand-waving. Too precise, however, and the logic-choppers will be out to unpick its contradictions and inconsistencies. As Aristotle’s immortal adage states, ‘It is the mark of the trained mind never to expect more precision in the treatment of any subject than the nature of that subject permits’ — nor less, we might add. The Goldilocks state of philosophy is to be precise enough to be saying something substantive but not so precise as to ride roughshod over the complexities and ambiguities of the real world.”
Which leads us to a discussion of the third criterion: defeasibility. One of the reasons Popper’s famous criterion of falsifiability for demarcating science from non-science did not work is that “a rational argument is always in principle defeasible — open to revision or rejection — by public criteria of argument and evidence,” and in turn “defeasibility is a property of all propositions with any degree of objectivity, however small.” The history of science is littered with theories that at some point ran into apparent empirical falsification, only to be rescued by scientists through the adoption of suitably modified versions of the original theory. The Copernican view of the solar system did not work particularly well because Copernicus had assumed the orbits of the planets to be circular. When Kepler made the modification of treating them as elliptical, the calculations matched the empirical evidence far better.
The fourth criterion is interest neutrality. Here Baggini helps himself to a thought experiment. Imagine a philosopher who is captured by a super-villain and is presented with the following ultimatum: either believe, genuinely believe, that 1+1=3, or I’ll destroy the world. (The super-villain can check by way of brain imaging whether the philosopher really believes something or not.)
This would seem to be a case of a belief that meets our first three criteria: it is assessable, comprehensible, and defeasible. And yet, it now seems that the philosopher has objective reasons to believe something false.
The way out is that we need to distinguish between two modes of rationality: in the service of an end, or as an end in itself. The super-villain’s threat “does not provide a rational argument to believe that 1+1=3, but a rational argument why it is prudent to believe 1+1=3.” There is a huge difference between the two. Julian calls one practical rationality (yeah, I’ll try to believe that 1+1=3 if it saves the world) and the other epistemic rationality (in reality, 1+1≠3). Both require assessability, comprehensibility, and defeasibility, but epistemic rationality also requires interest-neutrality, which the super-villain vs the philosopher case clearly lacks.
Importantly, practical rationality rests on epistemic rationality, as an agent, “in order to make the right decision, must assess the evidence in an interest-neutral way, and only then decide what she ought to do in order to serve the interests she takes to be most important.” It follows that it may, in some cases, be practically rational to believe something that is not epistemically rational.
Baggini acknowledges that these days there is a lot of skepticism about the very possibility of interest-neutral rationality. He mentions the sort of postmodern criticism advanced by the likes of philosopher Michel Foucault, for instance, the idea that claims to knowledge are always connected to the striving for power or the serving of particular interests. But he may as well have also cited the sort of research conducted by contemporary social psychologists like Jonathan Haidt, which highlights motivated reasoning and rationalization over rationality.
But Julian makes a good argument that — pace both Foucault and Haidt (both of whom do make good points) — if one insists on arguing that all rationality is practical / motivated and not interest-neutral, one ends up in absurdity (and, I would add, in self-defeat).
He mentions an interview he conducted with radical postmodern philosopher Luce Irigaray, who “notoriously suggested that perhaps even E = mc^2 is a ‘sexed equation,’ expressing masculine dominance. Why? Because ‘it privileges the speed of light over other speeds that are vitally necessary to us.’” I hope you appreciate the ridiculousness of this, without the need of further explanation.
Baggini distinguishes the claim that there is no such thing as a value-free practice of science (true) from the stronger claim that there are no value-free scientific claims (false: E really does equal mc^2). Another example of this is provided by philosopher Tim Lewens, who said that Marx and Engels were right when they argued that Darwin’s thought was “steeped in the industrial capitalist milieu.” It does not follow at all, however, that the Darwinian theory of evolution is not, in fact, a good account of why we see such a bewildering variety of living organisms on our planet.
Finally, we get to the fifth element of a rational claim: compulsion. “Turned over and examined on all sides, any rational agent who understands the argument should find herself feeling forced – or at least strongly pushed – to accept the conclusion, whether she likes it or not.”
The case of believing that 1+1=2 is an obvious one. Baggini also says that a rational agent should feel compelled to accept the existence of a causal link between smoking and cancer, on the basis of the available evidence. I would add the theory of evolution, climate change, and the fact that vaccines do not cause autism, among many others.
Of course, it is painfully true that not everyone accepts rational arguments (remember that an argument can be rational and still wrong, by the way). To which Julian sensibly replies:
“If you can see that an argument is assessable by all, and that assessment has not raised any major problems with it; if you also understand it; and if you can see that it does not require you to share any particular agent’s interests to accept it: such an argument must carry with it a certain degree of force. There is a sense here in which there is simply nothing left for a rational agent to say to someone who claims to have followed all these steps but is still not convinced.”
As I often want to tell some of my students (but I don’t because I’m too polite and because it would be counterproductive): I can explain it to you, but I cannot understand it for you…
Baggini points out that the failure to translate the rational force of an argument into the psychological force of accepting and acting on it is particularly evident in cases of ethics. For instance, many people readily acknowledge that vegetarians have the better moral argument, and yet refuse to follow through and switch to a vegetarian diet (I include myself here, as I am a pescatarian, and sometimes even just a reducetarian).
After having discussed his five criteria for rational arguments, Julian turns to what he calls the boundaries of rationality. His approach can be applied, he suggests, to common cases where people think they are providing rational arguments, but in fact fail to do so, as in instances of reasoning relying on anecdotal evidence, or of claims based on mystical insight.
Take the case of someone who says that homeopathy works because it has worked for him and for several people he knows. Baggini’s analysis goes like this:
“We can see why these reasons deserve to be considered as attempts to provide a rational case for the efficacy of homeopathy. They appear to be comprehensible and assessable, and in turn defeasible. Those who offer them also see them as interest-neutral and compelling. We should reject them, however, because when we examine these supposed characteristics more carefully, they are not present to a sufficient degree. Assessability is the key here.”
Julian distinguishes between arguments that are rational or irrational (their mode), and arguments that are good or bad (their quality), and further adds that some arguments are simply non-rational, as when people say that they have acquired special insights into the nature of the world by virtue of taking LSD. Mystical experiences do provide reasons for belief, since we believe all sorts of things as a result of direct experience. But they don’t provide rational reasons for belief.
In the section of this chapter entitled “Rational catholicism,” Baggini reiterates his idea that good (human) reasoning requires judgment, but again clarifies that “good judgement is much more than just opinion, and something less than the mere following of logical rules.” He uses this to elaborate on his take about why philosophy, unlike science, does not lead to agreement about the issues being discussed: “Philosophy relies entirely on rationality and nothing but [i.e., empirical evidence does not directly enter into it, unlike in science]. This involves a high degree of commitment to the rigors of argument but also, ultimately, an acceptance that rational argument does not lead linearly to only one answer, since you cannot take judgement away from rationality.” I think he is partially correct here, but see my full treatment of why philosophy makes progress in a way different from science in my book devoted to that topic.
The last section of the chapter is a call for ending the “truth wars”: “[there now is] a stand-off between what Bernard Williams called the ‘deniers’ — those who deny that there is such a thing as ‘the truth’ that reason aims at — and the ‘party of common sense,’ those who claim that the truth really is out there. Williams wrote that ‘the deniers and the party of common sense, with their respective styles of philosophy, pass each other by.’”
But there is a greater cause that should be common to both parties: “that greater cause is a commitment to reason, no matter what reservations some may have about the history, use and connotations of that term. Despite their apparent differences, it should be obvious that both ‘deniers’ and the ‘party of common sense’ share something like the thin conception of reason and rationality that I have been defending.”
To keep arguing against each other in the face of post-truth and alternative facts would be to end up in the tragicomic situation of the strife among the People’s Front of Judea, the Judean People’s Front, and the Judean Popular People’s Front…