Author Archives: Massimo

About Massimo

Massimo is the K.D. Irani Professor of Philosophy at the City College of New York. He blogs at platofootnote.org and howtobeastoic.org. He is the author of How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life.

The problem with (some) philosophy and metaphysics of perception

Does the world exist independently of any given human mind? Most of us – including many philosophers – would answer yes, and even think the question itself more than a bit weird. But of course it is possible that we live, say, in a simulation imposed on us by external entities, à la The Matrix. Or perhaps we are all just instantiations of a single cosmic mind, which we could simply refer to as God.


This, however, is not what this essay is about. Let us assume for the sake of discussion that the world does exist independently of the human mind. The next, and somewhat more interesting, question is: how do we perceive such a world? In the philosophy and metaphysics of perception one position is referred to as naive realism, the notion that we can have, and usually do have, veridical perceptions about the world. What you see is what you get, so to speak.


I think this position is rather hopeless, and yet it was vigorously defended by a colleague of mine at a recent colloquium at City College. Umrao Sethi, Assistant Professor of Philosophy at Lehman College in New York, gave a talk entitled “Sensible Over-Determination.” In the following I wish to recount an interesting exchange Sethi and I had during the q&a following her talk, because it puts two views not just of the philosophy of perception, but of philosophy itself, in stark contrast with each other. Please understand that I am not singling out Sethi, but simply using her particular talk to make a more general point about (some) contemporary philosophy.
First, though, here is the abstract of Sethi’s talk. It’s rather technical, so I will explain the relevant bits immediately below:


“I argue that sensible qualities are ‘ontologically flexible’ – they can be instantiated either in virtue of having material bearers or in virtue of being perceived by minds. Given that these qualities have disjunctive conditions on instantiation, a single instance of such a quality can be over-determined – that is, have its existence simultaneously guaranteed both in virtue of having a material bearer and in virtue of being perceived. I make use of this metaphysical insight to show that, contrary to received wisdom, naïve realism is compatible with the possibility of qualitatively matching hallucinations. Both hallucinations and ordinary perceptions acquaint us with instances of the same sensible qualities. While the instances in hallucination are mind-dependent, those in veridical perception are not. The latter are over-determined, and, thus, mind-independent – they can continue to exist unperceived, because, in addition to the minds that perceive them, their existence is guaranteed by the material objects that are their bearers.”


The two key terms here are “hallucinations” and “sensible qualities.” What Sethi was arguing, among other things, is that hallucinations do not present a problem for naive realists. But why would they? The problem goes all the way back to Descartes and his famous thought experiment based on radical doubt. Descartes realized that he could be mistaken about a lot of things, particularly his sensorial perceptions about the world. We all know, for instance, that we can have dreams or hallucinations during which, for a time at least, we cannot tell apart physical reality from what is being conjured up by our minds. If this is the case, Descartes thought, then we can never be absolutely sure of any aspect of our alleged knowledge of the world.


His solution to the problem, famously, was his cogito argument: I may be mistaken about the external world, and even about mathematical or logical truths (because of systematic biases in my reasoning). But there is no way I can be mistaken about the fact that I am a thinking being of some sort. Cogito, ergo sum: I think, therefore I am.


Descartes’ was the last great attempt in philosophy to establish certain foundations of knowledge. And it failed. Sure, I know that I think, but – contra Descartes – that’s pretty much all I will ever know with certainty. There is no way to rebuild knowledge of mathematics and logic, let alone of empirical reality, from that meager starting point. Most philosophers since have therefore accepted the notion that our knowledge is provisional, and that we could, in principle, be mistaken about all sorts of things.


Not so naive realists, who cling to the hope that we get – most of the time – “veridical” (i.e., true) impressions of the external world via our senses. How so? One path for the naive realist is to reject the argument from hallucination, and show that we can always tell the difference between what our minds conjure and what we perceive about the world.


One way to do this is simply to deny that hallucinations ever feel exactly the same as impressions from the external world. I’ve never had a hallucination, but the literature on the subject, it seems to me, is pretty clear: at least some of the time, hallucinations (spontaneous, drug-induced, or especially those induced by sensory deprivation) are subjectively indistinguishable from the real thing. In other words, the problem can’t just be swept under the rug.


Sethi conceded this point at least for the sake of discussion, but insisted that we can nevertheless tell the difference. This is where sensible qualities come in. The quintessential one, the one she used the most during her talk, and the very same one that sparked our little debate during the q&a, is color.


Consider a red apple. You may hallucinate a red apple, and “perceive” redness associated with the apple, even though there is no actual apple out there. Or you may perceive redness in the apple because there is, in fact, a red apple that you are looking at. Sethi’s point is that we can tell the difference between these two cases because the second one is “overdetermined,” meaning that there are two reasons we have the first person experience of redness in the case of real objects: (i) because our mind is conjuring one (just like in the case of the hallucination), and (ii) because there is a red apple out there (unlike the case of the hallucination). It is this over-determination that implies mind-independence, according to Sethi.


The problem is – as I pointed out in the q&a – that “redness” is actually not out there (though the apple may be). We know quite a bit about the physiology of color perception, and we understand that what is out there are the surface properties of the apple, as well as the characteristics of light at certain wavelengths that can be perceived by the human eye. “Color,” however, results from the specific interaction between those external physical properties and the specific, evolved, sensorial apparatus of a human being. That apparatus includes of course our eyes, with their anatomical and cellular structure, the optic nerve that connects to the brain, and a number of brain systems devoted to the reception and internal representation of what we perceive. Color is not out there, and it is not mind-independent. Ever.


We know this from a variety of sources, including the fact that some human beings (like yours truly) are color-blind (partially, in my case), so that they perceive the exact same external object differently. We also know that other animals perceive a far narrower range of colors (bulls, for instance, cannot distinguish red: when they get mad at someone waving a red cloth in front of them they are reacting to the movement, not the redness of the cloth). Yet other animals perceive colors differently still (bees, for example, see ultraviolet, which we cannot perceive).


So what modern science tells us is that the human mind creates a virtual simulation of external reality all the time, and that the veridicity of such simulation depends not just on whatever is out there, but on the details of the human perceptual and cognitive systems. We live in a perpetual Matrix of our own creation, so to speak. That is why we cannot, in fact, always tell the difference between hallucinations and the real world, even if a real, mind-independent world does exist. Descartes was right, and naive realists are wrong.


Here is where things got interesting outside of the narrow scope of an academic talk in a philosophy department. When I pointed out the above to Sethi, her response was that she does not accept the physiological account! Science, she said, is itself based on metaphysical assumptions, and one is free to reject such assumptions. She evidently chose to reject them.


But hold on a second. Yes, science takes certain metaphysical assumptions on board without examination, like the very fact that reality is mind-independent. But I thought (naively, as it turns out!) that a contemporary philosopher is best served by taking very, very seriously whatever science tells us about whatever issue we are discussing philosophically. In this case, to ignore modern physiology and brain science seems hazardous to say the least.


Curious, I asked Sethi why she chose the anti-science path. In other words, what were her metaphysical commitments? Her answer floored me, but also gave me an interesting insight into my own profession. She said that if she went along with standard physiology she would have to agree that consciousness is a biological phenomenon rooted in the brain and how it interacts with the external world.


Again naively, I said, obviously, what else could it be? This is where she mentioned the so-called “hard problem” of consciousness, made famous by NYU philosopher of mind David Chalmers. Here is how the problem is summarized in the excellent entry in the Internet Encyclopedia of Philosophy:


“It is the problem of explaining why there is ‘something it is like’ for a subject in conscious experience … The usual methods of science involve explanation of functional, dynamical, and structural properties – explanation of what a thing does, how it changes over time, and how it is put together.  But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science.”


Except that I don’t think one can “meaningfully” ask that at all. I have made the argument that the so-called hard problem is in fact a category mistake. It confuses scientific explanation with first person experience. We may (possibly will) arrive at a complete scientific explanation of consciousness (we are nowhere near, at the moment, but we are making some progress). When we do, however, we will still be unable to feed that explanation into a human mind and have that mind experience, say, redness.


The reason for this is no mystery at all, and does not point to any alleged limitation of scientific explanations. It’s the same reason why we can have a complete understanding of, say, how to ride a bicycle, and yet immediately fall down as soon as we mount one for the first time. Understanding derives from something that philosophers call “knowledge-that,” while the skill required to properly ride a bike is an instance of “knowledge-how.” The two are distinct, the first being conceptual, the second literally embodied in our biological beings. There is absolutely nothing missing from our understanding if we can’t also ride. Confusing the two is a category mistake analogous to being shown the buildings, faculty, students, and administrators of Oxford University and still asking, “yes, but where is the university?”


The reason I went into some detail about all of this is that I think Chalmers-style philosophizing is what gives philosophy a bad name. Let me be clear about this: I am on record arguing that philosophy is not the same thing as science, and that not all meaningful philosophical questions have scientific answers (e.g., how should we think of personal identity? or: is modus ponens a kind of valid deductive inference? or: what is a cause?). I am, in other words, not advocating scientism.


But I am advocating the notion that philosophers take science seriously and refrain from inventing questions just because they wish to ignore science and convince themselves that they can go “deeper.” The kind of analytic metaphysics done by Chalmers (and Sethi) does not serve philosophy well, I think, as it ignores or dismisses science at its own peril. Some philosophers have been pushing an alternative approach, which they call “scientific” metaphysics, though I think it would be best described as science-informed metaphysics.


Philosophy, as I have argued recently, does make progress, and one example is the acknowledgment of the failure of the rationalist program in epistemology, of which Descartes was the last, valiant defender, and which is in a sense the reference point of analytic metaphysics. Let us then count Descartes’ failure as an example of progress, and not push philosophy back four centuries.


Philosophy as the evocation of conceptual landscapes – part II

In the first installment of this two-part series we explored different concepts of what “making progress” may mean for disciplines such as science, math, logic, and philosophy. I then suggested that philosophy makes progress because it explores what Nicholas Rescher has called “aporetic clusters,” i.e., families of alternative solutions to a given philosophical problem. I have advanced a similar idea in the past, suggesting that philosophy is in the business of discovering, refining, or discarding such solutions as they are “evoked” (i.e., brought into objective existence by the human mind) within the context of a problem. This all sounds very theoretical, however, so let’s take a look at some empirical examples of these aporetic clusters found within specific philosophical conceptual landscapes.


Is there any evidence that philosophy progresses in the way described so far? I think so, and it comes at least in part from a landmark paper by Bourget and Chalmers, published in 2013. In it, they explore quantitatively what professional philosophers think of a number of prominent issues and positions within their own discipline. Even though Bourget and Chalmers did not carry out their study while informed by the ideas of aporetic clusters and progress, their data is suitable for the current discussion. It also represents a rare opportunity to systematically assess the views of an entire profession, the sort of thing that would probably be useful also in other disciplines, from the humanities to the natural sciences, but is all too seldom actually done.


I focus here on a subset of interesting findings that bear directly or indirectly on my overall project concerning progress in philosophy, even though the Bourget-Chalmers paper contains a number of additional stimulating bits of information for anyone interested in the current state of philosophical inquiry.


To begin with, the common adage that if there are X philosophers in a room they are bound to hold X+1 opinions does not appear to be true at all. Consider some of the main findings of the survey:


A total of 71% of respondents thought that a priori knowledge is possible, while only 18% didn’t think so. There is a clear majority here, and only two major aporetic clusters.


Things are more equally divided when it comes to views on the nature of abstract objects: Platonism gets 39% while nominalism is barely behind, at 38%. Superficially, this may seem an instance of precisely what’s wrong with philosophy, but it is in fact perfectly congruent with my model of multiple peaks in conceptual space. Philosophers seem to have settled on two of Rescher’s aporetic clusters here, having eliminated a number of unacceptable alternatives. There may very well not be an ascertainable fact of the matter about whether Platonism or nominalism is “true.” They are both reasonable ways of thinking about the ontology of abstract objects, with each position subject to further criticism and refinement.


Every practicing philosopher knows that W.V.O. Quine thought he had demolished once and for all the distinction between analytic and synthetic propositions, but the bad news for him is that about 65% of philosophers disagree, and only 27% agree that such demise has in fact taken place. The latter may be an example of an aporetic cluster that gained more prominence immediately post-evocation by Quine, but may have eroded somewhat since, to use a geological analogy.


One of the most lopsided outcomes of the survey concerns what epistemic attitude is more reasonable to hold about the existence and characteristics of the external world: 82% of respondents qualified themselves as realists, followed by only 5% skeptics and 4% idealists. This may be as close as it gets for philosophers to actually settle a dispute.


In terms of ethical frameworks, things are pretty evenly split, with deontology barely leading at 26%, followed by consequentialism at 24% and virtue ethics at 18%. Here too, as in the case of Platonism vs nominalism, the result makes sense to me, as it is hard to imagine what it would mean to say that deontology, for instance, is the “true” approach to ethics. These three are reasonable, alternative ways of approaching ethics – and there are a number of unreasonable ones that have been considered and discarded over time (e.g., Ayn Rand’s “Objectivism,” based on a gross distortion, and likely lack of understanding, of Aristotle).


Even more fascinating – and congruent with my general thesis – are the pairwise correlations that the authors uncovered between philosophical views, hinting at the conclusion that philosophers tend to develop fairly internally coherent positions across fields. For instance:


If one thinks that the distinction between analytic and synthetic truths is solid, then one also tends to accept the idea of a priori knowledge – naturally enough.


If a philosopher is a moral realist, she is also likely to be an objectivist about aesthetic value. Interestingly, moral realists also tend to be realists in philosophy of science, and Platonists about abstract objects.


If one thinks that Star Trek’s Kirk survives teleportation (rather than being killed and replaced by a copy), one also – coherently – often adopts a psychological view of personal identity.


Perhaps the most interesting and nuanced approach that Bourget and Chalmers take to their data unfolds when they move from univariate and bivariate to multivariate statistics, in this case principal components analysis. This allows them to examine the many-to-many relationships among variables in their data.


The first principal component they identify, that is, the one that explains most of the variance in the sample, they label “Anti-naturalism,” as it groups a number of responses that coherently fall under that position: libertarianism concerning free will, non-physicalism about the mind, theism, non-naturalism as a metaphilosophy, and the so-called “further fact” view of personal identity. If one were to plot individual responses along this dimension (which Bourget and Chalmers don’t do, unfortunately), one would see anti-naturalist philosophers clustering at the positive end of it, and naturalist philosophers clustering at the negative end.


The second-ranked principal component is labelled “Objectivism/Platonism” by the authors, and features positive loadings (i.e., multivariate correlations) of cognitivism in moral judgment, realism in metaethics, objectivism about aesthetic value, and of course Platonism about abstract objects.


The third component is about Rationalism, with positive loadings for the possibility of a priori knowledge, the analytic-synthetic distinction, and rationalism about knowledge.
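For readers unfamiliar with principal components analysis, here is a minimal sketch of the kind of computation involved. Everything in it is invented for illustration – the question labels, the 0/1 coding, and the handful of fictional respondents – and it is not meant to reproduce the Bourget-Chalmers analysis, only to show how the loadings and scores described above are obtained.

```python
# Hypothetical sketch of a PCA on survey-style data (all numbers invented).
# Rows are fictional philosophers, columns are questions coded 1 = accept, 0 = reject.
import numpy as np
from sklearn.decomposition import PCA

questions = ["libertarian_free_will", "non_physicalism", "theism",
             "moral_realism", "platonism", "a_priori_knowledge"]

responses = np.array([
    [1, 1, 1, 1, 1, 1],   # broadly anti-naturalist profile
    [1, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],   # naturalist, but objectivist/Platonist
    [0, 0, 0, 0, 0, 1],   # naturalist, nominalist
    [0, 0, 0, 0, 0, 0],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(responses)   # each respondent's position on PC1, PC2
loadings = pca.components_              # how strongly each question loads on each component

print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
for i, component in enumerate(loadings, start=1):
    print(f"PC{i} loadings:", dict(zip(questions, np.round(component, 2))))
print("respondent scores on PC1:", np.round(scores[:, 0], 2))
```

Plotting the first column of `scores` would give exactly the kind of picture described above: respondents with anti-naturalist answer profiles bunched at one end of the axis, naturalists at the other.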


(Should you be curious about where I fall in the above multi-variable landscape, see the post-scriptum to this essay.)


The overall picture that emerges, then, is very much that of a conceptual landscape with a number of alternative peaks, or aporetic clusters, which are internally coherent and well refined by centuries of philosophical inquiry. I suspect that historically many more “peaks” have been explored and eventually discarded, and that the height of the current peaks (as reflected by the consensus gathered within the relevant epistemic communities) is itself heterogeneous and dynamic, with some in the process of becoming more prominent in the landscape and others on their way to secondary status or destined to disappear altogether.


If I am right and philosophy makes progress by evoking and refining aporetic clusters that are constrained by empirical evidence about the world, it makes sense to ask one last question: where is philosophy going, considering its continuous tension with the sciences, themselves making progress in what may be more of a teleonomic fashion? (See part I on teleonomy.)


Philosophers have often advanced a model of their discipline as a “placeholder” for the development of eventually independent fields of inquiry, presenting philosophy as the business of conducting the initial conceptual exploration (and, hopefully, clarification) of a given set of problems, handing it then to a special science as soon as those problems become empirically tractable. There are quite a few historical examples to back up this view, from the emergence of the natural sciences to that of psychology and linguistics. Philosophy of mind is arguably in the midst of this very process, interfacing with the nascent cognitive sciences.


Predictably, this model is often twisted by detractors of philosophy to show that the field has been in a slow process of disintegrating itself, with a hard core (metaphysics, ethics, epistemology, logic, aesthetics) that is the last holdout, and which has shown increasing signs of itself yielding to the triumphal march of Science (with a capital “S”).


If that is the case, of course, so be it. But I seriously doubt it. What we have seen over the last few centuries, and especially the last century or so, is simply a transformation of what it means to do philosophy. Although we use the general word “science” to indicate everything from Aristotle’s forays into biology to what modern physicists are doing with the Large Hadron Collider, the very nature of science has evolved throughout the centuries, and keeps evolving still. What counts as good scientific methodology, sound scientific theorizing, or interesting scientific problems has changed dramatically from Aristotle to Bacon to Darwin to Stephen Hawking. Why should it be any different for philosophy?


One of the most obvious indications that philosophy has been reinventing itself over the past century or so is the stark onset of a panoply of “philosophies of.” “Philosophies of” are the way the field has been responding to the progressive emancipation of some of its former branches: science is no longer natural philosophy, but that simply means that now philosophers are free to philosophize about science (and, more specifically, about biology, quantum mechanics, etc.) without doing science. The same idea applies to linguistics (and philosophy of language), psychology (and philosophy of the social sciences), economics (and philosophy of economics), and so on.


Is this sort of transformation also about to affect philosophy’s core areas of metaphysics, ethics, epistemology, logic, and aesthetics? It depends on how one looks at things. On the one hand, to a greater or lesser extent it certainly has become increasingly difficult to engage in any of the above without also taking on board results from the natural and social sciences. While logic is perhaps the most shielded of all core philosophical areas in this respect (indeed, arguably it has contributed to the sciences broadly construed much more than it has received), it is certainly a good idea to do metaphysics while knowing something about physics (and biology); ethics while interfacing with political and social sciences, and even biology and neuroscience; epistemology while being aware of the findings of the cognitive sciences; and aesthetics with an eye toward biology and the cognitive and social sciences.


Nonetheless, all the core areas of philosophy remain very much recognizable as philosophy. Should they finally spawn their own independent disciplines, there will immediately arise in turn a need for more “philosophies of,” and the process will keep going, the field adapting and regenerating.


Ultimately, philosophy is here to stay for the same reason that other humanities (and the arts) will stay, regardless of how much science improves and expands, or how much narrow-minded politicians and administrators keep cutting humanities’ funding in universities. Human beings need more than facts and formulas, more than experiment and observation. They need to experience in the first person, and they need to reflect critically on all aspects of their existence. They need to understand, in the broadest possible terms, which means they need to philosophize.


_____


Post-Scriptum: where do I fall in the multi-variable conceptual landscape of philosophical positions? Here:


Epistemology: a priori knowledge (e.g., mathematics, logic) is possible.


Knowledge: neither a rationalist nor an empiricist, we need a judicious combination of the two, as Kant first surmised.


Abstract objects: something like nominalism, definitely not Platonism. I prefer Smolin’s “evocation” alternative, briefly discussed in part I.


Analytic / Synthetic distinction: it’s there, despite Quine making some good points about borderline or fuzzy cases.


External world: realism. (Though I do have strong sympathies for instrumentalism in philosophy of science, the two are not mutually exclusive.)


Aesthetic values: anti-realist (though our aesthetic judgments, at a very basic level, have likely been shaped by evolution, which would make me a quasi-realist of sorts).


Transporter problem: Kirk dies every time. I do not think there is a unique metaphysical answer to personal identity, as it is a human construct constrained by human biology.


Ethics: virtue ethics, obviously.


Naturalism vs anti-naturalism: I’m definitely a naturalist, compatibilist about free will, non-theist, and think that the mind is the result of the activity of the physical (and embodied) brain.


Objectivism vs Platonism: quasi-realist in ethics (as explained here); ethical judgments are constrained by our understanding of human nature and what makes us flourish. More than one reasonable ethical judgment is compatible with any given empirical situation.

Philosophy as the evocation of conceptual landscapes – part I

I come to philosophy from the natural sciences, where the idea that my former academic field (evolutionary biology) makes progress is taken for granted, and where one would receive amused or puzzled looks by simply asking the question of what constitutes progress. And yet, philosophers of science have convincingly argued that it is much more difficult than one might think to provide a good account of how, precisely, science makes progress. When it comes to philosophy, however, it is philosophers themselves who often deny that the field makes progress, no matter how one understands “progress.” Which is puzzling, because the obvious question, then, is why on earth would they bother spending their lives contributing to an area of inquiry that reliably goes nowhere?


This essay is a shortened version of a full paper that appeared in a volume edited by Russell Blackford and Damien Broderick, entitled Philosophy’s Future: The Problem of Philosophical Progress (if you are an “Aurelian” subscriber to this site you can download it for free).


Part of the problem is that “progress” is itself not at all easy to define, with the term taking on different meanings in the natural sciences and, for instance, in mathematics. I suggest that a reasonable approach to this issue is to “go Wittgensteinian,” so to speak, and argue that “progress” is a family resemblance concept. Wittgenstein’s own famous example of this type of concept was the idea of “game,” which does not admit of a small set of necessary and jointly sufficient conditions in order to be defined, and yet this does not seem to preclude us from distinguishing games from not-games, at least most of the time. Progress, in a sense, could then be thought to be like pornography, to paraphrase the famous quip by US Supreme Court Justice Potter Stewart: “I know it when I see it.”


I submit that progress in science is a teleonomic (i.e., goal oriented) process, where the goal is to increase our knowledge and understanding of the natural world in terms of the simplest possible set of general principles. By contrast, progress in mathematics, logic, and – with some important qualification – philosophy, could be described as an advancement toward a better, more complete, or more modern condition, a definition that does not imply any final goal to be achieved.


Consider first mathematics and logic: I do not think it is tenable to understand them as teleonomic disciplines because there is an infinite number of logical-mathematical objects to discover and theorems to prove, so that mathematics and logic look like ever expanding disciplines, not converging toward any “theory of everything” as is (allegedly) the case for science. So I do think of mathematics and logic as advancing toward a better, more complete position, “better” in the sense that the process both opens up new lines of internally generated inquiry (the solution of mathematical and logical problems generates new problems, and so forth) and “more complete” in the sense that mathematicians and logicians are best thought of as engaged in the exploration of a space of conceptual (as distinct from empirical) possibilities.


At this point in the paper I discuss in detail the concept of “evoking,” as distinct from discovering or inventing, new notions in the space of conceptual possibilities pertinent to, respectively, mathematics, logic, and philosophy. It’s a rather technical topic, inspired by Roberto Unger and Lee Smolin’s treatment of mathematical Platonism in their excellent The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. The interested reader can find a summary of it here. The basic notion, though, is simple. Consider the game of chess. It is clearly a human invention, i.e., the rules of chess are not independent of the human mind. Yet, once invented, the game has “evoked” (i.e., brought into existence) a number of objectively provable statements about it, including some that can be proven mathematically. Keeping this in mind, let’s now go back to how philosophy makes progress.


Current discussions on progress in philosophy have a number of precedents, although it is actually surprising how few scholarly papers have been devoted to the topic. One that I think is particularly important to discuss is Moody’s distinction among three concepts of progress, first published in 1986. What he calls progress-1 takes place when there is a specifiable goal about which people can agree that it has been achieved, or what counts toward achieving it. If you are on a diet, for instance, and decide to lose ten pounds, you have a measurable specific goal, and you can be said to make progress insofar as your weight goes down and approaches the specific target. Progress-2 occurs when one cannot so clearly specify a goal to be reached, and yet an individual or an external observer can competently judge that progress has occurred when comparing the situation at time t with the situation at time t+1, even though the criteria by which to make that judgment are subjective. Moody thinks, for example, that composers guided by an inner sense of when they are “getting it right” would be making this sort of progress while composing. Finally, progress-3 is a hybrid animal, instantiated by situations where there are intermediate but not overarching goals.


Interestingly, Moody says that mathematics makes progress-3, insofar as there is no overall goal of mathematical scholarship, and yet mathematicians do set intermediate goals for themselves, and the achievement of these goals (like the proof of Fermat’s Last Theorem) is recognized as such by the mathematical community.


Moody’s next step is to assume provisionally that philosophy is a type of inquiry, and then ask whether any of his three categories of progress applies to it. The first obstacle is that philosophy does not appear to have consensus-generating procedures such as those found in the natural sciences or in technological fields like engineering. Moody claims that “the only thing that philosophers are likely to agree about with enthusiasm is the abysmal inadequacy of a particular theory.” While I think this is actually a bit of a caricature, I do not share Moody’s pessimistic assessment of that observation even if true: negative progress, that is, the elimination of bad ideas, is progress nonetheless.


Moody concludes that philosophy does not make progress-1 or progress-3, because its history has not yielded a trail of solved problems. What about progress-2? He takes up the possibility that perhaps philosophy is not a type of inquiry after all, and analyzes in some detail two alternative conceptions: Wittgenstein’s idea of philosophy as “therapy,” and Richard Rorty’s so-called “conversational model” of philosophy. As Moody summarizes:


“Wittgenstein believed that philosophical problems are somehow spurious and that the activity of philosophy … should terminate with the withdrawal, or deconstruction, of philosophical questions.”


On this view, then, there is progress, of sorts, in philosophy, but it is the sort of “terminus” brought about by committing seppuku. As Moody rather drily comments, while nobody can argue that Wittgenstein’s ideas have not been taken seriously, it is equally undeniable that philosophy has gone forward largely as if the therapeutic approach had never been articulated. If a proposed account of the nature of philosophy has so blatantly been ignored by the relevant epistemic community, we can safely file it away.


Rorty’s starting point is what he took to be the (disputable, in my opinion) observation that philosophy has failed at its self-appointed task of analysis and criticism. Moody quotes him as saying: “The attempts of both analytic philosophers and phenomenologists to ‘ground’ this and ‘criticize’ that were shrugged off by those whose activities were purportedly being grounded and criticized.” Rorty arrived at this because of his rejection of what he sees as philosophy’s “hangover” from the seventeenth and eighteenth centuries, when philosophers were attempting to set their inquiry within a framework that allowed a priori truths to be discovered, even though David Hume dealt that framework a fatal blow during the eighteenth century.


While Moody finds much of Rorty’s analysis on target, I must confess that I don’t. For instance, the fact that other disciplines (like science) marched on while refusing to be grounded or criticized by philosophy is neither entirely true (lots of scientists have paid and still pay attention to philosophy of science, for example), nor necessarily apt as the ultimate test of the value of philosophy even if true: creationists and climate change deniers, after all, shrug off any criticism of their positions, but that doesn’t make such criticism invalid, or futile for that matter (since others are responding to it).


Yet there is something to be said for thinking of philosophy as a “conversation” more than an inquiry, as Rorty did. The problem is that this and other dichotomies presented to us by Rorty are, as Moody himself comments, false: “We do not have to choose between ‘saying something,’ itself a rather empty notion that manages to say virtually nothing, and inquiring, or between ‘conversing’ and ‘interacting with nonhuman reality.’” Indeed we don’t.


What account, then, can we turn to in order to make sense of progress in philosophy, according to Moody? I recommend that the interested reader check Moody’s discussion of Robert Nozick’s “explanational model” of philosophy, as well as John Kekes’ “perennial problems” approach, but my own treatment here will jump to Nicholas Rescher’s 1978 proposal of the concept of “aporetic clusters,” which is one path that supports the conclusion that philosophy does make progress, of the progress-2 variety.


Rescher thinks that it is unrealistic to expect consensus in philosophy, and yet does not see this as a problem, but rather as an organic outcome of the nature of philosophical inquiry:


“In philosophy, supportive argumentation is never alternative-precluding. Thus the fact that a good case can be made out for giving one particular answer to a philosophical question is never considered as constituting a valid reason for denying that an equally good case can be produced for some other incompatible answers to this question.”


In fact, Rescher thinks that philosophers come up with “families” of alternative solutions to any given philosophical problem, which he labels aporetic clusters. According to this view, some philosophical accounts are eliminated, while others are retained and refined. The keepers become philosophical classics, like “virtue ethics,” “utilitarianism,” or “Kantian deontology” in ethics, or “constructive empiricism” and “structural realism” in philosophy of science. Rescher’s view is not at all incompatible with my idea of philosophy as evoking (in the sense briefly described above), and then exploring and refining, peaks in conceptual landscapes. As Moody aptly summarizes it: “That there are ‘aporetic clusters’ is evidence of a kind of progress. That the necronology of failed arguments is so long is further evidence.”


(next: empirical examples of progress in philosophy)

Why I’m still a (non-card carrying) Skeptic

I just came back from Las Vegas, where I had a lovely time at the annual CSICon event, organized by the folks that bring you Skeptical Inquirer magazine, among other things. As I’ve done almost since the beginning of my involvement with the skeptic movement, back in, gasp, 1997, I delivered a bit of a gadfly talk. This one was about scientism, reminding my fellow skeptics that they have a tendency to overdo it with the science thing, at times coming across as nearly as evangelical and obtuse as their usual targets, from creationists to UFO believers. After asking the audience to be patient with me and not serve me hemlock for lunch, I minced no words and criticized by name some of the big shots in the field, from Neil deGrasse Tyson to Richard Dawkins, from Sam Harris to Steven Pinker. And of course several of those people were giving talks at the same conference, either right before or right after me.


No hemlock was served, and I got less resistance to my chastising than usual from the audience. Some people even approached me later on telling me how much they appreciated my reminder that our community is not perfect and we need to do better. It was all very congenial, set against the perfect backdrop of the ultimate fake city in the world, and accompanied by the occasional dirty martini.


On my way back to New York I then got a tweet from a follower linking to yet another “I resign from the skeptic movement and hand in my skeptic card” article, written by a prominent (former) skeptic. It doesn’t matter who. The list of complaints by that author is familiar: a tendency toward scientism, a certain degree of sexism within the movement, and a public failure to lead by some of the de facto leaders. The same issues that I have been complaining about for years (for instance, here). But I have not quit, and do not intend to quit. Why?


The uncharitable answer would be because I’m part of the privileged elite. I doubt anyone would seriously consider me a “leader” in the movement, but I have certainly been prominent enough. And I am a male. White. Heterosexual. The problem is, uncharitable views are highly unhelpful, and I’m on record advocating on behalf of diversity in the movement, against sexual harassment, and – as I mentioned above – have made a mini-career of stinging the big shots every time I think they deserve it, which is rather often. So I’m afraid a casual dismissal based on my gender, sexual preference and ethnicity will not do. Quite apart from the fact that it would be obviously hypocritical on the part of anyone who claims that gender, sexual preference and ethnicity should not be grounds for blanket statements of any kind.


No, I stay because I believe in the fundamental soundness of the ideas that define modern skepticism, and also because I think quitting to create another group is an example of an all too common fallacy: the notion that, despite all historical evidence to the contrary, next time we’ll definitely get it right and finally create utopia on earth. Let me elaborate on each point in turn.


“Skepticism,” of course, has a long history in philosophy and science. The original Skeptics of ancient Greece and Rome were philosophers who maintained that human knowledge is either highly fallible or downright impossible (depending on which teacher of the school you refer to). Consequently, they figured that the reasonable thing to do was either to abstain entirely from any opinion, or at least to hold on to such opinions as lightly as possible. Theirs wasn’t just an epistemological stance: they turned this into a style of life, whereby they sought serenity of mind by way of detaching themselves emotionally from those opinions (political, religious) that others held so strongly and often died for. Not my cup of tea, but if you think about it, it’s not a bad approach to good living at all.


The philosopher that embodies modern skepticism most closely, however, is the Scottish Enlightenment figure par excellence, David Hume. He held an attitude of open inquiry, considering every notion worth investigating and leaving the (provisional) verdict of such investigations to the empirical evidence. He famously said that a reasonable person proportions his beliefs to the available facts, a phrase later turned by Carl Sagan into his hallmark motto: extraordinary claims require extraordinary evidence.


The contemporary skeptic movement was the brainchild of people like philosopher Paul Kurtz (the founder of the organizations that preceded CSI, as well as of Skeptical Inquirer), magician James “the Amazing” Randi (organizer of the long running conference that preceded CSICon, known as TAM, The Amazing Meeting), Carl Sagan himself, and a number of others. Initially, the movement was rather narrowly devoted to the debunking of pseudoscientific claims ranging from UFOs to telepathy, and from Bigfoot to astrology.


More recently, mainly through the efforts of a new generation of leaders – including but not limited to Steve Novella and his group, Michael Shermer, Barry Karr, and so forth – the scope of skeptical analysis has broadened to include modern challenges like those posed by the anti-vax movement and, of course, climate change. Even more recently, young people from a more diverse crowd, finally including several women like Rebecca Watson, Susan Gerbic, Kavin Senapathy, Julia Galef, and many others, have further expanded the discourse to include an evidence-based treatment of political issues, such as gender rights and racism.


The values of the skeptic movement, therefore, encompass a broad set that I am definitely on board with. At its best, the community is about reason broadly construed, critical but open-minded analysis of extraordinary claims, support for science-based education and critical thinking, and welcoming diversity within its ranks.


Of course, the reality is, shall we say, more complex. There have been plenty of sexual harassment scandals, involving high profile members of the community. There is that pesky tendency toward closing one’s mind and dismissing rather than investigating claims of the paranormal. And there is a new, annoying vogue of rejecting philosophy, despite the fact that a skepticism (or even a science) without philosophical foundations is simply impossible.


But this leads me to the second point: I think it far more sensible to stay and fight for reform and improvement rather than to “hand in my skeptic card” (there is no such thing, of course) and walk away. Because those who have walked away have, quite frankly, gone nowhere. Some have attempted to create a better version of what they have left, like the thankfully short-lived “Atheism+” experiment of a few years ago.


The problem with leaving and creating an alternative is that the new group will soon enough, and inevitably, be characterized by the same or similar issues, because people are people. They diverge in their opinions, they get vehemently attached to those opinions, and they fight tooth and nail for them. Moreover, people are also fallible, so they will in turn engage in the same or similar behaviors as the ones that led to the splintering of the group in the first place, including discrimination and harassment. So the whole “I’m leaving and creating a new church over there” kind of approach ends up being self-defeating, dispersing resources and energy that could far better be used to improve our own household from within while continuing to fight the good fights we inherited from the likes of Kurtz and Sagan.


So, no, I’m not leaving the skeptic movement. I will keep going to CSICon, NECSS, the CICAP Fest, and wherever else they’ll invite me. I will keep up my self-assigned role of gadfly, annoying enough people and hopefully energizing a larger number so that we keep getting things more and more right. After all, this is about making the world into an at least slightly better place, not into our personal utopia tailored to our favorite political ideology.

They’ve done it again: another embarrassing moment for the skeptic movement

In a few days I will be in Las Vegas. No, it’s not what you may be thinking. I’ll be the token skeptic at one of the largest conferences of skeptics: CSICon, courtesy of the same people who publish Skeptical Inquirer magazine, for which I wrote a column on the nature of science for a decade. I say “token skeptic” because I have been invited by the organizers to talk about scientism, the notion that sometimes science itself is adopted as an ideology, applied even where it doesn’t belong or is not particularly useful (here is a video about this).


I have been both a member and a friendly internal critic of the skeptic community since the late ‘90s, and I have been reminded of the value of such a gadfly-like role very recently, with the publication of yet another “skeptical” hoax co-authored by philosopher Peter Boghossian and author James Lindsay, this time accompanied by Areo magazine’s Helen Pluckrose. The hoax purports to demonstrate once and for all that what the authors disdainfully refer to as “grievance studies” (i.e., black studies, race studies, women studies, gender studies, and allied fields) is a sham hopelessly marred by leftist ideological bias. The hoax doesn’t do any such thing, although those fields are, in fact, problematic. What the stunt accomplishes instead is to reveal the authors’ own ideological bias, as well as the poverty of critical thinking by major exponents of the self-professed skeptic community. But let’s proceed in order.


Boghossian and Lindsay made a first, awkward attempt at this last year, by submitting a single fake paper entitled “The Conceptual Penis as a Social Construct.” It was a disaster: the paper was, in fact, rejected by the first (very low ranking) journal they submitted it to, and only got published in an unranked, pay-per-publish journal later on. Here is my commentary on why Boghossian and Lindsay’s achievement was simply to shine a negative light on the skeptic movement, and here is a panel discussion about their failure at the North East Conference on Science and Skepticism later on in the year. That did not stop major exponents of the skeptic movement, from Michael Shermer to Steven Pinker, from Richard Dawkins to Sam Harris and Jerry Coyne, from praising Boghossian and Lindsay, which is why I maintain the episode was an embarrassment for the whole community.


The hoax, of course, was modeled after the famous one perpetrated by NYU physicist Alan Sokal at the expense of the (non peer reviewed) postmodernist journal Social Text, back in the ‘90s, at the height of the so-called science wars. Sokal, however, is far more cautious and reasonable than Boghossian & co., writing about his own stunt:


From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty.


In fact, Sokal himself published some good criticisms of the conceptual penis hoax.


Not having learned their lesson at all, Boghossian & co. engaged in a larger project of the same kind, this time sending out 21 fake papers to a number of journals, mostly in women and gender studies. Two thirds of the papers were rejected. Of the seven accepted papers, one was a collection of (bad) poetry, and thus really irrelevant to the objective at hand; two were simply boring and confusing, like a lot of academic papers; one was a self-referential piece on academic hoaxes that one independent commentator actually judged to be making “somewhat plausible arguments”; and three more included fake empirical evidence. As Daniel Engber says in Slate:


One can point to lots of silly-sounding published data from many other fields of study, including strictly scientific ones. Are those emblematic of ‘corruption’ too?


Indeed, there are several examples of this in the literature, like a 2013 hoax that saw a scientific paper about anti-cancer properties in a chemical extracted from a fictional lichen published in several hundred journals. Hundreds, not just half a dozen!


It’s very well worth reading the entirety of Engber’s commentary, which exposes several problematic aspects of Boghossian et al.’s stunt. The major issues, as I see them, are the following:


1. Hoaxes are ethically problematic, and I honestly think Portland State University should start an academic investigation of the practices of Peter Boghossian. In the first place, I doubt the study (which was published in Areo magazine, not in a peer reviewed journal!) obtained the standard clearance required for research on human subjects. Second, the whole enterprise of academic publishing assumes that one is not faking things, particularly data. So tricking reviewers in that fashion at the very least breaches the ethical norms of any field of scholarship.


2. The authors make a big deal of the ideological slant of the fields they target, apparently entirely oblivious to their own ideological agenda, which explicitly targeted mostly women and gender studies. Both Boghossian and Lindsay have published a series of tweets (see Engber’s essay) that nakedly display their bias. Is the pot calling the kettle black?


3. While we can certainly agree that it is disturbing that academic journals publish any paper that is more or less obviously fake, this is not a good criticism of the target fields. You know what that would look like? It would take the form of a serious, in-depth analysis of arguments proposed by scholars in those fields. But Boghossian & co. actually proudly proclaimed, after their first hoax, that they have never read a paper in “X studies,” which means that – literally – they don’t know what they are talking about. Here is one example of how to do it.


4. What Boghossian et al. really want to convey is that “X studies” are intellectually bankrupt, unlike other academic disciplines, particularly scientific ones. But as the example of the anti-cancer hoax mentioned above, and several others, show, this is simply not the case. Corruption of academic culture, resulting either from ideological bias or from financial interests (pharmaceutical companies are well known to establish entire fake journals to push their products), is not limited to certain small corners of the humanities.


5. In a related fashion – and surprisingly given that Boghossian actually teaches critical thinking – while the first hoax fatally suffered from a sample size of n=1, the new one is plagued by the simple fact that it has no control! Without a similar systematic attempt being directed at journals in other fields (particularly scientific ones) we can conclude precious little about the specific state of “X studies.”
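To make the missing-control point concrete, here is a minimal, hypothetical sketch of the comparison a proper design would have allowed. The only figure taken from the hoax itself is the 7-out-of-21 acceptance count; the control numbers are invented, and Fisher’s exact test is just one reasonable choice for comparing two acceptance rates.

```python
# Hypothetical sketch: comparing acceptance rates in the targeted fields
# against a (nonexistent, here invented) control batch sent to other journals.
from scipy.stats import fisher_exact

accepted_target, rejected_target = 7, 14      # reported outcome of the hoax (21 papers)
accepted_control, rejected_control = 5, 16    # what a control batch MIGHT have yielded

table = [[accepted_target, rejected_target],
         [accepted_control, rejected_control]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# Without real control data, no such comparison can be run, which is why the
# hoax by itself licenses very few conclusions about "X studies" specifically.
```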


That said, do I think that the fields targeted by Boghossian & co. are problematic? Yes, as I’ve written before. Here the most useful commentary on the hoax has been published in the New York Times by William Eggington. As he puts it:


The problem is not that philosophers, historians or English professors are interested in, say, questions of how gender or racial identity or bias is expressed in culture or thought. Gender and racial identity are universally present and vitally important across all the areas that the humanities study and hence should be central concerns. The problem, rather, is that scholars who study these questions have been driven into sub-specializations that are not always seen as integral to larger fields or to the humanities as a whole. Sometimes they have been driven there by departments that are reluctant to accept them; sometimes they have been driven there by their own conviction that they alone have the standing to investigate these topics.


That strikes me as exactly right. “X studies” programs should be integrated within a university, either (ideally) in broad multidisciplinary programs, or within the most suitable departments, such as History, Philosophy, Sociology, and the like.


Eggington blames academic hyperspecialization for the current sorry state of affairs in these fields, as well as the “publish or perish” attitude that has plagued academia for decades now. But guess what? “X studies” are most definitely not the only ones to suffer from these problems. They are endemic to the whole of modern academia, including the natural sciences. Indeed, we should be far more worried about the influence of ideology and big money on scientific fields than on small areas of the humanities. After all, it is in the name of science that we spend billions annually, and it is from science that we expect miracles of medicine and technology.


As Engber writes in the Slate commentary, notwithstanding the dire warnings of Boghossian, Pinker, Harris, Dawkins and all the others:


Surprise, surprise: Civilization hasn’t yet collapsed. In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)


The new Boghossian-led hoax is another example of a badly executed, ideologically driven stunt that targets narrow fields with little impact while leaving alone the big elephants in the room. It is, in the end, yet another embarrassment for the skeptical community, as well as a reflection of the authors’ own biases and narrow-mindedness.

The techno-optimists are at it again

(the atomic explosion that destroyed Hiroshima)


Techno-optimism (a form of applied scientism, if you will) is the attitude that no matter how dire humanity’s problems, science and technology will surely come to the rescue. It tends to conveniently neglect that some of humanity’s biggest contemporary problems (say, climate change, or the risk of nuclear annihilation) are, in fact, caused by the willful misuse of science and technology. It seems odd to firmly believe that more of the same thing that caused the disease in the first place will surely cure the disease, because, you know, this time we’ll get it right.


A good example of techno-optimism is a recent article in Slate by Phil Torres, based on his new book, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. It’s actually a somewhat puzzling article, because Torres is both critical and supportive of what he calls radical human enhancement as a way to solve what he perceives as humanity’s existential risks. My take is that he mostly focuses on the wrong risks, some of which are not actually existential at all, and that his proposed solution is more likely to make things worse than better. I think of myself as a realist about technology – one who both appreciates its advantages (I’m writing this on a wonderfully advanced tablet computer!) and is aware of its dark side. But if after reading this you want to class me as a techno-pessimist, I’ll take it. Just don’t dismiss me as a Luddite, okay?


Torres begins by correctly pointing out that the current century is a bit special, in the context both of human evolution and, for that matter, of the evolution of life on our planet. For the first time since life emerged 3.5 billion years ago, a single sentient species has developed the capacity to profoundly alter Earth’s bio- and geo-spheres. As my favorite philosopher, Spider-Man, warned us, with great power comes great responsibility, but we just don’t seem to be willing to accept that responsibility.


Torres then introduces the concepts of cognitive and moral enhancements, though the word “moral” appears only near the beginning of the piece, with “cognitive” replacing it throughout the rest of the article. That, as we shall see, is a crucial mistake. There are two classes of enhancement, conventional and radical. You are surely familiar with the conventional class (hence the name!): it includes things like education, meditation, and the absorption of caffeine. Okay, it’s an odd mix, but you get the point: anything that improves our cognitive abilities without permanently altering them in a heritable fashion, that is, across generations.


Radical enhancements are a whole different story, and while they still sit at the borderlands between science and science fiction, some of them will surely become available within years or decades. Torres focuses his essay on radical enhancements, since he thinks these are the ones that will be necessary to stave off the existential risks faced by humanity.


One such radical enhancement is embryo selection, a process by which scientists – the wisest of all people, as we all know – pick a subset of embryos generated by a given combination of sperm and eggs, and do so repeatedly in order to improve whatever human characteristic is deemed desirable. Torres is perfectly aware that this is eugenics, but he deems it to be of a benign type, because it doesn’t violate people’s autonomy. I guess he hasn’t seen the film Gattaca. And yes, it is perfectly acceptable to object to sci-fi scenarios by using sci-fi philosophical thought experiments. Torres comments:


If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. … According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points – a promising method for creating superbrainy offspring in a relatively short period of time. … As Bostrom puts it … ‘a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.’
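Just to make the quoted back-of-the-envelope reasoning concrete, here is a toy Monte Carlo sketch of “pick the best of ten embryos, repeat ten times.” Every number in it is invented for illustration – the per-round genetic spread, the assumption that predicted scores are perfectly known, and the assumption that the variance never shrinks – and it is emphatically not Bostrom and Shulman’s actual model:

# Toy simulation of iterated embryo selection. Purely illustrative,
# with made-up parameters; NOT Bostrom & Shulman's actual model.
import random
import statistics

GENETIC_SD = 8.0   # hypothetical embryo-to-embryo spread in "predicted IQ" points
EMBRYOS = 10       # pick the single best of 10 embryos each round
ROUNDS = 10        # repeat the selection for 10 rounds

def one_run():
    gain = 0.0  # cumulative gain relative to the starting population
    for _ in range(ROUNDS):
        batch = [random.gauss(gain, GENETIC_SD) for _ in range(EMBRYOS)]
        gain = max(batch)  # "select" the top-scoring embryo as next round's parent
    return gain

runs = [one_run() for _ in range(5000)]
print(round(statistics.mean(runs)))  # about 123 points under these assumptions

Notice how the impressive figure falls straight out of the assumptions: grant a large, perfectly measurable genetic spread in every round, and “up to 130 points” is just arithmetic. That is precisely what makes it suspect.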


I’m afraid that Bostrom and Shulman don’t know what they are talking about, and no such strong prima facie case has been made. At all. First off, we actually don’t understand the genetic basis of intelligence. We know that IQ (which is not at all the same thing as “intelligence,” whatever that is) is heritable in humans. But “heritable” simply means that there is – other things being equal – a statistical correlation between intelligence and genetic makeup. Nothing more, and that ain’t even remotely close to what one would need in order to do embryo selection on intelligence, even setting aside the ethical issues, which are far thornier than Torres lets on.


Second, and this will become a recurring theme of my critique, “superbrainy” doesn’t seem to have a lot to do with what is really needed: wisdom, or a good moral compass. I seriously doubt that there is any correlation at all between intelligence and morality, and if I’m right, creating a super-race of hyper-intelligent beings with the same highly imperfect moral compass as Homo sapiens sapiens is a sure recipe to accelerate and magnify whatever existential threat Torres, Bostrom and Shulman may be concerned about.


Speaking of which: what does Torres consider to be an existential threat to humanity? At the top of his list he puts “apocalyptic terrorism,” the possibility that someone inspired by a “Manichean belief system” will blow all of us to smithereens with a stolen atomic weapon, in the name of ridding the world of apostates and assorted infidels, thus establishing the kingdom of God on earth.


While surely there is a risk of such an attack, notice a few important caveats. To begin with, there is no credible scenario under which a nuclear terrorist attack would be civilization-ending. Yes, someone may be able to sneak a low-grade nuclear weapon into a major city and kill hundreds of thousands, millions even. That would be an unprecedented and horrifying catastrophe. But an existential threat to civilization? No. You know what really constitutes such a threat? The fact that the codes for thousands of nuclear missiles are currently in the hands of an incompetent narcissist sitting in the White House. But, curiously, there is no mention of government-based threats in Torres’ piece. Lastly, please keep in mind that this specific threat is made possible by, you guessed it, science and technology! It’s the very existence of very smart scientists and unscrupulous politicians – none of whom seems to be equipped with even a barely functioning moral compass – that has put us in this situation in the first place. And you think giving more leeway to the same folks is going to save humanity?


More generally speaking, Steven Pinker’s ‘Escalator of Reason’ hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the ‘Flynn effect.’ The most important concept here is that of ‘abstract reasoning,’ which Pinker identifies as being ‘highly correlated’ with IQ. In his words, ‘abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.’


With all due respect to Steven Pinker, I’m going to call bullshit on this one as well. As a start, “violence” has indeed declined since the second half of the 20th century (though even this conclusion comes with huge caveats about what exactly counts as violence), but there is a reason Pinker picked that particular time frame: two world wars had just taken place in the previous half century, killing millions of people, thanks to science and technology. The culmination of that period was the only nuclear attack on civilians in the history of humanity (so far), perpetrated by a government, not a Manichean terrorist; the US government, to be specific.
Moreover, there is no causal model (correlation, as Pinker knows, is not the same as causation) that actually links the Flynn effect (which is probably due to “conventional enhancement techniques,” such as better nutrition and education) to moral improvement. Indeed, I see no reason to believe that humanity at large has improved morally since the times of Socrates and Confucius. And “abstraction from the concrete particulars of immediate experience” is also the sort of thing that makes possible killing at a distance by pushing a button. It is also what allows many of us to reconcile the otherwise irreconcilable fact that the top 10% of the human population lives by standards historically reserved for kings and queens, while the rest live below or barely above the poverty line, subject to preventable disease or killed by violence rendered particularly effective by technologically advanced weaponry in the hands of unscrupulous governments.


Torres does acknowledge some of the limitations of the approach proposed by techno-optimists like Pinker. After writing that perhaps “idiosyncratic actors” (i.e., terrorists) would be less deficient in empathy if they had higher IQs, he remembers that some real-life examples of such actors, like the Unabomber, Ted Kaczynski, actually do have high IQs and yet remain deficient in empathy. So let me state this clearly: there is no reason whatsoever to think that IQ and empathy are correlated, which throws a big wrench into Pinker’s, Bostrom’s, and similar programs of enhancement. Torres continues:


Another major concern: cognitive enhancements would likely increase the rate of technological development, thereby shortening the segment of time between the present and when large numbers of people could have access to a doomsday button.


Right. But, again, he and his colleagues insist on worrying about the least likely threats, which, once more, are not actually existential. No Unabomber can end the world. But Donald Trump (just to pick on the current occupant of the WH; it’s not that I trust others a hell of a lot more) can come pretty darn close. But Torres insists:


Although cognitive enhancements could worsen some types of terror agents, the evidence – albeit indirect – suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


Ahem, no, the evidence suggests no such thing, and in fact the production of a population of “cognitively enhanced cyborgs” is a nightmare that only naive techno-optimists could possibly wish on the rest of us. Don’t these people watch any sci-fi at all? And there is more nonsense on stilts:


It seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in ‘all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.’ … Superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.
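For what it’s worth, the arithmetic behind that equivalence appears to be nothing more than the multiplication below (my reconstruction, not Bostrom’s own derivation), and it only goes through if one assumes that scientific progress scales linearly with the number of scientists times their average “cognitive performance”:

# Back-of-envelope reconstruction; assumes progress scales linearly with
# (number of scientists) x (average cognitive performance).
scientists = 10_000_000
per_person_gain = 0.01  # a hypothetical 1% "all-round" improvement per scientist
print(scientists * per_person_gain)  # 100000.0 "extra scientists" -- only under that linearity assumption

That linearity assumption is doing all the work, and it is exactly what is in question here.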


Bostrom’s calculations are, to be charitable, conjured out of thin air. It isn’t even clear what “all-round cognitive performance” means, let alone how to improve it so efficiently, and it is even more doubtful that such an improvement would actually be an improvement. Also, what sort of model of the brain is Bostrom working with, that allows him to simply sum small percentage increases across different individuals as if they were equivalent to a gigantic increase in a single person? Moreover, look at the list of disasters: most of them are extremely unlikely, and it is just as unlikely that we would be able to do much about them (I wonder why a nearby nova explosion isn’t part of the mix), while the most worrisome ones (climate change, biodiversity loss, emerging technologies, and agential risks) are all made possible by the very same thing that is supposed to save us: more intelligent technology.


Toward the end of the essay we reach truly Pindaric flights of imagination:


There could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we can only know this if we understand a Theory T. The problem is that understanding Theory T requires us to grasp a single Concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.


Sure. Now how about coming down to Earth – our only planet, the one we ourselves are currently destroying – and talking some sense? One of the problems with techno-optimism is that it captures the imagination with talk of supervolcanoes and “great unknown danger” X, offering us the reassuring but extremely dangerous feeling that all we need to do to get out of the trouble we have stubbornly dug ourselves into is more science. It almost sounds like magic. Because it almost is.


Only at the very end does Torres manage to slip in the crucial word that has been missing from the entire exercise: wisdom. The problem is not that we are not smart enough, but rather that we are not wise enough. Science and technology have advanced by giant leaps since the time of Socrates and Confucius, and yet these two (as well as a number of other ancient sages) remain unsurpassed in wisdom by even the most cognitively advanced human beings of the intervening two and a half millennia.


I know it sounds far more pedestrian and unexciting, but what if – for a change – we actually got serious about using the sort of conventional enhancements that are proven to work? First and foremost, education. I don’t mean the sort of STEM-oriented technical tripe that produces more barely functional human drones fit for use by large corporations. I mean serious education, what the Greeks called paideia, the rearing of good citizens of the polis. And yes, some meditation or other kinds of spiritual exercises, to calm our minds and center ourselves so that we can live better and more meaningful lives, instead of depending on the constant acquisition of consumer goods for our so-called happiness. And caffeine, of course; that’s always helpful.

Neil deGrasse Tyson “debunks” Spider-Man. And that’s just wrong

I’ve spent a significant part of my academic and public careers investigating and opposing pseudoscience. One of my role models in this quest has always been astronomer Carl Sagan, the original host of the landmark PBS series Cosmos. I have met and interviewed the new host, Neil deGrasse Tyson, the director of the Hayden Planetarium at the American Museum of Natural History. Despite our differences about the value of philosophy (he’s dead wrong on that one), Neil too got into the debunking business. But – unlike Sagan – he does it with more than a whiff of scientism, and occasionally in a spectacularly wrongheaded fashion.


Take, for instance, last week’s mini-appearance on The Late Show with Stephen Colbert, one of my favorite programs for laughing at the crap currently afflicting the planet (as we all know, a sense of humor is the best defense against the universe). On September 14th, Tyson was featured in a one-minute video entitled “Superpowers debunked, with Neil deGrasse Tyson.” What? Why do we need to “debunk” superpowers? Does anyone actually think there exists a god of thunder named Thor, who comes from a mythical place known as Asgard? But apparently the “problem” is pressing enough for our debunker-in-chief to use a popular nationally televised show to tackle it. Here is, in part, what Neil said (and no, this isn’t a joke, he was serious):


Let’s tackle Spider-Man.


No, let’s not! Spider-Man is one of my favorite superheroes, a (fictional) role model, motivated by a more than decent philosophy of life: with great power comes great responsibility (he got that from Uncle Ben). Something Tyson has, apparently, not learned. He goes on:


He’s bitten by a radioactive spider. Don’t we know from experience that radioactivity gives your organs cancer? So, he would just be a dead kid, not one with superpowers.


No kidding, Sherlock. Do we really need the awesome reasoning powers of a star national science popularizer to figure out that Spider-Man’s origin story doesn’t stand up to even casual scrutiny? Doesn’t Neil realize that this is fiction, for crying out loud? Well, apparently, he does, sort of:


Of course it’s fiction, so I don’t have a problem with fiction, but if you think you are going to do this experiment, and try to make that happen to you, I’ve got news for you: it’s not gonna work.


Well, Neil, apparently you do have a problem with fiction. I still remember that on my podcast, years ago, you complained about the aliens in Avatar, because the females had breasts, which are – obviously – a mammalian trait. Really? That’s what bothered you in that movie? Never heard of suspending disbelief and just enjoying a nice story?


Also, who on earth is going to be tempted to repeat in real life the “experiment” that generated Spider-Man? And even if an enterprising and badly informed kid wanted to, where would he get a radioactive spider? Lastly:


I’ve got news for you: it’s not gonna work.


You think?


All right, end of my anti-Tyson rant in defense of Spider-Man. The more serious issue here is: why did he feel the need to do such a silly thing in the first place? I suspect that’s because Neil, like a number of “skeptics” I know, is affected by two maladies: the above-mentioned scientism and a strong sense of intellectual superiority to the common rabble.


Scientism is defined by Merriam-Webster as “an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation.” I don’t know whether commentary on comic book superheroes qualifies as an area of investigation, but clearly Tyson felt it necessary to bring the awesome power of science and critical thinking to the task of debunking the dangerous notion that being bitten by a radioactive spider will give you magical powers.


I really think the skeptic community should stay as far away as possible from the whole notion of debunking (and yes, I’ve been guilty of using that word myself, in the past). For one thing, it conveys a sense of preconceived outcome: you know a priori that the object of your debunking is nonsense, which isn’t exactly in line with the ideal scientific spirit of open inquiry. That’s why my favorite actual skeptic is philosopher David Hume, who famously said that a reasonable person’s beliefs should be proportionate to the evidence, a phrase later turned by Sagan into his famous “extraordinary claims require extraordinary evidence.” Sagan, like Hume, was open to a serious consideration of phenomena like UFOs and telepathy, even though he did not believe in them. At one point he risked his career and reputation in order to organize a scientific conference on UFO sightings. I simply cannot imagine a similar attitude being sported by Neil deGrasse Tyson.


For another thing, “debunking” strongly conveys the impression that the people who believe in the notion being debunked are simpletons barely worth consideration. Perhaps some are, but I’ve met plenty of really smart creationists, for instance – a notion that would sound to Tyson like the quintessential oxymoron. Which brings me to his second malady (one, again, from which I have suffered myself, and that I’m trying really hard to overcome): intellectual snobbism. People like Tyson (or, say, Richard Dawkins) exude the attitude at every turn, as is on display in the short Colbert video that got me started with this post. The problem (other than that it’s simply not nice) is that snobbism isn’t going to get you converts. It only plays well with your own faithful crowd.


This is because of something that Aristotle realized some 23 centuries ago, and which he explained at great length in his book on rhetoric. Presumably, Neil, Dawkins, and others want the same thing that Sagan, Stephen Gould (another one of my role models), and I want: to engage a broader public on the nature of science, and to widen the appreciation and practice of critical thinking. But Aristotle realized that this goal requires the deployment of three concepts: Logos, Ethos, and Pathos.


Logos refers to the idea that our first priority should be to get our facts and our reasoning right. In the case of Neil’s “debunking” of Spider-Man, yeah, he got the biological facts straight, for all the good that does anyone.


Ethos means character: you need to establish your credentials with your audience. And by credentials Aristotle didn’t mean the fact that you have a PhD (Tyson has one, from Columbia University), but that you are a good, trustworthy person. I can’t comment on the degree to which Neil fits this description, because I don’t know him well enough; but he certainly comes across as condescending in this video and on many other occasions, a character trait that Aristotle would not have approved of. (One more time: I have been guilty of the same before, and I’ve been actively working on improving the situation.)


Pathos refers to the establishment of an emotional connection with your audience. This is something that scientists are actively trained not to do, under the mistaken impression that emotional connection is the same thing as emotional manipulation. But that is the case only if the agent is unscrupulous and manipulative, not if he is acting as a genuine human being. We humans need emotional connections, without which we are prone to distrust whoever is talking to us. In the video Tyson makes absolutely no effort to connect with his audience. Indeed, it isn’t even clear who his audience is, exactly (certainly not fans of Spider-Man!), and therefore what the point of the whole exercise actually was.


So, by all means let us nurture good science communicators, which Neil deGrasse Tyson most certainly is. We do need them. But they really ought to read a bit of Aristotle (oh no, philosophy!), and also relax about the questionable science of movies like Avatar or comic books like Spider-Man.


Speaking of which, let me leave you with the delightfully corny original animated series soundtrack. Try to enjoy it without feeling the urge to “debunk” it, okay?