Welcome!

Welcome to Footnotes to Plato! It began back in August ’15 as a blog on general philosophy, with a large component of philosophy of science. The blog has now moved to Patreon and I hope you will follow me there.

However, the full archive of 354 posts and a whopping 31,414 comments will remain permanently available for free. You will also find here a list of my books, all my technical papers in philosophy, links to columns I wrote for a variety of magazines (Skeptical Inquirer, Philosophy Now, The Philosophers’ Magazine), several downloadable collections of essays, and a number of both public and technical talks I have given. You will also find links to my various online presences (Twitter, Facebook, etc.). These pages will keep being updated as new material becomes available.

I hope you will enjoy this site and that it will help you in your continuing quest for understanding and practicing philosophy.

cheers,

Massimo Pigliucci

(the City College of New York)

Between strident atheism and vanilla ecumenicism

I am a skeptic and an atheist. And now I have to immediately qualify those words. I am a skeptic in the sense that I strive, as David Hume aptly put it, to proportion my beliefs to the available evidence, a concept that Carl Sagan famously turned into "extraordinary claims require extraordinary evidence" in the specific case of pseudoscience.


I am an atheist in the etymologically literal sense of the word: a-theist, without a positive belief in gods. I don't profess to know that there are no gods, but simply that I don't see sufficient evidence or reasons in favor of the notion. Likewise, I am an a-unicornist: I don't believe in unicorns, since they don't appear anywhere in the fossil record or in contemporary zoological catalogues (yes, yes, I know about narwhals, as well as unicorned rhinos).


These qualifications explain why I am often critical of certain segments of the skeptic and atheist communities. I don't think "skeptics" do anyone a favor when they engage in silly hoaxes, and I believe that they (we, really) could benefit from a bit less arrogance and a bit more virtue epistemology. Likewise, I have never been a fan of the so-called New Atheists, whom I find strident in behavior and philosophically ill-informed.


That said, I’m also not on board with what I’ve come to regard as vanilla ecumenicism, an increasingly popular stance that argues that there is not, and never has been, a conflict between science and religion, or philosophy and religion, pace Giordano Bruno, Galileo, and a number of others. A recent example (but only one among many) of such an attitude is an article by Peter Adamson in the LA Review of Books. Adamson is a professor of Late Ancient and Arabic Philosophy, and one whose “history of philosophy without any gaps” books I actually use in my introductory courses. In that particular article, he was favorably reviewing Open to Reason: Muslim Philosophers in Conversation with the Western Tradition, by Souleymane Bachir Diagne.


The gist of the book, and the review, is that – contra popular opinion even among philosophers – the Islamic tradition has always been open to reason and science. I honestly think that’s a welcome corrective, and yet at the same time more than a bit of an overstatement. But my beef here is neither with Diagne’s book nor with Adamson’s review of it. Rather, I take issue with the following statement, which appears right at the beginning of the LARB article:


“One of the most common prejudices we historians of philosophy encounter is the notion that philosophy is somehow incompatible with religious belief. Religion is based on faith, philosophy on reason; religion is rigorously imposed doctrine, philosophy is open-ended inquiry; religion is about believing what you’re told, philosophy about figuring things out for yourself. A moment’s reflection will show you that it must be a little more complicated than that. After all, nearly all philosophers in history – famous and obscure, ancient and modern, Western and non-Western, male and female – have been religious believers.”


Let’s start with the last bit: nearly all philosophers in history have been religious believers. Well yes, and so have been nearly all scientists. But the question is whether they were believers in spite of being philosophers or scientists, or whether the two really did go hand in hand. Yes, Galileo was a Catholic, and yet anyone thinking that he experienced no conflicts between his science and his religion as interpreted by Catholic theologians is either not paying attention or is engaging in some seriously misguided historical revisionism (which some people are, yes, I’m aware).


True, Newton spent more time doing biblical criticism than studying physics. But he also spent more time doing alchemy than physics, which is no good reflection on alchemy. And of course, he is celebrated for his physics, not for his biblical criticism or alchemical studies.


Indeed, insofar as science goes, the best way to summarize the conflict with religion was articulated by physicist Richard Feynman, in his The Meaning of It All: Thoughts of a Citizen-Scientist. There he writes that we can discuss general principles and cherry-pick historical examples and counterexamples all we want, but when push comes to shove, the ideal scientist is always open to changing her mind when new evidence comes in. The religious believer, by contrast, puts faith ahead of reason. Even the many centuries of Christian apologetics are one gigantic attempt to reconcile the “book of nature” with the “book of God.” Church fathers like Basil, Gregory of Nyssa, Augustine, John Cassian, John Chrysostom, Ephrem the Syrian, and Maximus the Confessor believed that the two books tell the same story, and yet whenever religious authors have perceived a conflict between the two, there has been no hesitation about which takes precedence. Yes, Galileo did write – quoting Tertullian – “We conclude that God is known first through Nature, and then again, more particularly, by doctrine; by Nature in His works, and by doctrine in His revealed word.” But we know how he ended his days…


[Side note: it is more than a bit ironic that Feynman, who was famously contemptuous of philosophy, quipping that it is as useful to science as ornithology is to birds, wrote a number of books that are exquisitely philosophical in outlook.]


What goes for science, mutatis mutandis, goes for philosophy, though things there are a bit murkier. Yes, some of the greatest philosophers in history were also theologians, like Augustine of Hippo, or Thomas Aquinas. But most philosophy, and certainly, to my mind, the best, has nothing to do with gods and the like. Huge swaths of metaphysics, ethics, political philosophy, and aesthetics, as well as pretty much all of epistemology, logic, and the many “philosophies of” (science, language, mind, and so forth) have nothing whatsoever to do with religion. Indeed, to bring gods into such discussions would be very rightly frowned upon, as if one were to mention the possibility of supernatural explanations in the discussion section of a scientific paper.


Where religion plays a major role in philosophical discussions, such as the problem of free will, it doesn’t come out very well. The whole notion of “free” (meaning, contra-causal) will is incoherent, and has historically been defended by Christian theologians embarrassed by the problem of evil. Even there, it’s not a good response, since at most it takes care of the problem of human evil (you know, we’ve got free will, so the resulting shit in the world is on us), but not of the twin issue of natural evil (i.e., it doesn’t even begin to explain earthquakes, cancer, and so on; here is a funny rendition of the problem).


Yes, I’m aware that there are arguments (and counter-arguments) for all this. But they are hardly convincing, always feeling like rationalizations in defense of the indefensible. Do I know that there is no God, for a fact? Of course not, see above about unicorns. But as Pierre-Simon Laplace may or may not have told Napoleon when the latter was inquiring about the role of God in the former’s theory of the origin of the solar system: “Je n’avais pas besoin de cette hypothèse-là” (I had no need of that hypothesis). In a similar fashion, we don’t need God in philosophy in order to talk about right and wrong (since Plato’s Euthyphro), or whether the human mind is a computer, or the nature of science, or the structure of language, or the validity of modus ponens, or… you get the point.
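

[Side note: since modus ponens just came up, here is the rule stated explicitly: from “if p then q” and “p,” one may conclude “q.” As a purely illustrative sketch – the theorem name and formalization are mine, not anything from the original discussion – the rule can even be rendered and machine-checked in the Lean proof assistant:

-- Modus ponens as a one-line theorem in Lean 4 (illustrative sketch).
-- Given a proof h of "p implies q" and a proof hp of p,
-- applying h to hp yields a proof of q.
theorem modus_ponens (p q : Prop) (h : p → q) (hp : p) : q := h hp

Note that nothing theological is doing any work here; validity is a purely formal matter.]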


And, I would argue, the most fundamental locus of friction between philosophy and religion is precisely the one singled out by Feynman in the case of science: attitude. An ideal philosopher will follow an argument wherever it leads, while a theologian will impose restrictions to guard his faith, and eventually will in fact use faith as a trump card (or respond to a penetrating objection with some entirely uninformative phrase along the lines of “the will of God is inscrutable,” often accompanied by literal hand waving).


So I find myself navigating the treacherous waters between the Scylla of scientistic atheism and the Charybdis of vanilla ecumenicism. No, religion is not “the root of all evil,” and yes, it is historically true that religious institutions have done a lot of good for humanity, alongside the notorious bad. But equally, let us not lull our critical sense and think that there isn’t something radically at odds between an approach that situates faith as fundamental and another one (be it science or philosophy) that values evidence and logic above all.

Vindicating science by bringing it down, part II

As we have seen in part I, scientists and early philosophers of science adopted an idealized conception of science, which focuses on the successful end result of scientific activity, and in which there is no place for any influence of the social, or indeed, for any of the actors involved in the scientific endeavor. All of that is swept under the carpet. But the fact that the eventual goal of science is to eliminate the social does not imply that social factors have no important role to play in the process. Science, after all, is nothing but the concerted effort of (sometimes not so) humble human brains, none of which was designed to unravel the mysteries of the world on its own.


In the past couple of decades, science has been brought down to earth again by sociologists, cognitive scientists, evolutionary psychologists, and historians. Unfortunately, the opposition between the rational and the social is still besetting the naturalization of science. The backlash against the traditional conception of science, epitomized by the logical positivists and their intellectual heirs, has swung the pendulum in the opposite direction. Still under the spell of the dichotomy between rational and social which we discussed in part I, many science naturalizers have assumed that, as they bring science down to earth, its pretensions will start to unravel.


The truth of the matter, however, is that all beliefs, the true and the false ones, or the scientific and the pseudoscientific ones, have a causal history, involving cognitive and social factors. If we want to understand how people come to believe stuff, even simple and obvious propositions (e.g., dolphins are mammals) are in need of an explanation. Likewise, if we want to understand how scientists have been able to unearth all sorts of true beliefs about the world, we need to understand what kinds of people scientists are, what kind of cognitive strategies they bring to bear on their research questions, what the social organization of science is, and how hypotheses are tested and evaluated within a scientific community.


The development of a cognitively and socially rich account of science has been delayed by the widespread misconception that such an account would compromise the epistemic standing of science. Because of our habit of pitting social and rational explanations against each other, we assume that the intrusion of sociology and psychology into the citadel of science will eat away at its foundations.


At the heart of this opposition between the social and the rational lies an individualist conception of reason, shared by both camps in the science wars. The notion of the social as a contaminant of the rational, to which even social constructivists seem to subscribe, is more indebted to logical positivism than the latter would like to admit. Radical sociologists during the science wars of the ‘90s were led astray by the very same intuition that made the logical positivists allergic to social explanations – only now they were welcoming the opposite conclusion. As philosopher David Hull put it:


“Because science did not possess the ideal characteristics that the ‘positivists’ insisted that it should, knowledge-claims made by scientists had no more warrant than those of magicians, faith healers, and politicians.”


Both camps are wrong. The simple opposition between the rational and the social-psychological explanations goes against the grain of naturalism. Scientific knowledge does not drop out of thin air: it is embodied in real human beings. If our best scientific theories in some way reflect the world out there, this must have come about through the usual perceptual capacities and cognitive operations, with available technological equipment, and in a complex network of social interactions. How else could it have come about?


The sociologists are right that science is a deeply social endeavor, and that all scientific knowledge is in this sense “socially constructed.” No single individual marooned on a desert island, no matter how brilliant, would be capable of finding out any of the significant truths about the universe that we currently possess. Though the history of science has known some solitary geniuses, working in relative isolation from their peers, even they were still engaged in a collective enterprise, in the sense that they were building on the work of numerous predecessors.


The realization that science is a deeply social enterprise, and that scientific consensus is reached through coalition forming and competition, should not surprise us. The question is what particular social organization is exemplified by science, and whether this is conducive to its epistemic aspirations. Scientists are human beings, warts and all. If scientists collectively succeed in finding out significant truths about the universe, while other endeavors have failed in this regard, this must have come about through the particular social dynamics of science.


Many scientists believe that objectivity and impartiality are the cardinal virtues of science, and that bias and prejudice make one unsuitable for scientific work. Although the culture of science rightly encourages these virtues, they are by no means necessary for the success of science. Indeed, a modicum of bias in this or that direction may actually facilitate the progress of science.


It is not a problem that an individual scientist is biased, or emotionally attached to a particular hypothesis. The social organization of science makes sure that these biases will be balanced by others tilting in different directions. A standard example of this is the reorientation of aspects of medical research as a result of feminist epistemological critiques: it is now increasingly acknowledged that, for example, we cannot conduct drug tests solely on a population of (mostly white, middle-aged) men and simply assume that the results can be extrapolated to other human biological populations. Again, Hull:


“The objectivity that matters so much in science is not primarily a characteristic of individual scientists but of scientific communities. Scientists rarely refute their own pet hypotheses, especially after they have appeared in print, but that is all right. Their fellow scientists will be happy to expose these hypotheses to severe testing.”


A desire for fame and success is often viewed as unworthy of a real scientist, the goal of science being truth for its own sake. Although such base motives may indeed compromise one’s scientific work, if allowed to go unchecked, there is no convincing reason why they would stand in the way of significant discoveries. Even spite, jealousy, and the desire to humiliate a rival can result in excellent scientific work, if the competing parties know that they have to abide by certain rules, and will be called out whenever they violate them.


In a good number of cases, social influences are not an impediment to the epistemic ambition of science, but rather a facilitator of scientific progress. Science harnesses some of the baser motives of human behavior in the service of truth, making sure that the interplay of scientists’ individual interests and biases mostly align with epistemic progress.


Darwin may have been right from the start about the fact of common ancestry, but his theory would not have carried the day as swiftly as it did without his indefatigable efforts to enlist allies to the cause and to engage and negotiate with his critics. All the parties in the dispute were trying to enlist nature as their ally, but Darwin of course had one big advantage: nature really was on his side all along. In the long run, therefore, as evidence accumulates and factions wax and wane, the social influences on science will be filtered out, and rightly so.

Vindicating science by bringing it down, part I

What is the role of the social in science? If one consults science textbooks, one will find that the social dimension of scientific knowledge is conspicuously absent. Science is supposed to reflect the way the world really is, independent of our petty human lives. It is, in the classical view, the epitome of a rational endeavor, free from social influences. Of course, science is carried out by human beings, but their individual backgrounds and social lives are simply taken to be irrelevant. What matters are the intellectual merits of a theory, not who conceived it. What matters is the evidence, not who gathered it. This stark contrast between the social and the rational can be found in philosophical accounts of science as well. Because social factors are rendered invisible in the end products of science, many philosophers have underestimated their constructive role in the acquisition of scientific knowledge.


In recent decades, sociologists and historians have tried to bring science back to earth, but many of them have unwittingly bought into the same simplistic opposition. Social influences on science have been relished by its cynical critics and resisted by its admirers, and for the same reason: the fear (or hope) that they would destroy the credentials of science. In a paper I co-wrote with my frequent partner in crime, Maarten Boudry (published in Perspectives in Science and Culture, edited by K. Rutten, S. Blancke, and R. Soetaert, Purdue University Press), we discuss the historical roots of this opposition, culminating in the sorry spectacle of the science wars. This post provides extensive excerpts from that paper; I hope you’ll enjoy it.


When do we feel the need to explain why someone believes something? Not all beliefs held by our fellow human beings appear to produce an epistemic itch. People believe that dolphins are mammals, that the earth orbits the sun, and that World War II ended in 1945, but we rarely wonder how they arrived at such homely truths. Beliefs such as these are just obvious, and no sane person would dispute them. That said, who told you when WWII ended? Where did you acquire the belief that dolphins are mammals, or that the earth goes around the sun? Your sources for these convictions are hard to track down.


Memories of these facts are called semantic by psychologists, to distinguish them from episodic memories, related to things that happen to us. Episodic memories carry a tag with the time, place, and situation in which we acquired them. Not so for semantic memories, likely because doing so would be a waste of brain resources.


Take the belief that coal is black. If we asked you what your reasons are for believing that, you would probably be puzzled. The first answer that comes to mind is: “Why, because it is black, of course!” It doesn’t matter how you came to know that. You could have learned it in any number of ways. Anyone in doubt about the color of coal can quickly retrieve the answer through any number of sources.


Because the truth of such beliefs is obvious, we rarely question how other people acquired them, or how they can justify them. It seems as if such beliefs just drop out of thin air, without much in the way of a causal history.


That said, how do we account for other kinds of beliefs (as held by others, of course)? Beliefs that are false, quirky, idiosyncratic, or plainly irrational produce an epistemic itch. We want to explain how people end up embracing them. Who told him such nonsense? Did he fall for one of those conspiracy theories circulating on the internet?


We resort to special explanations only when something goes wrong. True beliefs that are part of common knowledge are taken at face value, but false and foolish beliefs cry out for an explanation. This is where social and cultural explanations come in. Such explanations, however, are not invoked when we account for true and justified beliefs. Only when rationality breaks down, it seems, is a space opened up for psychological explanations to fill. We seem to think that there is an association between the irrational and the social, but not between the rational and the social.


In the classical view, science is the epitome of reason. It is objective and impartial. It is ruthless in its indifference to what we fear or fancy. When it comes to the content of science, nature has the final say in the matter. Social, political, and ideological influences on science are anathema. When writing science textbooks, and for many other purposes, the social influences on the development of scientific theories can be safely ignored, just as with many of our mundane beliefs about the world. Sure, there is a story to be told about how scientists pooled their efforts to acquire this or that piece of knowledge, who published it first, who convinced whom, and so on. But the details of this story make no difference: an alternative history of science would ultimately have led to the same result.


As a result, especially in the natural sciences, students are simply taught scientific theories as if they had descended from some Platonic heaven. The vagaries of scientific history, the false starts, wrong turns, and dead ends, the protracted controversies between rival views, the forerunners and pioneers of scientific ideas – all of this is rendered invisible.


For a long time, philosophers of science have also treated science in splendid isolation from the social world. Hans Reichenbach, one of the major proponents of logical positivism, taught us to strictly separate the context of discovery from the context of justification. The first deals with the historical origin of a scientific hypothesis, and is of little interest to philosophers trying to understand the logic of science. Philosophers of science should be solely concerned with how a scientific hypothesis, once it appears on the scene, relates to observations, whether it is internally consistent, whether it is falsifiable, and so on.


(next: bringing science down to earth)

How to stop a conversation: with facts and definitions

I really ought to know better, after decades of activism on behalf of science and critical thinking, than to engage in ideologically loaded discussions with friends or family. Indeed, the opening chapter of the second edition of my Nonsense on Stilts: How to Tell Science from Bunk features two rather frustrating conversations I’ve had with a relative (I call him “Ostinato,” Italian for stubborn) and an acquaintance (“Curiosa,” Italian for curious). Neither episode led to either side moving a fraction of an inch away from their initial position, resulting mostly in aggravation and emotional distress on both sides. Still, as I explain in the book, it wasn’t time entirely wasted, since I came to a couple of important realizations while engaging in such discussions.


For instance, from Ostinato I learned that a common problem in these cases is the confusion between probability and possibility. Whenever I would explain why I don’t think it likely, say, that 9/11 was an inside job, or that the Moon landing was a hoax, Ostinato would reply: “but isn’t it possible?” Out of intellectual honesty I would reply, yes, of course it’s possible, in the narrow sense that those scenarios do not entail a logical contradiction. But they are extremely unlikely, and there really aren’t sufficient reasons to take them seriously. Ostinato clearly thought he had scored a major point by extracting my admission of logical possibility, but such glee reflects a fundamental misunderstanding not just of how science works, but of how common sense does as well. Is it possible that you will jump from the window and fly rather than crash to the ground? Yes, it is. Would you take the chance?


As for Curiosa, she taught me that a little bit of knowledge is a dangerous thing. I nicknamed her that way because she was genuinely curious and intelligent, reading widely about evolution, quantum mechanics, and everything in between. Reading, yes; understanding, no. She took even extremely marginal disagreement among scientists as, again, evidence that it is possible that what people claim is a well-established notion (evolution, climate change) is, in fact, false. Again, yes, it is possible; but no, finding the occasional contrarian scientist (often ideologically motivated, as in the case of anti-evolution biochemist Michael Behe) is absolutely no reason to seriously question an established scientific theory.


You would think that Ostinato and Curiosa had taught me a good lesson, and that I wouldn’t fall for it again. Sure enough, recently a close relative of mine wanted to engage me “as a scientist and a philosopher” in a discussion of chemtrails and 9/11 trutherism, sending me a long list of the “reasons” she believed both. I respectfully declined, explaining that my experience had shown me very clearly that nothing good comes out of such discussions. People talk past each other, get upset, and nobody changes his mind. My relative was taken aback by my refusal, but I felt pretty good. Part of Stoic training is the notion that one does not control other people’s opinions, motivations, and reasoning. It is okay to try to teach them, within limits (and I do: that’s why I literally teach courses on this stuff, and write books about it), but failing that, one just has to put up with them.


And yet, Stoicism also reminds me that I ain’t no sage, and that I am liable to slip back at the next occasion. Which I did, a couple of days after Thanksgiving! This time I was having dinner with someone we’ll call Sorprendente (Italian for surprising; the reason for the nickname will become apparent in a moment). She is a very intelligent and highly educated person who, moreover, is involved in a profession that very much requires critical thinking and intellectual acumen.


Imagine then my astonishment when I discovered that Sorprendente flat out denies the existence of a patriarchy, both historically and in contemporary America. I care enough about this sort of thing that I immediately felt the adrenaline rush to my head, which meant – unfortunately – that I had to fight what I already knew was an impossible battle: to explain certain things to Sorprendente without losing my temper. Anger, as Seneca famously put it, is temporary madness, and should not be indulged under any circumstances. Let alone when you are trying to convince someone you know of a notion that she is adamantly opposed to.


This post isn’t about convincing you that we do live in a patriarchal society. If you don’t think so already there probably is little I can do in a blog post to change your mind. Besides, there are plenty of excellent resources out there (like this one; or this one; or, if you are more practically minded, this one). Rather, I want to reflect on a new (to me) strategy deployed by Sorprendente, a strategy that I didn’t expect in general, and certainly not from someone who very much relies for her job on using the two concepts she dismissed at dinner with me.


Said two concepts are: definitions and facts. When Sorprendente admitted that most positions of power in our society are held by men, I made the comment that that’s part of the definition of a patriarchy. Indeed, here is how Merriam-Webster puts it:


“Patriarchy (noun). Social organization marked by the supremacy of the father in the clan or family, the legal dependence of wives and children, and the reckoning of descent and inheritance in the male line. Broadly: control by men of a disproportionately large share of power.”


While, thankfully, we are slowly moving away from the first group of markers of a patriarchy (in the West and in some other parts of the world, certainly not everywhere, by a long shot), the second one (the bit after “broadly”) very much applies, even according to Sorprendente herself.


And yet she curtly informed me that “definitions are conversation stoppers.” Wait, what? Definitions of words are, it seems to me, crucial to any kind of discourse. Yes, it is true that dictionaries are both descriptive and prescriptive. They are descriptive in the sense that if the common usage of a word changes they will update accordingly; prescriptive because they tell us what currently counts as correct usage. “It’s just semantics” is one of the most irritating responses one can get in the middle of a discussion. Of course semantics (and definitions) are important. If we don’t agree on the meaning of the words we use, we are talking past each other, with no possibility whatsoever of understanding. All I was trying to say was that – according to Sorprendente’s own admission – the facts on the ground correspond to the definition of a patriarchy, which means that it becomes inherently contradictory to agree with those facts and yet insist on denying that we live in a patriarchy.


Speaking of facts. Apparently, bringing those up also is a conversation stopper, and it is therefore highly impolite. Here things got truly bizarre. To begin with, it was Sorprendente who brought up a fact, in the form of a statistic: she claimed, as partial evidence that women are not oppressed, that their average life span is 10 years longer than men’s. This is biology, one of my areas of expertise, and the facts can readily be checked.


First off, the 10-year figure is false. The true figure, as it happens, varies from country to country: 6.7 years in the US, a whopping 12 in Russia, and a mere 0.1 in Bangladesh. Second, part of the gap is due to biological reasons: women have two copies of the X chromosome, while men have only one (because we have the tiny Y instead). As a result, men are exposed to hundreds more genetically influenced diseases than women, and their mortality is higher, both early in life and throughout. Apparently, however, bringing up these obviously pertinent facts on my part was a rude conversation stopper. Translated: I should be free to bring up whatever false information I want, but you are not allowed to contradict me on the basis of factually correct information. Remember that Sorprendente’s job deals with the accurate verification and interpretation of facts. Oh boy.


Regardless, why would she think that a longer life span is proof that we don’t live in a patriarchy? (Indeed, by her logic, since women have the statistical advantage, we should conclude that we live in a matriarchal society.) Because women have been, and to some extent still are, “shielded” from dangerous jobs, like joining the military, which is an “obvious” example of concern on the part of men. No patriarchy. QED.


This makes little sense on a number of levels. A military career has always (since the time of the ancient Greeks) been considered a manly job precisely because women have been thought of as inferior or inadequate for that sort of activity. This is exactly what one would expect in a patriarchy. Moreover, it is likely true that most men “care” for women and want to protect them. This is in no way incompatible with the notion of sexism; indeed, being patronizing toward someone who doesn’t actually need to be protected is one of the symptoms of sexism and other discriminatory attitudes. Not to mention that women are now increasingly accepted in the military. This is true both for the US (average life span gap 6.7 years) and Bangladesh (average life span gap 0.1 years). It doesn’t take a rocket scientist to figure out that this is simply not a factor in explaining why women live longer than men.


Ah, said Sorprendente, but if we live in a patriarchal society, how do you explain that there are millions more men than women in prison? This, I tried to respond, actually confuses two different issues, since the majority of men in American prisons are minorities, particularly blacks and Hispanics. The differential is the result of a combination of racism, poverty, and lack of education, and therefore of job opportunities. It appears, again, to have nothing to do with the issue of patriarchy.


Very clearly, I wasn’t getting anywhere, and both Sorprendente and I were becoming increasingly upset. At which point a thought suddenly struck me and I asked: are you by any chance into Jordan Peterson? Yes, came the response, I think he makes some good points. And that, my friends, was the real conversation stopper.

Why Alex Rosenberg is wrong just about everything

Philosophy is my second academic career. My first one was in science, evolutionary biology to be specific. Depending on how you look at it, this makes me either unusually competent in two normally widely distinct areas of academic scholarship, or barely making the passing grade in both. Be that as it may, I have made it a personal hobby to observe my new profession from the outside, as much as that is possible, sort of like an anthropologist looking into a different yet sufficiently familiar culture.


One of the things I’ve noticed is that philosophers are unusually critical of their own field, with a number of prominent ones, both now and historically, actually arguing that it should be dismantled, usually in favor of science (or linguistics). I will not get into that debate here, as I’ve covered it in detail before.


Another thing I’ve noticed is the high frequency of colleagues who are fascinating for being very smart, well regarded in the field, and yet – in my admittedly non-humble opinion – completely wrong. Perhaps the quintessential example is David Chalmers, he of “philosophical zombies,” “hard problem of consciousness,” “singularity,” “mind uploading,” “panpsychism,” and similar inane notions. But this post isn’t about David.


It’s about Alex Rosenberg. Alex is on the faculty at the prestigious Duke University in North Carolina, and someone I think should get a medal (together with Chalmers, of course) for the highest number of wrongheaded papers in a philosophical career. I met him a few years ago during a two-day conference on “Moving naturalism forward,” organized by cosmologist Sean Carroll. The conference was fun, but Alex kept trying to convince us of a notion that he called “happy nihilism,” according to which the universe is devoid of meaning (of course it is, meaning is a human construct), free will doesn’t exist (of course it doesn’t, if one uses the term in the contra-causal sense), and yet, somehow, we can still decide to take all of this on board and be happy.


Setting aside the devastating criticism Alex got at the conference from Dan Dennett, Owen Flanagan, Terrence Deacon, and others, this is also the same bleak picture of the world he presented in his dismal The Atheist’s Guide to Reality, which I reviewed for The Philosophers’ Magazine. Here is a taste of my thinking at the time:


“As a former scientist and now philosopher, I have chastised some of my colleagues for their scientistic attitude. … Thanks to [Rosenberg], I can no longer be accused of fighting a straw man. Rosenberg’s attempt is valiant and will give people much to think about. Except, of course, that according to Rosenberg we cannot really think such things because scientism ‘says’ that chunks of matter cannot possibly produce insights about anything at all, on penalty of violating physicalism.”


Never mind that such statements are obviously self-contradictory. What was I doing while reading Alex’s book if not thinking about what he wrote? And what was he doing while writing the book? These are all illusions, claims Alex, apparently using the word “illusion” in a novel and profound way that the rest of us are unaware of. I continued my review:


“Take Rosenberg’s denial of the existence of conscious decision-making. Consciousness for him is an epiphenomenon of the brain’s activity. … His major piece of evidence? Benjamin Libet’s experiments in cognitive science. … We are informed [that] ‘consciousness is probably too big a deal not to have been organized by natural selection to solve some design problem or other, perhaps several. Exactly what its functions are, what design problem it solves, neuroscience has not yet figured out.’”


Seriously? Let us set aside that Alex completely misinterprets the implications of Libet’s famous experiments, even contradicting Libet’s own interpretation. He admits that natural selection must have evolved consciousness – which depends on brain structures that are exceedingly metabolically costly – for some reason, but he can’t think of one. Hmm, let’s see, how about the ability to reflect on our actions, make deliberate decisions, plan things ahead? Oh right, those are all illusions. Naturally. Me again:


“For Rosenberg there is no free will, morality, meaning, aboutness and so on because, you see, ‘the physical facts fix all the facts.’ We are never told exactly what this slogan actually means. Well, I’m a big fan of physics, but last time I checked, it didn’t, for instance, ‘fix’ the fact that 2+2=4.”


Nor does physics fix anything at all in the rest of mathematics. And in logic. Continuing the review:

“Rosenberg thinks that economics, the social sciences (not to mention literature, the arts, and his own field of philosophy) are all ‘stories’ that may entertain us, but that should by no means be taken seriously. He doesn’t seem to realize that science – not to mention his very book – also tells stories … because that is the way human beings communicate knowledge and achieve understanding. Science is the right type of story if you want to know about cosmology, but not if you want to learn logic.”


Or history. Or art. I concluded:


“Rosenberg’s scientistic nihilism is analogous to radical skepticism about reality. … It’s thought provoking, there is no scientific evidence that can possibly rule in its favor or against it, and it is best promptly forgotten so that you can get back to thinking about the things that really matter to you.”


Alex, impervious to criticism (well, “he” is only a bunch of subatomic particles without will or aboutness, so – to be fair – how could he change his mind, especially given that the latter is an illusion?), has continued in the same vein in recent years. Just in the last few weeks I’ve read two more articles by him that finally prompted me to write this essay.


The first one, published in The Verge, is actually an interview conducted by Angela Chen, in which Alex “explains” how our addiction to stories keeps us from understanding history. The interview is about (but wait, nothing is about anything!) his book How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. First problem: whenever I hear the words “the neuroscience of…” I instinctively reach for my gun (fortunately, I’m a quasi-pacifist, and I don’t own guns). That’s because nowadays a lot of nonsense is written in the name of neuroscience, unfortunately.


The main thrust of Alex’s argument is that neuroscience undermines what is often referred to as our “theory of mind,” the ability to guess other people’s thoughts and motivations. Since historians deploy – without realizing it – a theory of mind whenever they talk about this or that historical figure’s motivations for acting one way or another, their theorizing is made hopelessly obsolete by the modern science of the brain.


Except that Alex is making an astounding mistake here, very similar to the one made, for instance, by fellow atheist Sam Harris in his The Moral Landscape (see my review here). He is mistaking a mechanistic explanation of X for the explanation of X, apparently forgetting (or simply outright denying) that explanations – which are human constructs, let us not forget – can be given at different levels, and in different language, depending on how useful they are to the target recipients, i.e., other human beings.


Let me give you an analogous example to show just how bizarre Alex’s claim that neuroscience does away with historical explanations really is. Imagine we were interested in the “neural correlates,” as cognitive scientists call them, of mathematical problem solving. We could stick someone – even a mathematician – into an fMRI machine and find out which areas of her brain light up when she is engaged in simple or complex mathematical thinking, from solving a basic equation to proving Fermat’s Last Theorem.


Now, we will surely find some such neural correlates. We have to, since everything we do, and certainly any kind of higher, conscious thinking, has to be done by way of engaging one part or another of our brains. Otherwise, it would be magic.


But now imagine that our neuroscientist completes his experiment, gets the mathematician out of the fMRI machine, and gingerly informs her that mathematicians are no longer needed, because neuroscience has discovered which areas of the brain they use to solve mathematical problems. Crazy, right? Well, it’s no different from Alex’s reasoning for getting rid of historians, or Harris’ “argument” (I’m using the word charitably) for concluding that science, and neuroscience (which just happens to be his own field) in particular, can now answer moral questions. Ethicists can go play golf.


A few weeks later, Alex did it again! This time in an article he penned himself for 3:AM Magazine, entitled “Is neuroscience a bigger threat than artificial intelligence?” Oh boy. It’s the same basic idea that he has been peddling since The Atheist’s Guide to Reality, though – as in The Verge article – this time it isn’t physics that “fixes all the facts,” it is neuroscience that answers all the questions.


After acknowledging the (alleged, and I think way overblown) threat posed by future advanced AI to humanity (you know, the Singularity again, Terminator and that sort of thing), Alex informs us that the real existential downfall of humanity comes from the research of four Nobel-winning neuroscientists: Eric Kandel, John O’Keefe, Edvard Moser, and May-Britt Moser. What have they done?


“Between them they have shown that the human brain doesn’t work the way conscious experience suggests at all. Instead it operates to deliver human achievements in the way IBM’s Watson does. Thoughts with meaning have no more role in the human brain than in artificial intelligence.”


By now you have surely guessed that this is, again, about the alleged failure of the theory of mind, and that, once again, Alex is simply confusing different levels of explanation, an elementary mistake that you would think a trained philosopher simply wouldn’t make.


The fascinating thing is that Alex actually acknowledges that there is quite a bit of evidence for the theory of mind:


“Several sources of evidence suggest that we have an innate mind-reading ability more powerful than other primates. It’s an ability to track other people’s actions that is triggered soon after birth. Child psychologists have established its operation in pre-linguistic toddlers, while primatologists have shown its absence in other primates even when they exceed infants in other forms of reasoning. Social psychologists have established deficiencies in its deployment among children on the Autism spectrum. fMRI and transcranial magnetic stimulation studies have localized a brain region that delivers this mind-reading ability. Evolutionary anthropology, game theory and experimental economics have established the indispensability of powerful mind reading for the cooperation and collaboration that resulted in Hominin genus’s rapid ascent of the African savanna’s food chain.”


None of this matters, because neuroscience has (allegedly) “revealed” to us that the theory of mind is “quite as much of a dead end as Ptolemaic astronomy.” Why? Because Kandel and colleagues have shown that if you look into the brain you won’t find beliefs, desires, or reasons, but only specific, dynamic neural pathways.


No kidding, Sherlock. That’s because what we call beliefs, desires and reasons are instantiated in the brain by way of specific neural pathways. The neurobiological level is more basic – but, crucially, no more true – than the psychological one. They provide complementary, not competing, explanations of the same phenomenon. One explanation is more useful to biologists and neuroscientists, another one to psychologists, historians, and art critics, among others.


It’s like the much-abused and misunderstood example of the chair in which you may be sitting at this particular time. Physics tells us that said chair is “really” just a collection of quarks, interacting in the way prescribed by the fundamental laws of nature. This is certainly the case, but by a long shot not the whole picture. Your chair is also “solid” at the level of analysis pertinent to human beings who wish to sit down in order to read a blog post, not to mention those other human beings who designed and built the chair itself. The chair is most definitely not an illusion just because it can (usefully, depending on the context) be described in different ways. Explanatory complementarity, not competition.


A side note, as a biologist, on Kandel et al.’s indubitably scientifically fascinating work: it was done on rats, because the pertinent experiments are too invasive and unethical to be conducted on human beings. With his usual braggadocio, Alex informs us that this doesn’t matter at all:


“Of course you could argue that what Nobel Prize winning research shows about rats is irrelevant to humans. But you’d be flying in the face of clinical evidence about human deficits and disorders, anatomical and physiological identities between the structure of rat and human brains, and the detailed molecular biology of learning and information transmission in the neuronal circuitry of both us and Rattus rattus, the very reasons neuroscientists interested in human brains have invested so much time and effort in learning how rat brains work. And won Nobel Prizes for doing it.”


I’ve got news for Alex: while, again, Kandel et al.’s research is most certainly important, enough to win the Nobel, translating things from rats to humans is definitely not that obvious or straightforward. It is simply false that rat and human brains have a large number of anatomical and physiological identities, as the perusal of any introductory book on mammalian anatomy will readily confirm. Heck, our brains are substantially different from those of higher primates like chimpanzees and bonobos, which is a major reason we need to be careful when we extrapolate from the latter (let alone rats) to humans. For instance, we have little to go by, in terms of comparative brain anatomy and physiology, to explain exquisitely and crucially human traits like language (not just communication) and iterative cultural evolution. Take a look at this book by my colleague Kevin Laland to appreciate just how careful biologists (as distinct from some philosophers) are when it comes to interspecies comparisons.


Don’t get me wrong. Alex Rosenberg is a really smart guy, and his misguided writings are necessary in order to sharpen our thinking about all sorts of matters. After all, the British Royal Society awarded physicist Fred Hoyle (the author of the steady state theory in cosmology, which for a while rivaled the big bang theory) a medal for the highest number of wrong ideas proposed in a scientific career. This was not an example of British sarcasm; they meant it in all seriousness, as Hoyle’s theories have arguably played an important role in advancing cosmology. Perhaps we should establish a similar prize in philosophy. I have a couple of candidates in mind…

On the crucial differences between private and public forgiveness


(Stolen Generation apology, Adelaide, Australia, 13 February 2008)

In the aftermath of the Apartheid government in South Africa, a Truth and Reconciliation Commission (TRC) was formed to help the country move forward while acknowledging and attempting to redress the injustices of the past. Not everyone was on board with the project, but it turned out to be a success in terms of helping to heal the nation. Of the 7,000 individuals who applied to the commission for amnesty, about 10% were granted it, and the commission proposed a wealth tax to help fund reparation programs. (The South African government, however, never followed through.)


This was not the first TRC, nor would it be the last. An earlier attempt was made by Bolivia in 1982, with its National Commission of Inquiry into Disappearances, which brought together a variety of sectors of society after the end of the military rule there. The very first TRC was the one established in Uganda in 1974. Argentina followed in 1983, Chile and Nepal in 1990, El Salvador and Germany in 1992, Guatemala in 1994, Canada in 2006, and so forth.


Priscilla Hayner in Unspeakable Truths: Transitional Justice and the Challenge of Truth Commissions (Routledge, 2002) defines a TRC in this manner:


“A truth commission (1) is focused on the past, rather than ongoing, events; (2) investigates a pattern of events that took place over a period of time; (3) engages directly and broadly with the affected population, gathering information on their experiences; (4) is a temporary body, with the aim of concluding with a final report; and (5) is officially authorized or empowered by the state under review.”


I’m telling you all this because of a brilliant talk I just heard at City College, entitled “The Broken Promise of Public Forgiveness,” delivered by University of Texas-Austin PhD candidate Simone Gubler. Simone took as her starting point the formal apology proffered by the Parliament of Australia to the indigenous people of that continent, back in 2008. The apology was delivered by Prime Minister Kevin Rudd, who also asked the indigenous people for forgiveness on behalf of the nation.


And here is where things, according to Simone, got problematic. You see, a good argument can be made that forgiveness is an essentially personal process, not one that makes sense at the level of public institutions. Setting aside the obvious observation that the current non-indigenous inhabitants of Australia did not perpetrate the original crimes against the indigenous ones (though, of course, they still take advantage of the aftermath), and setting further aside the fact that groups cannot forgive (only individual members of such groups can), there is an obviously imbalanced power dynamic at play here. Asking for forgiveness at that level in a real sense imposes an implied demand on the other side, along the lines of “hey, I’m being so nice to ask you, and you are going to refuse?”


Individuals are far less likely to feel that pressure. If my partner betrays me and she asks for forgiveness I may or may not grant it. It’s up to me, and it is not a given that I will grant it. But when we scale up from the individual to a social group the dynamics change dramatically, according to Simone, so that forgiveness is no longer about contrite individuals who have come to agree that what they did is wrong, but rather about a political (possibly, though not necessarily, cynical) move in the public arena, meant to elicit a very specific response.


I must admit that I was rather skeptical of Simone’s approach when she outlined what she was going to argue for at the beginning of her talk, but it didn’t take much to convince me that she is right. And then Q&A time came, and my colleague Kate Ritchie upped the ante. She suggested that what Simone said about forgiveness also goes for official apologies. They, too, are something that makes sense at the individual level, but not so much at the social one. And apologies, too, can be given by the wrong person, on behalf of groups who may not agree, used as a power play, and delivered because of cynical calculations. Even when not cynical in nature, both Simone and Kate agreed, requests for forgiveness as well as apologies quickly become empty. Simone mentioned the institution of a national “Sorry Day” in Australia, which, while probably initially well intentioned, soon turned into a rote, empty gesture, particularly since little or nothing is being done in the meantime to actually improve the lives of indigenous people.


Once again, my initial thought was that surely Kate’s point is problematic. But nope, a few seconds of reflection revealed to me that she was right.


But hold on, then. I began this article by talking in positive terms of the notion of Truth and Reconciliation Commissions, and now I’m agreeing with our guest speaker (it was Philosophy Day, by the way) that officially asking for forgiveness, and even issuing public apologies, is problematic. Doesn’t reconciliation imply apologies and forgiveness?


Probably, again, at the individual level, but not the public one. Consider two contrasting situations. The first one is the case mentioned above of my partner cheating on me. Let’s say that she apologized and asked for forgiveness. While I’m not mandated to grant it, it would seem that if I agree to reconcile, to get back together and resume our life as a couple, I have at least implicitly accepted her apology and forgiven her. Without that, life as a couple would be re-established on very shaky foundations indeed.


The second scenario is that of the CEO of Exxon-Mobil apologizing to the public for yet another environmental catastrophe caused by the greedy practices of his corporation. Again setting aside the degree to which it makes sense for a single individual to apologize on behalf of a large anonymous entity which he certainly does not control beyond a limited extent, what I – as a member of the public – expect from Exxon-Mobil is three things and three things only: (i) an acknowledgement of the facts; (ii) some concrete suggestion on how the corporation can repair the damage; and (iii) reasonable assurances that whatever caused the problem will not happen again in the future. Both apologies and forgiveness are entirely beside the point.


The bottom line from all the above is that there is an important difference between the individual and social levels at which concepts like apologies and forgiveness operate. Neither Simone nor yours truly denies that it makes perfect sense for an individual to apologize to another for some wrongdoing. She also agrees (and so do I) that it makes sense for an individual to ask for forgiveness, so long as it is understood that this is not an imposition on the other party, who may or may not grant it. Yet, as we have seen above, both concepts are problematic when scaled up to the level of social groups. If this is true, why do we do it, and how could we do otherwise?


I believe one source of the particular kind of mistake we are talking about is Plato. In the Republic he famously set out to investigate what makes for a just person. His strategy was to scale things up and ask first what makes for a just state (the Republic of the title). The explicit assumption was that one can go back and forth between the two levels. The ideal Republic will be guided by reason (in the form of a ruling class of philosophers), who will direct the other two components (the soldiers-auxiliaries and the regular folks-producers). Likewise, the ideal human being has a tripartite soul, and is well advised to put her rational soul in control of the spirited and appetitive ones.


Yet, most of the ancients rejected this approach, making a clear distinction between individuals and society. Both the Stoics and Cicero talked of ethics (from the Greek êthos, a word related to our idea of character) as distinct from law. Ethics (and morality, from the Latin moralis, which is how Cicero translated the Greek êthos) pertains to the individual and her character; law pertains to how we regulate things in society. A just individual is not the same as a just state. At the individual level the Stoics considered justice a character trait, having to do with treating other people fairly and with respect, but not necessarily equally (e.g., I don’t treat my daughter in the same way as a stranger, even though I try to show respect for everyone); by contrast, at the societal level a just state is one based on just laws, where everyone is treated equally, regardless of personal status.


Simone’s suggestion, then, can be reframed as a call for going back to a meaningful distinction between ethics and law, the first functioning at the individual level, the second at the societal one. It is within this framework that it makes perfect sense to say that apologies and forgiveness have an important role to play in interactions between individuals, while at the same time maintaining that they are problematic between groups. So politicians and corporate officers can keep their apologies, and they will not get our forgiveness. What they need to do instead is to acknowledge wrongdoing and put in place workable procedures to redress injustices.

The problem with (some) philosophy and metaphysics of perception

Does the world exist independently of any given human mind? Most of us – including many philosophers – would answer yes, and even find the question itself more than a bit weird. But of course it is possible that we live, say, in a simulation imposed on us by external entities, à la The Matrix. Or perhaps we are all just instantiations of a single cosmic mind, which we could simply refer to as God.


This, however, is not what this essay is about. Let us assume for the sake of discussion that the world does exist independently of the human mind. The next, and somewhat more interesting, question is: how do we perceive such a world? In the philosophy and metaphysics of perception one position is referred to as naive realism, the notion that we can have, and usually do have, veridical perceptions of the world. What you see is what you get, so to speak.


I think this position is rather hopeless, and yet it was vigorously defended by a colleague of mine at a recent colloquium at City College. Umrao Sethi, Assistant Professor of Philosophy at Lehman College in New York, gave a talk entitled “Sensible Over-Determination.” In the following I wish to recount an interesting exchange Sethi and I had during the Q&A following her talk, because it puts two views, not just of the philosophy of perception but of philosophy itself, in stark contrast with each other. Please understand that I am not singling out Sethi, but simply using her particular talk to make a more general point about (some) contemporary philosophy.
First, though, here is the abstract of Sethi’s talk. It’s rather technical, so I will explain the relevant bits immediately below:


“I argue that sensible qualities are ‘ontologically flexible’ – they can be instantiated either in virtue of having material bearers or in virtue of being perceived by minds. Given that these qualities have disjunctive conditions on instantiation, a single instance of such a quality can be over-determined – that is, have its existence simultaneously guaranteed both in virtue of having a material bearer and in virtue of being perceived. I make use of this metaphysical insight to show that, contrary to received wisdom, naïve realism is compatible with the possibility of qualitatively matching hallucinations. Both hallucinations and ordinary perceptions acquaint us with instances of the same sensible qualities. While the instances in hallucination are mind-dependent, those in veridical perception are not. The latter are over-determined, and, thus, mind-independent – they can continue to exist unperceived, because, in addition to the minds that perceive them, their existence is guaranteed by the material objects that are their bearers.”


The two key terms here are “hallucinations” and “sensible qualities.” What Sethi was arguing, among other things, is that hallucinations do not present a problem for naive realists. But why would they? The problem goes all the way back to Descartes and his famous thought experiment based on radical doubt. Descartes realized that he could be mistaken about a lot of things, particularly his sensorial perceptions about the world. We all know, for instance, that we can have dreams or hallucinations during which, for a time at least, we cannot tell apart physical reality from what is being conjured up by our minds. If this is the case, Descartes thought, then we can never be absolutely sure of any aspect of our alleged knowledge of the world.


His solution to the problem, famously, was his cogito argument: I may be mistaken about the external world, and even about mathematical or logical truths (because of systematic biases in my reasoning). But there is no way I can be mistaken about the fact that I am a thinking being of some sort. Cogito, ergo sum: I think, therefore I am.


Descartes’ was the last great attempt in philosophy to establish certain foundations of knowledge. And it failed. Sure, I know that I think, but – contra Descartes – that’s pretty much all I will ever know with certainty. There is no way to rebuild knowledge of mathematics or logic, let alone of empirical reality, from that meager starting point. Most philosophers since have therefore accepted the notion that our knowledge is provisional, and that we could, in principle, be mistaken about all sorts of things.


Not so naive realists, who cling to the hope that we get – most of the time – “veridical” (i.e., true) impressions of the external world via our senses. How so? One path for the naive realist is to reject the argument from hallucination, and show that we can always tell the difference between what our minds conjure and what we perceive about the world.


One way to do this is simply to deny that hallucinations ever feel exactly the same as impressions from the external world. I’ve never had a hallucination, but the literature on the subject, it seems to me, is pretty clear: at least some of the time, hallucinations (spontaneous, drug-induced, or especially those induced by sensory deprivation) are subjectively indistinguishable from the real thing. In other words, the problem can’t just be swept under the rug.


Sethi conceded this point, at least for the sake of discussion, but insisted that we can nevertheless tell the difference. This is where sensible qualities come in. The quintessential one, the one she used the most during her talk, and the very same one that sparked our little debate during the Q&A, is color.


Consider a red apple. You may hallucinate a red apple, and “perceive” redness associated with the apple, even though there is no actual apple out there. Or you may perceive redness in the apple because there is, in fact, a red apple that you are looking at. Sethi’s point is that we can tell the difference between these two cases because the second one is “over-determined,” meaning that there are two reasons we have the first-person experience of redness in the case of real objects: (i) because our mind is conjuring one (just like in the case of the hallucination), and (ii) because there is a red apple out there (unlike the case of the hallucination). It is this over-determination that implies mind-independence, according to Sethi.


The problem is – as I pointed out in the Q&A – that “redness” is actually not out there (though the apple may be). We know quite a bit about the physiology of color perception, and we understand that what is out there are the surface properties of the apple, as well as the characteristics of light at certain wavelengths that can be perceived by the human eye. “Color,” however, results from the specific interaction between those external physical properties and the specific, evolved, sensorial apparatus of a human being. That apparatus includes of course our eyes, with their anatomical and cellular structure, the optic nerve that connects to the brain, and a number of brain systems devoted to the reception and internal representation of what we perceive. Color is not out there, and it is not mind-independent. Ever.


We know this from a variety of sources, including the fact that some human beings (like yours truly) are color-blind (partially, in my case), so that they perceive the exact same external object differently. We also know that other animals perceive a much narrower range of colors (bulls, for instance, cannot see red: when they get mad at someone waving a red cloth in front of them they are reacting to the movement, not the redness of the cloth). Yet other animals perceive colors that we cannot (bees, for example, see ultraviolet, which is invisible to us).


So what modern science tells us is that the human mind creates a virtual simulation of external reality all the time, and that the veridicity of such simulation depends not just on whatever is out there, but on the details of the human perceptual and cognitive systems. We live in a perpetual Matrix of our own creation, so to speak. That is why we cannot, in fact, always tell the difference between hallucinations and the real world, even if a real, mind-independent world does exist. Descartes was right, and naive realists are wrong.


Here is where things got interesting outside of the narrow scope of an academic talk in a philosophy department. When I pointed out the above to Sethi, her response was that she does not accept the physiological account! Science, she said, is itself based on metaphysical assumptions, and one is free to reject such assumptions. She evidently chose to reject them.


But hold on a second. Yes, science takes certain metaphysical assumptions on board without examination, like the very fact that reality is mind-independent. But I thought (naively, as it turns out!) that a contemporary philosopher is best served by taking very, very seriously whatever science tells us about whatever issue we are discussing philosophically. In this case, to ignore modern physiology and brain science seems hazardous to say the least.


Curious, I asked Sethi why she chose the anti-science path. In other words, what were her metaphysical commitments? Her answer floored me, but also gave me an interesting insight into my own profession. She said that if she went along with standard physiology she would have to agree that consciousness is a biological phenomenon rooted in the brain and how it interacts with the external world.


Again naively, I said, obviously, what else could it be? This is where she mentioned the so-called “hard problem” of consciousness, made famous by NYU philosopher of mind David Chalmers. Here is how the problem is summarized in the excellent entry in the Internet Encyclopedia of Philosophy:


“It is the problem of explaining why there is ‘something it is like’ for a subject in conscious experience … The usual methods of science involve explanation of functional, dynamical, and structural properties – explanation of what a thing does, how it changes over time, and how it is put together.  But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science.”


Except that I don’t think one can “meaningfully” ask that at all. I have made the argument that the so-called hard problem is in fact a category mistake. It confuses scientific explanation with first person experience. We may (possibly will) arrive at a complete scientific explanation of consciousness (we are nowhere near, at the moment, but we are making some progress). When we do, however, we will still be unable to feed that explanation into a human mind and have that mind experience, say, redness.


The reason for this is no mystery at all, and does not point to any alleged limitation of scientific explanations. It’s the same reason why we can have a complete understanding of, say, how to ride a bicycle, and yet immediately fall down as soon as we mount one for the first time. Understanding derives from something that philosophers call “knowledge-that,” while the skill required to properly ride a bike is an instance of “knowledge-how.” The two are distinct, the first being conceptual, the second literally embodied in our biological beings. There is absolutely nothing missing from our understanding if we can’t also ride. Confusing the two is a category mistake analogous to being shown the buildings, faculty, students, and administration of Oxford University and still asking, “yes, but where is the university?”


The reason I went into some detail on all of this is that I think Chalmers-style philosophizing is what gives philosophy a bad name. Let me be clear about this: I am on record arguing that philosophy is not the same thing as science, and that not all meaningful philosophical questions have scientific answers (e.g., how should we think of personal identity? or: is modus ponens a kind of valid deductive inference? or: what is a cause?). I am, in other words, not advocating scientism.


But I am advocating the notion that philosophers take science seriously and refrain from inventing questions just because they wish to ignore science and convince themselves that they can go “deeper.” The kind of analytic metaphysics done by Chalmers (and Sethi) does not serve philosophy well, I think, as it ignores or dismisses science at its own peril. Some philosophers have been pushing an alternative approach, which they call “scientific” metaphysics, though I think it would be best described as science-informed metaphysics.


Philosophy, as I have argued recently, does make progress, and one example is the acknowledgment of the failure of the rationalist program in epistemology, of which Descartes was the last, valiant defender, and which is in a sense the reference point of analytic metaphysics. Let us then count Descartes’ failure as an example of progress, and not push philosophy back four centuries.

Philosophy as the evocation of conceptual landscapes – part II

In the first installment of this two-part series we explored different concepts of what “making progress” may mean for disciplines such as science, math, logic, and philosophy. I then suggested that philosophy makes progress because it explores what Nicholas Rescher has called “aporetic clusters,” i.e., families of alternative solutions to a given particular philosophical problem. I have advanced a similar idea in the past, suggesting that philosophy is in the business of discovering, refining or discarding such solutions as they are “evoked” (i.e., brought into objective existence by the human mind) within the context of a problem. This all sounds very theoretical, however, so let’s take a look at some empirical examples of these aporetic clusters found within specific philosophical conceptual landscapes.


Is there any evidence that philosophy progresses in the way described so far? I think so, and it comes at least in part from a landmark paper by Bourget and Chalmers, published in 2013. In it, they explore quantitatively what professional philosophers think of a number of prominent issues and positions within their own discipline. Even though Bourget and Chalmers did not carry out their study while informed by the ideas of aporetic clusters and progress, their data is suitable for the current discussion. It also represents a rare opportunity to systematically assess the views of an entire profession, the sort of thing that would probably be useful also in other disciplines, from the humanities to the natural sciences, but is all too seldom actually done.


I focus here on a subset of interesting findings that bear directly or indirectly on my overall project concerning progress in philosophy, even though the Bourget-Chalmers paper contains a number of additional stimulating bits of information for anyone interested in the current state of philosophical inquiry.


To begin with, the common adage that if there are X philosophers in a room, they are bound to have X+1 opinions does not appear to hold at all. Consider some of the main findings of the survey:


A total of 71% of respondents thought that a priori knowledge is possible, while only 18% didn’t think so. There is a clear majority here, and only two major aporetic clusters.


Things are more evenly divided when it comes to views on the nature of abstract objects: Platonism gets 39% while nominalism is barely behind, at 38%. Superficially, this may seem an instance of precisely what’s wrong with philosophy, but it is in fact perfectly congruent with my model of multiple peaks in conceptual space. Philosophers seem to have settled on two of Rescher’s aporetic clusters here, having eliminated a number of unacceptable alternatives. There may very well not be an ascertainable fact of the matter about whether Platonism or nominalism is “true.” They are both reasonable ways of thinking about the ontology of abstract objects, with each position subject to further criticism and refinement.


Every practicing philosopher knows that W.V.O. Quine thought he had demolished once and for all the distinction between analytic and synthetic propositions, but the bad news for him is that about 65% of philosophers disagree, and only 27% agree that such a demise has in fact taken place. The latter view may be an example of an aporetic cluster that gained prominence immediately post-evocation by Quine, but has eroded somewhat since, to use a geological analogy.


One of the most lopsided outcomes of the survey concerns what epistemic attitude is more reasonable to hold about the existence and characteristics of the external world: 82% of respondents qualified themselves as realists, followed by only 5% skeptics and 4% idealists. This may be as close as it gets for philosophers to actually settle a dispute.


In terms of ethical frameworks, things are pretty evenly split, with deontology barely leading at 26%, followed by consequentialism at 24% and virtue ethics at 18%. Here too, as in the case of Platonism vs nominalism, the result makes sense to me, as it is hard to imagine what it would mean to say that deontology, for instance, is the “true” approach to ethics. These three are reasonable, alternative ways of approaching ethics – and there are a number of unreasonable ones that have been considered and discarded over time (e.g., Ayn Rand’s “Objectivism,” based on a gross distortion, and likely lack of understanding, of Aristotle).


Even more fascinating – and congruent with my general thesis – are the pairwise correlations that the authors uncovered between philosophical views, hinting at the conclusion that philosophers tend to develop fairly internally coherent positions across fields. For instance:


If one thinks that the distinction between analytic and synthetic truths is solid, then one also tends to accept the idea of a priori knowledge – naturally enough.


If a philosopher is a moral realist, she is also likely to be an objectivist about aesthetic value. Interestingly, moral realists also tend to be realists in philosophy of science, and Platonists about abstract objects.


If one thinks that Star Trek’s Kirk survives teleportation (rather than being killed and replaced by a copy), one also – coherently – often adopts a psychological view of personal identity.
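

To make this concrete, here is a minimal sketch – on made-up, binary-coded answers, not the actual Bourget-Chalmers data or code – of how such pairwise associations between positions can be computed:

```python
import pandas as pd

# Hypothetical, made-up answers: 1 = accepts the position, 0 = rejects it.
# (Illustrative only; not the Bourget-Chalmers dataset.)
survey = pd.DataFrame({
    "a_priori_knowledge": [1, 1, 0, 1, 0, 1, 1, 0],
    "analytic_synthetic": [1, 1, 0, 1, 0, 1, 0, 0],
    "moral_realism":      [1, 0, 0, 1, 1, 1, 0, 0],
    "platonism":          [1, 0, 0, 1, 1, 0, 0, 0],
})

# Pairwise Pearson correlations (for 0/1 variables this is the phi
# coefficient); positive values mean two positions tend to be held together.
print(survey.corr().round(2))
```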


Perhaps the most interesting and nuanced approach that Bourget and Chalmers take to their data unfolds when they move from univariate and bivariate to multivariate statistics, in this case principal components analysis. This allows them to examine the many-to-many relationships among variables in their data.


The first principal component they identify, that is, the one that explains most of the variance in the sample, they label “Anti-naturalism,” as it groups a number of responses that coherently fall under that position: libertarianism concerning free will, non-physicalism about the mind, theism, non-naturalism as a metaphilosophy, and the so-called “further fact” view of personal identity. If one were to plot individual responses along this dimension (which Bourget and Chalmers don’t do, unfortunately), one would see anti-naturalist philosophers clustering at the positive end of it, and naturalist philosophers clustering at the negative end.


The second-ranked principal component is labelled “Objectivism/Platonism” by the authors, and features positive loadings (i.e., multivariate correlations) of cognitivism in moral judgment, realism in metaethics, objectivism about aesthetic value, and of course Platonism about abstract objects.


The third component is about Rationalism, with positive loadings for the possibility of a priori knowledge, the analytic-synthetic distinction, and rationalism about knowledge.
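

For the statistically curious, here is a similarly hedged sketch – again on hypothetical 0/1-coded answers, with no pretense of reproducing the paper’s actual analysis – of running a principal components analysis on such a response matrix:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical 0/1 response matrix: rows = respondents, columns = positions.
# (Illustrative only; not the survey's actual data.)
responses = np.array([
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
])

pca = PCA(n_components=3)
scores = pca.fit_transform(responses)

print(pca.explained_variance_ratio_)  # share of variance captured by each component
print(pca.components_)                # loadings: how each position contributes to a component
print(scores[:, 0])                   # each respondent's placement along the first component
```

Positions that load strongly, and with the same sign, on a given component are the ones that “travel together” across respondents, which is what licenses summary labels like “Anti-naturalism” or “Objectivism/Platonism.”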


(Should you be curious about where I fall in the above multi-variable landscape, see the post-scriptum to this essay.)


The overall picture that emerges, then, is very much that of a conceptual landscape with a number of alternative peaks, or aporetic clusters, which are internally coherent and well refined by centuries of philosophical inquiry. I suspect that historically many more “peaks” have been explored and eventually discarded, and that the height of the current peaks (as reflected by the consensus gathered within the relevant epistemic communities) is itself heterogeneous and dynamic, with some in the process of becoming more prominent in the landscape and others on their way to secondary status or destined to disappear altogether.


If I am right and philosophy makes progress by evoking and refining aporetic clusters that are constrained by empirical evidence about the world, it makes sense to ask one last question: where is philosophy going, considering its continuous tension with the sciences, themselves making progress in what may be more of a teleonomic fashion? (See part I on teleonomy.)


Philosophers have often advanced a model of their discipline as a “placeholder” for the development of eventually independent fields of inquiry, presenting philosophy as the business of conducting the initial conceptual exploration (and, hopefully, clarification) of a given set of problems, handing it then to a special science as soon as those problems become empirically tractable. There are quite a few historical examples to back up this view, from the emergence of the natural sciences to that of psychology and linguistics. Philosophy of mind is arguably in the midst of this very process, interfacing with the nascent cognitive sciences.


Predictably, this model is often twisted by detractors of philosophy to show that the field has been in a slow process of disintegrating itself, with a hard core (metaphysics, ethics, epistemology, logic, aesthetics) that is the last holdout, and which has shown increasing signs of itself yielding to the triumphal march of Science (with a capital “S”).


If that is the case, of course, so be it. But I seriously doubt it. What we have seen over the last few centuries, and especially the last century or so, is simply a transformation of what it means to do philosophy. Although we use the general word “science” to indicate everything from Aristotle’s forays into biology to what modern physicists are doing with the Large Hadron Collider, the very nature of science has evolved throughout the centuries, and keeps evolving still. What counts as good scientific methodology, sound scientific theorizing, or interesting scientific problems has changed dramatically from Aristotle to Bacon to Darwin to Stephen Hawking. Why should it be any different for philosophy?


One of the most obvious indications that philosophy has been reinventing itself over the past century or so is the emergence of a panoply of “philosophies of.” “Philosophies of” are the way the field has been responding to the progressive emancipation of some of its former branches: science is no longer natural philosophy, but that simply means that now philosophers are free to philosophize about science (and, more specifically, about biology, quantum mechanics, etc.) without doing science. The same idea applies to linguistics (and philosophy of language), psychology (and philosophy of the social sciences), economics (and philosophy of economics), and so on.


Is this sort of transformation also about to affect philosophy’s core areas of metaphysics, ethics, epistemology, logic, and aesthetics? It depends on how one looks at things. On the one hand, to a greater or lesser extent it has certainly become increasingly difficult to engage in any of the above without also taking on board results from the natural and social sciences. While logic is perhaps the most shielded of all core philosophical areas in this respect (indeed, arguably it has contributed to the sciences broadly construed much more than it has received), it is certainly a good idea to do metaphysics while knowing something about physics (and biology); ethics while interfacing with the political and social sciences, and even biology and neuroscience; epistemology while being aware of the findings of the cognitive sciences; and aesthetics with an eye toward biology and the cognitive and social sciences.


Nonetheless, all the core areas of philosophy remain very much recognizable as philosophy. Should they finally spawn their own independent disciplines, there will immediately arise in turn a need for more “philosophies of,” and the process will keep going, the field adapting and regenerating.


Ultimately, philosophy is here to stay for the same reason that other humanities (and the arts) will stay, regardless of how much science improves and expands, or how much narrow-minded politicians and administrators keep cutting humanities’ funding in universities. Human beings need more than facts and formulas, more than experiment and observation. They need to experience in the first person, and they need to reflect critically on all aspects of their existence. They need to understand, in the broadest possible terms, which means they need to philosophize.


_____


Post-Scriptum: where do I fall in the multi-variable conceptual landscape of philosophical positions? Here:


Epistemology: a priori knowledge (e.g., mathematics, logic) is possible.


Knowledge: neither a rationalist nor an empiricist, we need a judicious combination of the two, as Kant first surmised.


Abstract objects: something like nominalism, definitely not Platonism. I prefer Smolin’s “evocation” alternative, briefly discussed in part I.


Analytic / Synthetic distinction: it’s there, despite Quine making some good points about borderline or fuzzy cases.


External world: realism. (Though I do have strong sympathies for instrumentalism in philosophy of science; the two are not mutually exclusive.)


Aesthetic values: anti-realist (though our aesthetic judgments, at a very basic level, have likely been shaped by evolution, which would make me a quasi-realist of sorts).


Transporter problem: Kirk dies every time. I do not think there is a unique metaphysical answer to personal identity, as it is a human construct constrained by human biology.


Ethics: virtue ethics, obviously.


Naturalism vs anti-naturalism: I’m definitely a naturalist, compatibilist about free will, non-theist, and think that the mind is the result of the activity of the physical (and embodied) brain.


Objectivism vs Platonism: quasi-realist in ethics (as explained here), ethical judgments are constrained by our understanding of human nature and what makes us flourish. More than one reasonable ethical judgment is compatible with any given empirical situation.

Philosophy as the evocation of conceptual landscapes – part I

I come to philosophy from the natural sciences, where the idea that my former academic field (evolutionary biology) makes progress is taken for granted, and where one would receive amused or puzzled looks for simply asking what constitutes progress. And yet, philosophers of science have convincingly argued that it is much more difficult than one might think to provide a good account of how, precisely, science makes progress. When it comes to philosophy, however, it is philosophers themselves who often deny that the field makes progress, no matter how one understands “progress.” Which is puzzling, because the obvious question, then, is: why on earth would they bother spending their lives contributing to an area of inquiry that reliably goes nowhere?


This essay is a shortened version of a full paper that appeared in a volume edited by Russell Blackford and Damien Broderick, entitled Philosophy’s Future: The Problem of Philosophical Progress (if you are an “Aurelian” subscriber to this site you can download it for free).


Part of the problem is that “progress” is itself not at all easy to define, with the term taking on different meanings in the natural sciences and, for instance, in mathematics. I suggest that a reasonable approach to this issue is to “go Wittgensteinian,” so to speak, and argue that “progress” is a family resemblance concept. Wittgenstein’s own famous example of this type of concept was the idea of “game,” which does not admit of a small set of necessary and jointly sufficient conditions in order to be defined, and yet this does not seem to preclude us from distinguishing games from not-games, at least most of the time. Progress, in a sense, could then be thought to be like pornography, to paraphrase the famous quip by US Supreme Court Justice Potter Stewart: “I know it when I see it.”


I submit that progress in science is a teleonomic (i.e., goal oriented) process, where the goal is to increase our knowledge and understanding of the natural world in terms of the simplest possible set of general principles. By contrast, progress in mathematics, logic, and – with some important qualification – philosophy, could be described as an advancement toward a better, more complete, or more modern condition, a definition that does not imply any final goal to be achieved.


Consider first mathematics and logic: I do not think it is tenable to understand them as teleonomic disciplines because there is an infinite number of logical-mathematical objects to discover and theorems to prove, so that mathematics and logic look like ever expanding disciplines, not converging toward any “theory of everything” as is (allegedly) the case for science. So I do think of mathematics and logic as advancing toward a better, more complete position, “better” in the sense that the process both opens up new lines of internally generated inquiry (the solution of mathematical and logical problems generates new problems, and so forth) and “more complete” in the sense that mathematicians and logicians are best thought of as engaged in the exploration of a space of conceptual (as distinct from empirical) possibilities.


At this point in the paper I discuss in detail the concept of “evoking,” as distinct from discovering or inventing, new notions in the space of conceptual possibilities pertinent to, respectively, mathematics, logic, and philosophy. It’s a rather technical topic, inspired by Roberto Unger and Lee Smolin’s treatment of mathematical Platonism in their excellent The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. The interested reader can find a summary of it here. The basic notion, though, is simple. Consider the game of chess. It is clearly a human invention, i.e., the rules of chess are not independent of the human mind. Yet, once invented, the game has “evoked” (i.e., brought into existence) a number of objectively provable statements about it, including some that can be proven mathematically. Keeping this in mind, let’s now go back to how philosophy makes progress.


Current discussions on progress in philosophy have a number of precedents, although it is actually surprising how few scholarly papers have been devoted to the topic. One that I think is particularly important to discuss is Moody’s distinction among three concepts of progress, first published in 1986. What he calls progress-1 takes place when there is a specifiable goal about which people can agree whether it has been achieved, and what counts toward achieving it. If you are on a diet, for instance, and decide to lose ten pounds, you have a measurable specific goal, and you can be said to make progress insofar as your weight goes down and approaches the specific target. Progress-2 occurs when one cannot so clearly specify a goal to be reached, and yet an individual or an external observer can competently judge that progress has occurred when comparing the situation at time t with the situation at time t+1, even though the criteria by which to make that judgment are subjective. Moody thinks, for example, that composers guided by an inner sense of when they are “getting it right” would be making this sort of progress while composing. Finally, progress-3 is a hybrid animal, instantiated by situations where there are intermediate but not overarching goals.


Interestingly, Moody says that mathematics makes progress-3, insofar as there is no overall goal of mathematical scholarship, and yet mathematicians do set intermediate goals for themselves, and the achievement of these goals (like the proof of Fermat’s Last Theorem) is recognized as such by the mathematical community.


Moody’s next step is to assume provisionally that philosophy is a type of inquiry, and then ask whether any of his three categories of progress applies to it. The first obstacle is that philosophy does not appear to have consensus-generating procedures such as those found in the natural sciences or in technological fields like engineering. Moody claims that “the only thing that philosophers are likely to agree about with enthusiasm is the abysmal inadequacy of a particular theory.” While I think this is actually a bit of a caricature, I do not share Moody’s pessimistic assessment of that observation even if true: negative progress, that is, the elimination of bad ideas, is progress nonetheless.


Moody concludes that philosophy does not make progress-1 or progress-3, because its history has not yielded a trail of solved problems. What about progress-2? He takes up the possibility that perhaps philosophy is not a type of inquiry after all, and analyzes in some detail two alternative conceptions: Wittgenstein’s idea of philosophy as “therapy,” and Richard Rorty’s so-called “conversational model” of philosophy. As Moody summarizes:


“Wittgenstein believed that philosophical problems are somehow spurious and that the activity of philosophy … should terminate with the withdrawal, or deconstruction, of philosophical questions.”


On this view, then, there is progress, of sorts, in philosophy, but it is the sort of “terminus” brought about by committing seppuku. As Moody rather drily comments, while nobody can argue that Wittgenstein’s ideas have not been taken seriously, it is equally undeniable that philosophy has gone forward largely as if the therapeutic approach had never been articulated. If a proposed account of the nature of philosophy has so blatantly been ignored by the relevant epistemic community, we can safely file it away.


Rorty’s starting point is what he took to be the (disputable, in my opinion) observation that philosophy has failed at its self-appointed task of analysis and criticism. Moody quotes him as saying: “The attempts of both analytic philosophers and phenomenologists to ‘ground’ this and ‘criticize’ that were shrugged off by those whose activities were purportedly being grounded and criticized.” Rorty arrived at this because of his rejection of what he sees as philosophy’s “hangover” from the seventeenth and eighteenth centuries, when philosophers were attempting to set their inquiry within a framework that allowed a priori truths to be discovered, even though David Hume dealt that framework a fatal blow during the eighteenth century.


While Moody finds much of Rorty’s analysis on target, I must confess that I don’t. For instance, the fact that other disciplines (like science) marched on while refusing to be grounded or criticized by philosophy is neither entirely true (lots of scientists have paid and still pay attention to philosophy of science, for example), nor necessarily apt as the ultimate test of the value of philosophy even if true: creationists and climate change deniers, after all, shrug off any criticism of their positions, but that doesn’t make such criticism invalid, or futile for that matter (since others are responding to it).


Yet there is something to be said for thinking of philosophy as a “conversation” more than an inquiry, as Rorty did. The problem is that this and other dichotomies presented to us by Rorty are, as Moody himself comments, false: “We do not have to choose between ‘saying something,’ itself a rather empty notion that manages to say virtually nothing, and inquiring, or between ‘conversing’ and ‘interacting with nonhuman reality.’” Indeed we don’t.


What account, then, can we turn to in order to make sense of progress in philosophy? I recommend that the interested reader check Moody’s discussion of Robert Nozick’s “explanational model” of philosophy, as well as John Kekes’ “perennial problems” approach, but my own treatment here will jump to Nicholas Rescher’s 1978 proposal of the concept of “aporetic clusters,” which is one path that supports the conclusion that philosophy does make progress, and that it is progress of the type-2 variety.


Rescher thinks that it is unrealistic to expect consensus in philosophy, and yet does not see this as a problem, but rather as an organic outcome of the nature of philosophical inquiry:


“In philosophy, supportive argumentation is never alternative-precluding. Thus the fact that a good case can be made out for giving one particular answer to a philosophical question is never considered as constituting a valid reason for denying that an equally good case can be produced for some other incompatible answers to this question.”


In fact, Rescher thinks that philosophers come up with “families” of alternative solutions to any given philosophical problem, which he labels aporetic clusters. According to this view, some philosophical accounts are eliminated, while others are retained and refined. The keepers become philosophical classics, like “virtue ethics,” “utilitarianism,” or “Kantian deontology” in ethics, or “constructive empiricism” and “structural realism” in philosophy of science. Rescher’s view is not at all incompatible with my idea of philosophy as evoking (in the sense briefly described above), and then exploring and refining, peaks in conceptual landscapes. As Moody aptly summarizes it: “That there are ‘aporetic clusters’ is evidence of a kind of progress. That the necrology of failed arguments is so long is further evidence.”


(next: empirical examples of progress in philosophy)