
Philosophy as the evocation of conceptual landscapes – part II

In the first installment of this two-part series we explored different concepts of what "making progress" may mean for disciplines such as science, math, logic, and philosophy. I then suggested that philosophy makes progress because it explores what Nicholas Rescher has called "aporetic clusters," i.e., families of alternative solutions to a given philosophical problem. I have advanced a similar idea in the past, suggesting that philosophy is in the business of discovering, refining or discarding such solutions as they are "evoked" (i.e., brought into objective existence by the human mind) within the context of a problem. This all sounds very theoretical, however, so let's take a look at some empirical examples of these aporetic clusters found within specific philosophical conceptual landscapes.


Is there any evidence that philosophy progresses in the way described so far? I think so, and it comes at least in part from a landmark paper by Bourget and Chalmers, published in 2013. In it, they explore quantitatively what professional philosophers think of a number of prominent issues and positions within their own discipline. Even though Bourget and Chalmers did not carry out their study while informed by the ideas of aporetic clusters and progress, their data is suitable for the current discussion. It also represents a rare opportunity to systematically assess the views of an entire profession, the sort of thing that would probably be useful in other disciplines as well, from the humanities to the natural sciences, but is all too seldom actually done.


I focus here on a subset of interesting findings that bear directly or indirectly on my overall project concerning progress in philosophy, even though the Bourget-Chalmers paper contains a number of additional stimulating bits of information for anyone interested in the current state of philosophical inquiry.


To begin with, the common adage that if there are X philosophers in a room, they are bound to have X+1 opinions turns out not to be the case at all. Consider some of the main findings of the survey:


A total of 71% of respondents thought that a priori knowledge is possible, while only 18% didn’t think so. There is a clear majority here, and only two major aporetic clusters.


Things are more equally divided when it comes to views on the nature of abstract objects: Platonism gets 39% while nominalism is barely behind, at 38%. Superficially, this may seem an instance of precisely what's wrong with philosophy, but it is in fact perfectly congruent with my model of multiple peaks in conceptual space. Philosophers seem to have settled on two of Rescher's aporetic clusters here, having eliminated a number of unacceptable alternatives. There may very well not be an ascertainable fact of the matter about whether Platonism or nominalism is "true." They are both reasonable ways of thinking about the ontology of abstract objects, with each position subject to further criticism and refinement.


Every practicing philosopher knows that W.V.O. Quine thought he had demolished once and for all the distinction between analytic and synthetic propositions, but the bad news for him is that about 65% of philosophers disagree, and only 27% agree that such a demise has in fact taken place. The latter may be an example of an aporetic cluster that gained more prominence immediately post-evocation by Quine, but may have eroded somewhat since, to use a geological analogy.


One of the most lopsided outcomes of the survey concerns what epistemic attitude is more reasonable to hold about the existence and characteristics of the external world: 82% of respondents qualified themselves as realists, followed by only 5% skeptics and 4% idealists. This may be as close as it gets for philosophers to actually settle a dispute.


In terms of ethical frameworks, things are pretty evenly split, with deontology barely leading at 26%, followed by consequentialism at 24% and virtue ethics at 18%. Here too, as in the case of Platonism vs nominalism, the result makes sense to me, as it is hard to imagine what it would mean to say that deontology, for instance, is the “true” approach to ethics. These three are reasonable, alternative ways of approaching ethics – and there are a number of unreasonable ones that have been considered and discarded over time (e.g., Ayn Rand’s “Objectivism,” based on a gross distortion, and likely lack of understanding, of Aristotle).


Even more fascinating – and congruent with my general thesis – are the pairwise correlations that the authors uncovered between philosophical views, hinting at the conclusion that philosophers tend to develop fairly internally coherent positions across fields. For instance:


If one thinks that the distinction between analytic and synthetic truths is solid, then one also tends to accept the idea of a priori knowledge – naturally enough.


If a philosopher is a moral realist, she is also likely to be an objectivist about aesthetic value. Interestingly, moral realists also tend to be realists in philosophy of science, and Platonists about abstract objects.


If one thinks that Star Trek’s Kirk survives teleportation (rather than being killed and replaced by a copy), one also – coherently – often adopts a psychological view of personal identity.


Perhaps the most interesting and nuanced approach that Bourget and Chalmers take to their data unfolds when they move from univariate and bivariate to multivariate statistics, in this case principal components analysis. This allows them to examine the many-to-many relationships among variables in their data.
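To make the method concrete, here is a minimal sketch in Python of the kind of analysis involved. The data, question set, and clustering are invented for illustration; this is not Bourget and Chalmers' actual data or pipeline.

```python
# Hypothetical sketch: five yes/no survey answers driven by a single latent
# attitude, then pairwise correlations and PCA. Invented data, for
# illustration only; not the actual PhilPapers survey.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 500  # hypothetical number of respondents

# One latent "naturalism" score generates five correlated binary answers
# (think: libertarian free will, non-physicalist mind, theism, and so on).
latent = rng.normal(size=n)
answers = (latent[:, None] + rng.normal(scale=0.8, size=(n, 5)) > 0).astype(float)

print(np.corrcoef(answers.T))        # pairwise correlations among the answers

pca = PCA(n_components=2)
scores = pca.fit_transform(answers)  # each respondent's position on each component
print(pca.explained_variance_ratio_) # the first component explains most variance
print(pca.components_[0])            # loadings: each question's weight on component 1

# Plotting scores[:, 0] would show respondents clustering toward the two
# ends of the first axis, the "peaks" of the conceptual landscape.
```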


The first principal component they identify, that is, the one that explains most of the variance in the sample, they label "Anti-naturalism," as it groups a number of responses that coherently fall under that position: libertarianism concerning free will, non-physicalism about the mind, theism, non-naturalism as a metaphilosophy, and the so-called "further fact" view of personal identity. If one were to plot individual responses along this dimension (which Bourget and Chalmers don't do, unfortunately), one would see anti-naturalist philosophers clustering at the positive end of it, and naturalist philosophers clustering at the negative end.


The second-ranked principal component is labelled “Objectivism/Platonism” by the authors, and features positive loadings (i.e., multivariate correlations) of cognitivism in moral judgment, realism in metaethics, objectivism about aesthetic value, and of course Platonism about abstract objects.


The third component is about Rationalism, with positive loadings for the possibility of a priori knowledge, the analytic-synthetic distinction, and rationalism about knowledge.


(Should you be curious about where I fall in the above multi-variable landscape, see the post-scriptum to this essay.)


The overall picture that emerges, then, is very much that of a conceptual landscape with a number of alternative peaks, or aporetic clusters, which are internally coherent and well refined by centuries of philosophical inquiry. I suspect that historically many more “peaks” have been explored and eventually discarded, and that the height of the current peaks (as reflected by the consensus gathered within the relevant epistemic communities) is itself heterogeneous and dynamic, with some in the process of becoming more prominent in the landscape and others on their way to secondary status or destined to disappear altogether.


If I am right and philosophy makes progress by evoking and refining aporetic clusters that are constrained by empirical evidence about the world, it makes sense to ask one last question: where is philosophy going, considering its continuous tension with the sciences, themselves making progress in what may be more of a teleonomic fashion? (See part I on teleonomy.)


Philosophers have often advanced a model of their discipline as a "placeholder" for the development of eventually independent fields of inquiry, presenting philosophy as the business of conducting the initial conceptual exploration (and, hopefully, clarification) of a given set of problems, then handing them over to a special science as soon as those problems become empirically tractable. There are quite a few historical examples to back up this view, from the emergence of the natural sciences to that of psychology and linguistics. Philosophy of mind is arguably in the midst of this very process, interfacing with the nascent cognitive sciences.


Predictably, this model is often twisted by detractors of philosophy to show that the field has been in a slow process of disintegrating itself, with a hard core (metaphysics, ethics, epistemology, logic, aesthetics) that is the last holdout, and which has shown increasing signs of itself yielding to the triumphal march of Science (with a capital “S”).


If that is the case, of course, so be it. But I seriously doubt it. What we have seen over the last few centuries, and especially the last century or so, is simply a transformation of what it means to do philosophy. Although we use the general word “science” to indicate everything from Aristotle’s forays into biology to what modern physicists are doing with the Large Hadron Collider, the very nature of science has evolved throughout the centuries, and keeps evolving still. What counts as good scientific methodology, sound scientific theorizing, or interesting scientific problems has changed dramatically from Aristotle to Bacon to Darwin to Stephen Hawking. Why should it be any different for philosophy?


One of the most obvious indications that philosophy has been reinventing itself over the past century or so is the stark onset of a panoply of “philosophies of.” “Philosophies of” are the way the field has been responding to the progressive emancipation of some of its former branches: science is no longer natural philosophy, but that simply means that now philosophers are free to philosophize about science (and, more specifically, about biology, quantum mechanics, etc.) without doing science. The same idea applies to linguistics (and philosophy of language), psychology (and philosophy of the social sciences), economics (and philosophy of economics), and so on.


Is this sort of transformation also about to affect philosophy's core areas of metaphysics, ethics, epistemology, logic, and aesthetics? It depends on how one looks at things. On the one hand, to a greater or lesser extent it certainly has become increasingly difficult to engage in any of the above without also taking on board results from the natural and social sciences. While logic is perhaps the most shielded of all core philosophical areas in this respect (indeed, arguably it has contributed to the sciences broadly construed much more than it has received), it is certainly a good idea to do metaphysics while knowing something about physics (and biology); ethics while interfacing with political and social sciences, and even biology and neuroscience; epistemology while being aware of the findings of the cognitive sciences; and aesthetics with an eye toward biology and the cognitive and social sciences.


Nonetheless, all the core areas of philosophy remain very much recognizable as philosophy. Should they finally spawn their own independent disciplines, a need for more "philosophies of" will immediately arise in turn, and the process will keep going, the field adapting and regenerating.


Ultimately, philosophy is here to stay for the same reason that other humanities (and the arts) will stay, regardless of how much science improves and expands, or how much narrow-minded politicians and administrators keep cutting humanities’ funding in universities. Human beings need more than facts and formulas, more than experiment and observation. They need to experience in the first person, and they need to reflect critically on all aspects of their existence. They need to understand, in the broadest possible terms, which means they need to philosophize.


_____


Post-Scriptum: where do I fall in the multi-variable conceptual landscape of philosophical positions? Here:


Epistemology: a priori knowledge (e.g., mathematics, logic) is possible.


Knowledge: I am neither a rationalist nor an empiricist; we need a judicious combination of the two, as Kant first surmised.


Abstract objects: something like nominalism, definitely not Platonism. I prefer Smolin’s “evocation” alternative, briefly discussed in part I.


Analytic / Synthetic distinction: it’s there, despite Quine making some good points about borderline or fuzzy cases.


External world: realism. (I do have strong sympathies for instrumentalism in philosophy of science, but the two are not mutually exclusive.)


Aesthetic values: anti-realist (though our aesthetic judgments, at a very basic level, have likely been shaped by evolution, which would make me a quasi-realist of sorts).


Transporter problem: Kirk dies every time. I do not think there is a unique metaphysical answer to personal identity, as it is a human construct constrained by human biology.


Ethics: virtue ethics, obviously.


Naturalism vs anti-naturalism: I’m definitely a naturalist, compatibilist about free will, non-theist, and think that the mind is the result of the activity of the physical (and embodied) brain.


Objectivism vs Platonism: quasi-realist in ethics (as explained here), ethical judgments are constrained by our understanding of human nature and what makes us flourish. More than one reasonable ethical judgment is compatible with any given empirical situation.

Philosophy as the evocation of conceptual landscapes – part I

I come to philosophy from the natural sciences, where the idea that my former academic field (evolutionary biology) makes progress is taken for granted, and where one would receive amused or puzzled looks by simply asking the question of what constitutes progress. And yet, philosophers of science have convincingly argued that it is much more difficult than one might think to provide a good account of how, precisely, science makes progress. When it comes to philosophy, however, it is philosophers themselves who often deny that the field makes progress, no matter how one understands "progress." Which is puzzling, because the obvious question, then, is why on earth would they bother spending their lives contributing to an area of inquiry that reliably goes nowhere?


This essay is a shortened version of a full paper that appeared in a volume edited by Russell Blackford and Damien Broderick, entitled Philosophy’s Future: The Problem of Philosophical Progress (if you are an “Aurelian” subscriber to this site you can download it for free).


Part of the problem is that “progress” is itself not at all easy to define, with the term taking on different meanings in the natural sciences and, for instance, in mathematics. I suggest that a reasonable approach to this issue is to “go Wittgensteinian,” so to speak, and argue that “progress” is a family resemblance concept. Wittgenstein’s own famous example of this type of concept was the idea of “game,” which does not admit of a small set of necessary and jointly sufficient conditions in order to be defined, and yet this does not seem to preclude us from distinguishing games from not-games, at least most of the time. Progress, in a sense, could then be thought to be like pornography, to paraphrase the famous quip by US Supreme Court Justice Potter Stewart: “I know it when I see it.”


I submit that progress in science is a teleonomic (i.e., goal oriented) process, where the goal is to increase our knowledge and understanding of the natural world in terms of the simplest possible set of general principles. By contrast, progress in mathematics, logic, and – with some important qualification – philosophy, could be described as an advancement toward a better, more complete, or more modern condition, a definition that does not imply any final goal to be achieved.


Consider first mathematics and logic: I do not think it is tenable to understand them as teleonomic disciplines because there is an infinite number of logical-mathematical objects to discover and theorems to prove, so that mathematics and logic look like ever expanding disciplines, not converging toward any “theory of everything” as is (allegedly) the case for science. So I do think of mathematics and logic as advancing toward a better, more complete position, “better” in the sense that the process both opens up new lines of internally generated inquiry (the solution of mathematical and logical problems generates new problems, and so forth) and “more complete” in the sense that mathematicians and logicians are best thought of as engaged in the exploration of a space of conceptual (as distinct from empirical) possibilities.


At this point in the paper I discuss in detail the concept of “evoking,” as distinct from discovering or inventing, new notions in the space of conceptual possibilities pertinent to, respectively, mathematics, logic, and philosophy. It’s a rather technical topic, inspired by Roberto Unger and Lee Smolin’s treatment of mathematical Platonism in their excellent The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. The interested reader can find a summary of it here. The basic notion, though, is simple. Consider the game of chess. It is clearly a human invention, i.e., the rules of chess are not independent of the human mind. Yet, once invented, the game has “evoked” (i.e., brought into existence) a number of objectively provable statements about it, including some that can be proven mathematically. Keeping this in mind, let’s now go back to how philosophy makes progress.
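To make the chess illustration concrete before we do: that a knight always lands on a square of the opposite color is exactly the kind of objectively provable statement that the invented rules evoke. Here is a one-line parity argument (my formalization, not Unger and Smolin's):

```latex
% Color a square (x, y) by the parity of x + y. A knight's move changes
% (x, y) by (\pm 1, \pm 2) or (\pm 2, \pm 1), so x + y changes by an odd
% number, and its parity flips on every move:
\[
(x, y) \mapsto (x \pm 1,\ y \pm 2) \ \text{or} \ (x \pm 2,\ y \pm 1)
\quad \Longrightarrow \quad
(x + y) \bmod 2 \ \text{flips.}
\]
% The rules are a human invention; the theorem, once evoked, is as
% objective as any fact about the game.
```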


Current discussions on progress in philosophy have a number of precedents, although it is actually surprising how few scholarly papers have been devoted to the topic. One that I think is particularly important to discuss is Moody's distinction among three concepts of progress, first published in 1986. What he calls progress-1 takes place when there is a specifiable goal about which people can agree that it has been achieved, or what counts toward achieving it. If you are on a diet, for instance, and decide to lose ten pounds, you have a measurable specific goal, and you can be said to make progress insofar as your weight goes down and approaches the specific target. Progress-2 occurs when one cannot so clearly specify a goal to be reached, and yet an individual or an external observer can competently judge that progress has occurred when comparing the situation at time t with the situation at time t+1, even though the criteria by which to make that judgment are subjective. Moody thinks, for example, that composers guided by an inner sense of when they are "getting it right" would be making this sort of progress while composing. Finally, progress-3 is a hybrid animal, instantiated by situations where there are intermediate but not overarching goals.


Interestingly, Moody says that mathematics makes progress-3, insofar as there is no overall goal of mathematical scholarship, and yet mathematicians do set intermediate goals for themselves, and the achievement of these goals (like the proof of Fermat’s Last Theorem) is recognized as such by the mathematical community.


Moody’s next step is to assume provisionally that philosophy is a type of inquiry, and then ask whether any of his three categories of progress applies to it. The first obstacle is that philosophy does not appear to have consensus-generating procedures such as those found in the natural sciences or in technological fields like engineering. Moody claims that “the only thing that philosophers are likely to agree about with enthusiasm is the abysmal inadequacy of a particular theory.” While I think this is actually a bit of a caricature, I do not share Moody’s pessimistic assessment of that observation even if true: negative progress, that is, the elimination of bad ideas, is progress nonetheless.


Moody concludes that philosophy does not make progress-1 or progress-3, because its history has not yielded a trail of solved problems. What about progress-2? He takes up the possibility that perhaps philosophy is not a type of inquiry after all, and analyzes in some detail two alternative conceptions: Wittgenstein’s idea of philosophy as “therapy,” and Richard Rorty’s so-called “conversational model” of philosophy. As Moody summarizes:


“Wittgenstein believed that philosophical problems are somehow spurious and that the activity of philosophy … should terminate with the withdrawal, or deconstruction, of philosophical questions.”


On this view, then, there is progress, of sorts, in philosophy, but it is the sort of "terminus" brought about by committing seppuku. As Moody rather drily comments, while nobody can argue that Wittgenstein's ideas have not been taken seriously, it is equally undeniable that philosophy has gone forward largely as if the therapeutic approach had never been articulated. If a proposed account of the nature of philosophy has so blatantly been ignored by the relevant epistemic community, we can safely file it away.


Rorty’s starting point is what he took to be the (disputable, in my opinion) observation that philosophy has failed at its self-appointed task of analysis and criticism. Moody quotes him as saying: “The attempts of both analytic philosophers and phenomenologists to ‘ground’ this and ‘criticize’ that were shrugged off by those whose activities were purportedly being grounded and criticized.” Rorty arrived at this because of his rejection of what he sees as philosophy’s “hangover” from the seventeenth and eighteenth centuries, when philosophers were attempting to set their inquiry within a framework that allowed a priori truths to be discovered, even though David Hume dealt that framework a fatal blow during the eighteenth century.


While Moody finds much of Rorty’s analysis on target, I must confess that I don’t. For instance, the fact that other disciplines (like science) marched on while refusing to be grounded or criticized by philosophy is neither entirely true (lots of scientists have paid and still pay attention to philosophy of science, for example), nor necessarily apt as the ultimate test of the value of philosophy even if true: creationists and climate change deniers, after all, shrug off any criticism of their positions, but that doesn’t make such criticism invalid, or futile for that matter (since others are responding to it).


Yet there is something to be said for thinking of philosophy as a "conversation" more than an inquiry, as Rorty did. The problem is that this and other dichotomies presented to us by Rorty are, as Moody himself comments, false: "We do not have to choose between 'saying something,' itself a rather empty notion that manages to say virtually nothing, and inquiring, or between 'conversing' and 'interacting with nonhuman reality.'" Indeed we don't.


What account, then, can we turn to in order to make sense of progress in philosophy, according to Moody? I recommend that the interested reader check Moody’s discussion of Robert Nozick’s “explanational model” of philosophy, as well as John Kekes’ “perennial problems” approach, but my own treatment here will jump to Nicholas Rescher’s 1978 proposal of the concept of “aporetic clusters,” which is one path that supports the conclusion that philosophy does make progress, and it is a type-2 progress.


Rescher thinks that it is unrealistic to expect consensus in philosophy, and yet does not see this as a problem, but rather as an organic outcome of the nature of philosophical inquiry:


“In philosophy, supportive argumentation is never alternative-precluding. Thus the fact that a good case can be made out for giving one particular answer to a philosophical question is never considered as constituting a valid reason for denying that an equally good case can be produced for some other incompatible answers to this question.”


In fact, Rescher thinks that philosophers come up with "families" of alternative solutions to any given philosophical problem, which he labels aporetic clusters. According to this view, some philosophical accounts are eliminated, while others are retained and refined. The keepers become philosophical classics, like "virtue ethics," "utilitarianism," or "Kantian deontology" in ethics, or "constructive empiricism" and "structural realism" in philosophy of science. Rescher's view is not at all incompatible with my idea of philosophy as evoking (in the sense briefly described above), and then exploring and refining, peaks in conceptual landscapes. As Moody aptly summarizes it: "That there are 'aporetic clusters' is evidence of a kind of progress. That the necrology of failed arguments is so long is further evidence."


(next: empirical examples of progress in philosophy)

Is philosophy a profession? (Yes, it’s a serious question)

You would think that the question posed in the title of this essay is one of those silly questions that only a philosopher would ask. And yet, a few months ago I unwittingly caused a Twitterstorm when I suggested that philosophy is, indeed, a profession, and that it comes with credentials (in the form of an awarded PhD, job titles and so forth) and even (gasp!) expertise.


I will start by presenting my arguments for why philosophy is indeed a profession that marks a certain kind of expertise; then we’ll talk about why this matters; and finally we’ll explore why, I think, so many people got positively upset at the mere suggestion that there can be professional philosophers, and even more so that they deserve a bit of respect when they talk about their own subject matter. I will also address some common objections to the idea of professional philosophy, as they were put to me during said Twitterstorm.


Is philosophy a profession?


Modern philosophy, meaning — approximately — philosophy as it has been practiced since the 20th century, is a profession in the same sense that, say, psychology or dentistry are professions. If you want to become a psychologist, or a dentist, you go to specialized schools, you take specific courses, you demonstrate your ability as a practitioner, and you get awarded a certificate that says that yup, you are indeed a psychologist, dentist, or philosopher. You then look for a job in your chosen profession, and if you are capable and lucky you land one. You then practice said profession, drawing a salary or other form of income. And eventually you cease practicing in order to enjoy a more or less well deserved retirement.


Typically, in order to become a professional philosopher one needs an undergraduate degree in that field (in the United States, four years) and a PhD from an accredited university (4-6 years on average, but it can be more). The PhD requires taking advanced courses (in my case, for instance, on Plato, ethics, Descartes, Kant, and a number of others), and the writing of a dissertation that must be of publication quality and advance the field by way of proposing original ideas (here is mine). After this, a young philosopher may find temporary employment as a postdoctoral associate, or as a lecturer, and eventually, maybe, land a tenure track position (though the whole institution of tenure has been under relentless attack by conservative political forces, but that's another discussion). If you do get such a position, you then have six years to prove to your colleagues that you are worth retaining and being promoted from assistant to associate professor, a promotion that comes with some benefits (beginning with tenure itself) and usually a very modest increase in salary. If you are good, a number of years later (usually around five) you get another promotion, to full professor, which comes with few additional benefits (except that now you can serve on more university committees!) and with an equally modest increase in salary.


What I have just described, of course, is the academic path. It used to be pretty much the only game in town, but now the American Philosophical Association has a whole booklet on career paths beyond academia, if you are so inclined. Nevertheless, the academy is still where you will find most professional philosophers, these days.


So, since becoming a philosopher requires studying and getting a degree, and is often associated with belonging to a professional society and getting a regular salary from an employer (usually a university) it seems pretty obvious that philosophy is, indeed, a profession as succinctly defined by the Merriam-Webster: a type of job that requires special education, training, or skill.


Why does this matter?


Why did I bother engaging in the above elucidation of the obvious? Because ever since I switched my own career from that of a scientist (evolutionary biology) to that of a philosopher, I noticed an incredible amount of hostility and dismissal toward philosophy, including — unbelievably — by some philosophers!


I think it is important to correct public misperceptions of philosophy in particular, and of the humanities in general, not because these disciplines are difficult to practice and therefore deserving of respect, but because they are vital to the functioning of an open society. Far too often these days we hear administrators and politicians (usually, but not only, conservatives) saying that a college degree should prepare students to find well paying jobs. That is simply not the case. That definition applies to trade schools, not universities. Yes, of course you want to find a well paying job, especially given the insane amount of money you will have to shell out for the privilege of a higher education in the increasingly unequal United States of America (and elsewhere). But the point of a liberal arts education (as it used to be called before "liberal" somehow became a dirty word) is first and foremost to help create mature adults and responsible citizens. You know, the sort of people who can think for themselves about what to do with their lives, instead of being brainwashed by corporate ads. Or the sort of people who believe that voting is both a right and a privilege, and who exercise such right/privilege by doing their homework on different candidates, instead of falling for blatant propaganda and conspiracy theories. That, and not to create an obedient army of drones for the corporate world and an increasingly illiberal government, is what education is for. No wonder so many in power have tried so hard to undermine that mission.


And make no mistake about it, that mission requires a substantial involvement in the humanities, not just the STEM fields. Everyone these days claims to be teaching "critical thinking," but trust me, you ain't gonna learn that in a biology class, or in chemistry, or in engineering. You will learn all sorts of interesting things in those classes, some of which may even be useful for getting you a job. But you won't acquire the sort of ability at critical analysis and writing that philosophy will give you. You will also not be able to familiarize yourself with art, literature and music, some of the main reasons why human life is so interesting and varied. And you will not learn about the stupid things we have repeatedly done in the course of history — which is just as well from the point of view of politicians who prefer to keep selling you propaganda according to which you live (of course!) in the best nation that has ever blessed planet earth, handpicked by God himself to be a shining light for the rest of the world. You see, if you read Plato and Shakespeare and Haruki Murakami, or learn about the American bombing of Dresden at the end of WWII, or appreciate just how and why inequality, racism, and sexism are still pervasive in the 21st century, you might start questioning what the hell is going on and how to change it. As one of my favorite comedians, George Carlin, once put it: "it's called the American dream because you must be asleep to believe it." Philosophy, and the rest of the humanities, are a major way for you to wake up.


Why do people have a problem?


Once more, I would not have thought that any of the above were controversial. But it was! I got a surprising amount of pushback on social media. Okay, fine, it's social media, where one gets pushback and worse for saying the most mundane things. But still. Studying those responses, it seems to me they fall into the following broad categories:


(i) People who believe that I'm telling them that only professional philosophers can think. What? No, and if you believe that's the implicature of the above position, you may benefit from taking a philosophy class or two! Snarky comments aside (sorry, this sort of exercise is exhausting!), of course philosophers aren't the only people who can think, or even think well. Nor does thinking require a license or accreditation of any sort. But the job description of the philosopher is not "thinker," but rather thinker of a particular kind, using particular tools, applying them to particular subject matters. Similarly, a psychotherapist, say, isn't just someone who talks to you about your problems. Your friend can do that over a beer at the local pub. But your friend is not professionally trained, is not aware of psychological theories of human behavior, and is not familiar with psychotherapeutic techniques. That's why so many people pay professional therapists to talk about their problems, instead (or on top) of having a beer with their friends.


That is why it is bizarre that when someone disagrees with me on Twitter or Facebook they often say something along the lines of "you should be aware of logical fallacies," or "you should study philosophy of science" (actual phrases, and please notice that I teach a course on — among other things — logical fallacies, have written technical papers on the topic, and my specialty is, you guessed it, philosophy of science). This isn't to say that a professional is always right and an amateur always wrong. Sometimes your intuitions about what's wrong with your car may trump those of your mechanic. But, as a general rule, it is far more likely that the expert got it right and that you have a superficial or incomplete understanding of the matter. There is no shame in this, of course. We can't all be experts on everything.


(ii) Which brings me to the second cause of irritation among some commenters: a good number of people seem not to recognize that philosophy is a field of expertise. On the one hand, this is understandable, but on the other hand it is downright bizarre. It's understandable because philosophy is, indeed, a rather peculiar field, even within the academy. While biologists study the living world, physicists study the fundamentals of matter and energy, psychologists study human behavior, and historians study human history, what do philosophers study, exactly? The answer is: everything.


Which doesn’t mean they are experts on everything. Here is how it works. First off, the very comparison between philosophy and, say, biology, is misleading. “Philosophy,” if anything, is comparable to “science,” not to a sub-discipline of science. Second, philosophers are interested in broad vistas and the connections among fields, hence the various “philosophies of” (mind, biology, physics, social science, language, history, and so forth). This doesn’t make it easier, but more difficult to be a philosopher. Take my own case: I am a philosopher of science, and in particular a philosopher of evolutionary biology. This means that I need to be very familiar with not one, but two areas of scholarship: evolutionary biology and philosophy of science. I need to understand both the biology and the epistemology, for instance, in order to apply a philosophical lens to the science and ask questions like what is the logic and structure of a particular scientific theory, how do unstated assumptions and unrecognized biases interfere with scientific research, and what exactly is the relationship between a scientific theory and the evidence that is invoked to back it up (i.e., what’s the “epistemic warrant” of the theory).


Surely this sort of work requires expertise. Equally surely, someone without background in both science and philosophy of science is unlikely to just waltz in and come up with a novel idea that will stun the pros. It’s possible, of course, but very, very unlikely.


(iii) A third group of responses threw back at me the apparent incongruity that I have spent years encouraging people to practice philosophy (Stoicism, specifically) in their everyday life, and yet I'm now telling them that they don't understand it. But there is a big difference between philosophy as an academic field of scholarship and philosophy understood as a daily practice in life. The first one is the province of professionals, the second one can (and, I think, should) be accessible to anyone willing to spend a modicum of time reading about it.


Again, the difference that I’m drawing here should not be surprising, as it finds lots of parallels. Everyone should exercise to maintain good health. That doesn’t mean everyone suddenly is a professional trainer or athlete. Anyone is capable of driving a car. But we are not a planet of car mechanics. Every Christian is able to read the Gospels, but few are theologians of the level of Thomas Aquinas. And so on, the examples are endless.


So, no, there is no contradiction at all between the notion that philosophy is a specialized academic profession requiring a lot of training and the idea that anyone can read up enough about Stoicism, or Buddhism, or any other philosophical or religious practice and incorporate it into their lives.


Possible objections


Finally, let me do some pre-emptive addressing of likely criticisms (another useful habit that I picked up as a professional philosopher!):


(1) But dentists (say) produce something, what do philosophers produce?


The outcome of the profession of dentistry is that your teeth will be in better order and more healthy than they would have been had you not gone to the dentist. The outcome of the profession of philosophy is twofold: (a) our students develop a better sense for complex ideas and how to evaluate them; and (b) we publish papers and books that contain new insights into the problems we are interested in. (The latter is, of course, what every scholar does, both in the humanities and in the sciences.)


(2) But Socrates did not have a PhD!

 

True. Neither did Darwin. Or Galileo. But today it’s really, really hard to become a professional biologist or physicist without proper, standardized, and rigorous training, usually certified by the award of a PhD. Philosophy has changed exactly in the same way in which all other fields of inquiry have, and for similar reasons (increased specialization, consequent division of labor, institutionalization, etc.).


(3) But someone can make novel contributions to philosophy even without a degree.

 

Yes. Just like someone can make a novel contribution to biology, or physics, and so forth. Such cases exist, but they are rare. Indeed, they are increasingly hard to find, across fields, precisely because both humanistic and scientific knowledge are getting more and more sophisticated and specialized, thus requiring extensive professional training.


(4) But plenty of professional philosophers don’t make interesting contributions to the field.

 

True. And the same goes for plenty of professional biologists (believe me, I’ve seen it) and, I assume, professional physicists, mathematicians, and so forth. Even so, your average philosopher (or biologist, or physicist) will still have a far more sophisticated command of her field than someone who has never studied it systematically.


(5) But there are serious problems with academia. Indeed there are.

 

This is something often pointed out, among others, by my friend Nigel Warburton. That said, Nigel himself has a PhD in philosophy and was an academic before going freelance. And for his spectacularly successful podcast, Philosophy Bites, he tends to interview… you guessed it! Professional philosophers! (Including yours truly.) Because they have knowledge of their field, and interesting things to say about it.


The bottom line


So, can we please get over this strange combination of defensiveness and disdain, and admit that philosophy is — among other things — a serious profession carried out by people with expertise? As I argued above, there is far more at stake here than a petty turf war or wounded egos. Taking philosophy (and the humanities) seriously may be what ultimately will save us from the forces of obscurantism and tyranny.

Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on par with scientific truths, despite much nonsense that has been written about it in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.
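For a concrete instance of the coherence picture, here is the standard rearrangement derivation of the theorem, sketched in a few steps (one proof among many, chosen for brevity):

```latex
% Four copies of a right triangle with legs a, b and hypotenuse c tile a
% square of side a + b, leaving a tilted square of side c uncovered:
\[
(a + b)^2 = 4 \cdot \tfrac{1}{2}\,ab + c^2
\quad \Longrightarrow \quad
a^2 + 2ab + b^2 = 2ab + c^2
\quad \Longrightarrow \quad
a^2 + b^2 = c^2 .
\]
% The theorem's truth is a matter of what the premises entail,
% not of measurement.
```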


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths follow neither a correspondence nor a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:
“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do) then you are left with the task of explaining how come so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his University of Canberra colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call "truth-apt" (i.e., it looks like it's referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seem to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” One theologian, Alvin Plantinga, actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser's paper. They consider evolutionary error theory within a broader framework, the continuum between "folk" (i.e., everyday) and scientific understanding of things. Let's clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn't doing any such thing, and in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn't matter whether it is the sun that rises or sets, or the earth that rotates around its axis.
This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and it is morally wrong if it doesn't.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to precisely articulate and usually expand what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those notions as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong things, and of reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the latter of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.
This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically. For instance that individuals flourish if they are accorded some rights, like the right to health care, or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, and so reasoning built on their bases is valid in some cases and invalid in others.


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot correctly, in my mind, argued, then the structure of that reasoning is a matter of logic (coherence) while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.
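Schematically (my formalization, not Foot's own notation), the division of labor between the two theories of truth looks like this:

```latex
% G: a goal we endorse (e.g., a peaceful world)   - normative premise
% U: xenophobia undermines G                      - empirical premise (correspondence)
% (G \land U) \to W: the hypothetical imperative,
%   where W stands for "xenophobia is immoral"
\[
G, \quad U, \quad (G \land U) \rightarrow W \ \vdash \ W
\]
% The premises answer to facts about human nature and the world;
% the inference itself is valid or invalid as a matter of logic (coherence).
```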


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical imperatives. They wish for universal, mind-independent moral truths. Too bad, there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It's not going to happen, because it isn't a good concept of morality for humanity. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

Why neuroscience is largely irrelevant to ethics

Benjamin Libet, neuroscientist

A few days ago, over at my other blog, I published an article that I touted on my social media as “the last piece on free will you will ever need to read.” That was a slight exaggeration, but only slight. The specific point of the post was to explain in some detail the ancient Stoic take on human decision making, what I and modern psychologists prefer to call volition rather than free will (given how loaded with metaphysical nonsense the latter term is). I also wanted to see how the Stoic position squares with the findings of modern science. As it turns out, that ancient view is highly compatible with what contemporary cognitive science says about the matter, but this is neither a miraculous coincidence nor indication that somehow the Stoics managed to anticipate scientific discoveries that would be made more than two millennia later. (Which would be just as preposterous as to maintain, as some do, that the pre-Socratic atomists “anticipated” modern physics. They didn’t, as even a superficial reading of the pre-Socratics, and a passing acquaintance with modern physics, should amply demonstrate.)

Rather, the reasons we still find so much of value in Stoic (or Aristotelian, or several other) ancient moral philosophy are twofold: first, some of the ancients were keen observers of human psychology; second, moral discourse has little to do with whatever mechanisms make it possible for human brains to think about morality (so long as some mechanisms that allow us to think do exist, of course). Both notions need to be unpacked a bit, which is what I intend to do in this essay.

What was so special about Aristotle, or Epicurus, or Epictetus? In a sense, not much. They were sharp thinkers who paid attention to the empirical side of what they were thinking about. We tend to forget that many others at the time and since have written about the same topics, and yet they are completely forgotten, or they appear at best as footnotes in philosophy books. (Have you ever heard of Aristippus of Cyrene? Not likely, and he was one of the major figures among the minor Greek philosophers…)

The reasons we read some ancient philosophers are, so to speak, evolutionary. Specifically, the cultural analogues of two basic processes that steer biological evolution: drift and selection. Drift is about statistical sampling: some books survive and others don't because of luck. There probably never were too many copies — by modern standards — of the works of Chrysippus, one of the most noted Hellenistic philosophers, and unfortunately not a single one has come down to us. Selection makes it so that whatever authors are highly esteemed not just by their contemporaries, but further and further down in history, are the ones whose works and ideas tend to survive. In the case of Chrysippus, we know a good amount about what he thought because so many later commentators copied several of his passages, in order to praise him or criticize him. To put it another way, we still read Plato and Aristotle because of what biologist Jacques Monod once called a combination of chance and necessity.

But we don’t read all of Plato and Aristotle nowadays, unless we are historians of philosophy, or of science. There isn’t much point in consulting Aristotle’s Physics if you are a physicist, because the field has moved very far from the Aristotelian positions, beginning with Galileo and arriving at Einstein and Stephen Hawking. By contrast, philosophers still find a lot of value in the Nicomachean Ethics. Ill-informed people (who shall here go unmentioned) are under the impression that this is because philosophy, unlike physics, doesn’t make progress (usually, these people just happen to be physicists). But that’s sheer ignorance, which ought (morally) to be embarrassing. Philosophy does make progress (see here), but it is a very different kind of endeavor from physics, so any direct comparison is a category mistake.

No, the reason Aristotle, the Stoics, and so forth are relevant today (other than the above-mentioned one, that they were la crème de la crème of their period) is that modern science has little of relevance to say about certain branches of philosophy, and in particular ethics. (Yes, I know, certain individuals are making a cottage industry of arguing the opposite. But they too shall go mercifully unmentioned in this post. I’ve dealt with them ad nauseam in the past.)

The reason this is the case has been explained by philosophers Ludwig Wittgenstein and Wilfrid Sellars, and is exemplified by the work of neuroscientist Benjamin Libet. Let me explain.

First, Wittgenstein. In Tractatus 4.111 he famously wrote that “philosophy is not one of the natural sciences,” adding at 4.112 that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations we find:

“[Philosophy’s] investigation is a grammatical one. Such an investigation sheds light on our problem by clearing misunderstandings away. Misunderstandings concerning the use of words, caused, among other things, by certain analogies between the forms of expression in different regions of language.” (90)

While I think that Wittgenstein had too narrow a view of what philosophy does, there is quite a bit of truth in the above. The job of philosophers isn’t to discover new things about the world (we’ve got science for that), but rather to clarify issues by way of critical analysis, and to see how things that appear disparate “hang together,” so to speak. That is, for instance, why metaphysics isn’t being replaced by physics; rather, it is transforming itself into a discipline informed by physics (and biology, and other sciences) whose objective is to make sense of the picture of the world that emerges from the discoveries of the individual special sciences, something that no single science does or is concerned with. (See, for instance, Ladyman and Ross’ Every Thing Must Go, a sort of manifesto for a naturalistic metaphysics.)

Wittgenstein becomes even more relevant to the present discussion when we consider his concept of “language games” as presented in the Investigations:

“The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words ‘block,’ ‘pillar,’ ‘slab,’ ‘beam.’ A calls them out; B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.” (2)

Ethics is another language game or, rather, a multiplicity of language games, since there are a number of ways to conceive, talk about, and actually do, ethics. Within the human community, we talk about “good,” “bad,” “moral,” “immoral,” “ought,” and so forth, and any competent language user understands what others mean by those words. Moreover, just like the words of the builder’s language actually help build things, so the words of ethical language actually help regulate our actions within a given community. The fact that science comes in and, say, tells us that “bricks” are really mostly empty space is interesting from within the science language game, but it is utterly useless, and indeed a distraction, to the builder. Analogously, the fact that a neuroscientist may be able to tell us which parts of the human brain are involved in the production of ethical judgments, and by which cellular means, is interesting within the language game of neuroscience, but it is a useless distraction if we are concerned with improving social justice, or becoming a better person.

Which brings me to what I have termed the most important philosopher you have likely never heard of: Wilfrid Sellars. My friend Dan Kaufman and I did an extensive video conversation on Sellars, which I think is worth checking out. One of Sellars’ landmark ideas was the distinction between what he called the manifest and the scientific images of the world. The manifest image is the way most people understand and navigate the world. The Sun “rises,” genocide is morally repellent. That sort of thing. The scientific image, by contrast, is the way science looks at the world: the Sun does not, actually, rise; it is the Earth that rotates on its axis. As for genocide? Ah, therein lies the rub. I’m sure there are scientific explanations for why genocide is such a recurring feature of human history, from the biology and neuroscience of violence to those of inter-group relations. While such scientific understanding of genocide may be useful, it does not give us the complete picture. Why not?

Because, according to Sellars, the manifest, but not the scientific, image deals with things like reasons and values. This is not a call to reject science. On the contrary. Sellars was quite clear that whenever the scientific and the manifest images of the world are in conflict (as in the “the Sun rises” vs “the Earth rotates” case), the sensible thing is for us to yield to science. But science simply isn’t in the business of doing a number of other things for which we have developed different tools: philosophy, literature, history, and so forth. These tools are complementary with, not opposed to, scientific ones. Ideally, says Sellars, we want to develop a conceptual stereoscopic vision, whereby we are capable of integrating the manifest and scientific images. Indeed, according to Sellars — and I wholeheartedly agree — developing and constantly updating such a vision is a major task of philosophy, and our discipline is uniquely positioned to carry that task out, because of both its methods (empirically informed critical discourse) and its scope (very, very broad).

In a sense, what emerges from Wittgenstein’s thought, and even more so from Sellars’, is that there are a number of things about which we can talk at different levels of analysis, and which level(s) make the most sense depends on what it is that we wish to accomplish. While in theory a full integration of all levels may be possible, in practice it is often not desirable, because it doesn’t help with the particular language game we happen to be playing.

Let me then come back to “free will” (or volition), and use my discussion of Stoic philosophy as it compares to the famous experiments by Benjamin Libet to present a specific example of what I have outlined above, attempting to convince you of why I think science is largely irrelevant to moral discourse.

The Stoics thought that we have a faculty of judgment, which they called the hêgemonikon. It was a major goal of Stoic training to improve the way we use it, i.e., to arrive at better and better judgments about whatever life throws at us. In the post at my other blog I suggest that, roughly speaking, the hêgemonikon corresponds to the frontal lobes of the human brain, which are far more developed than in most other mammals, and are known to be associated, in fact, with our capacity for judgment, and in particular with our ability to “veto,” so to speak, certain actions that might otherwise come naturally to us (as in: “There is a strange noise in my house in the middle of the night! Someone is about to kill me!! I need to run the hell out of here!!! … Oh, wait, it’s the cat. Back to sleep.”).

The Stoics themselves were spectacularly wrong about the likely location of the hêgemonikon: they thought it resided in the heart. But pretty much everything else they said about its functioning and how we can improve it was right on the money, as shown by the fact that 23 centuries later Stoic “psychology” still informs a number of evidence-based psychotherapies, such as rational emotive behavior therapy and cognitive behavioral therapy.

How is this possible? Because most of what the Stoics thought about the hêgemonikon was part of the manifest image, and it was as useful then as it is now, for the simple reason that people still deal with the same basic issues: unhealthy emotions like anger and fear, and the search for better ways to relate to others and improve the human polis. What the Stoics got wrong, not at all surprisingly, is the bit that belongs to the scientific image: as it turns out, our faculty of judgment depends on a particular part of the brain, not the heart. Crucially, though, this has had no effect whatsoever on Stoic philosophy or its usefulness. A modern Stoic simply updates that bit of information, thanks the scientist, and goes back to her practice.

Nowadays, whenever the topic of human volition comes up, someone is bound to cite the famous experiments carried out by Benjamin Libet, beginning in 1983. Briefly, he asked subjects to follow the movements of a dot on the screen of an oscilloscope. The dot moved like the hands of a clock, but faster. Libet told his subjects to move a finger at a moment of their choice during the experiment, and to note the position of the dot when they became aware of their decision to act. The experiment showed that the decision to move the finger entered conscious awareness about 200 milliseconds before the actual movement. But, stunningly, there was a rise in the so-called “readiness potential,” which is thought to be associated with the preparation for action, about 550 milliseconds before movement. So the subjects appeared to get ready to move the finger a full 350 milliseconds before they became conscious of their decision to do so. (Indeed, in later experiments, the readiness potential has been shown to build up as much as 1.5 seconds before movement.)

Taken at face value, Libet’s results seem to show that we decide our actions unconsciously, and that what we call consciousness is simply a (late) awareness of a decision that has already been made. There are several well-known criticisms of that conclusion, beginning with the obvious one: the experimental conditions have precious little to do with the recursive, complex behavior that we normally label “conscious decision making,” which is understood as a continuous feedback loop between what Daniel Kahneman calls System 1 (fast, subconscious) and System 2 (slow, deliberate) brain processing. Moreover, recent research has both amply confirmed, and yet significantly reinterpreted, Libet’s original findings.

But a good reason to think that Libet’s experiments do not mean what so many enthusiasts of the “free will is an illusion” bandwagon seem to think they mean is Libet’s own commentary:

“The finding that the volitional process is initiated unconsciously leads to the question: is there then any role for conscious will in the performance of a voluntary act? The conscious will does appear 150 msec before the motor act, even though it follows the onset of the cerebral action by at least 400 msec. That allows it, potentially, to affect or control the final outcome of the volitional process. An interval of 50 msec before a muscle is activated is the time for the primary motor cortex to activate the spinal motor nerve cells, and through them, the muscles. During this final 50 msec, the act goes to completion with no possibility of its being stopped by the rest of the cerebral cortex. The conscious will could decide to allow the volitional process to go to completion, resulting in the motor act itself. Or, the conscious will could block or ‘veto’ the process, so that no motor act occurs.” (B. Libet, Mind Time: The Temporal Factor in Consciousness, 2004, p. 137)

[Once more, to preempt distracting discussions: I do not think we should talk about “free will,” which is a hopelessly metaphysically confused concept. We are talking about what psychologists themselves call volition, i.e., the ability of human beings to make complex decisions informed by conscious thought. Hopefully no one will deny that we have such an ability.]

Interestingly, studies have found very good experimental evidence for the veto power Libet is talking about. But that is “interesting” from within the language game of neuroscience. It makes no difference at all in terms of the language game in which the Stoics — and most of us — are engaged, that of improving ourselves as individuals and of making society a better place for everyone to live.

That is why, as a scientist, I will keep following with interest the undoubtedly fascinating future developments of cognitive science and neuroscience. But it is also why, as a philosopher and human being, I’m not very concerned with how those findings will impact my day-to-day life in the realm of ethics. As the Stoic philosopher Epictetus aptly put it:

“You are not flesh or hair but volition; if you keep that beautiful, then you will be beautiful.” (Discourses III.1.40)

The nature of philosophy: Scruton vs Williamson edition

Roger Scruton (left) and Timothy Williamson (right)

As some readers may recall, last year I published serially a whole book on this blog on the nature of philosophy (available here as a single volume). This sort of thing is an exercise in meta-philosophy, i.e., philosophizing about the very nature of the discipline. It needs to be done from time to time, but not too often, as it tends to approximate the sort of navel-gazing that philosophy is (unjustly, in my opinion) infamous for.

Apparently, one of those times is now, taking inspiration from a recent debate organized by The Times Literary Supplement between two eminent British philosophers, Roger Scruton and Timothy Williamson. I don’t think either got it exactly right, but Scruton came much closer, in my view, than Williamson. Regardless, there was much to be learned from the three rounds of the exchange, which is why I’m bringing it to the attention of my readers and will comment on it here, focusing on the first round only.

Let’s start, then, with Scruton’s take, which opens the TLS debate. He begins by telling us that when he attended Cambridge in the 1960s, he was immediately disabused of the naive notion that studying philosophy would tell him something about the meaning of life. Ethics was instead dominated by the likes of G.E. Moore, who spent endless time debating the exact meaning of “good,” “right,” and “ought.” As he puts it:

“Ethics came to rest in the study of dilemmas, like that of the man who must visit his aunt in hospital on the very same day as his child is competing in the long-jump at school. The manifest facts that modern people are living in a state of spiritual anxiety, that the world has become strange to us and frightening, that we lack and need a conception of our own existence — such facts were either unnoticed or dismissed as yet more leftovers from the mental disease called religion.”

I must say Scruton is right on target here. A significant portion (though, thankfully, not all) of academic philosophy has become irrelevant to pretty much anyone outside of academic philosophy departments (the same, to be fair, holds for English literature, or the natural sciences, but that’s another story). The damage done by so-called analytic philosophy to ethics is indeed a perfect example. We don’t learn much, if anything, from increasingly convoluted versions of the trolley dilemma, as “fun” as those riddles can be for someone who thinks such things qualify as fun.

Scruton then rejects Locke’s contention that philosophy should be a “handmaiden to the sciences”:

“Philosophy is, and ought especially to be, a handmaiden to the humanities. It should use its best endeavours to show why the attempts to rewrite religion, politics, musicology, architecture, literary criticism and art history as branches of evolutionary psychology (or still worse, branches of applied neuroscience) are destined to fail.”

So Scruton sees a major task of contemporary philosophy as countering scientism, the ideological attitude that declares (on no scientific grounds) that only scientific questions are worth considering, and that only the methods of science (often conveniently and arbitrarily expanded to encompass all ways of human reasoning) are valid sources of knowledge and understanding. Together with my friend and colleague Maarten Boudry I have put together a collection of essays — due on December 26 from the University of Chicago Press — on the challenges posed by scientism, and readers of this blog know why I’m very sympathetic to Scruton’s perspective (not everyone who contributed to our volume is, by the way — it’s a discussion, not a monologue).

He goes on to explain that the reason evolutionary psychology’s attempt to “reduce” the humanities fails is that science is in the business of (and is very good at) providing answers couched in a third-person perspective, focused on the causality of observable phenomena. But the world of the humanities is what Wilfrid Sellars (remember him?) called “the space of reasons,” and reasons (or prescriptive statements) just don’t show up in an fMRI scan.

Let it be clear that Scruton is not anti-science. He explains that this failure of science is not the result of the existence of some other, magical, realm of existence. It is simply that science isn’t in the business of doing what the humanities do. It is one tool among many at our disposal to understand the world — not just the physical and biological world, but also the world of human relations and meaning. It shouldn’t be necessary, but I hasten to add that Scruton seems to be perfectly aware that human beings are also biological beings made of physical stuff. He is not claiming that there is no place for science in studying humans and their societies. He is just reiterating the famous, and very useful, distinction that Sellars himself made between the scientific and manifest images of the world.

Scruton ends his first round by bringing up David Hume and his idea that the human mind has a capacity to “spread itself upon objects.” While this capacity is, obviously, the result of biological evolution, and it is made possible by our neural apparatus, biology and neuroscience tell us comparatively little of value about what happens when we engage in such Humean activity. As Scruton puts it:

“The case is no different from the case of aspects, like the face in the picture, which is there for us in the pigments, but not really there, as the pigments are.”

Artwork by Sarah Sze in New York’s Second Avenue subway

Let’s now turn to Williamson’s initial response. He doesn’t start too well, as he deploys a rather cheap rhetorical trick, accusing Scruton of thinking that history is not part of the empirical world, which he smugly says “may come as news to historians.” Williamson then immediately retreats from this over-the-top criticism by acknowledging that Scruton has a particular — Sellars-informed — meaning of the term “empirical world” in mind, which Williamson apparently willfully misunderstood.

But the next paragraph doesn’t improve things because, says Williamson, on Scruton’s account mathematics would not be a science either, which it certainly is, in Williamson’s opinion:

“Before proclaiming limits to science, perhaps one should get clearer on what it is. Mathematics, though a science, is not a natural science like physics, chemistry and biology. It supports its results by deductive proofs rather than experiments, but is at least as rigorous, systematic and reliable a search for knowledge.”

That’s exactly right: we should be a bit clearer about what science is. But Williamson’s approach isn’t very helpful. To begin with, there is a good argument to be made that mathematics is not a science, although it is, of course, very useful to the sciences. Williamson himself acknowledges that math is different from the natural sciences, largely using different tools, and — I will add — producing results that are not dependent on empirical observation and experiment (setting aside so-called “experimental mathematics,” which is not experimental at all, but based on brute-force computer simulations).

Indeed, my detailed analysis of the nature of progress in mathematics (here, here, here, and here) suggests that it works in a fashion much more similar to logic than to science, and not even Williamson has gone so far as to suggest that logic is a science in any meaningful sense of the term. So, if by “science” we mean the natural sciences (such as physics, chemistry, biology, and geology) as they have historically and socioculturally developed from the 17th century on — and I don’t see any reason why one would want to mean anything else by that word — then Williamson is way off base in his criticism of Scruton.

In fact, Williamson goes on and on, talking right past Scruton, attempting to convince him that historical research is based on empirical evidence, something that I’m pretty willing to bet Scruton knows very well. Let me try to explain where exactly Williamson misses the point by way of an example from a couple of years ago, when I was teaching a class on epistemology across the curriculum. We were exploring claims to knowledge and understanding made by various disciplines, from the natural sciences to the humanities, including, of course, history. I had a number of guest lecturers from different departments, and one of my colleagues delivered a particularly clear explanation of what, I think, Scruton is trying to get at. My colleague did not use Sellars’ distinction between the scientific and manifest images, but he might as well have.

My colleague works in the social sciences, and specifically on colonialism. He told our students that of course he uses some of the tools of the natural sciences, from the collection of systematic observations to statistical analyses. But, he immediately added, the picture emerging from those methods alone would be woefully incomplete. For instance, he also studies books — including works of fiction — and other testimonials written by people who experienced colonialism firsthand, on either side of it. These human artifacts are qualitative in nature, not really amenable to statistical and systematic analyses. But they provide exactly what Sellars was talking about as far as the human sphere is concerned: reasons. Not necessarily in the sense of “good” or objective reasons, but in the sense of a glimpse into the human condition, into why people do things, or how they tell themselves why they do things. My colleague concluded that research areas like his are, as a result, at the borderlands between the sciences and the humanities. They certainly benefit from deploying the methods of science, but they also have to use those of the humanities, on pain of missing large chunks of the picture. He might as well have been talking about history in the sense clearly intended by Scruton and so distorted by Williamson.

If Williamson’s definition of science is a “rigorous, systematic and reliable search for knowledge,” then almost anything human beings do qualifies. History does, and so do all the other humanities, including literary criticism and art history. Philosophy qualifies too. And that’s the problem: so many activities fit the bill that the very term “science” begins to lose meaning. Now why would anyone want that, unless he is trying to define everything else out of existence by a single, well-placed sleight of hand?

Williamson becomes even more deaf to Scruton’s arguments when he brings up, of all things, semantics and logical empiricism (the American offshoot of logical positivism):

“Again, even if Scruton is right that perspectival words like ‘here,’ ‘now’ and ‘I’ do not belong in the language of scientific theorizing, the rigorous scientific investigation of their meaning was led by philosophers such as Hans Reichenbach and David Kaplan [the latter is the Hans Reichenbach Professor of “scientific” philosophy at UCLA]. They showed how to theorize points of view in semantics.”

But, again, Scruton knows and understands this very well (as is clear also from his response after the first round). It is very telling that Williamson should bring up a philosophical approach whose zenith passed almost a century ago, and whose major failure was precisely its attempt to do philosophy as if it were a science.

So what, in the end, is the work of a philosopher? I think philosophy is a strange discipline, by its (historical) nature at the interface of pretty much every other human endeavor. The classical core sub-disciplines of philosophy tell much of the story: metaphysics, epistemology, ethics, aesthetics, and logic. And the remainder of the story is told by the onset, beginning in the early 20th century, of a panoply of “philosophies of,” i.e., (critical, in the positive sense) philosophical analysis and commentary of the very same disciplines that used to be part of philosophy and eventually spun out into self-sufficiency: physics (16th-17th centuries), biology (19th century), psychology (19th century), linguistics (20th century), and so forth.

Philosophy can be, and is, done badly, with little understanding of other disciplines, or while ignoring those disciplines’ contributions, or by adopting an arrogant posture that is both unjustified and counterproductive. But the same can be said of a lot of other human endeavors, including first and foremost science itself. (What’s that you say? You’ve never encountered an arrogant scientist who blathered incessantly about things he does not understand? Well, lucky you.)

But when it is done well, philosophy is nobody’s handmaiden, pace both Locke and Scruton. She is the connective tissue that holds together the sciences and the humanities, reminding the former of their own limits and the latter of just how much they can benefit from science. Philosophy is, to use Wilfrid Sellars’ felicitous turn of phrase once more, the bridge between the scientific and the manifest images of the world. That’s an important job, well worth pursuing seriously and humbly.

Can we compare different cultural forms of life?

“Alternative and rival conceptions of rationality are at home in different conceptual schemes.” –Alasdair MacIntyre

I’ve been reading and commenting on a book that has little to do with the range of subject matters usually covered here at Footnotes to Plato: C. Kavin Rowe’s One True Life: The Stoics and Early Christians as Rival Traditions (if you are interested in my ongoing commentary over at How to Be a Stoic, check parts I, II, and III; there will be one more, forthcoming soon). I’m bringing this up here because of Rowe’s chapter 8, entitled “Can we compare?” His goal is to eventually show that Stoicism and Christianity are fundamentally incompatible ways of life, with distinct — and incommensurable — internal logics. I don’t think so, but that’s another story. What’s interesting here is that Rowe deploys the influential philosophy of Alasdair MacIntyre to lay the grounds for his conclusion, and MacIntyre’s philosophy is very much relevant to ongoing discussions about, say, science vs pseudoscience, or atheism vs religion, and a number of other dichotomous positions that we often approach with the assumption that we can meaningfully compare them and decide which is more rational or rationally defensible.


The never-ending discussion: what’s philosophy good for?

Seven years ago I officially began my career as a philosopher, being appointed Chair of the Department of Philosophy at CUNY’s Lehman College. One of my first duties was to completely restructure the Department’s web site, which looked awful and was hopelessly out of date. So I spent my first summer on the job (well, technically, even before starting my job, which officially began at the end of August) putting together the new site. If you visit the web pages of most philosophy departments, including Lehman’s, you will notice two differences between them and those of pretty much any other academic field (including not just the natural sciences, but also the rest of the humanities): first, they will almost certainly feature either a painting of Raphael’s School of Athens or an image of Rodin’s Thinker (like those accompanying this post, top left). Second, they will have a tab labeled something along the lines of “Why Philosophy?” It is on this latter idiosyncrasy that I want to focus here.


The Nature of Philosophy video series

As readers may remember, this past Spring we went through a long series of posts (27, to be exact) that presented in serialized form my book, The Nature of Philosophy: How Philosophy Makes Progress and Why It Matters.

Over the past few months, Dan Kaufman and yours truly have taped a series of video conversations that present the main ideas of the book to a broader public, and the series is now completed and available for viewing or downloading at my YouTube channel (as well as on the Sofia channel at MeaningofLife.tv).


Has philosophy lost its way?

One of the characteristics of philosophy as a field of inquiry is that — uniquely among human endeavors — it also inquires into itself. This has been true since the time of Socrates and Epictetus, of course. Here is how the latter puts it in his Discourses:

“Now if you are writing to a friend, grammar will tell you that you need particular letters; but it will not tell you whether or not you should write to your friend. The same holds in the case of music’s relation to song. It will not say whether at this moment you should sing or play the lyre, or whether you should not do so. Which faculty, then, will do so? The one that studies both itself and everything else. And what is that? The faculty of reason. Yes; for this is the only faculty we have inherited that can perceive itself — what it is, what it is capable of, and how valuable it is — and also perceive all the rest.” (1.1.1-4)
