Category Archives: Metaphilosophy

Is philosophy a profession? (Yes, it’s a serious question)

You would think that the question that gives this essay its title is one of those silly questions that only a philosopher would pose. And yet, a few months ago I unwittingly caused a Twitterstorm when I suggested that philosophy is, indeed, a profession, and that it comes with credentials (in the form of an awarded PhD, job titles and so forth) and even (gasp!) expertise.


I will start by presenting my arguments for why philosophy is indeed a profession that marks a certain kind of expertise; then we’ll talk about why this matters; and finally we’ll explore why, I think, so many people got positively upset at the mere suggestion that there can be professional philosophers, and even more so that they deserve a bit of respect when they talk about their own subject matter. I will also address some common objections to the idea of professional philosophy, as they were put to me during said Twitterstorm.


Is philosophy a profession?


Modern philosophy, meaning — approximately — philosophy as it has been practiced since the 20th century, is a profession in the same sense that, say, psychology or dentistry are professions. If you want to become a psychologist, or a dentist, you go to specialized schools, you take specific courses, you demonstrate your ability as a practitioner, and you get awarded a certificate that says that yup, you are indeed a psychologist, dentist, or philosopher. You then look for a job in your chosen profession, and if you are capable and lucky you land one. You then practice said profession, drawing a salary or other form of income. And eventually you cease practicing in order to enjoy a more or less well deserved retirement.


Typically, in order to become a professional philosopher one needs an undergraduate degree in that field (in the United States, four years) and a PhD from an accredited university (4-6 years on average, but it can be more). The PhD requires taking advanced courses (in my case, for instance, on Plato, ethics, Descartes, Kant, and a number of others), and the writing of a dissertation that must be of publication quality and advance the field by way of proposing original ideas (here is mine). After this, a young philosopher may find temporary employment as a postdoctoral associate, or as a lecturer, and eventually, maybe, land a tenure track position (the whole institution of tenure has been under relentless attack by conservative political forces, but that’s another discussion). If you do get such a position, you then have six years to prove to your colleagues that you are worth retaining and promoting from assistant to associate professor, a promotion that comes with some benefits (beginning with tenure itself) and usually a very modest increase in salary. If you are good, a number of years later (usually around five) you get another promotion, to full professor, which comes with few additional benefits (except that now you can serve on more university committees!) and an equally modest increase in salary.


What I have just described, of course, is the academic path. It used to be pretty much the only game in town, but the American Philosophical Association now has a whole booklet on career paths beyond academia, if you are so inclined. Nevertheless, the academy is still where you will find most professional philosophers these days.


So, since becoming a philosopher requires studying and getting a degree, and is often associated with belonging to a professional society and getting a regular salary from an employer (usually a university), it seems pretty obvious that philosophy is, indeed, a profession as succinctly defined by Merriam-Webster: “a type of job that requires special education, training, or skill.”


Why does this matter?


Why did I bother engaging in the above elucidation of the obvious? Because ever since I switched my own career from that of a scientist (evolutionary biology) to that of a philosopher, I have noticed an incredible amount of hostility toward, and dismissal of, philosophy, including — unbelievably — by some philosophers!


I think it is important to correct public misperceptions of philosophy in particular, and of the humanities in general, not because these disciplines are difficult to practice and therefore deserving of respect, but because they are vital to the functioning of an open society. Far too often these days we hear administrators and politicians (usually, but not only, conservatives) saying that the purpose of a college degree is to prepare students to find well-paying jobs. That is simply not the case. That description applies to trade schools, not universities. Yes, of course you want to find a well-paying job, especially given the insane amount of money you will have to shell out for the privilege of a higher education in the increasingly unequal United States of America (and elsewhere). But the point of a liberal arts education (as it used to be called before “liberal” somehow became a dirty word) is first and foremost to help create mature adults and responsible citizens. You know, the sort of people who can think for themselves about what to do with their lives, instead of being brainwashed by corporate ads. Or the sort of people who believe that voting is both a right and a privilege, and who exercise that right/privilege by doing their homework on different candidates, instead of falling for blatant propaganda and conspiracy theories. That, and not to create an obedient army of drones for the corporate world and an increasingly illiberal government, is what education is for. No wonder so many in power have tried so hard to undermine that mission.


And make no mistake about it, that mission requires a substantial involvement in the humanities, not just the STEM fields. Everyone these days claims to be teaching “critical thinking,” but trust me, you ain’t gonna learn that in a biology class, or in chemistry, or in engineering. You will learn all sorts of interesting things in those classes, some of which may even be useful for getting you a job. But you won’t acquire the sort of ability at critical analysis and writing that philosophy will give you. You will also not be able to familiarize yourself with art, literature and music, some of the main reasons why human life is so interesting and varied. And you will not learn about the stupid things we have repeatedly done in the course of history — which is just as well from the point of view of politicians who prefer to keep selling you propaganda according to which you live (of course!) in the best nation that has ever blessed planet earth, handpicked by God himself to be a shining light for the rest of the world. You see, if you read Plato and Shakespeare and Haruki Murakami, or learn about the Allied bombing of Dresden at the end of WWII, or appreciate just how and why inequality, racism, and sexism are still pervasive in the 21st century, you might start questioning what the hell is going on and how to change it. As one of my favorite comedians, George Carlin, once put it: “it’s called the American dream because you must be asleep to believe it.” Philosophy and the rest of the humanities are a major way for you to wake up.


Why do people have a problem?


Once more, I would not have thought that any of the above was controversial. But it was! I got a surprising amount of pushback on social media. Okay, fine, it’s social media, where one gets pushback and worse for saying the most mundane things. But still. Studying those responses, it seems to me they fall into the following broad categories:


(i) People who believe that I’m telling them that only professional philosophers can think. What? No, and if you believe that’s the implicature of the above position, you may benefit from taking a philosophy class or two! Snarky comments aside (sorry, this sort of exercise is exhausting!), of course philosophers aren’t the only people who can think, or even think well. Nor does thinking require a license or accreditation of any sort. But the job description of the philosopher is not “thinker,” but rather thinker of a particular kind, using particular tools, applying them to particular subject matters. Similarly, a psychotherapist, say, isn’t just someone who talks to you about your problems. Your friend can do that over a beer at the local pub. But your friend is not professionally trained, is not aware of psychological theories of human behavior, and is not familiar with psychotherapeutic techniques. That’s why so many people pay professional therapists to talk about their problems, instead of (or on top of) having a beer with their friends.


That is why it is bizarre that when someone disagrees with me on Twitter or Facebook they often say something along the lines of “you should be aware of logical fallacies,” or “you should study philosophy of science” (actual phrases, and please notice that I teach a course on — among other things — logical fallacies, have written technical papers on the topic, and my specialty is, you guessed it, philosophy of science). This isn’t to say that a professional is always right and an amateur always wrong. Sometimes your intuitions about what’s wrong with your car may trump those of your mechanic. But, as a general rule, it is far more likely that the expert got it right and that you have a superficial or incomplete understanding of the matter. There is no shame in this, of course. We can’t all be experts on everything.


(ii) Which brings me to the second cause of irritation among some commenters: a good number of people seem not to recognize that philosophy is a field of expertise. On the one hand, this is understandable, but on the other hand it is downright bizarre. It’s understandable because philosophy is, indeed, a rather peculiar field, even within the academy. While biologists study the living world, physicists study the fundamentals of matter and energy, psychologists study human behavior, and historians study human history, what do philosophers study, exactly? The answer is: everything.


Which doesn’t mean they are experts on everything. Here is how it works. First off, the very comparison between philosophy and, say, biology, is misleading. “Philosophy,” if anything, is comparable to “science,” not to a sub-discipline of science. Second, philosophers are interested in broad vistas and the connections among fields, hence the various “philosophies of” (mind, biology, physics, social science, language, history, and so forth). This makes it more difficult, not easier, to be a philosopher. Take my own case: I am a philosopher of science, and in particular a philosopher of evolutionary biology. This means that I need to be very familiar with not one, but two areas of scholarship: evolutionary biology and philosophy of science. I need to understand both the biology and the epistemology, for instance, in order to apply a philosophical lens to the science and ask questions like: what is the logic and structure of a particular scientific theory? How do unstated assumptions and unrecognized biases interfere with scientific research? What exactly is the relationship between a scientific theory and the evidence that is invoked to back it up (i.e., what is the “epistemic warrant” of the theory)?


Surely this sort of work requires expertise. Equally surely, someone without a background in both science and philosophy of science is unlikely to just waltz in and come up with a novel idea that will stun the pros. It’s possible, of course, but very, very unlikely.


(iii) A third group of responses threw back at me the apparent incongruity that I have spent years encouraging people to practice philosophy (Stoicism, specifically) in their everyday lives, and yet I’m now telling them that they don’t understand it. But there is a big difference between philosophy as an academic field of scholarship and philosophy understood as a daily practice in life. The first is the province of professionals; the second can (and, I think, should) be accessible to anyone willing to spend a modicum of time reading about it.


Again, the difference that I’m drawing here should not be surprising, as it has lots of parallels. Everyone should exercise to maintain good health. That doesn’t mean everyone is suddenly a professional trainer or athlete. Anyone is capable of driving a car. But we are not a planet of car mechanics. Every Christian is able to read the Gospels, but few are theologians of the level of Thomas Aquinas. And so on; the examples are endless.


So, no, there is no contradiction at all between the notion that philosophy is a specialized academic profession requiring a lot of training and the idea that anyone can read up enough about Stoicism, or Buddhism, or any other philosophical or religious practice and incorporate it into their own life.


Possible objections


Finally, let me do some pre-emptive addressing of likely criticisms (another useful habit that I picked up as a professional philosopher!):


(1) But dentists (say) produce something; what do philosophers produce?


The outcome of the profession of dentistry is that your teeth will be in better order and healthier than they would have been had you not gone to the dentist. The outcome of the profession of philosophy is twofold: (a) our students develop a better sense for complex ideas and how to evaluate them; and (b) we publish papers and books that contain new insights into the problems we are interested in. (The latter is, of course, what every scholar does, both in the humanities and in the sciences.)


(2) But Socrates did not have a PhD!

 

True. Neither did Darwin. Or Galileo. But today it’s really, really hard to become a professional biologist or physicist without proper, standardized, and rigorous training, usually certified by the award of a PhD. Philosophy has changed in exactly the same way in which all other fields of inquiry have, and for similar reasons (increased specialization, consequent division of labor, institutionalization, etc.).


(3) But someone can make novel contributions to philosophy even without a degree.

 

Yes. Just like someone can make a novel contribution to biology, or physics, and so forth. Such cases exist, but they are rare. Indeed, they are increasingly hard to find, across fields, precisely because both humanistic and scientific knowledge are getting more and more sophisticated and specialized, thus requiring extensive professional training.


(4) But plenty of professional philosophers don’t make interesting contributions to the field.

 

True. And the same goes for plenty of professional biologists (believe me, I’ve seen it) and, I assume, professional physicists, mathematicians, and so forth. Even so, your average philosopher (or biologist, or physicist) will still have a far more sophisticated command of her field than someone who has never studied it systematically.


(5) But there are serious problems with academia.

 

Indeed there are. This is something often pointed out by, among others, my friend Nigel Warburton. That said, Nigel himself has a PhD in philosophy and was an academic before going freelance. And for his spectacularly successful podcast, Philosophy Bites, he tends to interview… you guessed it! Professional philosophers! (Including yours truly.) Because they have knowledge of their field, and interesting things to say about it.


The bottom line


So, can we please get over this strange combination of defensiveness and disdain, and admit that philosophy is — among other things — a serious profession carried out by people with expertise? As I argued above, there is far more at stake here than a petty turf war or wounded egos. Taking philosophy (and the humanities) seriously may be what ultimately will save us from the forces of obscurantism and tyranny.


Evolution, moral realism, and conditional imperatives

Is it true that genocide is wrong? Most of us would respond in the affirmative, would think that the answer is obvious, and would regard anyone answering in the negative as a psychopath, best immediately locked up somewhere where he can’t hurt people. And yet, that kind of response implicitly assumes that there is a fact of the matter about moral pronouncements, that some statements in ethics are true or false. But by what standard?


Moral truths — if they exist — don’t appear to be on a par with scientific truths, despite much nonsense that has been written on the subject in recent years (see here and here). If a scientist says that, for instance, the planet Saturn has rings, that statement is true if, and only if, it turns out that Saturn does, in fact, have rings. This is referred to in philosophy as the correspondence theory of truth: a statement is true if it corresponds (to the best of our knowledge) to what’s actually “out there.”


Moral truths are also not (quite) like mathematical truths. In mathematics the Pythagorean theorem, say, is “true” if it can be derived deductively from a small number of axioms. The reasoning that leads to its derivation has to be coherent, meaning that the theorem has to be logically entailed by the axioms, and not lead to or imply any contradiction. This is known as the coherence theory of truth.


I don’t mean to imply that there is a sharp distinction between science and mathematics, nor that individual applications may not rely on a combination of the two theories of truth (indeed, we’ll see one such application below), but the above is a basic sketch that will serve us well in this essay.


So if moral truths follow neither a correspondence nor a coherence account, what are we left with? Enter error theory. According to the excellent Stanford Encyclopedia of Philosophy:
“Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that level headed people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn’t believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn’t believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk.”


The upshot is that if you (like the majority of people) believe that there are such things as moral truths, you are a moral realist, but you need to provide an account of where moral truths come from. If you reject the existence of moral truths (and error theorists are just one class of philosophers who do) then you are left with the task of explaining how come so many people are prone to this particular type of error.


This is why I was curious to read a recent paper by eminent philosopher of science Kim Sterelny and his University of Canberra colleague Ben Fraser, entitled “Evolution and moral realism,” and published in the British Journal for the Philosophy of Science. Here is a summary of their approach:


“We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). We: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. … True moral beliefs are a ‘fuel for success,’ a map by which we steer, flexibly, in a variety of social interactions.”


Let me unpack the above, and see where this leads us. The rather uncontroversial premise of Sterelny and Fraser’s paper is that our sense of right and wrong derives from an instinct that was probably favored by natural selection in order to improve our prosocial behavior, because the latter — in highly social species like ours — increases individual survival and reproduction, which are the only things natural selection “cares” about. Elements of prosocial behavior of this sort, which we would call moral if observed in humans, are indeed present in other species of primates.


But as Sterelny and Fraser point out, evolutionary accounts have largely been co-opted by error theorists: while moral language is what philosophers call “truth-apt” (i.e., it looks like it’s referring to truths), actual moral claims cannot be demonstrated to be true, since neither the correspondence nor the coherence theory seems to apply.


This has actually always sounded strange to me, for the following reason. A similar argument could be made that natural selection evolved our intelligence not in order for us to discover truths about the world (including scientific truths), but rather to figure out how to best our rivals within social groups. This is referred to as the Machiavellian theory of the origin of mind (but see here for one criticism), and it would lead us to conclude that even our scientific utterances are “truth-apt” and yet “never true or warranted.” The philosopher Alvin Plantinga actually makes that argument against naturalism (as opposed to supernaturalism), ironically using evolutionary theory to conclude that either evolutionary science is untrustworthy or philosophical naturalism is wrong. I think Plantinga’s argument is bogus, though it would require a separate essay to show why (maybe at a later time). Still, it seems really weird to say that science (including evolutionary theory) doesn’t at least approximate truths about the universe, given how well its products work in practice. So error theorists shouldn’t be that quick to co-opt evolutionary theory on behalf of their position, or they risk falling into something like Plantinga’s dilemma.


Which leads us to the three points that constitute the heart of Sterelny and Fraser’s paper. They consider evolutionary error theory within a broader framework, the continuum between “folk” (i.e., everyday) and scientific understanding of things. Let’s clarify by means of an example not related to ethics: the phenomena of sunrise and sunset. The folk understanding in pre-scientific times was that, literally, the sun would rise above the horizon every morning, and set below it every evening. The sun was understood in a variety of ways, metaphysically, but usually as some kind of god or manifestation of the divine. The scientific account, of course, is that the sun isn’t doing any such thing, and in reality it is the earth that rotates on its axis, causing the illusion of sunset and sunrise. An evolutionary error theory would say that although sunrises and sunsets are illusions, in the sense that they are not a truthful description of what is going on, they are useful, since people can regulate their days accordingly. For everyday life, it simply doesn’t matter whether it is the sun that rises or sets, or the earth that rotates around its axis.
This is why Sterelny and Fraser say that according to this approach “many folk conceptual structures are neither hopelessly wrong nor fully vindicated.” But how is evolutionary morality cashed out, using this framework? On the one hand, there cannot be any simple reduction of moral truths to scientific facts. On the other hand, “moral facts are facts about cooperation, and the conditions and practices that support or undermine it.”


This is an interesting move, but I think it succeeds only in part. Sterelny and Fraser are ambitious here, as they want to ground a kind of moral realism, or quasi-realism, in evolutionary theory. Essentially, they are saying that moral truths follow the correspondence account outlined above, in that something is morally true just in case it fosters cooperation among human beings, and morally wrong if it doesn’t.


But this simply cannot be the full story. I think it is morally right (“true”) to cooperate with the entire human race in order to achieve a peaceful and prosperous world. Unfortunately, this is certainly not the sort of cooperation that natural selection has ever fostered. On the contrary, human evolution has been characterized by competition, not cooperation, among groups, with cooperation limited to each in-group. Indeed, it can be argued that the natural human trait of xenophobia (which I assume we would unequivocally label as morally wrong) has been adaptive for much of the history of Homo sapiens: if someone looks different from members of your in-group, he’s probably dangerous and you should be wary of him.


It is true that Sterelny and Fraser are careful, and are not committed to the simplistic notion that whatever behavior was favored by natural selection is ipso facto morally good. But there are simply far too many discrepancies between what a theory of evolutionary morality would predict and what most people nowadays consider morally right or wrong for their approach to get us very far.


What then? Are the error theorists right after all? I don’t think so. I suggest that the sort of considerations articulated by Sterelny and Fraser provide a good account of how a natural moral instinct might have evolved: to favor in-group prosociality. But ethics since the invention of language, and especially since the invention of philosophy in different parts of the world between two and a half and three millennia ago, has been about using reason to precisely articulate and usually expand what counts as moral. Slavery, oppression of women, and xenophobia were initially considered morally acceptable, because they either enhanced or did not get in the way of the functioning of human groups. But today we think of all those notions as morally wrong, and for good reasons.


These reasons are the result of a combination of a basic prosocial human nature, an innate sense of the existence of right and wrong, and reason applied to the amelioration of the human condition. The first two components are the result of biological evolution, the third of cultural evolution, which took over once we left the African savanna between 70,000 and 40,000 years ago, and especially after the agricultural revolution of about 12,000 years ago. While the natural selective imperative is to survive in order to reproduce, the cultural imperative goes well beyond it: we want to flourish, to pursue projects, to acquire a certain degree of independence, and so forth. Biology simply cannot account for that.


What does, then? As philosopher Philippa Foot famously argued in her landmark 1972 paper, “Morality as a System of Hypothetical Imperatives,” moral truths are conditional truths. IF we wish to build a peaceful world THEN xenophobia is immoral. IF we wish to maximize every agent’s ability to flourish THEN the oppression of groups or individuals is immoral. And so forth.
This makes moral truths a combination of correspondence and coherence. The correspondence part derives from the notion that there are certain facts about the human condition that we can ascertain empirically. For instance, that individuals flourish if they are accorded some rights, like the right to health care, or education. The coherence part comes from the fact that IF … THEN statements are a matter of logic, and so reasoning built on them is valid in some cases and invalid in others.
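
To make the structure explicit, here is a minimal schematic sketch (my own notation, not Foot’s) of how a Foot-style moral truth combines the two theories of truth:

\[
\underbrace{\text{we wish } G}_{\text{chosen end}}, \qquad
\underbrace{A \text{ undermines } G}_{\text{empirical premise (correspondence)}}
\;\;\therefore\;\;
\underbrace{A \text{ is immoral, relative to } G}_{\text{valid inference (coherence)}}
\]

With G = a peaceful world and A = xenophobia, for instance, the second premise is checkable against the facts of human social life, while the step from premises to conclusion is a matter of logic alone.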


Another way to put it is this: if moral reasoning is reasoning about hypothetical imperatives, as Foot correctly, in my mind, argued, then the structure of that reasoning is a matter of logic (coherence) while the assumptions from which one begins any such reasoning are empirical in nature (correspondence). If human beings were radically different kinds of beings, our moral philosophy would look very different, or perhaps wouldn’t exist at all. Hence the relevance to ethics of the concept of human nature.


Some people may be unhappy with what they will perceive as a weakened sense of moral truths. They want categorical, not just hypothetical imperatives. They wish for universal, mind-independent moral truths. Too bad, there is no such thing. Others will deny the above account and embrace a relativist position in which morality is an entirely arbitrary human construct. While that is theoretically possible, I challenge anyone who adheres to such a position to actually live by it. It’s not going to happen, because an entirely arbitrary construct is not a workable conception of morality for creatures like us. Foot-style hypothetical imperatives offer us the best available option to navigate between the Scylla of strict moral realism and the Charybdis of strict moral anti-realism.

Why neuroscience is largely irrelevant to ethics

Benjamin Libet, neuroscientist

A few days ago, over at my other blog, I published an article that I touted on my social media as “the last piece on free will you will ever need to read.” That was a slight exaggeration, but only slight. The specific point of the post was to explain in some detail the ancient Stoic take on human decision making, what I and modern psychologists prefer to call volition rather than free will (given how loaded with metaphysical nonsense the latter term is). I also wanted to see how the Stoic position squares with the findings of modern science. As it turns out, that ancient view is highly compatible with what contemporary cognitive science says about the matter, but this is neither a miraculous coincidence nor indication that somehow the Stoics managed to anticipate scientific discoveries that would be made more than two millennia later. (Which would be just as preposterous as to maintain, as some do, that the pre-Socratic atomists “anticipated” modern physics. They didn’t, as even a superficial reading of the pre-Socratics, and a passing acquaintance with modern physics, should amply demonstrate.)

Rather, the reasons we still find so much of value in Stoic (or Aristotelian, or several other) ancient moral philosophy are twofold: first, some of the ancients were keen observers of human psychology; second, moral discourse has little to do with whatever mechanisms make it possible for human brains to think about morality (so long as some mechanisms that allow us to think do exist, of course). Both notions need to be unpacked a bit, which is what I intend to do in this essay.

What was so special about Aristotle, or Epicurus, or Epictetus? In a sense, not much. They were sharp thinkers who paid attention to the empirical side of what they were thinking about. We tend to forget that many others at the time and since have written about the same topics, and yet they are completely forgotten, or they appear at best as footnotes in philosophy books. (Have you ever heard of Aristippus of Cyrene? Not likely, and he was one of the major figures among the minor Greek philosophers…)

The reasons we read some ancient philosophers are, so to speak, evolutionary. Specifically, the cultural analogues of two basic processes that steer biological evolution: drift and selection. Drift is about statistical sampling: some books survive and others don’t because of luck. There probably never were too many copies — by modern standards — of the works of Chrysippus, one of the most noted Hellenistic philosophers, and unfortunately not a single one has come down to us. Selection ensures that authors who are highly esteemed, not just by their contemporaries but further and further down in history, are the ones whose works and ideas tend to survive. In the case of Chrysippus, we know a good amount about what he thought because so many later commentators copied several of his passages, in order to praise or criticize him. To put it another way, we still read Plato and Aristotle because of what the biologist Jacques Monod once called a combination of chance and necessity.

But we don’t read all of Plato and Aristotle nowadays, unless we are historians of philosophy, or of science. There isn’t much point in consulting Aristotle’s Physics if you are a physicist, because the field has moved very far from the Aristotelian positions, beginning with Galileo and arriving at Einstein and Stephen Hawking. By contrast, philosophers still find a lot of value in the Nicomachean Ethics. Ill-informed people (who shall here go unmentioned) are under the impression that this is because philosophy, unlike physics, doesn’t make progress (usually, these people just happen to be physicists). But that’s sheer ignorance, which ought (morally) to be embarrassing. Philosophy does make progress (see here), but it is a very different kind of endeavor from physics, so any direct comparison is a category mistake.

No, the reason Aristotle, the Stoics, and so forth are relevant today (other than the above mentioned one, that they were la crème de la crème of their period) is that modern science has little of relevance to say about certain branches of philosophy, and in particular ethics. (Yes, I know, certain individuals are making a cottage industry of arguing the opposite. But they too shall go mercifully unmentioned in this post. I’ve dealt with them ad nauseam in the past.)

The reason this is the case has been explained by philosophers Ludwig Wittgenstein and Wilfrid Sellars, and is exemplified by the work of neuroscientist Benjamin Libet. Let me explain.

First, Wittgenstein. In Tractatus 4.111 he famously wrote that “philosophy is not one of the natural sciences,” adding at 4.112 that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations we find:

“[Philosophy’s] investigation is a grammatical one. Such an investigation sheds light on our problem by clearing misunderstandings away. Misunderstandings concerning the use of words, caused, among other things, by certain analogies between the forms of expression in different regions of language.” (90)

While I think that Wittgenstein had too narrow a view of what philosophy does, there is quite a bit of truth in the above. The job of philosophers isn’t to discover new things about the world (we’ve got science for that), but rather to clarify issues by way of critical analysis, and to see how things that appear disparate “hang together,” so to speak. That is, for instance, why metaphysics isn’t being replaced by physics; it is transforming itself into a discipline informed by physics (and biology, and other sciences) whose objective is to make sense of the picture of the world that emerges from the discoveries of the individual special sciences, something that no single science does or is concerned with. (See, for instance, Ladyman and Ross’ Every Thing Must Go, a sort of manifesto for a naturalistic metaphysics.)

Wittgenstein becomes even more relevant to the present discussion when we consider his concept of “language games” as presented in the Investigations:

“The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words ‘block,’ ‘pillar,’ ‘slab,’ ‘beam.’ A calls them out; B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.” (2)

Ethics is another language game, or, rather, a multiplicity of language games, since there are a number of ways to conceive, talk about, and actually do, ethics. Within the human community, we talk about “good,” “bad,” “moral,” “immoral,” “ought,” and so forth, and any competent language user understands what others mean by those words. Moreover, just like the words of the builder’s language actually help build things, so the words of ethical language actually help regulate our actions within a given community. The fact that science comes in and, say, tells us that “bricks” are really mostly empty space is interesting from within the science language game, but it is utterly useless, and indeed a distraction, to the builder. Analogously, that a neuroscientist may be able to tell us which parts of the human brain are involved in the production of ethical judgments, and by which cellular means, is interesting within the language game of neuroscience, but it is a useless distraction if we are concerned with improving social justice, or becoming a better person.

Which brings me to what I have termed the most important philosopher you’ve likely never heard of: Wilfrid Sellars. My friend Dan Kaufman and I did an extensive video conversation on Sellars, which I think is worth checking out. One of Sellars’ landmark ideas was the distinction between what he called the manifest and the scientific images of the world. The manifest image is the way most people understand and navigate the world. The Sun “rises,” genocide is morally repellent. That sort of thing. The scientific image, by contrast, is the way science looks at the world: the Sun does not, actually, rise; it is the Earth that rotates on its axis. As for genocide? Ah, therein lies the rub. I’m sure there are scientific explanations for why genocide is such a recurring feature of human history, from the biology and neuroscience of violence to those of inter-group relations. While such scientific understanding of genocide may be useful, it does not give us the complete picture. Why not?

Because, according to Sellars, the manifest, but not the scientific, image deals with things like reasons and values. This is not a call to reject science. On the contrary. Sellars was quite clear that whenever the scientific and the manifest images of the world are in conflict (as in “the Sun rises” vs “the Earth rotates” case), then the sensible thing is for us to yield to science. But science simply isn’t in the business of doing a number of other things for which we have developed different tools: philosophy, literature, history, and so forth. These tools are complementary with, not opposed to, scientific ones. Ideally, says Sellars, we want to develop a conceptual stereoscopic vision, whereby we are capable of integrating the manifest and scientific images. Indeed, according to Sellars — and I wholeheartedly agree — developing and constantly updating such a vision is a major task of philosophy, and our discipline is uniquely positioned to carry the task out because of both its methods (empirically-informed critical discourse) and its scope (very, very broad).

In a sense, what emerges from Wittgenstein, and even more so from Sellars, is that there are a number of things about which we can talk at different levels of analysis, and which level(s) make the most sense depends on what it is that we wish to accomplish. While in theory a full integration of all levels may be possible, in practice it is often not desirable, because it doesn’t help with the particular language game we happen to be playing.

Let me then come back to “free will” (or volition), and use my discussion of Stoic philosophy as it compares to the famous experiments by Benjamin Libet to present a specific example of what I have outlined above, attempting to convince you of why I think science is largely irrelevant to moral discourse.

The Stoics thought that we have a faculty of judgment, which they called the hêgemonikon. It was a major goal of Stoic training to improve the way we use it, i.e., to arrive at better and better judgments about whatever life throws at us. In the post at my other blog I suggest that, roughly speaking, the hêgemonikon corresponds to the frontal lobes of the human brain, which are far more developed than in most other mammals, and are known to be associated, in fact, with our capacity for judgment, and in particular with our ability to “veto,” so to speak, certain actions that might otherwise come naturally to us (as in: “there is a strange noise in my house in the middle of the night! Someone is about to kill me!! I need to run the hell out of here!!! … Oh, wait, it’s the cat. Back to sleep.”).

The Stoics themselves were spectacularly wrong about the likely location of the hêgemonikon: they thought it resided in the heart. But pretty much everything else they said about its functioning and how we can improve it was right on the money, as shown by the fact that 23 centuries later Stoic “psychology” still informs a number of evidence-based psychotherapies, such as rational emotive behavior therapy and cognitive behavioral therapy.

How is this possible? Because most of what the Stoics thought about the hêgemonikon was part of the manifest image, and was useful then as it is now for the simple reason that people still deal with the same basic issues: unhealthy emotions like anger and fear, and the search for better ways to relate to others and improve the human polis. What the Stoics got wrong, not at all surprisingly, is the bit that belongs to the scientific image: as it turns out, our faculty of judgment depends on a particular part of the brain, not the heart. Crucially, though, this has had no effect whatsoever on Stoic philosophy or its usefulness. A modern Stoic simply updates that bit of information, thanks the scientist, and goes back to her practice.

Nowadays, whenever the topic of human volition comes up someone is bound to cite the famous experiments carried out by Benjamin Libet, beginning in 1983. Briefly, he asked subjects to follow the movements of a dot on the screen of an oscilloscope. The dot moved like the hands of a clock, but faster. Libet told his subjects to move a finger at a moment of their choice during the experiment, noting the position of the dot when they became aware of their decision to act. The experiment showed that the decision to move the finger entered conscious awareness about 200 milliseconds before the actual movement. But, stunningly, there was a rise in the so-called “readiness potential,” which is thought to be associated with the preparation for action, about 550 milliseconds before movement. So the subjects appeared to get ready to move the finger a full 350 milliseconds before they became conscious of their decision to do so. (Indeed, in later experiments, the readiness potential has been shown to build up even as long as 1.5 seconds before movement.)
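
To put the reported numbers on a single approximate timeline (the motor act at t = 0, figures as given above):

\[
\underbrace{t \approx -550 \text{ ms}}_{\text{readiness potential begins}}
\;\longrightarrow\;
\underbrace{t \approx -200 \text{ ms}}_{\text{conscious awareness of the decision}}
\;\longrightarrow\;
\underbrace{t = 0}_{\text{finger moves}}
\]

hence the \(550 - 200 = 350\) ms gap between unconscious preparation and awareness.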

Taken at face value, Libet’s results seem to show that we decide our actions unconsciously, and that what we call consciousness is simply a (late) awareness of a decision that has already been made. There are several well known criticisms of such a conclusion, beginning with the obvious one: the experimental conditions have precious little to do with the recursive, complex behavior that we normally label “conscious decision making,” which is understood as a continuous feedback loop between what Daniel Kahneman calls System 1 (fast, subconscious) and System 2 (slow, deliberate) processing. Moreover, recent research has both amply confirmed, and yet significantly re-interpreted, Libet’s original findings.

But a good reason to think that Libet’s experiments do not mean what so many enthusiasts of the “free will is an illusion” bandwagon seem to think they mean is Libet’s own commentary:

“The finding that the volitional process is initiated unconsciously leads to the question: is there then any role for conscious will in the performance of a voluntary act? The conscious will does appear 150 msec before the motor act, even though it follows the onset of the cerebral action by at least 400 msec. That allows it, potentially, to affect or control the final outcome of the volitional process. An interval of 50 msec before a muscle is activated is the time for the primary motor cortex to activate the spinal motor nerve cells, and through them, the muscles. During this final 50 msec, the act goes to completion with no possibility of its being stopped by the rest of the cerebral cortex. The conscious will could decide to allow the volitional process to go to completion, resulting in the motor act itself. Or, the conscious will could block or ‘veto’ the process, so that no motor act occurs.” (B. Libet, Mind Time: The Temporal Factor in Consciousness, 2004, p. 137)

[Once more, to preempt distracting discussions: I do not think we should talk about “free will,” which is a hopelessly metaphysically confused concept. We are talking about what psychologists themselves call volition, i.e., the ability of human beings to make complex decisions informed by conscious thought. Hopefully no one will deny that we do have such ability.]

Interestingly, studies have found very good experimental evidence for the veto power Libet is talking about. But that is “interesting” from within the language game of neuroscience. It makes no difference at all in terms of the language game in which the Stoics — and most of us — are engaged, that of improving ourselves as individuals and of making society a better place for everyone to live.

That is why, as a scientist, I will keep following with interest the undoubtedly fascinating future developments of cognitive science and neuroscience. But it is also why, as a philosopher and human being, I’m not very concerned with how those findings will impact my day to day life in the realm of ethics. As the Stoic philosopher Epictetus aptly put it:

“You are not flesh or hair but volition; if you keep that beautiful, then you will be beautiful.” (Discourses III.1.40)

The nature of philosophy: Scruton vs Williamson edition

Roger Scruton (left) and Timothy Williamson (right)

As some readers may recall, last year I published serially a whole book on this blog on the nature of philosophy (available here as a single volume). This sort of thing is an exercise in meta-philosophy, i.e., philosophizing about the very nature of the discipline. It needs to be done from time to time, but not too often, as it tends to approximate the sort of navel gazing that philosophy is (unjustly, in my opinion) infamous for.

Apparently, one of those times is now, taking inspiration from a recent debate organized by The Times Literary Supplement between two eminent British philosophers, Roger Scruton and Timothy Williamson. I don’t think either got it exactly right, but Scruton came much closer, in my view, than Williamson. Regardless, there was much to be learned from the three rounds of the exchange, which is why I’m bringing it to the attention of my readers and will comment on it here, focusing on the first round only.

Let’s start, then, with Scruton’s take, which opens The Times’ debate. He begins by telling us that when he attended Cambridge in the 1960s, he was immediately disabused of the naive notion that studying philosophy would tell him something about the meaning of life. Ethics was instead dominated by the likes of G.E. Moore, who spent endless time debating the exact meaning of “good,” “right,” and “ought.” As he puts it:

“Ethics came to rest in the study of dilemmas, like that of the man who must visit his aunt in hospital on the very same day as his child is competing in the long-jump at school. The manifest facts that modern people are living in a state of spiritual anxiety, that the world has become strange to us and frightening, that we lack and need a conception of our own existence — such facts were either unnoticed or dismissed as yet more leftovers from the mental disease called religion.”

I must say Scruton is right on target here. A significant portion (though, thankfully, not all) of academic philosophy has become irrelevant to pretty much anyone outside of academic philosophy departments (the same, to be fair, holds for English literature, or the natural sciences, but that’s another story). The damage done by so-called analytic philosophy to ethics is indeed a perfect example. We don’t learn much, if anything, from increasingly convoluted versions of the trolley dilemma, as “fun” as those riddles can be for someone who thinks such things qualify as fun.

Scruton then rejects Locke’s contention that philosophy should be a “handmaiden to the sciences”:

“Philosophy is, and ought especially to be, a handmaiden to the humanities. It should use its best endeavours to show why the attempts to rewrite religion, politics, musicology, architecture, literary criticism and art history as branches of evolutionary psychology (or still worse, branches of applied neuroscience) are destined to fail.”

So Scruton sees it as a major task of contemporary philosophy to counter scientism, the ideological attitude that declares (on no scientific grounds) that only scientific questions are worth considering, and that only the methods of science (often conveniently and arbitrarily expanded to encompass all ways of human reasoning) are valid sources of knowledge and understanding. Together with my friend and colleague Maarten Boudry I have put together a collection of essays — due on December 26 from the University of Chicago Press — on the challenges posed by scientism, and readers of this blog know why I’m very sympathetic to Scruton’s perspective (not everyone who contributed to our volume is, by the way — it’s a discussion, not a monologue).

He goes on to explain that the reason evolutionary psychology’s attempt to “reduce” the humanities fails is that science is in the business of (and is very good at) providing answers couched in a third-person perspective, focused on the causality of observable phenomena. But the world of the humanities is what Wilfrid Sellars (remember him?) called “the space of reasons,” and reasons (or prescriptive statements) just don’t show up in an fMRI scan.

Let it be clear that Scruton is not anti-science. He explains that this failure of science is not the result of the existence of some other, magical, realm of existence. It is simply that science isn’t in the business of doing what the humanities do. It is one tool among many at our disposal to understand the world — not just the physical and biological world, but also the world of human relations and meaning. It shouldn’t be necessary, but I hasten to add that Scruton seems to be perfectly aware that human beings are also biological beings made of physical stuff. He is not claiming that there is no place for science in studying humans and their societies. He is just reiterating the famous, and very useful, distinction that Sellars himself made between the scientific and manifest images of the world.

Scruton ends his first round by bringing up David Hume and his idea that the human mind has a capacity to “spread itself upon objects.” While this capacity is, obviously, the result of biological evolution and is made possible by our neural apparatus, biology and neuroscience tell us comparatively little of value about what happens when we engage in such Humean activity. As Scruton puts it:

“The case is no different from the case of aspects, like the face in the picture, which is there for us in the pigments, but not really there, as the pigments are.”

Artwork by Sarah Sze in the 2nd Avenue subway, New York

Let’s now turn to Williamson’s initial response. He doesn’t start too well, as he deploys a rather cheap rhetorical trick, accusing Scruton of thinking that history is not part of the empirical world, which he smugly says “may come as news to historians.” Williamson then immediately retreats from this over-the-top criticism by acknowledging that Scruton has a particular — Sellars-informed — meaning of the term “empirical world” in mind, which Williamson apparently willfully misunderstood.

But the next paragraph doesn’t improve things because, says Williamson, Scruton’s position implies that mathematics is not a science either, which, in Williamson’s opinion, it certainly is:

“Before proclaiming limits to science, perhaps one should get clearer on what it is. Mathematics, though a science, is not a natural science like physics, chemistry and biology. It supports its results by deductive proofs rather than experiments, but is at least as rigorous, systematic and reliable a search for knowledge.”

That’s exactly right: we should be a bit clearer on what science is. But Williamson’s approach isn’t very helpful. To begin with, there is a good argument to be made that mathematics is not a science, although it is, of course, very useful to the sciences. Williamson himself acknowledges that math is different from the natural sciences, largely using different tools, and — I will add — producing results that are not dependent on empirical observation and experiment (setting aside so-called “experimental mathematics,” which is not experimental at all, but based on brute force computer simulations).

Indeed, my detailed analysis of the nature of progress in mathematics (here, here, here, and here) suggests that it works in a fashion much more similar to logic than to science, and not even Williamson has gone so far as to suggest that logic is a science in any meaningful sense of the term. So, if by “science” we mean the natural sciences (such as physics, chemistry, biology, geology) as they have historically and socioculturally developed from the 17th century on — and I don’t see any reason why one would want to mean anything else by that word — then Williamson is way off base in his criticism of Scruton.

In fact, Williamson goes on and on talking right past Scruton, attempting to convince him that historical research is based on empirical evidence, something that I’m pretty willing to bet Scruton knows very well. Let me try to explain where exactly Williamson misses the point by way of an example that took place a couple of years ago, when I was teaching a class on epistemology across the curriculum. We were exploring claims to knowledge and understanding made by varied disciplines, from the natural sciences to the humanities, including, of course, history. I had a number of guest lecturers from different departments, and one of my colleagues delivered a particularly clear explanation of what, I think, Scruton is trying to get at. My colleague did not use Sellars’ distinction between the scientific and manifest images, but he may as well have done that.

My colleague works in the social sciences, and specifically on Colonialism. He told our students that of course he uses some of the tools of the natural sciences, from the collection of systematic observations to statistical analyses. But, he also immediately added, the picture emerging from those methods alone would be woefully incomplete. For instance, he also studies books — including works of fiction — and other testimonials written by people who have experienced Colonialism firsthand, on either side of it. These human artifacts are qualitative in nature, not really amenable to statistical and systematic analyses. But they provide exactly what Sellars was talking about as far as the human sphere is concerned: reasons. Not in the sense of “good” or objective reasons, necessarily, but in the sense of a glimpse into the human condition, into why people do things, or how they tell themselves why they do things. My colleague concluded that research areas like his are, as a result, at the borderlands between the sciences and the humanities. They certainly benefit from deploying the methods of science, but they also have to use those of the humanities, on pain of missing large chunks of the picture. He may as well have been talking about history in the sense clearly intended by Scruton and so distorted by Williamson.

If Williamson’s definition of science is a “rigorous, systematic and reliable search for knowledge,” then almost anything human beings do qualifies. History does, and so do all the other humanities, including literary criticism and art history. Philosophy qualifies too. And that’s the problem: so many activities fit the bill that the very term “science” begins to lose meaning. Now why would anyone want that, unless he is trying to define everything else out of existence by a single, well-placed sleight of hand?

Williamson becomes even more deaf to Scruton’s arguments when he brings up, of all things, semantics and logical empiricism (the American offshoot of logical positivism):

“Again, even if Scruton is right that perspectival words like ‘here,’ ‘now’ and ‘I’ do not belong in the language of scientific theorizing, the rigorous scientific investigation of their meaning was led by philosophers such as Hans Reichenbach and David Kaplan [the latter is the Hans Reichenbach Professor of “scientific” philosophy at UCLA]. They showed how to theorize points of view in semantics.”

But, again, Scruton knows and understands this very well (as is clear also from his response after the first round). It is very telling that Williamson should bring up a philosophical approach whose zenith passed almost a century ago, and whose major failure was precisely the attempt to do philosophy as if it were a science.

So what, in the end, is the work of a philosopher? I think philosophy is a strange discipline, by its (historical) nature at the interface of pretty much every other human endeavor. The classical core sub-disciplines of philosophy tell much of the story: metaphysics, epistemology, ethics, aesthetics, and logic. And the remainder of the story is told by the onset, beginning in the early 20th century, of a panoply of “philosophies of,” i.e., (critical, in the positive sense) philosophical analysis and commentary of the very same disciplines that used to be part of philosophy and eventually spun out into self-sufficiency: physics (16th-17th centuries), biology (19th century), psychology (19th century), linguistics (20th century), and so forth.

Philosophy can be, and is, done badly, with little understanding of other disciplines, or while ignoring those disciplines’ contributions, or by adopting an arrogant posture that is both unjustified and counterproductive. But the same can be said of a lot of other human endeavors, including first and foremost science itself. (What’s that, you say? You’ve never encountered an arrogant scientist who blathered incessantly about things he does not understand? Well, lucky you.)

But when it is done well, philosophy is nobody’s handmaiden, pace both Locke and Scruton. She is the connective tissue that holds together the sciences and the humanities, reminding the former of their own limits and the latter of just how much they can benefit from science. It is, to use again Wilfrid Sellars’ felicitous turn of phrase, the bridge between the scientific and the manifest images of the world. That’s an important job, well worth pursuing seriously and humbly.

Can we compare different cultural forms of life?

“Alternative and rival conceptions of rationality are at home in different conceptual schemes.” –Alasdair MacIntyre

I’ve been reading and commenting on a book that has little to do with the range of subject matters usually covered here at Footnotes to Plato: C. Kavin Rowe’s One True Life: The Stoics and Early Christians as Rival Traditions (if you are interested in my ongoing commentary over at How to Be a Stoic, check part I, part II, and part III; there will be one more, forthcoming soon). The reason I’m bringing this up here is Rowe’s chapter 8, entitled “Can we compare?” His goal is to eventually show that Stoicism and Christianity are fundamentally incompatible ways of life, with distinct — and incommensurable — internal logics. I don’t think so, but that’s another story. What’s interesting here is that Rowe deploys the influential philosophy of Alasdair MacIntyre to lay the grounds for his conclusion, and MacIntyre’s philosophy is very much relevant to ongoing discussions about, say, science vs pseudoscience, or atheism vs religion, and a number of other dichotomous positions that we often approach with the assumption that we can meaningfully compare them and decide which is more rational or rationally defensible.


The never ending discussion: what’s philosophy good for?

Seven years ago I officially began my career as a philosopher, being appointed as Chair of the Department of Philosophy at CUNY’s Lehman College. One of my first duties was to completely restructure the Department’s web site, which looked awful and was hopelessly out of date. So I spent my first summer on the job (well, technically, even before starting my job, which officially began at the end of August) putting together the new site. If you visit the web pages of most philosophy departments, including Lehman’s, you will notice two differences between them and those of pretty much any other academic field (including not just the natural sciences, but also the rest of the humanities): first, they will almost certainly feature either a painting of Raphael’s School of Athens or an image of Rodin’s Thinker (like those accompanying this post, top left). Second, they will have a tab labeled something along the lines of “Why Philosophy?” It is on this latter idiosyncrasy that I want to focus here.


The Nature of Philosophy video series

As readers may remember, this past Spring we went through a long series of posts (27, to be exact) that presented in serialized form my book, The Nature of Philosophy: How Philosophy Makes Progress and Why It Matters.

Over the past few months, Dan Kaufman and yours truly have taped a series of video conversations that present the main ideas of the book to a broader public, and the series is now complete and available for viewing or downloading at my YouTube channel (as well as on the Sofia channel at MeaningofLife.tv).
