
Michael Shermer on utilitarianism, deontology, and “natural rights”

You may have noticed that I don’t opine on quantum mechanics. Or jazz. The reason for this is that — although I’m very interested in both topics — I just don’t know enough about them. Not enough to be able to offer an informed opinion, at any rate. So I sit back, read what other, more knowledgeable people have to say about quantum mechanics and jazz, form my own second-hand opinion, and try to avoid embarrassing myself by pontificating in public.

Apparently, my friend Michael Shermer does not follow the same philosophy. At least, not when it comes to the field of moral philosophy. He has recently published a column in Scientific American entitled “Does the philosophy of ‘the greatest good for the greatest number’ have any merit?” which starts out simply enough (simplistic, really), and ends in a crescendo of nonsense. Let’s take a look.

After asking whether you would politically oppress a people for a limited time, if it increased the overall well-being of the citizenry, Michael explains that that’s utilitarianism, the philosophy started by Jeremy Bentham back at the beginning of the 19th century, often summarized with the slogan “the greatest good for the greatest number.” (Bentham, incidentally, is currently visiting New York; go see him if you have a chance.)

Well, that is one of many versions of utilitarianism, and it was immediately abandoned by none other than John Stuart Mill, Bentham’s student, who actually wrote the classic 1861 text entitled Utilitarianism. Indeed, before that Mill wrote two important articles, “Remarks on Bentham’s Philosophy” (1833) and “Bentham” (1838), in which he criticized his mentor and began to develop modern utilitarian thought. One of the major distinctions one can draw within utilitarianism still today is that between so-called act utilitarianism (where we must evaluate the morality of each act, a la Bentham) and rule utilitarianism (where we conform to rules that have been shown, overall, to bring about the greatest amount of good, a la Mill). More generally, utilitarianism has a long history, and nowadays it is actually best thought of as a particular type of consequentialist philosophy. I could be wrong, but Shermer seems unaware of these distinctions.

Michael then tells his readers that “modern utilitarianism” is best instantiated by the (in)famous trolley problems. This is just flat out wrong. The original dilemma was introduced by Philippa Foot back in 1967. Here is the first version:

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.”

Contra Shermer, the trolley dilemma was proposed, and it continues to be used (not only in philosophy, but in social psychology), in order to probe people’s moral intuitions, not to “instantiate” utilitarianism. For instance, a deontologist would refuse to frame an innocent or pull the lever, on the basis of the Kantian notion that one ought never to treat others solely as means to an end. The fact that many people switch from utilitarian to deontological responses when considering different versions of the dilemma tells us that they tend to react emotionally, which leads them to deploy an incoherent moral philosophy.

Michael then says that “the problem” with utilitarianism is that there are situations in which, by following its precepts, one would end up endorsing psychopathic behaviors, as in the famous case (which I pose to my intro philosophy students) of the surgeon who has five patients in the emergency room, each with a failing vital organ, and decides to pick up a stranger from the street, cut him up into pieces, and distribute his organs around to save the other five. Too bad that this sort of thing is precisely why Mill (remember, already in 1833) introduced rule utilitarianism, which stops the psychopathic doctor in his tracks. Again, no mention of this in the SciAm article.

Shermer briefly mentions a recent paper in Psychological Review (which I have not read, so I will not comment on it), mostly to tell us that he took the Oxford Utilitarianism Scale test and scored 17/63. He ain’t no utilitarian, according to the test. Neither am I, apparently (phew!), since I scored 21/63. You can take the test yourself here.

After a brief mention of Kantian deontology, the article really veers from simplistic to nonsensical: “Historically the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures and accidents — better to incinerate the few to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews: “Aryan” Germans; Tutsi:Hutu), the justification of genocidal murderers.”

What?? No, absolutely not. Setting aside the obvious observation that utilitarianism (the philosophy) did not exist until way after the Middle Ages, no, witch hunts were the result of fear, ignorance and superstition, not of a Bentham- or Mill-style calculus. And this is the first time I have heard that Hitler or the Hutu of Rwanda had articulated a utilitarian rationale for their ghastly actions. Again, they were driven by fear, ignorance, superstition, and — in the case of Nazi Germany — a cynical calculation that power could be achieved and maintained in a nation marred by economic chaos by means of the time-tested stratagem of scapegoating. (The latter is also what the perpetrators of witch hunts and of the Rwandan genocide did: prey on the weak; it’s easy to do and easy to get away with.)

But Shermer doesn’t let Kant off the hook either. He brings up the famous example (which, again, I confront my intro philosophy students with) of lying: if it is the case — as Kant says in one formulation of the categorical imperative — that we should only accept as moral those principles that we would be willing to make into universal rules, wouldn’t that mean that I should never lie, and should therefore give up the Jew I’m hiding in the basement if a Nazi officer (it’s always the Nazi!) politely asks me? Or, as Michael updates the scenario: “if you live in Syria and a band of ISIS thugs knocks on your door demanding to know if you are hiding any homosexuals they can murder in the mistaken belief that this fulfills the word of God — and you are — few moralists would object to your lying to save them.”

Notice the pejorative term “moralists,” instead of moral philosophers. Anyway, you would think Kantian philosophers would have something to say about this. Oh, right, they do! A good example is a paper by Helga Varden in the Journal of Social Philosophy, entirely devoted to Kant, lying and the Nazi officer. I do not have the time here to do justice to her analysis, but a couple of points need to be brought to bear: first, in that case Kant was writing explicitly within the context of a discussion of the doctrine of rightful interactions (the original, short paper in which he tackles the case is entitled “On a supposed right to lie from philanthropy”). As Varden says, within that context, “we can make sense of why lying to the murderer, although a wrong, is not to wrong the murderer, why we become responsible for the bad consequences of the lie, and finally why lying is to do wrong in general.”

More to the point, Kant was talking about a murderer (he, obviously, couldn’t have contemplated the Nazi), but when one changes the scenario to a Nazi officer — or an ISIS terrorist — it turns out that the problem dissolves itself, because “the only time doing wrong in general by lying is legally punishable [within Kant’s framework] is when we lie to or as a representative of the public authority. The Nazis, however, did not represent a public authority on Kant’s view and consequently there is no duty to abstain from lying to Nazis.” Or to ISIS. Again, I didn’t notice any of these qualifications in Shermer’s article.

Michael, predictably, makes no mention at all of the third great framework in moral philosophy, virtue ethics, which would actually do a lot of the work he wants to do, against both utilitarianism and deontology — in their philosophically sophisticated versions, not the caricature we get in the SciAm article.

But never mind that. The true nonsense comes right at the end, when Shermer puts forth his preferred view, the one that, in his mind, has allowed for true moral progress throughout the ages: “both utilitarianism and Kantian ethics are trumped by natural-rights theory, which dictates that you are born with the right to life and liberty of both body and mind, rights that must not be violated, not even to serve the greater good or to fulfill a universal rule.”

Setting aside that you get precisely the same result from Mill’s rule utilitarianism, not to mention that natural rights theory has no argument against Kant, “natural rights” are what Jeremy Bentham famously, and correctly, referred to as “nonsense on stilts.” There is no such thing as a natural right, and we are, therefore, not born with any (contra the mindless libertarian mantra that Shermer is repeating). Michael is confusing human desires and instincts (some of which are actually culturally dependent: it is empirically not the case that everyone on earth desires liberty of mind, for instance) with rights. But rights are, obviously, a human creation. Which accounts for why, as Shermer himself notes, they have to be written down in things like the Bill of Rights, and protected by the force of state-enabled law. It’s also why people have come up with different lists of rights at different times. The Universal Declaration of Human Rights, for instance, provides a much more extensive list than the one arrived at by James Madison and co. back in 1789.

To argue that rights are “natural” is to commit the most elementary logical fallacy in ethics, that of the appeal to nature. And even if one were to overlook that little problem, there simply is no consistent empirical evidence for most such alleged rights (i.e., desires, instincts) in Homo sapiens or its recent ancestors. Yeah, we all prefer to be alive rather than dead, other things being equal, but natural selection does not care about mere survival; it only favors survival that leads to reproduction. And it favors it; it doesn’t guarantee it. (So you can’t derive a natural right to sex. Too bad!)

This is the sort of mess one gets when Michael talks about moral philosophy. Or when I talk about quantum mechanics. Or jazz. Please, let us all stick to what we know. It’s hard enough as it is.


Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. de Kenessey blames Socrates for the moralistic attitude, while he should blame certain brands of Christianity instead. Here I will not make the positive case against Christian moralism (which is well known among certain politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but in general the phenomenon of what the Greeks called akrasia, or weakness of the will, and which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving down to the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = contrary to common opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists such as Paul Samuelson have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information, and they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which, although usually translated as ignorance, actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia, I have found myself significantly more inclined to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and to generally cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from lack of wisdom but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual case of what Socrates is arguing against Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home and watch Black Mirror rather than go to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in that judgment, just as Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine, his judgment was not yet impaired by the sickness. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of the temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

de Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt, the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson, they are the desires that align with our judgments of what is valuable; for Michael Bratman, they are the desires that cohere with our stable life plans; and for Susan Wolf, they are the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (so did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and Socratic views pertains to the first case, not the second. And there one has a choice of blaming people for their evildoing, or pitying them for their lack of wisdom. I find the latter course of action far preferable.

Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a Professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself; there is no point in my repeating easily found material here. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (but not only: governments are a little better to the extent that they really are accountable to their people by means of a not too dysfunctional democracy. The US Government does not, at this point, qualify, for instance). But I also recognize that these and other tech companies have made possible a wonderful number of things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is by legislation, along the model of the famous breakup of Bell back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near monopolies, and benefiting from one’s work does not equate to creating a very small class of ultra-billionaires who exploit their workers (like Amazon certainly does), including in other countries (like Apple equally certainly does).

But while we are waiting for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), that doesn’t mean we have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft, back in 2004, over the poor quality of their products. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high quality e-book readers, and to begin to direct readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.

Which brings me to Google. In a sense, I actually increased my use of their products, since now my office suite is Google’s, replacing Apple’s iWork. But it is a temporary transition, dictated by the limited time available to search for suitable long-term alternatives, and by the need not to disrupt several ongoing collaborative works. And at any rate, I hit Google where it hurts, their web search engine, which produces their advertising revenue and is of course highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, again unless I was willing to forgo keeping in touch with (real) friends and family, and also to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was to get rid of their obnoxious Messenger app, as well as “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I am now using Facebook with a very small and tightly guarded circle of actual friends and family, as well as for the above mentioned outreach, nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye opening. The other thing that was eye opening relates to the second point: the comparative lack of support, and occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and began immediately to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious during the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter of technology, especially electronic technology. No, it isn’t really that expensive (as I’m sure everyone knows, alternatives to Apple phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who have similar objections and have done much of the legwork for me; and at any rate, it’s an ethical decision, and it would be a bit too easy if it didn’t cost me money or time or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against disinvesting from Apple. It’s the social activism equivalent of what in biology is called frequency dependent selection: you go for the rare phenotype, which eventually becomes the dominant one, at which point you switch to the new rare one, and so on.

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success, as I said above my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then make some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I were talking about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former, but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next to last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well-researched ammunition, as well as the comfort of knowing that there are others who think and act the same). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong then you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well known cognitive fallacy.

Be that as it may. My conscience feels slightly better, in the same way and measure in which similar small decisions (to not eat meat, to try to shop locally, to voluntarily pay carbon footprint offsets when I travel by air, to change bank because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented, it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail (which does not track your search history), or any of a number of alternative third party clients

Apple’s office productivity (iWork) > Google’s office suite, as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history), Google Chrome not acceptable

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of simple sms, or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google Maps > this is a tough one (particularly since Apple Maps is neither acceptable for this project, nor good, really), though for instance Citymapper works very well for major cities

The non-problem of moral luck

The Good Place - trolley dilemma

The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will make an argument that one can learn more about moral philosophy from watching a single episode of the show than by listening to a technical talk in that same field while attending the APA’s own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of; you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more I become skeptical of the value of much modern moral philosophy, with its indulging in more and more convoluted hypothetical situations that seem designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turning-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell) and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sitcom-imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of indulgence in irrelevant logic chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end there; there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details, because they are irrelevant to my argument here. Which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP show. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making brain network, then fine, he is not to be blamed for his inaction. That’s no skin off Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping scores?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t be carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now classic paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame, we have lost track of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)

Five big philosophical questions: my modest take


An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim; I am simply offering my informed opinion about them, in my dual role as a philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not yet given us an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon revolves around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them; we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid at checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence, it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and anyway a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functions of our physiology and metabolism. Death is a natural phenomenon; everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly more a reflection of contemporary debates about justice and inequality than a measure of the fundamentality of the question from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with mind, if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

Why neuroscience is largely irrelevant to ethics

Benjamin Libet, neuroscientist

A few days ago, over at my other blog, I published an article that I touted on my social media as “the last piece on free will you will ever need to read.” That was a slight exaggeration, but only slight. The specific point of the post was to explain in some detail the ancient Stoic take on human decision making, what I and modern psychologists prefer to call volition rather than free will (given how loaded with metaphysical nonsense the latter term is). I also wanted to see how the Stoic position squares with the findings of modern science. As it turns out, that ancient view is highly compatible with what contemporary cognitive science says about the matter, but this is neither a miraculous coincidence nor indication that somehow the Stoics managed to anticipate scientific discoveries that would be made more than two millennia later. (Which would be just as preposterous as to maintain, as some do, that the pre-Socratic atomists “anticipated” modern physics. They didn’t, as even a superficial reading of the pre-Socratics, and a passing acquaintance with modern physics, should amply demonstrate.)

Rather, the reasons we still find so much of value in Stoic (or Aristotelian, or several other) ancient moral philosophy are twofold: first, some of the ancients were keen observers of human psychology; second, moral discourse has little to do with whatever mechanisms make it possible for human brains to think about morality (so long as some mechanisms that allow us to think do exist, of course). Both notions need to be unpacked a bit, which is what I intend to do in this essay.

What was so special about Aristotle, or Epicurus, or Epictetus? In a sense, not much. They were sharp thinkers who paid attention to the empirical side of what they were thinking about. We tend to forget that many others at the time and since have written about the same topics, and yet they are completely forgotten, or they appear at best as footnotes in philosophy books. (Have you ever heard of Aristippus of Cyrene? Not likely, and he was one of the major figures among the minor Greek philosophers…)

The reasons we read some ancient philosophers are, so to speak, evolutionary. Specifically, the cultural analogues of two basic processes that steer biological evolution: drift and selection. Drift is about statistical sampling: some books survive and others don’t because of luck. There probably never were many copies — by modern standards — of the works of Chrysippus, one of the most noted Hellenistic philosophers, and unfortunately not a single one has come down to us. Selection means that the authors who are highly esteemed not just by their contemporaries, but further and further down in history, are the ones whose works and ideas tend to survive. In the case of Chrysippus, we know a good amount about what he thought because so many later commentators copied several of his passages, in order to praise him or criticize him. To put it another way, we still read Plato and Aristotle because of what biologist Jacques Monod once called a combination of chance and necessity.
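To make the analogy concrete, here is a deliberately crude toy sketch of my own (the figures for circulating copies and for later esteem are invented purely for illustration): each generation, every copy of a text survives or perishes partly by luck, which is the drift part, with the odds tilted in favor of authors whom later readers hold in high esteem, which is the selection part.

```python
import random

random.seed(0)

# (author: number of circulating copies, degree of later esteem from 0 to 1)
# These figures are made up for the sake of the illustration.
authors = {"Plato": (200, 0.9), "Aristotle": (180, 0.9),
           "Chrysippus": (60, 0.7), "Aristippus": (40, 0.2)}

def survives(copies, esteem, generations=20):
    """Each generation, every copy survives by luck (drift), with better odds
    for esteemed authors, whose works are more likely to be re-copied (selection)."""
    for _ in range(generations):
        copies = sum(1 for _ in range(copies)
                     if random.random() < 0.5 + 0.4 * esteem)
        if copies == 0:
            return False
    return True

for name, (copies, esteem) in authors.items():
    print(name, "— texts survive" if survives(copies, esteem) else "— texts lost")
```

Run it a few times with different seeds and you tend to get the pattern described above: esteem stacks the odds, but sheer luck still decides individual cases.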

But we don’t read all of Plato and Aristotle nowadays, unless we are historians of philosophy, or of science. There isn’t much point in consulting Aristotle’s Physics if you are a physicist, because the field has moved very far from the Aristotelian positions, beginning with Galileo and arriving at Einstein and Stephen Hawking. By contrast, philosophers still find a lot of value in the Nicomachean Ethics. Ill-informed people (who shall here go unmentioned) are under the impression that this is because philosophy, unlike physics, doesn’t make progress (usually, these people just happen to be physicists). But that’s sheer ignorance, which ought (morally) to be embarrassing. Philosophy does make progress (see here), but it is a very different kind of endeavor from physics, so any direct comparison is a category mistake.

No, the reason Aristotle, the Stoics, and so forth are relevant today (other than the above-mentioned one, that they were la crème de la crème of their period) is that modern science has little of relevance to say about certain branches of philosophy, and in particular ethics. (Yes, I know, certain individuals are making a cottage industry of arguing the opposite. But they too shall go mercifully unmentioned in this post. I’ve dealt with them ad nauseam in the past.)

The reason this is the case has been explained by philosophers Ludwig Wittgenstein and Wilfrid Sellars, and is exemplified by the work of neuroscientist Benjamin Libet. Let me explain.

First, Wittgenstein. In Tractatus 4.111 he famously wrote that “philosophy is not one of the natural sciences,” adding at 4.112 that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations we find:

“[Philosophy’s] investigation is a grammatical one. Such an investigation sheds light on our problem by clearing misunderstandings away. Misunderstandings concerning the use of words, caused, among other things, by certain analogies between the forms of expression in different regions of language.” (90)

While I think that Wittgenstein had too narrow a view of what philosophy does, there is quite a bit of truth in the above. The job of philosophers isn’t to discover new things about the world (we’ve got science for that), but rather to clarify issues by way of critical analysis, and to see how things that appear disparate “hang together,” so to speak. That is, for instance, why metaphysics isn’t being replaced by physics: it is transforming itself into a discipline informed by physics (and biology, and other sciences) whose objective is to make sense of the picture of the world that emerges from the discoveries of the individual special sciences, something that no single science does or is concerned with. (See, for instance, Ladyman and Ross’ Every Thing Must Go, a sort of manifesto for a naturalistic metaphysics.)

Wittgenstein becomes even more relevant to the present discussion when we consider his concept of “language games” as presented in the Investigations:

“The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words ‘block,’ ‘pillar,’ ‘slab,’ ‘beam.’ A calls them out; B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.” (2)

Ethics is another language game, or, rather, a multiplicity of language games, since there are a number of ways to conceive, talk about, and actually do, ethics. Within the human community, we talk about “good,” “bad,” “moral,” “immoral,” “ought,” and so forth, and any competent language user understands what others mean by those words. Moreover, just like the words of the builder’s language actually help build things, so the words of ethical language actually help regulate our actions within a given community. The fact that science comes in and, say, tells us that “bricks” are really mostly empty space is interesting from within the science language game, but it is utterly useless, and indeed a distraction, to the builder. Analogously, that a neuroscientist may be able to tell us which parts of the human brain are involved in the production of ethical judgments, and by which cellular means, is interesting within the language game of neuroscience, but it is a useless distraction if we are concerned with improving social justice, or becoming a better person.

Which brings me to what I have termed the most important philosopher you’ve likely never heard of: Wilfrid Sellars. My friend Dan Kaufman and I did an extensive video conversation on Sellars, which I think is worth checking out. One of Sellars’ landmark ideas was the distinction between what he called the manifest and the scientific images of the world. The manifest image is the way most people understand and navigate the world. The Sun “rises,” genocide is morally repellent. That sort of thing. The scientific image, by contrast, is the way science looks at the world: the Sun does not, actually, rise; it is the Earth that rotates on its axis. As for genocide? Ah, therein lies the rub. I’m sure there are scientific explanations for why genocide is such a recurring feature of human history, from the biology and neuroscience of violence to those of inter-group relations. While such a scientific understanding of genocide may be useful, it does not give us the complete picture. Why not?

Because, according to Sellars, the manifest, but not the scientific, image deals with things like reasons and values. This is not a call to reject science. On the contrary. Sellars was quite clear that whenever the scientific and the manifest images of the world are in conflict (as in the “the Sun rises” vs “the Earth rotates” case), then the sensible thing is for us to yield to science. But science simply isn’t in the business of doing a number of other things for which we have developed different tools: philosophy, literature, history, and so forth. These tools are complementary with, not opposed to, scientific ones. Ideally, says Sellars, we want to develop a conceptual stereoscopic vision, whereby we are capable of integrating the manifest and scientific images. Indeed, according to Sellars — and I wholeheartedly agree — developing and constantly updating such a vision is a major task of philosophy, and our discipline is uniquely positioned to carry the task out because of both its methods (empirically-informed critical discourse) and its scope (very, very broad).

In a sense, what emerges from Wittgenstein’s, and even more so from Sellars’, thought is that there are a number of things about which we can talk at different levels of analysis, and which level(s) make the most sense depends on what it is that we wish to accomplish. While in theory a full integration of all levels may be possible, in practice it is often not desirable, because it doesn’t help with the particular language game we happen to be playing.

Let me then come back to “free will” (or volition), and use my discussion of Stoic philosophy as it compares to the famous experiments by Benjamin Libet to present a specific example of what I have outlined above, attempting to convince you of why I think science is largely irrelevant to moral discourse.

The Stoics thought that we have a faculty of judgment, which they called the hêgemonikon. It was a major goal of Stoic training to improve the way we use it, i.e., to arrive at better and better judgments about whatever life throws at us. In the post at my other blog I suggest that, roughly speaking, the hêgemonikon corresponds to the frontal lobes of the human brain, which are far more developed than in most other mammals, and are known to be associated, in fact, with our capacity for judgment, and in particular with our ability to “veto,” so to speak, certain actions that might otherwise come naturally to us (as in: “There is a strange noise in my house in the middle of the night! Someone is about to kill me!! I need to run the hell out of here!!! … Oh, wait, it’s the cat. Back to sleep.”).

The Stoics themselves were spectacularly wrong about the likely location of the hêgemonikon: they thought it resided in the heart. But pretty much everything else they said about its functioning and how we can improve it was right on the money, as shown by the fact that 23 centuries later Stoic “psychology” still informs a number of evidence-based psychotherapies, such as rational emotive behavior therapy and cognitive behavioral therapy.

How is this possible? Because most of what the Stoics thought about the hêgemonikon was part of the manifest image, and was useful then as it is now for the simple reason that people still deal with the same basic issues: unhealthy emotions like anger and fear, and the search for better ways to relate to others and improve the human polis. What the Stoics got wrong, not at all surprisingly, is the bit that belongs to the scientific image: as it turns out, our faculty of judgment depends on a particular part of the brain, not the heart. Crucially, though, this has had no effect whatsoever on Stoic philosophy or its usefulness. A modern Stoic simply updates that bit of information, thanks the scientist, and goes back to her practice.

Nowadays, whenever the topic of human volition comes up someone is bound to cite the famous experiments carried out by Benjamin Libet, beginning in 1983. Briefly, he asked subjects to follow the movements of a dot on the screen of an oscilloscope. The dot moved like the hands of a clock, but faster. Libet told his subjects to move a finger at a moment of their choice during the experiment, noting the position of the dot when they became aware of their decision to act. The experiment showed that the decision to move the finger entered conscious awareness about 200 milliseconds before the actual movement. But, stunningly, there was a rise in the so-called “readiness potential,” which is thought to be associated with the preparation for action, about 550 milliseconds before movement. So the subjects appeared to get ready to move the finger a full 350 milliseconds before they became conscious of their decision to do so. (Indeed, in later experiments, the readiness potential has been shown to build up even as long as 1.5 seconds before movement.)
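For readers who find the timing easier to see laid out than described, here is a minimal sketch of the arithmetic behind those figures (the variable names are mine; the numbers are simply the ones cited above):

```python
# Times are in milliseconds before the actual finger movement.
READINESS_POTENTIAL_ONSET_MS = 550  # the "readiness potential" begins to rise
CONSCIOUS_DECISION_MS = 200         # the subject becomes aware of the decision
MOVEMENT_MS = 0                     # the finger actually moves

# The apparent unconscious "head start": preparation begins well before awareness.
head_start = READINESS_POTENTIAL_ONSET_MS - CONSCIOUS_DECISION_MS
print(f"Readiness potential precedes conscious awareness by {head_start} ms")  # 350

# Later experiments report the build-up starting even earlier.
LATER_ONSET_MS = 1500
print(f"Later estimates push the onset to {LATER_ONSET_MS} ms before movement")
```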

Taken at face value, Libet’s results seem to show that we decide our actions unconsciously, and that what we call consciousness is simply a (late) awareness of a decision that has already been made. There are several well known criticisms of such a conclusion, beginning with the obvious one: the experimental conditions have precious little to do with the recursive, complex behavior that we normally label “conscious decision making,” which is understood as a continuous feedback loop between what Daniel Kahneman calls System 1 (fast, subconscious) and System 2 (slow, deliberate) processing. Moreover, recent research has both amply confirmed, and yet significantly re-interpreted, Libet’s original findings.

But a good reason to think that Libet’s experiments do not mean what so many enthusiasts of the “free will is an illusion” bandwagon seem to think they mean is Libet’s own commentary:

“The finding that the volitional process is initiated unconsciously leads to the question: is there then any role for conscious will in the performance of a voluntary act? The conscious will does appear 150 msec before the motor act, even though it follows the onset of the cerebral action by at least 400 msec. That allows it, potentially, to affect or control the final outcome of the volitional process. An interval of 50 msec before a muscle is activated is the time for the primary motor cortex to activate the spinal motor nerve cells, and through them, the muscles. During this final 50 msec, the act goes to completion with no possibility of its being stopped by the rest of the cerebral cortex. The conscious will could decide to allow the volitional process to go to completion, resulting in the motor act itself. Or, the conscious will could block or ‘veto’ the process, so that no motor act occurs.” (B. Libet, Mind Time: The Temporal Factor in Consciousness, 2004, p. 137)

[Once more, to preempt distracting discussions: I do not think we should talk about “free will,” which is a hopelessly metaphysically confused concept. We are talking about what psychologists themselves call volition, i.e., the ability of human beings to make complex decisions informed by conscious thought. Hopefully no one will deny that we do have such ability.]

Interestingly, studies have found very good experimental evidence for the veto power Libet is talking about. But that is “interesting” from within the language game of neuroscience. It makes no difference at all in terms of the language game in which the Stoics — and most of us — are engaged, that of improving ourselves as individuals and of making society a better place for everyone to live.

That is why, as a scientist, I will keep following with interest the undoubtedly fascinating future developments of cognitive science and neuroscience. But it is also why, as a philosopher and human being, I’m not very concerned with how those findings will impact my day to day life in the realm of ethics. As the Stoic philosopher Epictetus aptly put it:

“You are not flesh or hair but volition; if you keep that beautiful, then you will be beautiful.” (Discourses III.1.40)

Know thyself: still excellent advice, after all these years

“gnothi seauton,” know thyself

I have been to Delphi twice already, and I plan on going back again. It is a truly magical place. No, I don’t believe in “magic,” I’m talking about real magic, the sense of awe that strikes you when you arrive there. Despite the tourist shops, the bed and breakfasts, and the restaurants, you cannot avoid being struck by the sheer beauty of the place: a green mountainous peak overlooking a deep valley, from where you can see the Aegean Sea in the distance. No wonder the ancients thought it a place privileged by the gods, as testified today by the beautiful ruins of the temples of Apollo and Athena.

It is in Delphi, of course, that the most famous Oracle of the ancient world resided. Still today you can see the omphalos (i.e., navel), the stone that allowed direct communication between the priestess and the gods. Modern science has suggested that the location is characterized by significant underground quantities of ethylene or methane, which may cause hallucinations in people exposed to them. So far, however, this is speculation, and not really germane to the psychological power of the Oracle. The advice given by the priestess of Apollo, regardless of its natural trigger, was often sound, if not necessarily amenable to an immediate interpretation.

One of my favorite stories is that of Themistocles, the Athenian general who was told that Athens would successfully defend itself from the powerful army of the Persian king Xerxes by building a wall of wood (“Though all else shall be taken, Zeus, the all seeing, grants that the wooden wall only shall not fail”). The notion, of course, is ridiculous on its face. Surely the mighty Persians would not be stopped in their tracks by mere wood. But interpret the advice more creatively, as Themistocles did, and you realize that the wood in question was that of the ships forming the formidable Athenian navy, which did, in fact, annihilate the opposing fleet at the battle of Salamis.

Temple of Athena at Delphi (Photo by the Author)

Delphi was also famous for a list of “commandments” that were allegedly assembled from the wisdom of the Seven Sages, a legendary group of philosophers, statesmen, and law-givers from the early history of Greece. Perhaps the most famous of such commandments was “know thyself,” which has since inspired countless philosophers, most famously informing Socrates’ entire career as a gadfly to the good people of Athens (who repaid him for his trouble, as we know, by putting him to death by hemlock).

Now an article published in Aeon magazine by Bence Nanay (a professor of philosophy at the University of Antwerp, Belgium) tells us not only that “know thyself” is “silly” advice, but that it’s actively dangerous. While Nanay has a point, I will argue that it is his own article that is, in fact, dangerous.

Nanay tells us that the Delphic injunction is based on an untenable picture of the self, and of how we make decisions — though I wonder how he knows which theory of mind and psychological agency was endorsed by whoever chiseled the famous phrase on the entrance to the temple of Apollo.

He invites us to consider a simple situation: “You go to the local cafe and order an espresso. Why? Just a momentary whim? Trying something new? Maybe you know that the owner is Italian and she would judge you if you ordered a cappuccino after 11am? Or are you just an espresso kind of person? I suspect that the last of these options best reflects your choices. You do much of what you do because you think it meshes with the kind of person you think you are. You order eggs Benedict because you’re an eggs Benedict kind of person. It’s part of who you are. And this goes for many of our daily choices.”

The notion is that we have somewhat stable ideas about who we are, which is practically useful, since it saves us a lot of time whenever we have to make decisions. Except if you go to Starbucks, because they have far too many choices. Then again, no self respecting Italian would go to Starbucks. Or order a cappuccino after 11am. (See what I did there? I have an image of myself as a self respecting Italian, hence my choices about where to get my coffee and when it is proper to order a cappuccino. Also, no Parmesan cheese on seafood pasta, please.)

But of course, as Nanay reminds his readers, we also change, all the time. On occasion these changes are sudden and dramatic, and therefore very noticeable. Many people feel and act differently after having had a child, for instance. Or having experienced a trauma, such as a diagnosis of cancer. Many changes, though, are subtle and slow, yet cumulative over time. It is this second kind of change that creates the major problem for the Delphic injunction, apparently: “The problem is this: if we change while our self-image remains the same, then there will be a deep abyss between who we are and who we think we are. And this leads to conflict.”

Not only that. We apparently suffer from what psychologists call the “end of history illusion,” the idea that, right now, we are final, finished products. This, and not our selves of five, ten, or twenty years ago, is who we really are, and who we will keep being until our demise. The end of history illusion is, of course, nonsense. We are never finished, as the only constant throughout our life is precisely that things, including ourselves, change. You can see why Nanay is worried.

The problem concerns much more than your choices of morning java: “Maybe you used to genuinely enjoy doing philosophy, but you no longer do. But as being a philosopher is such a stable feature of your self-image, you keep doing it. There is a huge difference between what you like and what you do. What you do is dictated not by what you like, but by what kind of person you think you are.”

Theater and temple of Apollo at Delphi (Photo by the Author)

In an interesting twist, Nanay even manages to blame our addiction to social media on this alleged incongruence between who we are and who we think we are. That incongruence not only wastes a lot of our time and efforts (because, robotically, we keep doing things we no longer enjoy or think important), it also generates a fair degree of cognitive dissonance between reality and our image of reality. And cognitive dissonance, again the psychologists helpfully remind us, is emotionally costly. “Hiding a gaping contradiction between what we like and what we do takes significant mental effort and this leaves little energy to do anything else. And if you have little mental energy left, it is so much more difficult to switch off the TV or to resist spending half an hour looking at Facebook or Instagram.” Now you tell me!

Nanay concludes that “If we take the importance of change in our lives seriously, [following the Oracle] just isn’t an option. You might be able to know what you think of yourself in this moment. But what you think of yourself is very different from who you are and what you actually like. And in a couple of days or weeks, all of this might change anyway.” He then closes with a pseudo-profound piece of poetry from André Gide, who wrote in Autumn Leaves (1950): “A caterpillar who seeks to know himself would never become a butterfly.”

Right. Then again, caterpillars are too stupid to philosophize about themselves, not to mention that they are profoundly ignorant of their own biology. And does anyone really believe that, except (maybe) for traumatic experiences, we can change a lot in mere days or weeks?

I hope it is clear what the central flaw in Nanay’s argument is: he is assuming an essentialist view of the self, the self conceived as the “true,” unchanging part of who we are, which people are supposed to “discover” in order to live authentic lives. I’m sure some Ancient Greeks did hold to a similar notion (Plato comes to mind), though they were usually far too good observers of human psychology to fall into that trap. It is not at all clear whether whoever came up with the Delphic injunction subscribed to such an untenable theory of the self. What is abundantly clear is that “know thyself” is very good advice regardless, indeed even more so if our selves are dynamic bundles of perceptions, sensations, desires, and deliberations, to paraphrase and build on David Hume.

Let’s consider the more serious of Nanay’s examples, that of the philosopher who doesn’t realize that he doesn’t believe in philosophizing anymore. I don’t know whether that example was autobiographical, but I can certainly counter it with an autobiographical anecdote of my own. Ever since I can remember I wanted to be a scientist, a dream that eventually came true when I was appointed assistant professor of botany and evolutionary biology at the University of Tennessee in Knoxville, back in 1995.

I had a reasonably successful career for several years in my chosen field of specialization, gene-environment interactions, rising through the ranks of associate and then full professor with tenure. My self image had been one of a scientist since I was five or six years old, and it had served me well until my late thirties and early forties.

Then a midlife crisis ensued, partly precisely because my reflections about myself began to alert me to some sort of growing gap between my mental image of me and how I was feeling while doing what I was doing. I realized that I was less and less interested in laboratory and field research, and more and more in theoretical and conceptual issues. And the step from the latter to philosophy of science wasn’t very big. Partly because of such conscious reflections (the “know thyself” part), and partly because of serendipitous events, I was able to enroll as a graduate student in philosophy, publish a book and several papers in the field, and eventually switch careers and become a full-time philosopher.

That’s where I am now, though other adjustments have occurred in the meantime, like my increased interest in public philosophy, and my novel interest in Stoicism. These changes, too, were made actionable by the fact that I have a habit of reflecting about my feelings and experiences, trying as much as possible to keep adjusting what I actually do and what I want to do, in a never ending exercise of reflective equilibrium.

The bottom line is that my life, I can confidently assert, has been made better and better by trying to follow the Delphic commandment. I suspect the same is true of other people, who can benefit from a monitoring of the evolving “self,” coupled with the occasional redirection and adjustment of what they do or pursue. Contra Nanay, it is this process of self knowledge that reduces, or even preempts, the cognitive dissonance he refers to. And, apparently, it will also save you a lot of wasted time on Facebook and Instagram.

What is truly dangerous is not to follow the not at all “silly” advice that has served Socrates and so many others since. You may end up misspending a good chunk of your life if you ignore it. And if you have the chance, go to Delphi. You’ll thank me for it.