Category Archives: Social & Political Philosophy

Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a Professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms, including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left-wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself; there is no point in my repeating easily found material here. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (though not only there: governments are a little better to the extent that they really are accountable to their people by means of a not-too-dysfunctional democracy; the US Government, at this point, does not qualify, for instance). But I also recognize that these and other tech companies have made possible a number of wonderful things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very, very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, and Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is by legislation, along the model of the famous breakup of the Bell System back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near monopolies, and benefiting from one’s work does not equate to creating a very small class of ultra-billionaires who exploit their workers (as Amazon certainly does), including in other countries (as Apple equally certainly does).

But while we wait for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), that doesn’t mean we have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft because of the poor quality of their products, back in 2004. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high quality e-book readers, and to begin to direct readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.

Which brings me to Google. In a sense, I actually increased my use of their products, since my office suite is now the Google one, replacing Apple’s iWork. But it is a temporary transition, dictated by the limited time available to search for suitable long-term alternatives, and by the need not to disrupt several ongoing collaborative projects. And at any rate, I hit Google where it hurts: their web search engine, which generates their advertising revenue and is of course highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, unless, again, I was willing to forgo keeping in touch with (real) friends and family, and also to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was to get rid of their obnoxious Messenger app, as well as to “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I am now using Facebook with a very small and tightly guarded circle of actual friends and family, as well as for the above mentioned outreach, nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye-opening. The other thing that was eye-opening relates to the second point: the comparative lack of support, and occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and began immediately to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious during the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter of electronic technology in particular. No, it isn’t really that expensive (as I’m sure everyone knows, the alternatives to Apple phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who have similar objections and have done much of the legwork for me; and at any rate, this is an ethical decision, so it would be a bit too easy if it didn’t cost me any money, time, or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against disinvesting from Apple. It’s the social activism equivalent of what in biology is called frequency-dependent selection: you go for the rare phenotype, which eventually becomes the dominant one, at which point you switch to the new rare one, and so on.

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success, as I said above my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then make some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I were talking about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former, but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next-to-last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well-researched ammunition, as well as the comfort of knowing that there are others who think and act the same way). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong, you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well-known cognitive bias.

Be that as it may. My conscience feels slightly better, in the same way and measure in which similar small decisions (to not eat meat, to try to shop locally, to voluntarily pay carbon footprint offsets when I travel by air, to change banks because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented, it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail (which does not track your search history), or any of a number of alternative third party clients

Apple’s office productivity suite (iWork) > Google’s office suite, as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history), Google Chrome not acceptable

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of simple SMS, or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google Maps navigation > this is a tough one (particularly since Apple Maps is neither acceptable for this project nor, really, any good), though Citymapper, for instance, works very well for major cities

Five big philosophical questions: my modest take

An anonymous poster has recently published a short essay over at the Oxford University Press philosophy blog, entitled “5 great unsolved philosophical questions.” How could I possibly resist answering them, I ask you? Presumptuous, you might say. Well, no, that would be the case if I claimed that my answers are original, or clearly the right ones. I make no such claim; I am simply offering my informed opinion about them, in my dual role as a philosopher and scientist. Of course, I’m also totally right.

Before proceeding, I need to remind readers of my take on the nature of philosophical questions, and therefore of philosophy itself. Here it is, in a nutshell. (For a much longer, and far more substantiated, though of course not necessarily convincing to everyone, answer, see here.)

Philosophy began, in the Western tradition, with the pre-Socratics, and at that time, and for many centuries afterwards, its business was all-encompassing. Pretty much every meaningful question to be asked was philosophical, or had a philosophical component. Then, gradually, mathematics was spun off as one of many offspring of Mother Philosophy, followed from the 17th century on by a succession of what today we call sciences: first physics, then chemistry, biology, and eventually psychology. That did not mean any shrinking of philosophy itself, however. The discipline retained its core (metaphysics, ethics, aesthetics, logic, epistemology, and so forth) and added just as many “philosophies of” as new disciplines originated from it (e.g., philosophy of science, of language, of mind, and so forth).

In modern times, I think the business of philosophy is no longer trying to attain empirical truths about the world (we’ve got science for that), but rather to critically explore concepts and notions informed, whenever possible, by science. As Wilfrid Sellars would put it, philosophers are in the business of reconciling the manifest and the scientific images of the world. (I also think philosophy is therapy for the sane, so to speak, and a way of life.)

As a result, and this brings me to the topic of the present post, philosophical questions are unlikely to ever be answered definitively. Rather, philosophers propose a number of competing accounts aimed at increasing our understanding of such questions. Our knowledge of things will likely always underdetermine our understanding, meaning that several accounts may be equally plausible or interesting. The job of philosophers is to propose and refine these accounts, as well as discard those that have become untenable because of our progress in both science and philosophy.

1. Do we really have free will?

An incredible amount of ink has been spilled on this question over the centuries. There are religious people from the Judeo-Christian-Muslim tradition who are absolutely sure the answer is yes. And there are physicists and neuroscientists who are adamant that the answer is obviously no.

My take is that it all depends on what one means by “free will,” and moreover, that the answer doesn’t really matter. If “free” indicates some magical independence of human will from causality, then no, we don’t have it. We are part and parcel of the universal web of cause and effect, and we can’t exempt ourselves simply so that we can reconcile the alleged existence of an all-powerful, all-good, and all-knowing God with the obvious observation that bad shit happens in the world.

That said, people who are absolutely sure that we live in a deterministic universe, where the writing of these very words was a given ever since the Big Bang, are significantly overstepping their epistemic warrant. Physics has not yet given us an ultimate theory describing the basic building blocks of existence, and we don’t know whether the world, at bottom, works deterministically or whether instead there is true randomness in it. Indeed, we are not even sure that so-called “strong emergence” is impossible, though at the moment I’m betting against it.

But, as I said, it doesn’t matter. We should drop the theologically loaded term “free will” to begin with, and go instead with what the ancient Greeks called prohairesis, and modern cognitive scientists call volition, the ability to make decisions. It is an indisputable fact that we have more volition than most animals, a hell of a lot more than plants, and infinitely more than rocks. It is also indisputable that we have to make decisions in order to live, that we can train ourselves to get better at them, and that it is in our own interest to do so. Anyone objecting to this is falling prey to the ancient “lazy argument,” and is just wasting your time.

2. Can we know anything at all?

Ah, well, that depends on what one means by “know,” doesn’t it? Setting aside modern debates in epistemology (the so-called Gettier problem), at a first approximation knowledge is, following Plato, justified true belief. So the debate is really about truth and justification.

There are different conceptions of truth, as I have argued at length (see here and here), so we need to be more specific. Science, and much everyday discourse, typically operate according to a correspondence theory of truth: it is true that the Moon rotates around the Earth just in case the state of affairs in the world out there corresponds with that sentence. Logic and mathematics, by contrast, work with a coherence conception of truth. To say that the Pythagorean theorem is “true” (yes, yes, within the framework of Euclidean geometry!) is to say that its conclusions are logically derived from its premises in a valid fashion.

But of course the correspondence account of truth brings up the issue of justification: how do we justify the correspondence between my utterance that the Moon goes around the Earth and the actual state of affairs in the world? Unlike in deductive reasoning, which is typical of both formal logic and mathematics, scientific and everyday inferences are inductive, which means we cannot be certain about them; we can only make probabilistic statements. So, in the strict sense, no, we can’t know anything (outside of logical-mathematical truths). But this isn’t worrisome so long as one is willing to accept with humility that human beings are finite and fallible. We still seem to have been able to acquire a lot of quasi-knowledge, which has been serving us well for hundreds of thousands of years.

(Notice that I completely ignored the radical skeptical challenge to the concept of knowledge, a la Pyrrhonism, or of the Cartesian doubt type. I think those challenges are both irrefutable and irrelevant, except as a good aid at checking our own hubris.)

3. Who am “I”?

This too is an age-old question, to which both scientists and philosophers have attempted to provide answers. Philosophers have come up with accounts based on the continuity of memory (what makes you who you are is your memories), on the persistence of one’s personality, or on the continued physical existence of you as a spatio-temporal being, and so on. All of these have problems, and yet all of them capture some aspects of what we think we mean when we use the word “I.” Other theories are deflationary, both in philosophy and in modern neuroscience. There really is no “you,” because your “self” is not an essence; it is, as David Hume famously put it, a bundle of perceptions.

I subscribe neither to the idea that there is an essence that is us (e.g., the position taken by anyone who believes we have souls), nor to the opposite notion that the self is an illusion. Personal identity is a human concept, not something to be discovered out there, either by metaphysical or scientific inquiry. It is the way we think about, and make sense of, our thoughts, sensations, and experiences. It is both true that I am, to an extent, a different person from what I was ten or twenty years ago, as well as that I am, to a point, the same (or similar enough) person. And yes, this way of thinking about personal identity is informed by a combination of the above criteria: I am who I am because I have memories of my past (in part, and at any rate a disease could erase them), because I have a certain somewhat stable personality (though aspects of it have changed over time, and again a disease could alter it dramatically), and because I have been in existence as a continuous spatio-temporal “worm.”

It is true that we can come up with all sorts of clever thought experiments about unreal situations that effectively question every account proposed so far. But those thought experiments largely miss the point, because in a sense they assume that there is one true and final answer to the question of personal identity, if only we were clever enough to figure it out. That, I think, is a mistake that smells of Platonic Idealism, like asking what is the essence of the concept of chair and attempting to arrive at a definition that unifies all the objects that we label with that word, with no exceptions and no provisos.

4. What is death?

This is an easy one, as far as I’m concerned. Plenty of people seem to think that death is something mysterious, and wonder what will happen “after.” Nothing will happen, because you will have ceased to exist. Consequently, there will be no “you” (whatever that means, see above) to experience anything. There is nothing that it is like to be dead.

I arrived at this conclusion both because my philosophy is naturalistic, and because I’m a scientist, and particularly a biologist. My professor of biophysics in college, Mario Ageno, memorably defined death as a sudden increase in entropy, which disrupts the orderly functioning of our physiology and metabolism. Death is a natural phenomenon, everything passes, panta rhei. The important question, as the Stoics were keenly aware, is what you are going to do between now and that final moment. And keep in mind that you don’t actually know when it will come. It may already be later than you think…

5. What would “global justice” look like?

This is an odd entry in the OUP Blog post, possibly more a reflection of contemporary debates about justice and inequality than a measure of how fundamental the question is from a philosophical perspective. Then again, Socrates did spend a lot of time inquiring into the nature of justice, so there you go. (We get a full treatment of the subject by Socrates/Plato in the Republic.)

The OUP entry, curiously, says that “to this day, there is no universally accepted theory of justice.” But why would we expect there to be such a theory? Again, justice, like personal identity, is a human construct, not to be found “out there,” either metaphysically or scientifically. We need to have a conversation about what we want justice to mean, whether it is a worthy goal (I certainly think it is), and what are the best strategies to achieve it.

As a practicing Stoic, I quite like that philosophy’s take on the concept, which was crucial to the Stoics since justice is one of the four virtues one is supposed to practice in order to become a better human being: “The unanimity of the soul with itself, and the good discipline of the parts of the soul with respect to each other and concerning each other; the state that distributes to each person according to what is deserved; the state on account of which its possessor chooses what appears to him to be just; the state underlying a law-abiding way of life; social equality; the state of obedience to the laws.” (Incidentally, this comes from Plato’s philosophical dictionary, the Definitions.)

There is a lot going on there, and please don’t be bothered by the use of the word “soul,” which can simply be replaced with “mind,” if you prefer. And I discard the bit about obedience to the laws, since there can obviously be unjust laws (that part is Platonic, not Stoic). The bulk of it, however, shifts back and forth between justice as a personal attitude (we are in harmony with ourselves, we make the right decisions) and a social perspective (we want each person to receive according to their desert, we wish to achieve social equality). This captures an aspect often missing from modern discussions of justice: we cannot have a just society made of unjust people. Justice is achieved through a continuous virtuous feedback loop between individuals and the society they help constitute.

That’s it folks! I have just solved five of the all-time philosophical questions! You can thank me by buying me a drink the next time you see me…

In defense of the indefensible humanities

The University of Bologna, the most ancient in the world.

We keep hearing that the humanities — meaning things like literature, philosophy, history and so forth — are in crisis. Which is undeniably true, as measured in terms of dollars invested in them (including number of faculty positions, courses offered, etc.) in many contemporary universities, especially, but not only, in the United States and the UK. Many reasons have been adduced to explain this phenomenon, and there have been a number of calls to defend the humanistic disciplines on a variety of grounds.

I have my own take on this, which crystallized in my mind several years ago, during a dinner with the Chair of the Philosophy Department at the University of Notre Dame. He was bragging that Notre Dame has the largest philosophy department in the country, possibly the world (I think the former statement is correct, the latter is doubtful, but still). I was then myself Chair of the Department of Philosophy at Lehman College in the Bronx, and I asked my host what accounted for their success. His response was simple and obvious: “we are a Catholic university. You simply don’t graduate from here unless you have taken a minimum of two philosophy courses.”

It is as simple as that, really. The “crisis” is an artifact of the fact that universities — especially public ones in the US — are increasingly run like businesses, where the “customers” (they used to be called students) get to pick what they want to study and how. The problem, of course, is that students, by definition, don’t know enough about what is good for them, and so should be institutionally limited in their choices. When I learned how to drive I patiently listened to my instructor and followed his lead; I didn’t design my own curriculum at driving school. The same when I learned Judo. Oh, and when I went to college, obviously. To run universities the way they are run now is purely and simply to abdicate the responsibility of teaching the next generation. Faculty and administrators, instead, become retail sellers, competing with each other to attract the highest number of customers in order to boost enrollment and bring in the tuition money that is increasingly needed because States have cut funding for “public” education, in many cases to ridiculously low levels.

I could end this post here, surely having pissed off or outraged countless students and administrators. Which is okay, since I’ve got tenure. But I recently read a refreshingly different essay on the subject, which I want to comment on. It’s titled “There is no case for the humanities,” published in American Affairs Journal, and authored by Justin Stover, a quondam fellow of All Souls College, Oxford University, and a lecturer at the University of Edinburgh. Stover provides a scholarly, well-informed background on the history of the very concept of a university, makes excellent points, gets most of the facts right, and yet is — I maintain — spectacularly wrong in his conclusions. Or so I am going to argue.

Stover begins by arguing that there is deep conceptual confusion about what the humanities are and the reasons for studying them. He then immediately tells his readers that he will ignore the first part of the issue (what constitutes the humanities) and devote his piece to the second one (why study them). Not necessarily a good move, in my opinion, because the reader is left — off the bat, so to speak — having to guess what Stover means by “humanities.” Still, let’s assume that we all know what he is talking about, a la Justice Potter Stewart.

Stover’s first excellent point concerns the strange critique, and support, that both conservatives and leftists have for the humanities. The conservatives first. On the one hand, they attempt to use the coercive power of the state, and the financial muscle of private donors, in order to correct what they see as the ideological bias of the academy. On the other hand, in so doing, they are contributing to the destruction of the very professoriate that they claim to be defending. As Stover puts it:

“It is self-defeating to make common cause with corporate interests looking to co-opt the university and its public subsidy to outsource their job training and research, just for the sake of punishing the political sins of liberal professors.”

And this is without counting the fact that university professors tend to be liberal within the humanities, but certainly not in the social sciences, or even in the natural sciences — which are by far more powerful and influential on modern campuses.

The left doesn’t do much better, according to Stover. Progressives want to use the humanities as a force for social change and a training camp for citizen-activists, which right there is in flagrant contradiction with the mission of a university. Worse, they impose ideological litmus tests, despite their vocal protestations of being in favor of critical thinking and freedom of expression.

Stover tells us that most faculty are caught in the middle of this struggle, and that what they want to do, mostly, is to mind their own business and carry out their research and scholarship on tiny, and often entirely irrelevant, domains of human knowledge. In other words, they want to do precisely what universities were originally designed to do, from the establishment of the world’s first university (in Bologna, Italy) back in 1088 onwards. This is an interesting — and, as far as I know, correct — point:

“The critics, often well-meaning [well, I don’t know about that], think they are attacking the decadence and excess of contemporary humanities scholarship, when in fact they are striking at the very heart of the humanities as they have existed for centuries.”

One large caveat here, coming from my more extensive experience as someone who has worked in, and is familiar with the history of, not just the humanities but the sciences as well. Everything Stover has said so far, and everything he says in the rest of the article, applies mutatis mutandis to the sciences. Which pretty much disposes of his entire argument, since he is assuming from the beginning that the humanities are somehow different from the rest of the academy. They are most certainly not, at least not by the light of the parameters he uses in his discussion.

The central part of the article is structured around a series of obviously provocative sections, boldly making entirely counterintuitive claims. The first one is “in praise of overspecialization,” addressing the criticism that today’s humanistic scholarship is too narrowly focused, and often concerned with minutiae that seem hardly worth bothering with. Here Stover is absolutely right that this is nothing new:

“No Scholastic ever argued how many angels could dance on the head of a pin — it takes the fevered imagination of a philosophe to come up with that question — but popular depictions of scholars in the Middle Ages indicate that their specialized pursuits were not always fully appreciated.”

Indeed, as Stover points out with dismay, it is the modern expectation that is new and way out of proportion. If you were to write, for instance, a paper or book on French clothing from 1650 to 1699, reviewers would demand that you situate your work within the broader area of literary theory, and moreover provide analyses of your material within the framework generated by the cultural milieu of the modern world. No Scholastic was ever asked to do anything like that at all.

This demand for broad context and up to date framing, according to Stover, simply results in bad scholarship:

“Take an important subject, say, democracy in classical Athens. If you ever want to go beyond a silly nursery story about Athens as the cradle of democracy … if you actually want to understand the political and social system of fifth-century Athens, you would have to delve into everything from epigraphy to the minor Attic orators, to comedy and tragedy, the Greek economy, trade relationships in Greece and the Mediterranean, coinage, ship construction, material supply chains, colonies, gender roles, even clothing and food.”

In other words, you would have to rely on a lot of narrow, “useless” scholarship.

The next section is “in defense of overproduction.” Here too, Stover’s strategy is to show that this isn’t a new problem, but a feature that has been with us from the dawn of (scholarly) time. He quotes an unspecified 13th-century scholar who complained that “Aristotle offers the key to wisdom, but he hid that key in so many books.” Tens of thousands of commentaries on Peter Lombard exist, unread for hundreds of years, scattered across European universities, because writing one was once a standard exercise on the way to becoming a reputable (and licensed) teacher of theology. “Overproduction” hardly seems a sufficient term here!

Then we have “against teaching,” where Stover reminds us that scholars have always eschewed teaching, and that universities were never meant primarily as teaching (as opposed to scholarly) enterprises. I remember reading a biography of Galileo (not a humanist, but a scientist!) that commented about a letter that he wrote to a friend explaining why he was moving back to Florence from Padua: the wine is better, and the teaching load is smaller. I can relate. Stover puts it this way:

“These critiques, whether from the right or left, betray a rather limited horizon of imagination. They can only see the university as a fee-for-service corporation, a vendor hawking knowledge. … A school — be it a gymnasium or realschule, a college or a lycee, a grammar school or comprehensive, a preparatory academy or a public school — exists to teach. But the difference between a university and a school is not the mere difference of the age of the student or the level of instruction. The university is a different kind of thing.”

Indeed. Throughout its history the university has been a locus of scholarship, where the students benefit from the proximity with scholars, more a workshop than a school, at least ideally. That role has now shifted to graduate schools, in the process degrading colleges to glorified high schools, in part because actual high schools no longer do a proper job of teaching the next generation.

So Stover is right that the modern critics of the university, if they had their way, would destroy the very concept of a university, turning it instead into a slightly refined high school. He sees the contemporary university as a bizarre chimaera, and he is not wrong in this:

“The contemporary university … has become an institution for teaching undergraduates, a lab for medical and technological development in partnership with industry, a hospital, a museum (or several), a performance hall, a radio station, a landowner, a big-money (or money-losing) sports club, a research center competing for government funding, often the biggest employer for a hundred miles around, and, for a few institutions, a hedge fund.”

Which brings him finally to what he sees as the misguided attempts of late to defend the humanities. He accuses his colleagues of uttering words in which they don’t, really, believe, such as “skills,” “relevance,” “changing economy,” “engagement,” and “values.” I think he is a bit too harsh here, but I have certainly experienced, both as a faculty member and as an administrator (five years as a Chair), part of what he is talking about. I can’t tell you how many useless strategy and rebranding meetings I have participated in, realizing full well that they were going to be a waste of everyone’s time.

Stover tells us that, in the end, what academic humanists really value is that their scholarship gives them participation in a particular community that they appreciate, a community in which other scholars typically share their values and interests. He rejects what he sees — rightly, mostly — as conservative paranoia about sinister plots to brainwash students with liberal dogma. Which leads him to conclude that the only justification for the humanities is within a humanistic framework, and that outside of such framework there is no case to be made:

“The humanities do not need to make a case within the university because the humanities are the heart of the university. Golfers do not need to justify the rationale for hitting little white balls to their golf clubs; philatelists do not need to explain what makes them excited about vintage postage at their local stamp collecting society.”

This is utterly wrong, and quite obviously so. The analogies simply do not hold. Golfers pay for their club memberships, and philatelists buy their own stamps. Academics, by contrast, are paid, often with public funds. So justification is most definitely needed.

Stover is correct, however, when he says that what distinguishes universities from technical schools is precisely the presence of the humanities:

“The most prestigious universities in the West are still those defined by their humanities legacy, which surrounds them with an aura of cultural standing that their professional purpose no longer justifies. … That is why every technical institute with higher aspirations has added humanities programs: accounting or law or engineering can be learned in many places, but courtoisie is passed along only in the university, and only through the humanities — and everyone knows it. … It is the lingering presence of the humanities that allows the modern university to think better of itself, and to imagine itself to be above commercial or political vulgarity.”

In the end, Stover tells us that the current weak defense of the humanities will fail, and the crisis of the university will deepen. Luckily, he says, this is not the first time, and will probably not be the last one. The university, and the humanities, will survive to fight another day:

“The way to defend the arts [and humanities] is to practice them. … Scholarship has built institutions before, and will do so again.”

Perhaps, but I’m not willing to wait and see how history unfolds. And — contra Stover — I don’t find most (though not all) of the current defenses of the humanities to be weak at all. Of course the humanities teach valuable skills to students, and there is plenty of empirical evidence to substantiate that claim. No, the sciences don’t teach “critical thinking,” by and large, and they certainly don’t teach how to think broadly and write well. And those are much more crucial, and portable, skills than learning how to run a chemical reaction or dissect a frog.

Of course the humanities teach about values. You don’t learn much about the human polis by studying astronomy or biology (as important as those disciplines are), or even engineering and medicine. You learn that from reading Shakespeare, engaging with Aristotle and Kant, seeing (and even better acting in, or producing) a play by Aristophanes. (Feel free to substitute the examples above with equivalent ones from China, Japan, Africa, South America, and so forth.)

If we yield to the neo-liberal project for the university it will not only destroy the university, it will also destroy the hope to provide the kind of public education that helps to form the next generation of intelligent, informed, critical human beings and citizens. Again, this is not something the STEM disciplines are equipped to do, with all due respect to my colleagues in science, computer science, engineering, and mathematics. I know this not just because I read widely, but from personal experience: my philosophy classes are so much more important and impactful than the ones I used to teach in biology that the comparison is simply laughable.

Against teaching? The hell with that. Teaching is by far the most important thing we do (when we do it well, not as a glorified high school). And to argue that it is not so today because it was not so during the Middle Ages is a complete non sequitur. Plenty of things were different in the past, but we have learned to do them better, or not to do them at all, if they turned out to be useless. And we are better off for it.

In praise of over-specialization and over-production? My arse. My heart aches at the immense waste of human potential represented by those tens of thousands of commentaries on Peter Lombard. What a gigantic load of lost opportunities! No, please, let’s not use that as a model for modern scholarship. Again, just because it has always been so it doesn’t mean it is a good idea to continue doing it that way. Yes, specialization is the inevitable name of the scholarly game, and Stover’s example of what is needed to develop a deep understanding of ancient Athenian democracy is a very good one. But let’s go a little lighter on additional commentaries on the philosopher or dramatist du jour, please.

Unlike Stover — whom I thank for his cogent analysis, which really pushed me to reflect more carefully on all of this — I think that a defense of the humanities, right here and right now, is synonymous with a defense of the very idea of a liberal education. Which in turn is synonymous with a defense of the possibility and hope for a vibrant democracy. Or at least a democracy that doesn’t produce the sort of obscene politics and social policies that a number of Western countries, especially the US and UK, are currently producing. We can do better, we ought to do better, we will do better.

Book Club: On Inequality 2, Equality and respect

We saw last time that Harry Frankfurt advances a number of critiques of the commonly accepted idea that inequality is morally objectionable per se, as opposed to being objectionable for derivative reasons (e.g., because the accumulation of wealth in the hands of a few leads to undue influence in terms of political power). I now turn to the second (much shorter) part of his On Inequality, which discusses the relationship between equality and respect.

Frankfurt begins by stating that his analysis is compatible with a range of social policies and political viewpoints, i.e., it is neither limited to a left-leaning agenda nor opposed to it; and by rejecting the presumption that egalitarianism is an ideal of intrinsic moral importance, even though he personally supports a number of policies aimed at reducing inequality (for reasons other than the intrinsic immorality of the latter).

His claim, then, is that the moral appeal of economic egalitarianism is an illusion, since:

“Whenever it is morally important to strive for equality, it is always because doing so will promote some other value rather than because equality itself is morally desirable.” (p. 66)

Frankfurt takes on Thomas Nagel’s famous question: “How could it not be an evil that some people’s life prospects at birth are radically inferior to others’?” and responds that, empirically, it appears to be the case that inequality is compatible with having quite a bit, and that doing less well than others does not, in fact, entail doing badly.

As I argued in the course of the discussion following the first post on this book, Frankfurt is surely right if we are talking about modest discrepancies in wealth. The fact that there are some millionaires in New York City in no way implies that my life as a non-millionaire is bad, or that I am in no position to pursue my own projects and live a fulfilling life. Nor is anything of the sort implied for some people that are a little less wealthy than I am. Of course, if someone is truly poor (and that is, indeed, the case for many in the Big Apple) then those people’s ability to live a good life is seriously hampered. But that poses no problem for Frankfurt’s position, since the issue is that such people simply do not have enough, not that they have less.

But what about those, also to be found in good numbers in New York, who have so much more than is necessary even for a very comfortable life, and who frequently use their wealth to gain unfair access to the levers of power? There, Frankfurt argues, the issue — again — isn’t inequality per se, but rather the lack, or non-enforcement, of laws that block the coupling between wealth and political power. The fact that these two are empirically correlated and often causally connected is not a logical necessity, says Frankfurt. Indeed, his point is that by focusing on inequality per se we miss the real problem, which is, for instance, the corruption of the political system. But despite Frankfurt’s claim that his approach does not prescribe any specific social or political reform, it actually does, since it shifts our focus from one kind of intervention (directly on inequality) to another (on corruption, or other undesirable empirical correlates of inequality).

Frankfurt again seeks to shift the attention of moral philosophers and people concerned with social justice:

“Surely what is of genuine moral concern is not formal but substantive. It is whether people have good lives, and not how their lives compare with the lives of others. … What makes it an evil that certain people have bad lives is not that some other people have better lives. The evil lies simply in the conspicuous fact that bad lives are bad.” (pp. 71-73)

Frankfurt asks us to consider what is important when we consider a person’s concern for her rights, respect, and consideration. Enjoying certain rights, or being treated with consideration and respect, has inherently nothing to do with how much more or less wealthy one is compared to others, because rights, respect, etc. are accorded to members of human society qua human beings, not in proportion to their wealth — at least in theory.

That latter point needs a bit of commentary. Frankfurt is not being naive here, I am guessing. He is not saying that, as a matter of fact, people enjoy the same rights and respect. That is patently empirically false. But it is also obviously true that we live in a society bound by laws, and more broadly a Constitution, that is designed to apply equally to people regardless of their race, gender, religion, and socio-economic status. That in and of itself makes Frankfurt’s point that inequality is logically distinct from other social issues having to do with injustice and unfair treatment.

Consider an analogy with the ongoing issue of police brutality against minorities, and blacks in particular. When people claim that the problem is not limited to individual policemen who may be racists, but is “systemic,” what do they mean, exactly? If they mean that police departments across the country tend to be characterized (with due exceptions) by a culture of implicit or explicit racism, and that it is this culture that results in the disproportionate killing of black men, they are probably correct. But if they mean that the laws of this country are inherently racist, then that is obviously false. We are very clearly, very explicitly, all equal under the law. Recognizing this distinction — which is analogous to the one Frankfurt seeks to draw between inequality per se and undesirable empirical correlates of inequality — leads to very different types of action: in one case one should seek to reform police departments, in the other to change the law of the land. They are not at all the same thing.

A bit later on Frankfurt makes another distinction that is worth pondering:

“It is easy to confuse being treated with … respect … with being treated equally. However, the two are not the same. I believe that the widespread tendency to exaggerate the moral importance of egalitarianism is due, at least in part, to a misunderstanding of the relationship between treating people equally and treating them with respect.” (p. 76)

A simple example: I make a concerted effort to treat my students with respect, qua human beings. But they are certainly not my equals (I’m the teacher, they are the students), nor are they equal to each other in all pertinent respects, as some of them are smarter, have better background knowledge, or try harder than others. Some of my students may need to be treated differently precisely because they have different needs. But they all ought to be treated with the same respect.

Frankfurt quotes Isaiah Berlin, one of the defenders of the idea that equality should be the default moral position: “The assumption is that equality needs no reasons, only inequality does so. … If I have a cake and there are ten persons among whom I wish to divide it, then if I give exactly one tenth to each, this will not, at any rate automatically, call for justification; whereas if I depart from this principle of equal division I am expected to produce a special reason.”

This, Frankfurt comments, is surely right, but only because in the hypothetical case imagined by Berlin we have no knowledge whatsoever of the people involved, their needs, and their differences. Under such conditions of total lack of information (what John Rawls famously called a “veil of ignorance”) equality and respect coincide. But, argues Frankfurt, this is a happenstance, not a logical necessity, “for the only characteristics of each person that are relevant [in this case] — to wit, simply those that constitute his humanity — are necessarily shared by every other human being. Therefore, the criteria of impartiality and of equality must inescapably yield, in this case, the same result.” (p. 82) But, crucially, only in this case.

Frankfurt is perfectly aware that being ignored, discounted, or not taken seriously is disturbing and has real consequences. But he insists that demands for respect should be based on the realities of a person’s own conditions, and especially on her status as a human being, and not simply on the amount of wealth that she happens, or does not happen, to have. Which means that, even in terms of respect, the issue isn’t equality per se, but a recognition of the worth and dignity of being human.

_____

Heads up: our next book club series will focus on a completely different topic and discipline. We will tackle my colleague Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind. Stay tuned.

True love, fungible love, and the Flash

Barry Allen (the Flash) kisses Iris West

Is love possible across multiple worlds? Does True Love (TM) exist, metaphysically speaking? These and other fun questions are addressed by Mike LaBossiere in an article that is as much fun to read as it is an example of how philosophy — particularly metaphysics — can seriously miss the point when it comes to crucial aspects of our lives. Let me explain by way of a story concerning one of my favorite superheroes, the Flash.

In season 2, episode 13 of the television series, entitled “Welcome to Earth-2,” Flash — the fastest man on Earth (one) — travels to a parallel Earth in order to figure out how to defeat the supervillain of the season, a guy named Zoom. It doesn’t matter why all this is happening, what matters is that Flash, known to the world as Barry Allen, meets the doppelgänger of the love of his life, Iris West. Barry-1 is perfectly aware that he is in the presence of Iris-2, who is a parallel version of the woman he loves, not the “real” thing. And yet, he simply can’t avoid having the same feelings for Iris-2 that he has for Iris-1 back on Earth Prime.

LaBossiere’s article explores the metaphysics of this situation, considering two possibilities: (i) it is rational for Barry-1 to be in love with Iris-2, on the grounds that Iris-2 is, for all practical purposes, the same woman he fell in love with on Earth-1, meaning that she looks the same, has the same interests, sense of humor, way of speaking, and so forth. Love, in this scenario, is fungible. Or: (ii) it is not rational for Barry-1 to be in love with Iris-2, for the reason that she is not his True Love, even though she looks and acts like it.

LaBossiere defends position (i), pointing out that (ii) stems from a metaphysically Kantian, and highly doubtful, view of the self. I will argue that the real answer is indeed close to LaBossiere’s, but that it makes more sense to arrive at it by way of a very different route. Indeed, this has already been done by one of my favorite philosopher-comedians: Tim Minchin. Let’s parse this out carefully.

I’m going to start from option (ii), the “Kantian” view. As LaBossiere is careful to point out, Kant never actually wrote about True Love. But he famously did reject David Hume’s view of the self as just a “bundle of perceptions,” thinking instead that our experiences happen to a unified, metaphysically “thick,” self. A metaphysical self is necessary in order to talk about True Love, the sort of love that Barry Allen can feel only for Iris West-1, and not for her doppelgänger. This is because if Kant is right, and there is such thing as a metaphysical self, then each of us has one and only one self, and anyone who merely looks or talks like us is still (metaphysically) quite distinct from us.

One way to make sense of this position is to imagine meeting for the first time your loved one’s twin. You might naturally feel attracted to her, especially if the twin has not only the looks, but also the opinions and mannerisms of your love. But, according to LaBossiere’s interpretation of Kant, you would be mistaken: regardless of superficial similarities, the twin is metaphysically, at her core, simply not your love.

This, says LaBossiere, goes well with the Disneyesque intuition, shared by a lot of people, that there really is such a thing as True Love. The problem, as he acknowledges, is that the notion is both scientifically unfalsifiable and metaphysically suspect, and that Hume’s view of the self is actually far more convincing than Kant’s. Hume’s take both makes fewer arbitrary assumptions and is more congruent with what we actually observe via introspection. Not to mention that it goes better with much modern research in cognitive science. As Hume famously put it:

“When I enter most intimately into what I call myself I always stumble on some particular perception or other … and never can observe anything but the perception.” (Treatise, 1.4.6, para. 3)

If that is the case, let us then consider option (i): love is fungible, as they say. It makes sense to be in love with anyone who shares a substantial number of physical and mental characteristics with your original love, other things being equal.

Here LaBossiere helps himself to the idea of parallel universes, in effect arguing that what Barry Allen finds attractive in, and the reason he falls in love with, Iris West, is just the sum total of Iris’ characteristics, from her physical appearance to her interests, from her mental abilities to her moral character. If this is true, it follows that it is perfectly reasonable for Barry to be in love not just with Iris-1, but with any Iris from any of the infinite parallel Earths (so long as, I suppose, their individual life trajectories don’t actually lead them to become adult human beings that are significantly different in some crucial respect from Iris-1).

As LaBossiere summarizes the point (I changed the specific names he uses to keep with the example from The Flash):

“While this is less romantic than the idea of metaphysical True Love, it is more realistic and intuitively appealing. When one person talks about why they love another, they talk about the qualities of the person. Some dating services also make a big deal about testing people for various qualities and using them to find compatibility and love. Scientists also talk about the emotion of love as being driven by genes in search of suitable genes to combine with. Given this evidence, it seems reasonable to conclude that when Barry loves Iris, he loves her qualities. As such, if it was rational for Barry-1 to love Iris-1, then it is just as rational for Barry-1 to love Iris-35756. There is, after all, no discernible difference between the Irises.”

LaBossiere’s conclusion strikes me as fundamentally correct, but it is also an example of what Italians sarcastically refer to as the Ufficio Complicazioni Affari Semplici (Office for the Complication of Simple Affairs). I mean, sure, one can invoke the multiverse to make the point, or — far more simply — one can follow the above-mentioned Tim Minchin and look at the issue from the point of view of basic statistics and biological-cultural diversity among human beings.

In his “If I didn’t have you,” a song dedicated to his real-life wife and long-time love, Sarah, Tim says, in part:

“If I didn’t have you [insert a number of good things about Sarah]
You would think I would have somebody else
If I didn’t have you, someone else would do
[more good things about Sarah]
Your love is one in a million, you couldn’t buy it at any price
But out of the other 999,999 possible others
Someone else would be equally nice
Or maybe not nice, but say smarter than you
Or dumber but better at sports
I’m just saying, I would probably have somebody else

It is just mathematically unlikely that I would stumble
On the one woman specifically designed for me

I don’t think you are special. I mean, you are special
But you fall within a Bell curve.”

You get the gist. Minchin has simply and straightforwardly gotten rid of the “Kantian” idea of True Love, as well as endorsed LaBossiere’s point about the fungibility of real love. All without the need for any metaphysical heavy lifting (assuming that metaphysics can, in fact, do any lifting at all). A simple understanding of how life works, a basic appreciation of probability theory, and a good sense of humor will do just fine.
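
For those who like to see the numbers behind the joke, here is a minimal back-of-the-envelope sketch of Minchin’s probabilistic point. Every figure in it (the city size, the filter percentages) is an assumption of mine, invented purely for illustration, and the filters are crudely treated as independent:

```python
# A back-of-the-envelope version of Minchin's probabilistic point.
# Every number here is a made-up assumption, chosen only for illustration,
# and the filters are (crudely) treated as independent.

population = 8_000_000  # e.g., a large metropolitan area

filters = {
    "in an age range you would consider": 0.20,
    "of your preferred gender": 0.50,
    "single": 0.50,
    "mutual physical attraction": 0.10,
    "compatible values and sense of humor": 0.10,
}

expected_matches = population
for probability in filters.values():
    expected_matches *= probability

print(f"Expected number of plausible partners: {expected_matches:,.0f}")
# With these invented numbers: about 4,000 people -- not one soulmate
# "specifically designed for me," but a comfortably large bell-curve slice.
```

Tweak the assumptions however you like: unless you make the filters absurdly restrictive, you still end up with a sizable pool of people who “would do,” which is all the song, and LaBossiere’s fungibility thesis, really need.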

I’ll leave you with the full video of Minchin’s song, well worth watching:

Book Club: On Inequality 1, Economic equality as a moral ideal

Time to get started with a new book! This time it’s going to be Harry Frankfurt’s On Inequality, an obviously current topic. Frankfurt, of course, is the author of a number of well received, often slim and incisive, books, most famously On Bullshit, where he clarifies, among other things, the distinction between a liar (one who knows the truth, and uses it to effectively deceive others) and a bullshitter (one who uses a chaotic mix of truths, half truths and lies in order to get whatever he wants — the current President of the United States arguably being the archetypal example).

Frankfurt divides On Inequality into two parts: economic equality as a moral ideal, and equality and respect. I will discuss the first part here and the second one in my next post.

The discussion of economic equality as a moral ideal begins with Frankfurt’s statement that the most fundamental social challenge in the United States today is not that people’s incomes are widely unequal, but that too many people are poor. In order to begin to back up this notion, he points out that, after all, we wouldn’t want to eliminate income inequality by making everyone poor. Frankfurt immediately acknowledges that there are a number of people in the US who have far more than is necessary to flourish, and who, he says, are guilty of “economic gluttony.” Economic gluttony is a “ridiculous and disgusting spectacle,” but a reduction in inequality would be a side effect of combating both poverty and economic gluttony, and should not be a primary goal in itself.

One of the reasons we need to rein in economic gluttony, argues Frankfurt, is that it carries a number of potentially anti-democratic effects, as when a few very wealthy people essentially control political power through their money and influence. But, Frankfurt argues, many people seem to think that economic inequality is inherently morally problematic, a position that he regards as highly questionable. As he puts it:

“From the point of view of morality, it is not important that everyone should have the same. What is morally important is that each should have enough.” (p. 6)

Frankfurt writes that being preoccupied with how much money other people have is alienating, in the sense that such preoccupation distracts us from reflecting on what it is that makes our own life worth living, and therefore from focusing on the resources we actually need to pursue such a life. The result is that:

“The doctrine of equality contributes to the moral disorientation and shallowness of our time.” (p. 14)

The chapter proceeds by considering a number of arguments often made in support of the idea that inequality is undesirable. Frankfurt discusses several of these, aiming to show that economic equality has value only in a derivative manner, not per se. This strikes me as correct, but I also wonder who would, in fact, argue that equality has intrinsic moral worth, i.e., worth regardless of consequences.

One such argument is based on the principle of diminishing marginal utility. The idea is that more economic equality maximizes aggregate utility, i.e., the aggregate satisfaction of the members of society. This, in turn, derives from the rather hard-to-doubt notion that a marginal dollar brings less utility to someone who is already rich than to someone less wealthy.

But, Frankfurt counters, the demand for some kinds of goods will increase as a result of a redistribution of wealth, because more people will want those goods, driving the corresponding prices higher. The outcome will be that any progress made by the poorest will be offset by the decreased purchasing power of the middle class, thus — on average — nullifying the benefit of wealth redistribution.

Well, maybe. To begin with, it is not clear to me why one wouldn’t also see a countervailing effect, due to the fact that at least some of the goods that come to be in high demand will be produced more efficiently, at increasingly lower costs. Moreover, this sort of conclusion cannot be reached simply on the basis of a qualitative argument: detailed quantitative simulations, ideally backed up by empirical evidence gathered in the field, are necessary.
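
To see why the qualitative back-and-forth is inconclusive, here is a deliberately minimal sketch of the diminishing marginal utility point, the sort of toy model that any serious quantitative treatment would have to refine enormously. The income figures, the logarithmic utility function, and the size of the transfer are all assumptions of mine, chosen only for illustration:

```python
import math

# Toy model of the diminishing marginal utility argument (all figures are
# my own illustrative assumptions, not Frankfurt's or anyone's real data).
incomes = [15_000, 30_000, 60_000, 120_000, 500_000]

def utility(income):
    # Logarithmic utility builds in diminishing marginal utility:
    # an extra dollar matters less the richer you already are.
    return math.log(income)

def aggregate_utility(values):
    return sum(utility(v) for v in values)

# Transfer $10,000 from the richest person to the poorest person.
redistributed = incomes.copy()
redistributed[-1] -= 10_000
redistributed[0] += 10_000

before = aggregate_utility(incomes)
after = aggregate_utility(redistributed)
print(f"Aggregate utility before the transfer: {before:.4f}")
print(f"Aggregate utility after the transfer:  {after:.4f}")
print(f"Change: {after - before:+.4f}")  # positive under these assumptions

# Deliberately omitted: the price effects Frankfurt appeals to (greater
# demand bidding up prices), which is exactly why a serious analysis needs
# a much richer model plus empirical data, not a qualitative hunch.
```

Under these toy assumptions the transfer raises aggregate utility; whether Frankfurt’s price-effect counterargument would swamp that gain in a real economy is precisely the empirical question that cannot be settled from the armchair.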

A bit later on (section IV of the first part), Frankfurt engages in one of those philosophical thought experiments which I increasingly think miss the point, and may arguably be misleading. He invites us to imagine a hypothetical situation in which resources are so limited that some, but not all, members of a given population can survive. It is easy to contrive the numbers in such a way that forcing people to share equally — thus eliminating inequality — will result in everyone’s death: say there are eight units of food and four people, each of whom needs three units to survive; equal shares of two units apiece doom all four, while an unequal split saves two of them. This is supposed to show that equality is not an inherent moral good.

No kidding, I would respond. First, again, I doubt anyone has sensibly suggested that equality is good per se (Frankfurt nowhere in the book provides direct evidence to back up this claim). Second — and most importantly — yes, in the highly artificial situation imagined by Frankfurt it would be grotesque to insist on equality. But no modern society is even close to being in such a situation, rendering the whole thought experiment rather silly.

In section V Frankfurt rejects what he sees as the widespread moral intuition that inequality is objectionable in itself, and suggests that what people find problematic is, again, the fact that some have too little:

“Mere differences in the amounts of money people have are not in themselves distressing. We tend to be quite unmoved, after all, by inequalities between those who are very well-to-do and those who are extremely rich.” (p. 41)

The underlying idea here is that the two doctrines of sufficiency and equality are logically independent, and that one cannot simply deploy arguments in favor of one as if they were pertinent to the other. Frankfurt goes so far as to accuse egalitarians of hypocrisy, pointing out that many are quite happy to accept large incomes that are not justified on the basis of their own theories. This, again, is what happens when one confuses sufficiency (which truly is desirable) with equality (which is not, except in terms of certain indirect consequences, such as disproportionate political influence).

The flip side of the coin when it comes to the “hypocritical” egalitarians actually reinforces Frankfurt’s point: these same people don’t seem bothered by the fact that others make a lot more money, so long as they make enough to be reasonably free to pursue their own goals. I certainly count myself in the latter group: I don’t care, per se, how much more money some people make compared to me, because I’m lucky enough to be able to live the kind of life I want to live. That freedom of mine, however, does not obtain for a lot of people who make less than I do. But here Frankfurt strikes me as being right: the situation of these people is problematic not because they make less than others, but because they do not make enough. Insufficiency, not inequality, is the problem.

Frankfurt is careful (p. 25) to stress that “having enough” does not just mean enough to survive, or to live a tolerable life. That, for human beings in modern societies, is not, in fact, enough. “Enough” means an amount of wealth sufficient to pursue the kind of goals one is interested in pursuing. In my case, for instance, living in a large cosmopolitan city, enjoying at least some of its offerings, and being able to devote much of my time to reading and writing.

Moreover, “enough” also doesn’t mean that the person in question couldn’t benefit from, or would not welcome, additional income. When the State of New York finally renewed its teachers’ contract (after five years of stalling), I got a significantly larger paycheck. I did not need it in order to live the life I want to live, but it was welcome nonetheless. The point, as Frankfurt articulates it, is that I did not have an “active interest” in getting a higher salary. When I got it, it was a nice bonus, which allows me to do a few more things. But I was not preoccupied in the least by the missing money (even though it was ethically and legally owed to me by the State of New York).

It was refreshing to see that Frankfurt — going against what I will call the Wall Street ethos, for lack of a better term — doesn’t think there is anything wrong with people who take my attitude toward money:

“There are quite reasonable people who feel that their lives are good enough, and that it is not important to them whether their lives are as good as possible. The fact that a person lacks an active interest in getting something does not mean, of course, that he prefers not to have it.” (p. 55)

In a sense, says Frankfurt, the situation is similar to that of a man who is deeply in love with a woman and is happy about his relationship. It would be perverse to criticize him on the grounds that, if he really tried, he could do “better,” quite regardless of the fact that there obviously isn’t a single measure of “better” out there, and of the even more obvious fact that the amount of money in one’s bank account certainly is no such measure.

This leads us to the end of the first part of On Inequality. In the next post of this series I will tackle what Frankfurt has to say about the relationship between equality and respect.

The Greeks were right about love

An Ancient Greek symposium

Love is a fundamental aspect of the human experience. When people are incapable of it we think there is something seriously wrong with them, something that makes them almost inhuman. And yet there are many different types of love: for one’s partner, one’s children, one’s parents, even one’s nation or God. Each carries with it its own version of pains and pleasures. Yet our vocabulary may not be well suited to such an important phenomenon. When I first came to the United States I was struck by the fact that the English language, the tongue of Shakespeare, has only one word for the notion. In Italian, for instance, I would never say to my daughter “I love you”; it would sound weird and incestuous. And that’s only one problem.

Even if we limit ourselves to “romantic” love, i.e., love for one’s partner or spouse, the modern notion is confused compared to what both sound psychology and modern neuroscience tell us. A classic study on the progression of love in human beings (see also this, more recent, study) found major differences between the neural underpinnings of intense romantic love, of the type we feel at the beginning of a relationship (what psychologists call limerence), and the more subdued, but deeper, attachment we may or may not develop for a partner with whom we were initially “in love.” (If you are curious about the details, the first phase activates mostly the striatum, in particular the nucleus accumbens, the pleasure center of the brain; the second one also activates the insula, which the brain uses to assign value to our activities so that we continue to engage in them.)

Insisting on using the same word to describe these two conditions confuses two very different biological and psychological (and hence also social and cultural) phenomena. More crucially, expecting — both individually and as a society — that once two people embark on the first phase they will necessarily move on to the second, which will itself automatically last a lifetime (the famous “and they lived happily ever after” of Disney-style tales), is absurd and arguably the cause of a lot of unnecessary pain and suffering.

The modern idea of romantic love probably traces back to the medieval concept of chivalry, made popular as well as more specific by French, Italian and Spanish troubadours, and finally canonized for Western audiences during the Romantic era, which followed the Enlightenment as a backlash against it. However, it is arguably the Ancient Greeks and Romans who got it right, or at least closer to the mark. Interestingly, the Greeks used three different words that translate to “love,” none of which actually corresponds to our (confused) conception of romantic love. (A good and short introduction can be found in this video.)

To begin with, there is agápē, which means love that comes with an aspect of charity, in the sense of benevolence, embedded into it. This is the sort of love we have for our children, but also for our spouse or partner. Early Christians adopted the term to mean the unconditional love that God has for his children. As Thomas Aquinas put it, agápē means “to will the good of another” (in Summa Theologiae I-II, 26, 4).

Second, we have érōs, which in part does mean, as the modern word “erotic” indicates, sexual attraction for someone (the sort of stuff that stimulated the striatum in the neuro-studies linked above). However, Plato for one expanded the concept to indicate, after maturation and contemplation, love for beauty itself. This is the origin of the phrase “Platonic love,” which does not mean love without sex, necessarily, but rather love of the ideal Form of Beauty itself. This may begin with erotic attraction, but eventually transcends it.

In the Symposium we are treated to a lesson on love by none other than Socrates, who says that érōs allows the soul to recall knowledge of beauty and thus to arrive at an understanding of spiritual truth. In this sense, then, both lovers and philosophers are inspired by érōs.

Finally, we have philía, which describes a sense of affection and regard among equals. Aristotle uses this word to characterize love between friends, among family members, or for one’s community. It is a virtuous type of love, often cast as brotherly in character, but with a component of enjoyment.

Notice that what distinguishes the three types of love recognized by the Greeks is not the object of love, but rather the modality of the sentiment. Take the specific instance of a long-term “romantic” relationship. Ideally, what one wants in that case is a particular combination of the three modes: we unconditionally (i.e., not because it is to our advantage) want the good of the other person (agápē); we want not just physical attraction, but a deeper appreciation of the “beauty” of our partner, in terms of his or her character traits (érōs); and we want to be their friends and to enjoy their company for its own sake (philía).

What distinguishes love of a partner from love of our children, friends, country or God, then, is the specific modulation of all three types of Greek “love.” Notice that none of them obviously maps to the initial phase of romantic love in the modern conception. That’s because, arguably, that phase is not, in fact, love itself, but (potentially) preliminary to love. If it lasts, and if it matures, then it becomes love. If this is even approximately correct, then a lot of what our literature, movies and advertisements describe as love is — rightly conceived — no such thing.

_____

Postscriptum: various readers have commented on the fact that I have left out a fourth Greek word for love, storgē. That’s because it was actually rarely used in ancient texts, but still, it does help add even more nuance to the concept. Storgē means affection, especially (but not only) of the kind one has toward parents and children, and includes a component of empathy of the type felt naturally toward one’s children. Storgē was also used to indicate love for a country, or even a sports team, and — interestingly — in situations when one has to put up with unpleasant things, as in the oxymoronic phrase “love for a tyrant.”