Author Archives: Massimo

About Massimo

Massimo is the K.D. Irani Professor of Philosophy at the City College of New York. He blogs at platofootnote.org and howtobeastoic.org. He is the author of How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life.

Plato’s reading suggestions, episode 127

Here it is, our regular Friday diet of suggested readings for the weekend:

The three things that justify a feeling of achievement.

Is science hitting a wall? (part II here)

Why Julian Baggini does not meditate.

How to serve a deranged tyrant, Stoically. (Any reference to current political figures in the US is most definitely on purpose.)

Wine in excess of (small) recommended daily serving will shorten your life, significantly.

_____

Please notice that the duration of the comments window is three days (including publication day), and that comments are moderated for relevance (to the post one is allegedly commenting on), redundancy (not good), and tone (constructive is what we aim for). This applies to both the suggested readings and the regular posts. Also, keep ‘em short, this is a comments section, not your own blog. Thanks!

Book Club: Darwin’s Unfinished Symphony, 9, the arts

“The logic of cultural evolution is identical to that of biological evolution, even if the details differ. New ideas, behaviors, or products are devised through diverse creative processes; these differ in their attractiveness, appeal, or utility, and as a result are differentially adopted, with newfangled variants superseding the obsolete,” says Kevin Laland at the beginning of the last chapter of his book, Darwin’s Unfinished Symphony: How Culture Made the Human Mind (p. 292). It is, therefore, with a brief commentary on this chapter, focusing on the arts, that I will end my series on Kevin’s fascinating view of the young field of cultural evolution.

That introductory gambit actually illustrates where Laland’s and my views begin to diverge, though perhaps not as sharply as each of our perspectives differs from standard evolutionary psychology. I see cultural evolution as linked to its biological counterpart in two ways: first, because it originated from it; and second, because there is a broad analogy between the two. But I fall far short of Kevin’s strong statement that the two are “identical” in logic. They are not, in my mind, fundamentally because biological evolution is propelled by the teleonomic process of natural selection. Cultural evolution, by contrast, is moved by the teleological process of human cognition. The two are not the same, and I maintain that no currently available theory of cultural evolution satisfactorily accounts for either the difference or the relationship between the two. (I hasten to add, though it should not be necessary, that I see nothing magical or “mysterian” about this. At all. It is simply an open scientific question, like many others.)

The cultural evolution of art is, obviously, a huge topic, which would require a book of its own. So Laland takes a reasonable approach, focusing on aspects of the evolution of a particular art form: dance. As we shall see, he has lots of interesting things to say, but not much that would surprise a historian of dance, and definitely not much that originates specifically from a biological evolutionary perspective.

Before getting to dancing, Kevin briefly discusses another art form, acting, making the case that it crucially (though not solely, of course) depends on imitation, which, he has argued previously, is an important evolved skill in the human lineage. Since dancing also fully deploys our ability to imitate others, and given that presumably neither acting nor dancing was a direct target of natural selection, he can then conclude that both art forms are in fact byproducts of natural selection for the capacity to imitate.

“Imitation is no trivial matter. Few other animals are capable of motor imitation, and even those that do exhibit this form of learning cannot imitate with anything like the accuracy and precision of our species.” (p. 295)

Our ancestors at some point became able to solve what Laland calls the correspondence problem: imagine, for instance, that you are trying to learn how to use chopsticks. This is done by imitation, which requires translating the visual cues obtained by watching someone using chopsticks into the motor control that our own muscles have to exercise in order for us to be able to do the same. The sensory experiences involved in watching and doing are utterly different, and yet somehow our brain has to be capable of solving this correspondence problem.

Recent research has shown that human beings solve the correspondence problem by using neural networks similar to the so-called mirror neurons discovered in other primates. Kevin suggests that it is plausible that the mirror neuron system, or an equivalent network, has been selected precisely to facilitate imitation, that this particular skill has been much more refined by natural selection in humans, and that one of its most astounding and least recognized byproducts is our ability to do and appreciate art — not just movies and dancing, but also painting, sculpture, theater, music, and even computer gaming.

Kevin doesn’t think much of the alleged ability of other animals to produce art, and I think he is right:

“The motor control that allows humans to produce artistic works and performances spontaneously is a capability that no other animal shares. … The claim that chimpanzees [for instance] are artists, in any meaningful sense, is greeted with skepticism by animal behaviorists and art scholars alike.” (p. 299)

He also thoroughly debunks the idea that elephants in Thailand can paint, referring instead to evidence that the animals have been well trained to respond to subtle cues provided by their handlers, through the simple device of tugging at the elephant’s ears.

What about dancing? Here again the suggestion has been made that some animals do it, though as Laland points out, much of the answer depends on how one defines dancing, and what counts as instances of the art form. Regardless, and more importantly, he highlights the fact that the only good candidates for dancing animals are, not surprisingly, those species that are most capable of imitation. (The same considerations apply to singing animals, by the way.)

“The most transparent connection between dance and imitation … will be readily apparent to just about anyone who has ever taken or observed a dance lesson; that is, dance sequences are typically learned through imitation. … It is no coincidence that dance rehearsal studios around the world almost always have large mirrors along one wall. These allow the learner to flit rapidly between observing the movements of the instructor or choreographer and observing their own performance.” (pp. 307-308)

The other thing that makes for a good dancer is the ability to learn a long sequence of actions, and Kevin has shown before in the book that this type of learning is very difficult in a non-social setting, because it pretty much requires teachers. So the evolution of teaching, which he has discussed previously as a crucial component of early cultural evolution in the human lineage, is also a prerequisite for the wonderful byproduct of our biology that we call dance.

Much of the remainder of the chapter concerns itself with the history of dancing, and it is there, I think, that the limits of insights from biological evolution are most painfully clear. Laland asks whether dance could be said to have evolved in any “rigorous” sense of the term, by which he means to ask whether dance as a “system” possesses the characteristics that any evolving system has to possess: variation, differential fitness, and inheritance. But it should be obvious that while the evolution of dance does display all three, we have essentially no account whatsoever of the second element, differential fitness. This deficiency, I argue, at the moment makes cultural evolution into a tautological theory of the kind that Karl Popper (mistakenly) thought the theory of biological evolution was. While Darwin and his successors solved that problem in the biological case, neither evolutionary psychology nor the more sophisticated approach advocated by Kevin and colleagues has been able to solve it in the case of cultural evolution.

Kevin presents readers with a number of examples showing that there is much variation among the world’s dances, and that this variation is culturally inherited via imitation (though, crucially, the equivalent of biological “mutation” and “recombination” result from conscious or unconscious human decision making, which follows, and indeed also shapes, human aesthetic judgments).

We therefore learn about European sword dances, which apparently first appeared in ancient Greece and were brought to Britain by invading Danes and Vikings. The waltz is Kevin’s favorite example of “cultural fitness,” as he calls it. And yet, here the limits of his approach are stark, in his own words:

“Relative to other dances in the late eighteenth century, the waltz could be said to possess high ‘cultural fitness,’ which really means little more than it was unusually appealing and as a result increased readily in frequency.” (p. 311)

Right. And that, right there, is the problem. Strip the fancy wording and we are left with: “waltz (at that particular time, in that particular culture) had high fitness because it had high fitness.” That’s the sort of vicious circularity that rightly annoyed Popper. You don’t find it in evolutionary biology because a separate discipline comes to the rescue: functional ecology. It is the latter that allows us to make predictions about which organismal traits are going to be adaptive in one environment or another, given the organism’s anatomy, physiology, and ecology (and given the laws of physics and chemistry). We don’t just say that natural selection favors the fit, and then immediately turn around and define the fit as those that are favored by natural selection. But that’s pretty much what cultural evolutionary theory does, at the moment, and it shares this limitation with other approaches, such as evolutionary psychology and memetics, though for different reasons that are specific to each approach.

To be fair, Kevin does attempt to sketch an elementary functional ecology of dance. For instance, we are told that the waltz was attractive in late 18th century Europe in part because, as he puts it, the “dance’s intoxicating swirling, and the dangerously intimate contact between male and female were a major draw.”

Okay, but presumably swirling and close male and female contact have always been intoxicating. So why late 18th century Europe? Moreover, I don’t know much about the history of dance as an academic field of study, but I doubt anything Laland says in this chapter will come as a surprise to historians of dance — and I mean everything, from the genealogical patterns of evolution by imitation to the “mutations” introduced by different cultures at different times, to ad hoc explanations (which may even be true) like the intoxicating effect of a particular dance. In other words, invoking Darwin here does no work at all, or almost none.

I don’t have a better alternative. I chose Kevin’s book precisely because I think it is one of the best in the field of cultural evolution, reflecting the incredible vigor and ingenuity of Kevin as a principal investigator, not to mention the many collaborators he gives due credit throughout the book. It’s all tantalizing and very, very interesting. But it falls far short of a comprehensive theory of cultural evolution. It is good to learn about the importance of social learning, of teaching, and of imitation throughout the history of hominins. It is fascinating to think that such biological history has a lot to do with the subsequent shaping of cultural evolution. But we are still nowhere near giving a decent scientific account of sword dancing, waltz, flamenco, polka, jitterbug, or rock’n’roll. Not to mention Michelangelo, Picasso, and de Kooning; or Mozart, Beethoven and Tchaikovsky; or Homer, Dante, and Shakespeare. And so on and so forth, encompassing the bewildering variety of manifestations of what we call culture.

_____

And now for something completely different: our next book will be Early Socratic Dialogues, edited by Trevor J. Saunders, Penguin 2005. I figured that this is a blog called Footnotes to Plato, and yet we have hardly talked about Plato. So, here we go…

Plato’s reading suggestions, episode 126

Here it is, our regular Friday diet of suggested readings for the weekend:

Is meditating on death like putting on a fur coat in summer? (spoiler alert: no)

The History and Psychology of the Orgy.

The language of strategic planning (doesn’t make it any more meaningful of an exercise…).

The only thing you ever really need to read about (and, as bonus, by!) Jordan Peterson.

Particle physicists begin to invent reasons to build the next larger particle collider.

_____


Socrates, weakness of the will, and addiction

“People are dying because we misunderstand how those with addiction think,” says the title of a recent article in Vox by philosopher Brendan de Kenessey, who argues that addiction is not a moral failure, and that it is the moralistic attitude of a number of politicians and a significant portion of the public that makes the problem more difficult to deal with. Addicts are not bad people who need to be punished, he says, they are sick and need help.

And he is completely right, I think. And yet, I also suggest that the bulk of the article is based on the wrong philosophical criticism. de Kenessey blames Socrates for the moralistic attitude, while he should blame certain brands of Christianity instead. Here I will not make the case against Christian moralism (which is on ample display among certain politicians of a certain party in the US), nor will I unpack the idea that addicts are sick, not bad, people, as de Kenessey does a very fine job of that in his article. But I will defend Socrates and use the occasion to talk a bit not just about addiction, but more generally about the phenomenon of what the Greeks called akrasia, or weakness of the will, which Socrates thought simply does not exist.

The starting point of de Kenessey’s analysis of the Socratic problem is the Platonic dialogue known as the Protagoras, in which the discussion between the Athenian sage and one of the most famous sophists turns to the topic of akrasia. Let’s contrast two instances of alleged akrasia, brought up by de Kenessey to make his point against Socrates, and which I think, on the contrary, show pretty clearly why Socrates was correct (once we add a hidden premise to the Socratic position, a premise not discussed by de Kenessey).

Imagine yourself in front of the television, intending to binge watch a season of Black Mirror (or whatever your favorite show happens to be). You think, when you reflect on it, that this isn’t really the best use of your time, and that you should instead pick yourself up and go to the gym, as lately you’ve let yourself go a little, and you don’t feel good, both physically and psychologically. You mull it over a bit, but in the end decide to stay and watch television, with munchies to accompany the experience.

Now imagine, says de Kenessey, an addict who is driving through the projects, thinking that he really ought to stop what he is doing, turn his life around, clean up, get a job, and take care of his family. Nevertheless, he keeps driving to the corner where he regularly meets his dealer, and buys some cocaine instead.

The two cases appear to have a similar structure, like this:

Subject A has two courses of action available to him, X and Y.

A thinks that he should do X, even though he is very tempted by Y.

A ends up doing Y, rather than X.

Socrates has this to say, in the Protagoras, about this kind of situation:

“No one who knows or believes there is something else better than what he is doing, something possible, will go on doing what he had been doing when he could be doing what is better.”

This seems paradoxical, in the original meaning of the term (para doxan = uncommon opinion), as it is a straightforward observation that people, like both our hypothetical television binger and drug addict, very often don’t do what they believe to be the best thing for them. And yet, Socrates is not alone in taking this position. Modern economists such as Paul Samuelson have proposed an approach known as “revealed preference,” according to which people show what they really like by what they do, not by what they say. Similarly, modern psychology has accumulated a pretty good amount of evidence that we often confabulate about the reasons why we do things, i.e., we make up reasons to justify our actions because we often don’t really have a good understanding of our own motivations.

How does Socrates defend his “paradoxical” position, which seems to fly so clearly in the face of the evidence? He thinks that people in these cases do not suffer from akrasia, i.e., weakness of the will, thus acting against their best judgment. He thinks instead that people are doing exactly what they want to do, but are doing it because of bad judgment. Doing bad things is, therefore, a matter of ignorance, not malice.

Ignorance my ass, one might easily retort. The television watcher is not ignorant, and neither is the drug addict. They don’t lack the pertinent information, they don’t need to be educated about what is going on. True, but the word used in the Platonic dialogues in this context is amathia, which, although usually translated as “ignorance,” actually means something closer to un-wisdom, the opposite of sophia, one of the roots of the word philosophy. Socrates is arguing that apparent cases of weakness of the will are actually cases of lack of wisdom — not of factual or empirical knowledge, but of the proper way to arrive at judgments given certain factual or empirical knowledge.

Ever since discovering the Socratic idea of replacing akrasia (and, more importantly, actual “evil”) with amathia, I have found myself significantly more prone to understand others’ motivations and actions, to sympathize with their manifest lack of wisdom even when I cannot possibly condone their actions, and generally to cultivate an attitude of sorrow rather than anger when people do bad things. I find this new approach liberating and far more constructive than either the akratic or, much worse, the moralistic one.

Still, isn’t de Kenessey right that Socrates ends up blaming the victim here, and that it is this sort of blame that justifies the kind of draconian measures implemented by politicians, and supported by the public, that made the so-called war on drugs a total disaster with a high cost to society, both in human and financial terms?

I don’t think so, and the reason is that if we want to read Socrates charitably we need to see that the two cases above are actually distinct, and they are distinct because of a hidden premise in the Socratic approach. That premise is that we are talking about a normally functioning human mind, not a diseased one. It was well known even in the ancient world that human beings have a tendency to reason very poorly when they are under the influence of a number of external conditions, particularly drugs (including wine). A good deal of Greek tragedy is built on that premise, such as Euripides’ The Bacchantes. That is why Diogenes Laertius, commenting on the Stoics — who were explicit followers of Socrates — says that “they will take wine, but not get drunk.” (VII.118) Getting drunk artificially impairs one’s judgment, so when one is under the influence, as we say today, one is not suffering from lack of wisdom, but from a temporarily dysfunctional mind.

If this is a reasonable and charitable interpretation of Socrates’ take, then the two cases of the television binger and the drug addict are very different. The first is an actual instance of what Socrates argues in his exchange with Protagoras: the binger — in accordance with modern behavioral economics theory — really does prefer to stay at home to watch Black Mirror rather than going to the gym. Yes, of course he knows that in the long run he would be better off taking the second course of action, but he judges that for him, right here and right now, binging is better. His future self be damned. He is, of course, mistaken in such judgment, just as Socrates maintained.

The same reasoning, by contrast, does not apply to the drug addict, precisely because he is an addict, and therefore his judgment is impaired. He is not suffering from amathia, he is suffering from a chemical addiction. And that is why the moralist attitude criticized by de Kenessey is pernicious, because it does not recognize that the person in question is sick, not evil (or unwise, as Socrates would put it).

There is, of course, a wrinkle in all this, which de Kenessey must be aware of, and yet never mentions in his article: on the first occasion that the soon-to-be drug addict decided to take cocaine his judgment was not impaired by being sick, yet. Which means he is still responsible for the initial decision to go down that road. We then have only two ways of looking at the onset of the addiction: either the person is morally bad (the moralist view), or he lacks wisdom (the Socratic view). Not only is the second view more humane, it also makes much more sense than invoking akrasia: the future drug user had not yet had the experience of being on drugs, so he couldn’t possibly have yielded to the temptation of the temporary pleasure promised by the drug. More likely, he made the unwise judgment that the drug wasn’t as bad as people say, or that he would have the willpower to resist the addiction, or something along similar lines and to the same effect.

de Kenessey points out that several modern philosophers have attempted to come up with an anti-Socratic account, but they can’t agree on what’s going on: for Harry Frankfurt, the desires that represent our true self are those desires that we want ourselves to have; for Gary Watson, they are the desires that align with our judgments of what is valuable; for Michael Bratman, they are the desires that cohere with our stable life plans; and for Susan Wolf, they are the desires that are supported by rational deliberation.

This business of a “true self” is, however, a red herring. As de Kenessey argues, modern psychology has done away with that notion (as did David Hume, two and a half centuries before modern psychology). But the fact remains that “we” do make decisions in response to our desires and as a function of our capacity to arrive at judgments. Whether “we” are made of a unitary self, a bundle of perceptions, or whatever, doesn’t matter. Our judgments are either made by a functional human mind (in which case we are responsible for them) or by a non-functional one (in which case we are sick and need help). The difference between the moralist and the Socratic view pertains to the first, not the second case. And there one has a choice between blaming people for their evildoing and pitying them for their lack of wisdom. I find the latter course of action far preferable.

Plato’s reading suggestions, episode 125

Here it is, our regular Friday diet of suggested readings for the weekend:

As If! An entirely uninspiring review of what nevertheless sounds like a really fascinating book by Anthony Appiah.

Jordan Peterson and fascist mysticism.

The dark truth about chocolate. (It’s not a health food, just a pleasant treat.)

What we know and don’t know about losing weight. We know that low carbs vs. low fats doesn’t make a difference. And neither does genetics.

Put down the damn smart phone! It’s rude, and it’s bad for your health.

_____


Book Club: Darwin’s Unfinished Symphony, 8, foundations of cooperation

Think about the complexities involved in allowing you to do something that nowadays is fairly normal: getting on a plane and flying to another city, across an ocean. It’s not just the sophisticated machinery, ground transportation, the airports, and so forth. It’s the people. Accomplishing such a feat requires the coordinated cooperation of a large number of people who don’t know each other, and don’t know you or why you wish to get on that plane in the first place. This observation sets the stage for the next-to-last chapter of Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind, which we have been discussing for a while now.

The first point Kevin makes in this chapter (chapter 11 in the book) is that conventional evolutionary explanations, such as kin selection and other gene-based mechanisms, are insufficient to account for the degree and sophistication of cooperative activities that have characterized human civilization ever since the agricultural revolution. A fully formed theory of cultural evolution is needed, to draw the outlines of which is, of course, Kevin’s goal. Obviously, the idea is not that cultural evolution is independent of its biological counterpart, but rather that it is a novel mode of evolutionary change that resulted from the particular path of biological evolution that hominins happen to have taken.

Two of the factors that make large-scale human cooperation possible are the ability to teach others, and language, which Laland has already argued itself evolved to facilitate teaching. A third factor was the origin of social norms. These specify how individuals are expected to behave within a group, including how to treat individuals who violate norms. Crucially, norms also make it possible for people to identify with a particular group, as abiding by its norms carries privileges for in-group members.

Moreover, humans are pretty much the only animals capable of trading goods (there are a few alleged cases in other primates, but they are disputed), and certainly the only ones that arrived at that convenient abstraction we call money. This level of sophistication requires language, and it is both facilitated and made necessary by the existence of division of labor, something that evolved to a high degree of sophistication, again, after the agricultural revolution, which made possible the existence of large and stable groups of humans.

All of this coordination is beneficial thanks to the advantage provided to individuals by indirect reciprocity: I do something for you, you do something for someone else, and at some point down the line another person who has been benefiting from in-group membership does something for me. Like allowing me to safely cross the Atlantic to get from New York to Rome. Repeated bouts of indirect reciprocity require gossip, so that people have a sense of whom they can trust and whom to stay away from. Needless to say, gossiping, and hence the building and destroying of social reputations, is not possible, again, without language.
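The logic of indirect reciprocity can be made concrete with a toy simulation. The sketch below is not Laland’s own model but a standard “image scoring with standing” setup of the kind studied in evolutionary game theory (in the spirit of Nowak and Sigmund’s work): agents carry public reputations spread by the equivalent of gossip, discriminators help only partners in good standing, and unjustified refusals damage one’s reputation. All names, parameters, and payoff values here are purely illustrative.

```python
import random

def simulate(n=50, rounds=20000, cost=1.0, benefit=3.0, seed=1):
    """Toy model of indirect reciprocity via public reputations.

    Half the agents are discriminators, who help any partner whose
    reputation ("image score") is non-negative; the other half are
    defectors, who never help anyone.
    """
    rng = random.Random(seed)
    strategy = ["discriminator"] * (n // 2) + ["defector"] * (n - n // 2)
    score = [0] * n        # public reputations, spread by "gossip"
    payoff = [0.0] * n
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        if strategy[donor] == "discriminator" and score[recipient] >= 0:
            payoff[donor] -= cost          # helping is costly to the donor...
            payoff[recipient] += benefit   # ...but worth more to the recipient
            score[donor] += 1              # observers report the good deed
        elif score[recipient] >= 0:
            score[donor] -= 1              # unjustly refusing a partner in good
                                           # standing damages one's reputation
        # refusing a partner in bad standing is "justified": no score change
    avg = lambda xs: sum(xs) / len(xs)
    disc = avg([payoff[i] for i in range(n) if strategy[i] == "discriminator"])
    defe = avg([payoff[i] for i in range(n) if strategy[i] == "defector"])
    return disc, defe
```

Because the benefit to the recipient exceeds the cost to the donor, help keeps flowing among discriminators, who end up with substantially higher average payoffs than defectors; the defectors quickly acquire bad reputations and stop receiving help, which is the reputational logic the paragraph above describes.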

Language, in turn, also evolves, quickly generating local dialects. Dialects then rapidly become a mark of local membership, a quick heuristic to tell apart in- from out-group members. They increase within-group cooperation, and likely across-group conflict, which sets the stage for group selection at the cultural level:

“Cultural processes generate plenty of variation among human groups for natural selection to act upon. Extensive data now demonstrate that the differences between human societies result far more from cultural rather than genetic variation. … Symbolic group marker systems, such as rituals, dances, songs, languages, dress, and flags, make it considerably easier for cultures to maintain their identities and to resist imported cultural traits from immigrants, than it is for local gene pools to maintain their identity by resisting gene flow.” (p. 283)

This is something important to keep in mind, as it is intuitive to say that cultures change more rapidly than genes. While this is true if we are talking about mutations (which are, indeed, rare), it is not the case once we consider gene flow and genetic recombination, which happen far more frequently, as Kevin points out, than some types of cultural change.

Laland also remarks on the widespread existence of practices that synchronize the behavior of individuals, like group dancing, or military marches. These activities result in the simultaneous release of endorphins, which in turn promotes within-group bonding. The broader point is that humans evolved a psychology of group behavior that is entirely unknown in other animals, and that cannot be explained on the basis of standard genetic models of evolution. Pace the evolutionary psychologists, of course, for whom we have seen Laland has relatively little patience.

We are reaching the end of this series of posts on Darwin’s Unfinished Symphony. The next and last installment will focus on the cultural evolutionary origin and significance of art.

Plato’s reading suggestions, episode 124

Here it is, our regular Friday diet of suggested readings for the weekend:

Why moral relativism is both incoherent and immoral.

The Infernal Library: books written by tyrants big and small.

We may disagree about morality, but not all the way down. Which means there is always room for dialogue.

The consciousness deniers, a natural history.

Sorry, but Hawking’s final paper is utterly unremarkable, and does not give us a way to prove the existence of parallel universes.

_____


Meeting Jeremy Bentham

Jeremy Bentham’s auto-icon, photo by the Author

I am not a Utilitarian when it comes to moral philosophy. And I am not into celebrities. Nevertheless, I was looking forward to “meeting” Jeremy Bentham, the father of Utilitarianism, who is currently visiting the Met-Breuer museum in New York City. To be precise, of course, it is Bentham’s famous “auto-icon” that is visiting, as part of the fascinating “Like Life: Sculpture, Color, and the Body” exhibit, which has just opened and will run until July 22nd.

The auto-icon is made of the preserved remains of Bentham, dressed in his own clothes and sporting his favorite cane, which he nicknamed “Dapple.” The head is actually a very realistic wax replica based on portraits from the time, as the actual one did not fare well during the chemical process that Bentham instructed his friend and physician Thomas Southwood Smith to carry out for preservation purposes. It is currently exhibited where the auto-icon normally resides, at University College London, looking rather ghastly, I’m told.

(You can hear the whole fascinating story as told to Nigel Warburton by Philip Schofield, Director of the Bentham Project, over at the Philosophy Sites podcast.)

Bentham came up with this strange idea for a number of reasons. For one thing, he wanted to stick it to the Church of England, which at the time (he died in 1832) still had a monopoly on the disposal of the dead. Bentham, who was a stunningly progressive man for his culture, thought the church was “irredeemably corrupt.” The second reason was so that his body could be dissected for medical research (by his friend Smith). At the time medical doctors had little material available for the purpose, mostly a few hanged criminals, courtesy of the Crown. Lastly, and most importantly, he had a vision of setting an example, ideally leading others to do the same and thereby get some utility out of their dead bodies.

He explained what he meant in his last essay, entitled “Auto-Icon; or, farther uses of the dead to the living” (you can read it here). He wrote that “[Just as] instruction has been given to make ‘every man his own broker,’ or ‘every man his own lawyer’: so now may every man be his own statue,” envisioning auto-icons being waterproofed and dispersed among trees on people’s estates, kept as decoration, or even used as “actors” in historical theater and debates. I guess it did not occur to him that the most natural and probably useful way to get some “utility” from our dead bodies is to let them naturally decay (e.g., inside biodegradable coffins), thus slowly rejoining the cosmic cycles.

Needless to say, Bentham’s vision did not take hold, and people are still buried in expensive coffins that actually postpone the natural recycling process, though an increasing number of people now opt for cremation. Of course, that wasn’t the only idea he put forth that did not pan out. His most famous contribution to philosophy, the first version of Utilitarianism, was so badly flawed that his follower and student John Stuart Mill immediately had to propose major modifications.

Bentham put forth as the “most fundamental” axiom of moral philosophy that “it is the greatest happiness of the greatest number that is the measure of right and wrong.” This “greatest happiness principle” was to be cashed out by equating happiness with pleasure and unhappiness with pain:

“Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think.” (The Principles of Morals and Legislation, ch. 1, p. 1)

I guess I’ve got a good excuse not to go to the gym this morning, then. It’s painful, not pleasurable, to me… Bentham’s principle puts his philosophy squarely in the realm of psychological egoism, rarely endorsed by philosophers of any time or place, one of the few exceptions being Thomas Hobbes, of all people. (This is not to be confused with the so-called “rational” egoism of Ayn Rand, by the way.) The problem with Bentham’s version of Utilitarianism is that it inevitably leads to a sort of lowest common denominator of human action. It defines, for instance, certain actions as immoral because they cause us pain, even though such actions (say, sacrificing ourselves for the good of another) would count as moral under most other frameworks. Moreover, if pleasure is all there is to happiness, it is hard to see why we shouldn’t all hook ourselves up to a pleasure machine and be done with it.

These are the reasons that brought Mill to make his famous distinction between high and low pleasures:

“It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question.” (The Collected Works of John Stuart Mill, vol. 10, p. 212)

Of course things are much, much more complex than this, and interested readers will find an in-depth discussion of Mill’s version of Utilitarianism, and of his critique of Bentham, in this article from the Stanford Encyclopedia of Philosophy.

But it wasn’t so much the intricacies of Utilitarianism, or the differences between Bentham and Mill (or between them and modern Utilitarians like Peter Singer), that were on my mind while I was contemplating the auto-icon. After all, the work was part of a broader exhibition that (as the Met-Breuer description puts it) “explores narratives of sculpture in which artists have sought to replicate the literal, living presence of the human body.”

Like Life

Another piece from the Like Life exhibit at the Met-Breuer, photo by the Author

Rather, I took Jeremy and many of the other works exhibited on the two floors that hosted “Like Life” as memento mori: artistic reminders of what Seneca called the ultimate test of our character, because, in reality, we die (a little) every day, and have our whole lives to prepare for the final exit. Too often, especially in the United States, we are in denial of the ultimate natural phenomenon that affects the entire biological world. We don’t talk about death; we try to stay away from the corpses of the dead, even those of our loved ones; we speak of death by way of euphemisms, as in “he passed away”; and some among us even dream of immortality through likely impossible new technologies such as mind uploading (which, conveniently, are always a few decades away).

Looking at Jeremy Bentham’s auto-icon I pondered why we are so scared of what is both natural and indeed necessary, if life is to continue. And, again, I was reminded of the words of Seneca:

“Reflect that the dead suffer no evils, that all those stories which make us dread the nether world are mere fables, that he who dies need fear no darkness, no prison, no blazing streams of fire, no river of Lethe, no judgment seat before which he must appear, and that Death is such utter freedom that he need fear no more despots. All that is a phantasy of the poets, who have terrified us without a cause.” (To Marcia, On Consolation, XIX)

What is left for us to do then? What another Stoic philosopher famously advised:

“I have to die. If it is now, well then I die now; if later, then now I will take my lunch, since the hour for lunch has arrived – and dying I will tend to later.” (Discourses I, 1.32)

So I did. After saying goodbye to Jeremy, I went off with my daughter, my partner, and my friend Skye to a nice lunch at a nearby Italian restaurant, enjoying once more good food and good company, the sort of thing that makes life worth living. Right now, though, it’s time to get to the gym…

Plato’s reading suggestions, episode 123

Oscar Wilde and leisureHere it is, our regular Friday diet of suggested readings for the weekend:

A really scary and thoughtful article on the rise of populism and the decline of liberal democracy.

Is it possible to train yourself to be a psychopath?

We should definitely talk about the future of leisure.

The multiworse is coming! The multiworse is coming!!

We should take seriously the move from the information age to the reputation age.

_____

Please notice that the duration of the comments window is three days (including publication day), and that comments are moderated for relevance (to the post one is allegedly commenting on), redundancy (not good), and tone (constructive is what we aim for). This applies to both the suggested readings and the regular posts. Also, keep ‘em short, this is a comments section, not your own blog. Thanks!

Why machine-information metaphors are bad for science education, part II: the search for new metaphors

metaphor vs simileWhile discussing some sections of a paper I wrote with Maarten Boudry, we have seen a number of reasons why using machine-information metaphors is bad for science education. As I pointed out before, the full paper also devotes quite a bit of space to arguing that those metaphors haven’t been particularly good in actual scientific research. One of the fascinating things to watch after I posted the first part of this commentary was the number of people who vehemently defended the “biological organisms are machines” take, both here on the blog and on my Twitter feed. It’s like here we are, in the second decade of the 21st century, and there are still a lot of Cartesians around, who have apparently never heard of David Hume. Oh well.

In the conclusion of this two-part series I am going to focus on the last section of my paper with Maarten, where we discuss the search for alternative metaphors, and in the end (spoiler alert!) suggest that the best thing to do at this point is just to describe things as they are, staying as clear as possible of metaphorical language. And when one really cannot avoid it, then use multiple metaphors and be very clear on the limits of their use. Let’s take a look.

In their classic work on metaphors, Lakoff and Johnson argue that the basic function of metaphorical concepts is to structure a new kind of experience in terms of a more familiar and delineated experience. In science as well as in everyday language, metaphors highlight particular aspects of whatever it is we are trying to grasp, but they will inevitably distort others. For example, the image of the “tree of life,” with new species branching off as budding twigs and extinct species as dead branches, is an instructive approximation of the relations of evolutionary descent. However, it can also foster misconceptions about “progress” in evolution, lead to a simplistic conception of speciation events, or encourage a downplaying of horizontal gene transfer and reticulate (i.e., by interspecies hybridization) speciation events. To give one more example, in physical chemistry the model of the atom as a miniature solar system, with electrons orbiting the nucleus as planets, though still having wide public appeal, is fundamentally inaccurate.

Of course, no metaphor will do its job perfectly, but it is crucial to realize, as Lakoff and Johnson have shown, that the widespread deployment of a particular metaphor can have a feedback effect on the way we perceive things, not just how we present them to others. In the examples discussed in my paper with Maarten, the lure of machine-information metaphors in the history of biology has invited scientists to think of genomes as “blueprints” for organisms, written in the four-letter alphabet of DNA and readable in a manner analogous to a computer code. But as we argue, the machine-information conception of living systems has led both the public and the scientific community astray.

In response to this problem, some scientists and science educators have proposed alternative, improved metaphors to characterize the relationship between genotype and phenotype. Biologist Patrick Bateson, for instance, was probably the first to compare the DNA sequence of living organisms with a recipe for a cake. The idea of a genetic recipe has several advantages over the blueprint metaphor, the most important being that it takes into account pleiotropy (one gene affecting more than one trait) and epistasis (gene–gene interactions). As a consequence, the simple picture of a one-to-one (or close to it) correspondence between particular genes and phenotypic traits is abandoned, which becomes clear when one considers that there is no way to locate particular ingredients in individual crumbs of a cake. Accordingly, there is no possibility of reverse-engineering from the end product back to the set of procedures (the “recipe”) that made that product possible. This has important consequences not just for science education, but for research agendas, as the idea of “reverse engineering” is commonly invoked everywhere from genomic studies to the understanding of the brain.

Of course, if carried too far, the recipe metaphor can in turn be quite misleading. To get the desired result, a cook has to lump together different ingredients in the correct proportions and follow a set of instructions for handling the dough and preparing the oven. But actual developmental encoding in living organisms is an enormously more complex and very different sort of procedure, one that is also highly dependent on epigenetic factors and unpredictable vagaries of the external environment. The expression of specific genes in the course of development bears no resemblance to the way a cook handles the ingredients of a recipe. Living organisms are also highly differentiated into a number of functional parts or components (cell types, tissues, etc.), in contrast with the homogeneous cake that comes out of the oven. Moreover, the genome is not written in anything like a “language,” as in the case of a recipe, and it certainly does not contain a description of the desired end product in any meaningful sense of the word “description.”

Condit and colleagues have discussed the recipe metaphor as an alternative to talk of blueprints, pointing out that it was adopted “with surprising swiftness” by science popularizers and the media in the 1990s. However, they also remark that, as a new “master metaphor” to capture the relationship between genotype and phenotype, the image of a recipe for a cake has little to recommend it either. For example, evoking recipes can invite people to think of the genome as a step-by-step manual that describes “how to make a human,” in that sense falling into the same trap as the idea of a blueprint.

That being said, if contrasted with the blueprint metaphor, the recipe metaphor conveys very well the point about the lack of one-to-one correspondence between genes and phenotypes, and hence it highlights an important fact about development and what biologists call the Genotype => Phenotype map. If the recipe metaphor is used within this restricted context, for example in explicit contrast with the characteristics of a blueprint, it is immediately clear what the salient points of connection with living systems are, and people are less likely to be misled by stretching the metaphor beyond usefulness. If the recipe metaphor is instead presented as a stand-alone replacement for the blueprint, however, it is bound to mislead people no less than its rival.

The same point applies to other interesting metaphors that have been proposed in this context, for example Lewis Wolpert’s comparison of early embryonic development with the Japanese art of origami. The analogy highlights the circuitous step-by-step development of the early embryo, but of course in a piece of origami art the structure is imposed top-down by an intelligent agent, whereas the functional differentiation in the embryo is regulated bottom-up by a complex interaction between genes and environment. Moreover, a piece of origami is simply folded to yield the final product, which in a very real sense is already there from the beginning. This is definitely not the way embryos develop, with their ability to respond to local and external environmental fluctuations.

The general problem that we have been discussing seems to us to be not just that one kind of metaphor or another is woefully inadequate to conceptualize biological organisms and their evolution. It is that it simply does not seem to be possible to come up with a metaphor that is cogent and appropriate beyond a very limited conceptual space. Although some of the alternatives are more accurate than the blueprint metaphor (in some respects), Maarten and I certainly have not found one that we would recommend as a replacement. Should we therefore try to avoid the use of metaphors in biological teaching and research altogether? Or do we simply expect too much from metaphors in science and education?

Analogical and metaphorical thinking is widespread among human beings, although of course different cultures and historical moments inspire people to use different metaphors. After all, a metaphor is an attempt to make sense of novel concepts by pairing them with known ideas to increase our overall understanding. Metaphorical thinking is therefore part of our language, and language is inextricably connected to our thinking, but to put it as Wittgenstein did: “It is, in most cases, impossible to show an exact point where an analogy starts to mislead us.” Yet a great part of doing philosophy consists precisely in clarifying our language in an attempt to advance our thinking. To quote Wittgenstein again: “Philosophy is a battle against the bewitchment of our intelligence by means of our language.” To complicate matters further, there is emerging empirical evidence that the human brain processes metaphors in a specific fashion: research on Alzheimer’s patients, for instance (see ref. in the paper), found that impairment of the brain’s “executive” function, associated with the prefrontal cortex, leads to poor understanding of novel metaphors (while, interestingly, comprehension of familiar metaphors is unaffected). Metaphorical thinking seems to be a biologically entrenched functional mode of our brains, and may therefore be hard to avoid altogether.

Both science and philosophy have made ample use of metaphorical and analogical thinking, sometimes with spectacularly positive results, at other times more questionably so. Nonetheless, it seems that nowhere is metaphorical thinking so entrenched — and so potentially misleading — as in biology. Given the maturity of biology as a science, and considering that it deals with objects whose nature is not as alien to our daily experience as, say, those of quantum physics, Maarten and I do not actually see any good reason for clinging to outdated metaphors in biological education and research when characterizing living organisms, their genomes, and their means of development. Taking into account the fact that machine-information metaphors have been grist to the mill of ID creationism, fostering design intuitions and other misconceptions about living systems, we think it is time to dispense with them altogether. Still, we are not so naive as to expect that this advice will be followed by scientists and science educators any time soon, precisely because the machine/information metaphor is so entrenched in biology education. What to do then? We propose two approaches, one for science educators, the other for practicing scientists.

In science education, talk of metaphorical thinking can be turned into a teaching moment. Students (and the public at large) would actually greatly benefit from explanations that contrast different metaphors with the express goal of highlighting the limitations intrinsic to metaphors and analogies. So, for instance, science educators and writers could talk about the human genome by introducing the blueprint metaphor, only to immediately point out why it does not capture much of what genomes and organisms are about; they could then proceed to familiarize their students and readers with alternative metaphors, say the recipe one, focusing on differences with the original metaphor while of course not neglecting to point out the (different) deficiencies of the new approach as well. The goal of this process would be to foster a cautious attitude toward metaphorical thinking, as well as to develop a broader understanding of how unlike common sense modern science really is. On the latter point, it is interesting to note, for instance, that a popular refrain among evolution or global warming deniers is that “simple common sense” shows that the scientists are wrong, a position that ignores the proper weight of technical expertise in favor of a folk understanding of nature. It is therefore crucial that the public appreciates the limitations of commonsense thinking about science.

There is an analogous teaching moment that can be brought to bear when research scientists engage in unbridled metaphorical thinking: we could refer to this as a philosophy appreciation moment. Scientists are notoriously insensitive to, or even downright dismissive of, considerations arising from the history and philosophy of their discipline, and often for good practical reasons: modern science is a highly specialized activity, where there is barely enough time to keep up with the overwhelming literature in one’s own narrow field of research, and certainly not enough incentive to indulge in historical readings or philosophical speculation. Nonetheless, historians and philosophers of science can easily show the pitfalls of metaphorical thinking (by using well-documented historical examples) and even get across to their colleagues some basic notions of philosophy (by analyzing the effects of particular metaphors on the development of specific lines of scientific inquiry). None of this will quickly amount to overcoming C.P. Snow’s infamous divide between “the two cultures,” but it may bring about better understanding and appreciation of philosophy by scientists, and perhaps even help science see new horizons that have been hitherto obscured by a superficially illuminating metaphor.