Author Archives: Massimo

About Massimo

Massimo is the K.D. Irani Professor of Philosophy at the City College of New York. He blogs at platofootnote.org and howtobeastoic.org. He is the author of How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life.

Plato’s reading suggestions, episode 122

Here it is, our regular Friday diet of suggested readings for the weekend:

A cultural anthropological field trip among the ufologists.

Is biography a thoroughly reprehensible genre? Or maybe only partially so?

Do you have any mirage friends? Should you get rid of them?

Someone familiar with the environmental literature checks Steven Pinker and finds him wanting. Again.

A different take from the currently dominant one on Monica Lewinsky.

_____

Please notice that the duration of the comments window is three days (including publication day), and that comments are moderated for relevance (to the post one is allegedly commenting on), redundancy (not good), and tone (constructive is what we aim for). This applies to both the suggested readings and the regular posts. Also, keep ‘em short, this is a comments section, not your own blog. Thanks!

Why machine-information metaphors are bad for science education, part I: biological machines and intelligent design

bacterial flagellum, as often represented in biology education

Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of ‘‘blueprints’’ for the construction of organisms. Likewise, cells are often characterized as ‘‘factories’’ and organisms themselves become analogous to machines. Predictably, modern proponents of the so-called theory of Intelligent Design, the latest incarnation of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as ‘‘irreducible complexity’’ and on flawed analogies between living cells and mechanical factories.

In reality, the living organism = machine analogy was already criticized by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do.

My friend and collaborator Maarten Boudry and I wrote an article several years ago in which we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. We argued that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.

We have discussed this topic twice in the last couple of years, once on the occasion of another paper with Maarten, on why machine metaphors in biology are misleading; more recently because of a paper I wrote about genes as blueprints; the current entry completes the trilogy, so to speak. In part I, here, I will present what Maarten and I had to say about the origin of machine-information metaphors in biology, as well as its questionable use in science education. In part II, next week, I’ll talk about the search for new and better metaphors in science and science education. Interested readers are referred to the original paper for references, as well as for a discussion of the misuse of machine-information metaphors in actual biological research (i.e., not just for educational purposes).

When delving into unknown territory, scientists have often naturally relied on their experiences in more familiar domains to make sense of what they encounter. In the early days of the scientific revolution, mechanical metaphors proved to be a powerful instrument to get a grip on new discoveries about the living world and the universe at large, and we can trace back the emergence of machine metaphors at least to the Middle Ages, when new achievements of technology had a profound cultural influence and captured the collective imagination. Against this background of technological innovation, it is not surprising that the pioneers of anatomy and physiology relied on the metaphor of the animal body as a complicated piece of machinery to make sense of their discoveries. The mechanical language provided a richness of meaning and allowed them to structure the new phenomena in terms of familiar experiences. For example, the image of the human heart as a pump with intricate mechanical components played an important role in William Harvey’s discoveries about blood circulation.

In the course of the 17th century, a new philosophy of nature became prominent that developed a conception of the universe in purely mechanical terms. According to this mechanical philosophy, which was developed by thinkers like René Descartes, Pierre Gassendi and Robert Boyle, the phenomena of nature can be understood purely in terms of mechanical interactions of inert matter. This mechanization of nature proved an important driving force behind the Scientific Revolution, and at the end of the 17th century culminated in Newton’s theory of motion. Newton’s description of planetary orbits following the fixed laws of gravity conveyed an image of a clockwork universe set in motion by an intelligent First Cause. In fact, that was exactly how Newton conceived the universe and its relation to the Creator. For Newton and many of his contemporaries, the importance of the mechanical conception of nature was greater than the mere term ‘metaphor’ would suggest, as the development of mechanistic philosophy was itself largely inspired by religious motivations; indeed, the very employment of machine metaphors invited theological speculation.

In the second part of the 17th century, the mechanical pictures of living organisms and of the cosmos at large converged into an intellectual tradition where theology and science were intimately intertwined: natural theology. The most famous representative of this tradition was William Paley, whose work Natural Theology: or, Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature (1802) made a deep impression on the young Charles Darwin. As the title of the book makes clear, Paley and the natural theologians conceived of Nature as a complicated machinery of intricate wheels within wheels, in which every organism has its proper place and is adapted to its environment. According to Paley, the contrivance and usefulness of parts exhibited by living organisms attests to the intelligence and providence of a benevolent Creator. This so-called ‘design argument’ already had a long intellectual pedigree, dating back to Plato, Cicero and Thomas Aquinas, but its most famous formulation is found in the first chapter of Natural Theology, in which Paley famously relies on the analogy between living organisms and a pocket watch to support his design inference.

While Darwin was the one who gave the most decisive blow to the design argument by suggesting a natural explanation for adaptive complexity in the living world, many philosophers would agree that David Hume foreshadowed its demise, by exposing several problems with the central analogy. In his Dialogues Concerning Natural Religion (1779), which actually predates Paley’s magnum opus by more than 50 years, we find a discussion of the design argument among Philo, the skeptical character that voices Hume’s ideas, Demea, the orthodox religious believer, and Cleanthes, the advocate of natural theology.

After Cleanthes has set out the design argument in terms foreshadowing Paley’s analogy of the watch, Philo objects that it is dangerous to derive conclusions about the whole of the universe on the basis of a spurious analogy with one of its parts. Given that our experience with design is limited to human artifacts only, we have to proceed with great caution, and it would be presumptuous to take so minute and select a principle as the human mind as the model for the origin of the whole universe. Hume realized that, at least in some cases, appearances of intelligent design can be deceptive.

In contemplating that ‘‘many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out’’, Hume even comes close to Darwin’s crucial insight about the power of natural selection. Although Hume does not deny that we can discern similarities between nature and human artifacts, he warns us that the analogy is also defective in several respects. And if the effects are not sufficiently similar, conclusions about similar causes are premature. To illustrate this, Philo proposes another possible cosmogony on the basis of the analogy between the world and an animal:

“A continual circulation of matter in [the universe] produces no disorder; a continual waste in every part is incessantly repaired: The closest sympathy is perceived throughout the entire system: And each part or member, in performing its proper offices, operates both to its own preservation and to that of the whole. The world, therefore, I infer, is an animal.” (Hume 1779, p. 39)

In The Origin of Species, Charles Darwin (1859) finally proposed a natural explanation for the phenomenon that inspired Paley but failed to convince Hume. Although the design argument is still of interest to philosophers and historians of science, it has been widely discarded in the scientific community. However, the analogy on which Paley based his inference seems to be alive and well, not only in the minds of creationists and ID proponents, but also in the writings of science popularizers and educators. Many scientists have actually argued that Paley at least offered an incisive formulation of the problem, as there is indeed a hard-to-shake intuition of contrivance and intelligent design in nature. As one of the most ardent defenders and popularizers of evolutionary theory, Richard Dawkins, put it, ‘‘Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Adaptive complexity, then, is still regarded as something that requires a special explanation.

In textbooks, science educators have presented the comparison of living organisms and man-made machines not just as a superficial analogy, but have carried it out to a considerable level of detail. For example, the cell has been described as a miniature factory, complete with assembly lines, messengers, transport vehicles, etc. Consider the following quote from Bruce Alberts, molecular biologist, and former president of the National Academy of Sciences:

“The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. Given the ubiquity of protein machines in biology, we should be seriously attempting a comparative analysis of all of the known machines, with the aim of classifying them into types and deriving some general principles for future analyses. Some of the methodologies that have been derived by the engineers who analyze the machines of our common experience are likely to be relevant.” (Alberts 1998, p. 291)

Creationists and their modern heirs of the Intelligent Design movement have been eager to exploit mechanical metaphors for their own purposes. For example, Bruce Alberts’ description of the living cell as a factory has been approvingly quoted by both Michael Behe and William Dembski, two leading figures in the ID movement. For ID proponents, of course, these are not metaphors at all, but literal descriptions of the living world, harking back to Newton’s conception of the Universe as a clock-like device made by the Creator. The very fact that scientists rely on mechanical analogies to make sense of living systems, while disclaiming any literal interpretation, strengthens creationists in their misconception that scientists are “blinded” by a naturalistic prejudice. In the creationist textbook Of Pandas and People, which has been proposed by ID advocates as an alternative to standard biology textbooks in high school, we read that ‘‘Intelligent design […] locates the origin of new organisms in an immaterial cause: in a blueprint, a plan, a pattern, devised by an intelligent agent’’ (Davis et al. 1993, p. 14).

The analogy between living organisms and man-made machines has proven a persuasive rhetorical tool of the ID movement. In fact, for all the technical lingo and mathematical “demonstrations,” in much of their public presentations it is clear that ID theorists actually expect the analogies to do the argumentative work for them. In Darwin’s Black Box, Behe takes Alberts’ machine analogy to its extreme, describing the living cell as a complicated factory containing cargo-delivery systems, scanner machines, transportation systems and a library full of blueprints. Here is a typical instance of Behe’s reasoning:

“In the main area [cytoplasm] are many machines and machine parts; nuts, bolts, and wires float freely about. In this section reside many copies of what are called master machines [ribosomes], whose job it is to make other machines. They do this by reading the punch holes in a blueprint [DNA], grabbing nuts, bolts, and other parts that are floating by, and mechanically assembling the machine piece by piece.” (Behe 2006, pp. 104–5)

Behe’s favorite model of biochemical systems is a mechanical mousetrap, the familiar variant consisting of a wooden platform, a metal hammer, a spring, etc. According to Behe, if any one of these components is missing, the mousetrap is no longer able to catch mice. He has termed this interlocking of parts ‘‘irreducible complexity’’ and thinks it characterizes typical biochemical systems. In other words, the mousetrap is to Behe what the well-designed pocket watch was for Paley. But whereas Paley can be excused on the grounds of the state of scientific knowledge in the 18th century, for Behe the situation is a little different. Modern biochemistry, nota bene Behe’s own discipline, has revealed that biochemical systems are not like mechanical artifacts at all. Moreover, even biological systems that are irreducibly complex under Behe’s definition pose no problem for evolution by natural selection, as has been shown in detail by people like cell biologist Ken Miller.

ID proponents have buttressed their analogies between living systems and mechanical contraptions with a lot of visual rhetoric as well. The flagellum of the bacterium E. coli, the hallmark of the ID movement, has been represented as a full-fledged outboard rotary motor, with a stator, drive shaft, fuel supply, etc. It features on the cover of Dembski’s book No Free Lunch, and has been used numerous times in presentations and online articles. The idea seems to be that if it looks designed, it has to be designed. But as Mark Perakh has documented in a paper published in 2008, ID supporters invariably use idealized and heavily stylized representations of the flagellum, in order to make it more closely resemble a man-made contraption. Another striking example of this visual rhetoric is a video by Discovery Institute president Stephen C. Meyer, which presents a computer-simulated — and again heavily stylized — journey inside the cell, and describes the biochemical processes in terms of ‘‘digital characters in a machine code,’’ ‘‘information-recognition devices,’’ and ‘‘mechanical assembly lines.’’ Meyer commented that evolutionists will now have a hard time dissuading the public from the impression that ‘‘the evidence for design literally unfolds before them.’’

Of course, the mere observation that creationists have seized on machine metaphors in biology does not suffice to demonstrate that these metaphors do not make scientific sense. However, the fact that they tend to do so systematically, using full-length quotes from respectable scientists, should make us wary of the possible dangers of misleading metaphors. If the rhetoric of the ID movement is demonstrably based on these mechanical analogies, it can be instructive to reexamine their scientific merits. In the paper, Maarten and I argue that the machine-information analogy has indeed influenced the way scientists themselves think about biological structure, function, and evolution. By analyzing the consequences of and reactions to this analogy in actual biological research, we show that its scientific merits are very weak, and that its place in modern biology has become questionable. What then? Stay tuned for part II, on the search for new and better metaphors…

Plato’s reading suggestions, episode 121

Here it is, our regular Friday diet of suggested readings for the weekend:

Psychologist discovers world’s funniest joke. Meh, it’s okay.

In praise of unfinished novels, a literary genre of its own.

You call that a sport??

The real reason many Americans stay poor. (Hint: it has nothing to do with them being lazy over-spenders.)

“Pinker is disdainful and condescending, sympathetic to humanity in the abstract, but impervious to the suffering of actual human beings.”


Book Club: Darwin’s Unfinished Symphony, 7, the dawn of civilization

Homo sapiens is the only species on planet Earth to have experienced three phases of evolution: the standard biological one, driven by mutation and natural selection; gene-culture coevolution; and now the period of evolution driven primarily by culture. This is how chapter 10 of Kevin Laland’s Darwin’s Unfinished Symphony: How Culture Made the Human Mind begins the transition to the author’s discussion of that very last, novel, and crucial phase. (More entries in this ongoing series here.)

It’s an obviously crucial topic for a variety of reasons. First off, to help explain why on earth we evolved such large and metabolically expensive brains. Keep in mind that the human brain accounts for only 2% of our total body weight, and yet it consumes a whopping 20% of our daily caloric intake. (It’s unfortunate that thinking harder doesn’t lead to weight loss…). Second, as Kevin has documented in the previous chapters of the book, it is our capacity for social learning (and teaching) that accounts for the incredible success of our species, as the third mode of evolution is what has made it possible for us to build giant cities, go to the Moon, and waste our existence on social media.

Kevin begins by addressing a related question: why did it take so long for our species to develop complex civilizations, and why do hunter-gatherer societies even today have very limited technology and simple cultures? The likely answer has to do with the severe limitations imposed by a hunter-gatherer lifestyle. To begin with, of course, hunter-gatherers have to be constantly on the move, changing base location once the local resources are depleted. This means that it is impossible to settle down long enough to develop a large population size and the division of labor that foster new technological developments. And even if some new technology were to be developed, it would have to be of limited size and complexity, again because the entire population has to pick up and move every few weeks or so.

Similarly, in hunter-gatherer societies the birthrate is typically low, with new pregnancies well separated in time, as a human female cannot carry and care for many small children when the group is constantly on the move. Small population size and temporary abodes also mean no accumulation of wealth of the kind that makes division of labor possible, leading in turn to the origin of specialized classes of workers that can rapidly accumulate specific technical knowledge over a few generations.

“This helps us understand why hunter-gatherer technology was only slowly changing for such a long time, and also why, even today, many small-scale societies possess limited technology. Hunter-gatherers are effectively trapped in a vicious cycle that severely constrains their rate of cultural evolution.” (p. 248)

That’s also why the invention of agriculture, which took place multiple times after the last Ice Age, is tightly linked with the origin of complex human technological cultures. The reason agriculture did not originate earlier is that the conditions following that Ice Age, about 11,500 years ago, have actually been the most favorable — climatically speaking — for such an event over the last two million years of hominid evolution. And before then our ancestors simply did not have the required brain power and ability to communicate through language.

Plant and animal domestication of some sort preceded the full-blown agricultural revolution, and the first plants to be domesticated were annuals, characterized by a rapid life cycle and hence easy to select artificially. These included peas, wheat, rye, barley, and maize. A new form of wheat, for instance, appeared around 9,600 BCE in the eastern Mediterranean region. Maize was farmed in southern Mexico around 9,000 years ago. Millet appeared in China between 10,300 and 8,700 years ago, rice around 9,000 years ago.

The invention of agriculture was not without its own problems. The more stable source of food led to population explosions, which in turn caused periodic famines. Indeed, the archeological data show that Europeans became shorter by about 7 cm between 2,300 and as little as 400 years ago, because of poor nutrition.

As Laland points out, agriculture is a great example of niche construction on the part of human beings. The old idea, in ecology, that niches are “out there,” waiting to be filled by new species of organisms, has been questioned for some time now. Rather, living beings actively alter their environment, co-evolving with it, if you will. By far the most spectacular example in the history of Earth is the fact that we have high levels of oxygen in our atmosphere, a byproduct of photosynthesis, an organic process that has made animal life possible in the first place.

Since agriculture was not an unqualified good, it is reasonable to ask how the new mode of life so largely and rapidly replaced the old hunter-gatherer one. Kevin offers two main reasons: first, agriculturalists simply outbred hunter-gatherers, because of the larger population size made possible by a sedentary lifestyle. Before the advent of agriculture the world’s human population had stabilized at around one million people. By the time of the Roman empire it was up to 60 million.

The second factor was a wave of innovations triggered by agriculture. For instance, the wheel appeared simultaneously in Mesopotamia, Russia and central Europe around 5,500 years ago. The first organized religions also sprang up in agricultural societies, with different cultures, predictably, worshiping gods related to agriculture: Inti, the sun-god of the Inca; Renenutet, the Egyptian goddess of the harvest; Ashnan, the goddess of grain in Mesopotamia; and Ceres, the Roman goddess (counterpart of the Greek Demeter) who was credited with the discovery of wheat, the invention of ploughing, the yoking of oxen, and similar.

Here is another way to appreciate the difference between pre- and post-agriculture humanity:

“Prior to the advent of agriculture, each population would have possessed at most a few hundred types of artifacts, while today the inhabitants of New York are able to choose between 100 billion bar-coded items. … One recent estimate of the amount of information now stored on the internet is 1,200,000 terabytes.” (p. 263, 269)

Kevin points out that all this innovation has had dark sides, including environmental destruction, not just today, but throughout the last 10,000 years or so, with humanity being responsible for countless extinctions of other species; as well as of course the scale of war that technology has made possible; and the increasing inequality (compared to hunter-gatherer societies) among human beings themselves. It seems like both natural and cultural selection don’t really care about ethical considerations, although of course we should. But that’s another story.

Plato’s reading suggestions, episode 120

Here it is, our regular Friday diet of suggested readings for the weekend:

So long, Susy?

Higher education is drowning in bullshit, a partial list.

The puzzle of patriotism.

Put the Ph-ilosophy back into Ph-D.

Science’s inference problem: when data doesn’t mean what we think it does.


Against The Four: Amazon, Apple, Facebook, and Google

“The Four” are the giant tech companies Amazon, Apple, Facebook and Google. You can read all about why they are a problem in Scott Galloway’s informative book, The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Galloway is a Professor at NYU’s Stern School of Business, where he teaches brand strategy and digital marketing. He is also the founder of several firms including L2, Red Envelope, and Prophet. In 2012, he was named “one of the world’s 50 best business school professors” (no, this isn’t an oxymoron) by Poets & Quants. Moreover, he has served on the boards of Eddie Bauer, The New York Times Company, Gateway Computer, and Berkeley’s Haas School of Business. In other words, this is someone who knows a lot about corporate culture, and not at all a left wing moralist such as myself.

(If you don’t have time to read the book, look at these two articles that turned me onto it, in Wired magazine, and over at the BBC.)

In a nutshell, the problem with The Four is that they simply have far too much power in our lives, both in terms of the information they store about us (and how they use it), and of their financial muscle, which of course easily turns into political influence. From the BBC article:

“The four most important tech companies aren’t even just tech companies anymore. They each have embedded themselves in our lives, hugely influencing us by playing to our basic human instincts: from the eternal human search for answers to our need for love. … [Galloway] also says that the companies’ deep pockets and massive customer base are what allow the companies to start competing in different sectors and industries – like how Amazon is now producing original TV content, or how Facebook has more or less emerged as a news media platform. That has prompted scepticism and warnings from critics at places like the New York Times and Bloomberg.”

And that was before Amazon bought Whole Foods, for instance. You can dig into the details yourself; there is no point in my repeating easily found material here. My objection to The Four is ethical: I am generally skeptical of any concentration of power, especially in the private sector (but not only: governments are a little better to the extent that they really are accountable to their people by means of a not too dysfunctional democracy; the US Government does not, at this point, qualify, for instance). But I also recognize that these and other tech companies have made possible a wonderful number of things, including, of course, this very blog (which in its initial incarnation, Rationally Speaking, was hosted by Google, with essays that were for many years written on a MacBook or an iPad, and are still broadcast via Facebook).

Moreover, I am certainly not an anti-technologist. On the contrary, until recently I was a very early adopter of new technologies. I bought the first iPhone, and then an iPad (and then an iPad Pro, which replaced my laptop), signed up on Facebook very early on, have used Amazon for many years, and have a very very long browser history with Google.

And before you accuse me of naivete, I am perfectly aware that The Four aren’t the only giant tech companies to be wary of (add Twitter, Verizon, Samsung, just for starters), and the list gets far longer when one moves to large corporations in general. Ultimately, I think the only reasonable and effective way to curb corporate power is by legislation, along the model of the famous breakup of Bell back in 1982. As a society, we want innovation, and we certainly want private entities to benefit from their work. But innovation needs competition, not near monopolies, and benefiting from one’s work does not mean creating a very small class of ultra-billionaires who exploit their workers (as Amazon certainly does), including in other countries (as Apple equally certainly does).

But while we are waiting for governments to take action (more likely in Europe than in the US, at the moment — see Galloway’s take here), it doesn’t mean we have to be complicit enablers. I try to practice what in philosophy is called virtue ethics, which means that my first focus is on improving my own character, which in turn requires acting as virtuously (in the Greco-Roman, not the Christian sense) as possible. It follows, it seems to me, that I need to extricate myself as much as possible from The Four, as an initial step.

And that’s where I discovered two interesting things, which are the main objects of this post. First, it is much harder than one might at first imagine. Second, you are unlikely to get a lot of support even from friends and family, who might even exhibit hostility to your intentions. Let me explain.

They are called The Four for a reason. They are everywhere, and next to impossible to avoid, unless you are willing to completely disengage from the Internet. In the postscript, I detail the steps I have taken so far, in case anyone else wishes to try it. You can thank me later for having saved you endless hours of web searching (using DuckDuckGo, of course…).

Apple was actually the easiest to get rid of. Because their ecosystem is so tight and positively discourages any contact with the outside, once you decide to get out of it, you pretty much have to go the whole nine yards. This pained me, because I have been an Apple fan ever since I ditched Microsoft because of the poor quality of their products, back in 2004. But just a couple of weeks later, I hardly miss my iPad and iPhone, and I most certainly don’t miss the Watch, one of the most intrusive gadgets ever made.

Next was Amazon. The big steps here were to stop shopping on their online store (easy, plenty of alternatives), to replace the Kindle with one of several other high quality e-book readers, and to begin to direct readers of my own books to either publishers’ web sites or other e-book stores. Of course, the bulk of my collection of books is on Amazon, but I’ll eventually get it back by way of available software that decrypts the files and turns them into the popular epub format. I still watch Amazon videos, because they are good and not available elsewhere. Before you accuse me of hypocrisy, however, keep in mind that the goal is to minimize my footprint on The Four, so to speak, not to eliminate them from my life altogether. It’s an incremental project, not a revolution.

Which brings me to Google. In a sense, I actually increased my use of their products, since now my office suite is the Google one, replacing Apple’s iWork. But it is a temporary transition, dictated by the limited time available to search for suitable long-term alternatives, and by the need not to disrupt several ongoing collaborative projects. And at any rate, I hit Google where it hurts: their web search engine, which produces their advertising revenue and is, of course, highly invasive of our privacy. DuckDuckGo does an excellent replacement job.

Finally, Facebook. This was the hardest, again unless I was willing to forgo keeping in touch with (real) friends and family, and also to give up my outreach presence (my “official” philosophy page, my participation in the largest online Stoic community, and a few other things). What I did was to get rid of their obnoxious Messenger app, as well as “unlike” and “unfollow” a crapload of pages that were, of course, generating lots of targeted advertisements. I am now using Facebook with a very small and tightly guarded circle of actual friends and family, as well as for the above-mentioned outreach, nothing else.

So the bottom line of the first point is that this exercise showed me very clearly just how dependent our lives have become on The Four. Perhaps this should not have been surprising, but experiencing the full measure of it in such a short period was eye opening. The other thing that was eye opening relates to the second point: the comparative lack of support, and occasionally more or less overt hostility, I got from friends and family (and, I’m sure, from some readers, now that I’ve put this out).

When I explained what I was doing and why, a good number of people were puzzled, and began immediately to mount arguments against my enterprise. “It’s useless.” “You won’t succeed.” “It’s going to cost you a lot of money and time.” “What do you have against corporations?” “Are you a Luddite?” “Why do you hate America?” Okay, I made up the last one, but the others have been thrown at me fast and furious during the past few weeks.

So I patiently explained: no, I’m not a Luddite; on the contrary, I’ve always been an early adopter of technology, especially electronics. No, it isn’t really that expensive (as I’m sure everyone knows, Apple alternatives in terms of phones and tablets are incredibly cheap by comparison). Yes, it took me some time, but I was helped by others who have similar objections and have done much of the legwork for me; and at any rate, it’s an ethical decision, and it would be a bit too easy if it didn’t cost me money or time or effort.

My attitude toward corporations is the one already explained above. I am perfectly aware that if it weren’t Apple it would be someone else, but that’s not an argument against divesting from Apple. It’s the social activism equivalent of what in biology is called frequency-dependent selection: you go for the rare phenotype, which eventually becomes the dominant one, at which point you switch to the new rare one, and so on.
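For readers curious about the biological mechanism behind the analogy, here is a toy sketch in Python of (negative) frequency-dependent selection. This is my own illustrative model, with made-up parameter values; it is not drawn from the post itself, only meant to show the dynamic in miniature:

```python
# Toy model of negative frequency-dependent selection: the rarer of two
# phenotypes (A and B) enjoys the higher fitness, so whichever is
# currently rare tends to spread -- the dynamic used as an activism analogy.

def next_generation(p, s=0.5):
    """Return the next-generation frequency of phenotype A.
    p: current frequency of A; s: strength of the frequency dependence."""
    w_a = 1 + s * (1 - p)             # A's fitness rises as A gets rarer
    w_b = 1 + s * p                   # B's fitness rises as B gets rarer
    w_mean = p * w_a + (1 - p) * w_b  # population mean fitness
    return p * w_a / w_mean

p = 0.05  # phenotype A starts out rare
for _ in range(200):
    p = next_generation(p)

# the system settles at p = 0.5, where neither phenotype is rare
print(round(p, 3))
```

The equilibrium at one half is the point of the analogy: pressure in favor of whichever option is currently rare keeps reshaping the balance, rather than fixing a single permanent winner.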

In terms of success and utility, it depends on what one’s goal is. I am perfectly aware that Apple, Google and the others are not going to feel the pinch of my decisions. But from a virtue ethical perspective that’s not the objective: I just don’t want to be personally co-responsible for what they are doing. Moreover — and that’s why I’m writing this post and promoting Galloway’s book — if enough others do the same, the damage will be greater and greater, and it might bring about change.

Also, again in terms of success, as I said above my goal was never to completely disengage from The Four, only to distance myself from them. Many years ago I read Peter Singer’s How Are We to Live?: Ethics in an Age of Self-Interest, and it changed my life. No, I did not become a utilitarian like Singer, but I was struck by one of the first things he says in that book: don’t try to do everything at once, you will be overwhelmed, get discouraged, and fail. Instead, decide what your ethical priorities are, and then make some small but concrete steps in that direction. I discussed How Are We to Live? at a book club I founded in Knoxville, TN, and my wife at the time and I were talking about it on our way back home. We decided to follow Peter’s advice: we sold our house in the suburbs and moved downtown, near the bus lines and where we could bicycle to work; we also sold one of our two cars. Our life improved as a result, our carbon footprint went down, and we felt good about the decision. The current anti-Four action is along similar lines: I’m doing something, not everything, because I can do the former, but not the latter.

I thought my explanations were reasonable and cogent. One may still disagree, and indeed one may even agree with my take and still not act in a similar fashion, for all sorts of reasons. But my arguments hardly made a dent. Some people seemed not just to disagree with me, but to positively resent my chosen course of action. What was going on?

Then it hit me. It’s the same reaction I got when I stopped eating meat, and that my partner gets every time people find out she is a vegetarian. The same objections are immediately raised: it’s useless; it’s difficult; what’s wrong with the meat industry?; are you an environmental nut?; do you feel somehow superior to the rest of us?; why do you hate America??

It’s the next-to-last one that should give you the clue. At least in my judgment, a lot of people who are not vegetarian recognize, at some level, that vegetarians have by far the better argument: no matter how you look at the issue — in terms of animal suffering, environmental degradation, treatment of labor, or even pure and simple self interest when it comes to health — vegetarianism is better. But it’s harder, too. Steaks are delicious; burgers are delightful; and everyone eats them, so it’s easier to just go along with the habit. But when you meet someone who is bucking the trend, and you are dimly aware that she has made the right choice and you haven’t, resentment kicks in. She simply must be mistaken, and you begin to rattle off a number of more or less incoherent “arguments” for why that is “obviously” the case.

I think something similar has been going on with my anti-Four strategy over the past few weeks. A number of my friends and family realize that I’m onto something (and Galloway’s book gives me plenty of well-researched ammunition, as well as the comfort of knowing that there are others who think and act the same). But it’s too hard, or expensive, or just inconvenient for them to follow suit. So I must be wrong. And once you know someone is wrong then you immediately begin to search for all the flaws in their reasoning, while ignoring the big ones in your own. It’s a well-known cognitive bias.

Be that as it may. My conscience feels slightly better, in the same way and measure in which similar small decisions (to not eat meat, to try to shop locally, to voluntarily pay carbon footprint offsets when I travel by air, to change banks because my old one was a giant corporate monster, and so forth) have made me feel better. Is this going to change the world? Of course not. But what are you doing to help, right now?

_____

Postscript: Massimo’s (ongoing) anti-Four plan. Each item lists a service or product offered by Amazon, Apple, Facebook or Google, followed by a suitable, or at least less objectionable, alternative. All of these have already been implemented, it took about a week to switch completely. Remember, the goal is not perfection, just progress. Readers’ suggestions for further improvements are welcome.

Amazon > myriad alternative online retailers, obviously

Amazon Kindle > Nook, Kobo, Sony, etc.

(My) Amazon books > wherever possible I changed links to direct readers to the publishers themselves instead of the A-store

Amazon Video > Hulu, Netflix, though I still use some Amazon Video because some of their productions are unique and good

Apple’s iPad > any Android tablet (even though Android is Google, each manufacturer uses it differently, and the platform is more open than Apple’s)

Apple’s iPhone > any Android phone, except Google’s own, obviously

Apple’s Watch > back to analog (and more stylish!)

Apple’s TV > back to simple smart TV native apps

Apple’s Music > Pandora, Spotify, or similar

Apple’s Mail > Aqua Mail (which does not track your search history), or any of a number of alternative third party clients

Apple’s office productivity (iWork) > Google office, as a first step, then independent systems, ideally open source

Apple’s Safari browser > Firefox (which does not track your history); Google Chrome not acceptable

Facebook > un-liked most pages, tightened security, limited who can ask me for “friendship”

Facebook Messenger > eliminated in favor of simple sms, or third-party apps

Google search > DuckDuckGo (which does not track your search history)

Google navigator > this is a tough one (particularly since Apple Maps is neither acceptable for this project, nor good, really), though for instance Citymapper works very well for major cities

Plato’s reading suggestions, episode 119

French Revolution painting

Here it is, our regular Friday diet of suggested readings for the weekend:

How loneliness generates empathy and shapes identity.

The strange order of things, by Antonio Damasio.

The ungrateful traveler: how we have come to take intercontinental travel for granted.

How we forgot the collective good and started to think of ourselves primarily as consumers.

How responsible are killers with brain damage?

Review of studies shows mindfulness meditation apparently no better than watching nature documentaries on tv, when it comes to fostering pro-social behavior.

Lessons from history on how to spot a financial bubble.

Steven Pinker’s debatable scholarship about the Enlightenment, part 1/2.

Steven Pinker’s debatable scholarship about the Enlightenment, part 2/2.

_____


Book Club: Darwin’s Unfinished Symphony, 6, gene-culture co-evolution

map of lactose tolerance

Kevin Laland’s book, Darwin’s Unfinished Symphony, which I have been discussing for several posts now, is basically one long argument in favor of the thesis that human evolution has been shaped by a feedback process involving a cultural drive mechanism initiated by natural selection, a mechanism that favored the acquisition of accurate and efficient copying. Chapter 9, to be examined here, is devoted to the classic approach of gene-culture co-evolution, the fundamental notion that cultural changes affect genetic evolution, and indeed that, as time passes, human evolution is increasingly driven by culture and less so by biology (though biology always remains a fundamental constraint to be reckoned with):

“Genetic propensities, expressed throughout development, influence the cultural traits that are learned, while cultural knowledge, expressed in behavior and artifacts, spreads through populations and modifies how natural selection affects human populations in repeated, richly interwoven interactions.” (p. 217)

While the chapter begins with an interesting treatment of the phenomenon of right-handedness, the standard example of gene-culture co-evolution is, of course, lactose tolerance. In most humans, the ability to metabolize milk disappears in adulthood, as it was not needed in prehistoric times. But some populations have large numbers of adult individuals who retain a functional version of the gene coding for lactase activity, resulting in the phenotype of lactose tolerance. We now know that lactose tolerance evolved independently at least six times, and that this happened after the switch to agriculture following the last glaciation, making it a strong candidate for culture-driven genetic change in humans. Interestingly, mathematical models show that the rapidity of spread of the genetic trait depends on the fidelity of transmission of the cultural one: the more likely children of milk drinkers are to become milk drinkers themselves, the stronger the selection coefficient favoring the continued expression of the lactase gene into adulthood.
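The dependence of the genetic dynamics on cultural transmission fidelity can be illustrated with a deliberately crude simulation. To be clear, this is my own sketch with made-up parameter values, not Laland’s actual model; it only reproduces the qualitative feedback described above:

```python
# Crude gene-culture feedback sketch: the lactase-persistence allele is
# favored in proportion to how many people drink milk, and milk drinking
# is itself culturally transmitted with fidelity f.

def generations_until(f, s=0.1, target=0.9):
    """Generations for the allele to reach the target frequency.
    f: probability that a milk drinker's child also drinks milk.
    s: selection advantage of the allele among milk drinkers."""
    allele, drinkers = 0.01, 0.1     # made-up starting frequencies
    gens = 0
    while allele < target:
        # cultural step: the habit is retained with fidelity f, plus a
        # small constant rate of fresh adoption
        drinkers = min(1.0, drinkers * f + 0.05)
        # genetic step: the allele's advantage scales with milk drinking
        w = 1 + s * drinkers
        allele = allele * w / (allele * w + (1 - allele))
        gens += 1
    return gens

# higher cultural fidelity -> markedly faster genetic change
print(generations_until(f=0.5), generations_until(f=0.99))
```

The exact numbers are meaningless; what matters is the ordering: the more faithfully children of milk drinkers become milk drinkers themselves, the faster the allele spreads, which is the qualitative result of the models cited in the chapter.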

Several other traits have been shown to have evolved in a similar fashion in recent human history, including genes involved in skin pigmentation, salt retention, and heat stress, all obviously related to the sorts of climate changes experienced by human populations during their migrations. Unfortunately for us today, some of these strongly selected genes facilitate a highly efficient usage of food sources, as well as storage of energy into fats. Hence the trouble that many moderns are experiencing with obesity, leading to diabetes and heart problems, among other negative effects. Another fascinating example is the sarcomeric myosin gene MYH16, expressed mostly in the jaw muscles. A sizable chunk of the gene has been deleted, leading hominins to lose a lot of jaw musculature. This genetic event occurred at about the time we invented cooking, which made strong jaw muscles unnecessary (and likely metabolically expensive). And of course, many genes involved with brain development, particularly the neocortex, are now known to have undergone very strong positive selection in recent times.

As Kevin is careful to point out, none of this means that natural selection stopped working in humans. So long as there will be differential survival and reproduction, selection will be active on our genomes. But its mode and tempo have been dramatically altered by the onset of cultural evolution, which has become a driver, rather than an outcome, of natural selection in our species. As Laland puts it:

“Theoretical models consistently find that gene-culture dynamics are typically faster, stronger, and operate over a broader range of conditions than conventional evolutionary dynamics. … This picture of the evolution of the human mind is radically different from the portrayal advanced by evolutionary psychologists and many popular science writers.” (p. 239)

I think Kevin is a bit too mild when he discusses the limitations of evolutionary psychology (whose initial central hypothesis, a massive modularity of the human mind, has now been definitively rejected empirically). He states that current research in gene-culture co-evolution shows that the degree of mismatch between our genetic endowment and our culturally created environment is “far more limited” than evopsych authors envisioned. I’d say that’s a dramatic understatement, but certainly still an observation that should lead serious evolutionary psychologists to revise a great deal of what they are doing, abandoning the increasingly silly idea that the Pleistocene was a crucial “environment of evolutionary adaptedness” (EEA), as if the genetic evolution of Homo sapiens had suddenly stopped at that point in time.

“Far from being trapped in the past by an outdated biological legacy, humans are characterized by a remarkable plasticity. Our adaptiveness is reinforced by both cultural and biological evolution.” (p. 240)

If people who write about evopsych were to take this conclusion a bit more seriously, especially when they write for a general public, there would be a lot less garbage floating around the pop science literature. But I ain’t holding my breath…

Plato’s reading suggestions, episode 118

nude nymphs

Here it is, our regular Friday diet of suggested readings for the weekend:

Gallery removes naked nymphs painting to “prompt conversation.”

What can, exactly, science tell us about morality? Something, but not the whole thing.

Can blockchain technology be used to solve the problem of inequality? Sure, but like hell it’s going to happen…

The good news about regret.

The philosophy of mid-life crisis.

_____


The non-problem of moral luck

The Good Place - trolley dilemma

The Good Place is an unusual comedy on NBC, featuring a professor of moral philosophy among its main characters. My friend Skye Cleary has interviewed the real life philosopher who consults for the show, Todd May of Clemson University, for the blog of the American Philosophical Association. The exchange is definitely worth a read. In this post I will argue that one can learn more about moral philosophy from watching a single episode of the show than from listening to a technical talk in that same field while attending the APA’s own annual meeting.

Episode five of the second season of TGP features a sophisticated discussion of the infamous trolley problem, a thought experiment in ethics that has by now generated a cottage industry among both philosophers and neuroscientists. I will not explain for the n-th time what the problem consists of, you can look it up on Wikipedia. Suffice it to say that the more I study virtue ethics, the more I become skeptical of the value of much modern moral philosophy, with its indulging in more and more convoluted hypothetical situations that seem to be designed more to show off the cleverness of the people working in the field than to actually help the rest of us live an ethical life. It is no coincidence that the dilemma is always framed in terms of what a deontologist or a utilitarian would do, those two frameworks having gotten further and further away from any relevance to real life, contrary to what either Immanuel Kant or John Stuart Mill surely intended.

At any rate, the episode in question features a theoretical lecture on trolleys by the resident philosophical character, Chidi (played by the excellent William Jackson Harper). One of those on the receiving end of the lecture is the demon-turned-good-guy Michael (played by the awesome Ted Danson). During the lecture, Michael becomes impatient with the theory, so he snaps his fingers and transports Chidi, his friend Eleanor (played by Kristen Bell) and himself aboard an actual trolley, about to kill what appear to be real people. Michael then asks Chidi for a real-life demonstration: what is the philosopher going to do when suddenly faced with the dilemma, in the field, so to speak? Hilarity (and mayhem) quickly ensue. The episode is so good that I made my students watch it and comment on it.

Michael’s point is well taken: ethics is not (or ought not to be!) a theoretical exercise in cleverness, but a guide to navigating life’s real situations, and Chidi the philosopher — while very good in theory — fails spectacularly at it. I was thinking of that sitcom-imparted lesson while attending a talk at the Eastern APA meeting last January, delivered by Philip Swenson of the College of William and Mary. In the following I will pick on Swenson a bit, not because his talk was bad (it wasn’t), but because it is an example of a way of doing philosophy that I increasingly object to, on grounds of indulgence in irrelevant logic chopping.

Swenson set out to propose a solution to the “problem” of moral luck. He began, of course, with a couple of hypothetical situations:

Resultant luck case. Alice and Bill both go on walks along a riverbank. Both encounter a drowning child and attempt a rescue. They make the same choices and attempt the same actions. Alice’s rescue succeeds, but a sudden current prevents Bill’s attempt from succeeding, and the child drowns.

Circumstantial luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion, Claire goes for a walk along the riverbank. She does not encounter a drowning child. If Claire had encountered a drowning child she would have rescued the child.

What’s the problem? I mean, other than for the (fortunately hypothetical) child who occasionally drowns? Swenson is bothered by the fact that, in the first case, if we say that Alice is more praiseworthy than Bill, it looks as though we accept something apparently horrible called “resultant moral luck.” In the second case, if we say that Alice is more praiseworthy than Claire, then we accept something equally objectionable, called “circumstantial moral luck.” As Swenson puts it:

“Rejecting circumstantial moral luck appears to require a very significant revision to everyday moral judgment. Consider the plausible claim that a great many people all over the world are not so different from those who went along with the rise of the Nazis. Many people would have done similar things under similar circumstances. If we accept this and reject circumstantial luck then it looks as though some radical claim or other will follow.”

That would be, in case the reasoning isn’t clear, the radical claim that most of us are not as good as we think, and that if we had lived under the Nazis we would have been just as culpable as the majority of the German population of the time for the Holocaust. But it doesn’t end here, there is a third case to consider:

Constitutive luck case. Alice goes for a walk along a riverbank and encounters a drowning child. She rescues the child. On a separate occasion Daniel goes for a walk along the riverbank and also encounters a drowning child. Because Daniel is — through no previous fault of his own — cruel and uncaring, he refrains from rescuing the child. However, if he had possessed Alice’s naturally caring disposition, he would have rescued the child.

Swenson went on to remind the audience of the two classical “solutions” found in the philosophical literature for the problem of moral luck: “responsibility skepticism” (deny that anyone is ever praiseworthy or blameworthy at all), and the “responsibility explosion” (say that people are praiseworthy or blameworthy in virtue of what they would have done in various circumstances they never actually faced, equipped with character traits they never had).

He then went on to present his own solution to the problem, which involves a strange calculation of moral desert levels, beginning with the assumption that the “expected desert level” for an agent is zero, and continuing with the notion that we can then assign points to different ethical situations according to a variety of criteria. I will not go into the details because they are irrelevant to my argument here. Which is that we should reject this whole approach to moral philosophy, period.

To begin with, I find bizarre the very idea that we should engage in some sort of morality ledger construction, keeping score of the praiseworthiness or blameworthiness of people. Why? What the heck is the point? Are we some sort of god who has to decide on where to send people in the afterlife? (That, incidentally, is the premise of TGP show. And it’s very funny.) Take the first scenario, the case of resultant luck. It wouldn’t cross my mind for a second to say that Alice is more praiseworthy than Bill just because Bill did not succeed in his attempt at rescuing the drowning child. On the contrary, I am in awe of anyone who would attempt the rescue, regardless of whether s/he succeeds or not.

The circumstantial luck case is even more out there: there is no reason for us to consider Claire at all. If the circumstances were such as not to test her moral fiber, fine, why should that be an issue of any sort? Alice is to be praised for her attempted (and successful) rescue, the question of what Claire would have done simply did not arise, and that’s the end of that.

The last scenario, that of constitutive luck, is interesting, but only academically. To begin with, my view — contra Swenson’s stated hypothesis — is that adult human beings are morally responsible by simple virtue of being adults. That’s what it means to be an adult, regardless of the circumstances of one’s childhood. But if Daniel has an aberrant character because, say, of some developmental abnormality in his brain, or perhaps because a tumor is interfering with his moral decision making brain network, then fine, he is not to be blamed for his inaction. That’s no skin off of Alice’s nose, because moral desert is not (or should not be) a competition! Again, why the karmic obsession with keeping scores?

What about the choice between responsibility skepticism and the responsibility explosion? It seems to me that a society cannot function without a reasonable attribution of responsibility for the actions of its (adult, normally functioning) members. But one shouldn’t get carried away and start thinking of all possible hypothetical scenarios. Ethics should be concerned with what actually happens to real people, not with how hypothetical individuals would behave under (infinite) hypothetical circumstances. If you care about the latter, I suggest you’ve got your priorities seriously screwed up.

In the end, the “problem” of moral luck is not a problem at all. When Thomas Nagel wrote his now-classic paper by that title, back in 1979, I took it to call our attention to the humbling fact that we may be far less moral than we like to think, and that that observation ought to make us more sympathetic toward the above-mentioned ordinary Germans under the Nazis. To cure us of moral hubris, as it were. That is a very good practical lesson, nudging us toward being both less complacent about our own abilities and more charitable toward the shortcomings of others. But if the whole thing degenerates into an entirely impractical mathematical exercise in the assignment of praise and blame, we have lost sight of what ethics should be about. As the Stoic philosopher Epictetus put it 19 centuries ago:

“If you didn’t learn these things in order to demonstrate them in practice, what did you learn them for?” (Discourses I, 29.35)