The problem with cognitive and moral psychology

Willard Quine

Willard Quine, one of the most influential philosophers of the 20th century, famously thought that there was no discontinuity between philosophy and the natural sciences. Indeed, he went so far as to suggest that epistemology, for instance, will eventually become a branch of psychology.

I think he was seriously mistaken. While it is certainly true that there are continuities and reciprocal positive interactions between the sciences and philosophy, the two are fairly clearly distinct enterprises. Proof can easily be found in Quine’s own papers: I am a scientist as well as a philosopher, and every single one of his papers that I have come across looks nothing at all like a science paper; each is instead written in an unmistakably philosophical style.

At any rate, I was reminded of this while reading a fascinating, in-depth article by Tamsin Shaw in the New York Review of Books, covering or referring to the claims of a number of books by psychologists that have recently made much noise to the effect that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.

The authors covered by Shaw’s essay include Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene and a number of others. All of them should be pleased by Quine’s remarks on epistemology, and all of them make the same kind of fundamental mistake, regardless of the quality of their empirical research. It is that fundamental mistake I wish to explore here.

Let me begin by summarizing the most pertinent points made by Shaw, an Associate Professor of European and Mediterranean Studies and of Philosophy at NYU. (I will skip the parts of the article that deal with the recent empirical and moral failures of the psychological profession itself. While fascinating, they are tangential to my aims here.)

The first author to fall under scrutiny in Shaw’s sharp analysis is Joshua Greene, who has made a career (see, for instance, his Moral Tribes: Emotion, Reason, and the Gap Between Us and Them) of claiming that his neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism. (This, in my mind, is a colossal example of what in philosophy is known as a category mistake, after the felicitous phrase introduced by Gilbert Ryle.)

Here is Shaw: “Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.”

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following (indeed, for all I know, someone might have done this already):

  1. There are certain areas of the brain, and not others, that light up when a person is engaged with a mathematical problem.
  2. There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.
  3. There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to prove Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Next: Jonathan Haidt, who “denies that reason ordinarily plays any part in motivating moral judgments, seeing it rather as a post-hoc means of justifying the intuitions we form quickly and unreflectively.” In his The Righteous Mind: Why Good People Are Divided by Politics and Religion, he famously empirically identified six areas of moral concern for the populations he surveyed: care vs. harm, fairness vs. cheating, loyalty vs. betrayal, authority vs. subversion, sanctity vs. degradation, and liberty vs. oppression.

Interestingly, he also found that self-professed “liberals” (as opposed to “conservatives”) in 21st century America consider only a subset of these dimensions to be morally salient, and tend to be focused especially on the care vs. harm dimension. Moreover, he writes that “across many scales, surveys, and political controversies, liberals turn out to be more disturbed by signs of violence and suffering, compared to conservatives and especially to libertarians.”

Now, so far so (almost) good. These are empirical findings, and although a recent survey of psychological research (mentioned in Shaw’s article) has found that only about 40% of the results of empirical psychological studies can be confirmed upon replication, let us take them at face value.

The first, obvious, objection is that Haidt is taking “liberal” and “conservative” to be fairly stable categories, at least implicitly. But of course these terms have a particular meaning only in the United States, and only for the last few decades. What it meant to be liberal, conservative, or something else, in the United States in the past was different, and it certainly is different in other places on the planet (and a fortiori in other centuries, both in the US and elsewhere).

Setting that aside, however, it should be obvious that there is more than one possible normative interpretation of Haidt’s findings, besides the one he himself suggests (and why would a psychologist get into the business of prescribing moral stances anyway?). Haidt claims that liberals should try to appreciate the “richer” moral landscape contemplated by conservatives. Yes, or one could invite conservatives and especially libertarians to be a bit less callous about the harm done to other people, or even to consider that perhaps they may be mistaken when they think that respect for authority, or “sanctity,” are valid moral categories at all (again, think Ryle).

As Shaw puts it: “In offering this moral counsel [Haidt] presupposes that the norm of cooperation should take precedence over the values that divide us.” Right. But on what basis does he make such a presupposition? An empirical one? That would be begging the question. A philosophical one? But then he would be outside of his proper area of expertise.

Third up: Steven Pinker. In his The Better Angels of Our Nature: Why Violence Has Declined he again straddles the empirical/descriptive vs normative/prescriptive divide.

(Please note that I (and likely Shaw, I don’t know, I haven’t asked her) am not advocating a sharp boundary between is and ought. I am simply saying — with David Hume — that one can’t gingerly jump from one domain to the other without argument or justification.)

Pinker argues that moral progress is the result of rational deliberation. No objections here. But, as Shaw points out, he holds to an explicitly narrow conception of rationality: “by ‘rationality’ he means specifically ‘the interchangeability of perspectives and the opportunity the world provides for positive-sum games,’ rather than any higher-order philosophical theory. He allows that empathy has played a part in promoting altruism, that ‘humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering.’ But nevertheless our ‘ultimate goal should be policies and norms that become second nature and render empathy unnecessary.’”

Pinker, in effect, just like Greene and Haidt, doesn’t seem to be aware (or perhaps doesn’t care) that he is building into his allegedly objective empirical assessment plenty of unexamined philosophical assumptions. As Daniel Dennett famously put it, there is no such thing as philosophy-free science, there is only science whose philosophical baggage goes unexamined.

Which brings me to the fourth psychologist under examination: Paul Bloom. He too is no friend of empathy, but he begins with a reasonable, empirically substantiated perspective, and then jumps out of the philosophical window, so to speak.

Bloom, in his Just Babies: The Origins of Good and Evil, argues that human beings start out with an innate, rudimentary, sense of morality, based on empathy and an unarticulated concept of fairness directed mostly at their caretakers. But what it means to become an adult, Bloom says, is to gradually expand our sense of morality to include strangers, and we do this chiefly by reason, because if anything our instincts make us wary of people we don’t know, especially if they look different from us.

I’m on board thus far, and so is Shaw (and this isn’t really a new idea either, the Stoics, for instance, proposed it two millennia ago). But then she correctly remarks: “Bloom’s view of reasoning, like Haidt’s and Pinker’s, seems oddly restrictive: he equates it with impartiality in the sense of the development of ‘systems of reward and punishment that apply impartially within the community.’ The norm of cooperation is again presupposed as the fundamental means for deciding which of our moral intuitions we should heed. When discussing the more stringent moral principles that Peter Singer, for instance, takes to be rationally required of us concerning our duties to distant strangers, Bloom dismisses them as unrealistic in the sense that no plausible evolutionary theory could yield such requirements for human beings.” But of course evolution is what provided us with the very limited moral instinct that Bloom himself concedes needs to be expanded through the use of reason! He seems to want to have it both ways: we ought to build on what nature gave us, so long as what we come up with is compatible with nature’s narrow demands. But why?

Let me quote once more from Shaw, who I think puts her finger precisely where the problem lies: “it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong. The underlying prescriptive moral standards are always presupposed antecedently to any psychological research … No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new ‘technologies of behavior’ [to use B.F. Skinner’s phrase] but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.”

Please note that Shaw isn’t saying that moral philosophers are the high priests to be called on, though I’m sure she would agree that those are the people who have thought longer and harder about the issues in question, and so should certainly get a place at the discussion table. She is saying that good reasoning in general, and good moral reasoning in particular, are something we all need to engage in, for the sake of our own lives and of society at large.

And this is, I think, where the problem lies with both cognitive and moral psychology. It is an unqualified positive contribution of these fields of science to make us appreciate just how much unrecognized bias goes into human thinking. We do, indeed, rationalize more often than we think rationally. But that is no reason to abandon philosophy, critical thinking, or trust in reason. On the contrary: it provides us added, urgent motivation to cultivate them.

Here is another analogy I like to bring up in this context: psychological research has shown that people are really bad at reasoning about probabilities, a simple fact on which the entire gambling and lottery industries are built and thrive. But it would be odd, and indeed positively dangerous, for psychologists to tell us to stop teaching probability theory to people. On the contrary, it is precisely because human beings are so easily fooled in this department that we need to teach statistics and probabilities.
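The lottery case can be made concrete with a quick expected-value calculation. The odds and prizes below are made up for illustration (no real lottery is being modeled), but the structure of the arithmetic is exactly what unaided intuition tends to get wrong:

```python
# Expected value of a hypothetical lottery ticket.
# Odds and prizes are illustrative, not those of any real lottery.
ticket_price = 2.00

# (probability of winning, prize) pairs
outcomes = [
    (1 / 10_000_000, 5_000_000),  # jackpot
    (1 / 10_000, 500),            # mid-tier prize
    (1 / 100, 4),                 # small prize
]

# Average winnings per ticket, over many plays
expected_winnings = sum(p * prize for p, prize in outcomes)

# Net expected value: what you gain (or lose) per ticket on average
expected_net = expected_winnings - ticket_price

print(f"Expected winnings per ticket: ${expected_winnings:.2f}")  # $0.59
print(f"Expected net per ticket: ${expected_net:.2f}")            # $-1.41
```

Each individual ticket feels like “a chance at five million for two dollars,” which is why the industry thrives; the arithmetic says that, on average, every ticket is a $1.41 loss.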

When people like Haidt and other psychologists claim that the speech of liberals and conservatives, or of advocates of science and their opponents, follows the same patterns and is affected by the same cognitive biases, they are absolutely right. But they are missing — willfully or not — the crucial element that, at least some of the time, separates those positions: the epistemic one.

For instance, if you take a look at some of my debates with creationists (here is one example) and you examine them from a purely psychological perspective, you will not see much difference between the creationist and myself. We both make use of humor, attempt to diminish the stature of our opponent, possibly engage in a couple of informal fallacies (which, sometimes, are actually not fallacious), and so forth.

But the fact remains that I’m closer to the truth than any creationist is. Why? Because it is a fact, as much as science can ascertain facts, that the earth is billions, not thousands, of years old. Period, end of discussion. But if you look at the debate from a purely descriptive, epistemically neutral, perspective, you won’t be able to see that. You may sound more ecumenical and fair minded to outsiders (“see? Prof. Pigliucci is engaging in the same kind of rhetoric as his creationist opponent!”), but you will do so at the cost of a great casualty: truth.

157 thoughts on “The problem with cognitive and moral psychology”

  1. Coel

    You might want to explore the possibility that moral decisions are different in kind to decisions about which foods to eat.

    But after exploring that possibility I’d end up rejecting it. 🙂

    Moral judgements basically *are* re-purposed aesthetic judgements. Evolution usually adapts what it already has. (E.g. it makes bats wings out of fingers and arms.) What evolution already had was an aesthetic system that was about choosing what foods to eat, etc. It then re-purposed that to — in essence — make aesthetic judgements about how people thought and acted. Et voila, morality!

    This is what Darwin originally proposed, in Descent of Man, and as I see it Darwin and Hume got this stuff right and sorted meta-ethics. But everyone since then has just been rejecting it because it is counter-intuitive.

  2. Coel

    Hi Robin,

    Do you realise you just spent an entire post telling me what I ought and ought not to do?

    Sure! I’m a moral agent — I have feelings and values — just as everyone else does. Telling you what to do is, of course, purely in my opinion. 🙂

  3. Coel

    Hi Robin,

    So there is no difference between the decision to kill or not to kill and the decision on whether to eat or not to eat seafood?

    Can you narrow down your question a bit more? What sort of differences are you asking about?

    I could answer your question by saying that one difference is that one decision is about killing and the other about seafood.

  4. Massimo Post author

    Mark,

    “But perhaps we could call that study “ethics”? If the answers to these questions must be called “morality” for historical reasons, perhaps we need to coin a new name for ‘morality’ as cooperation strategies?”

    Or perhaps the science crowd could just talk about cooperation strategies, their evolution and their neurological underpinning? Why on earth try to co-opt words like ethics and morality, which have had a pretty well defined meaning for literally thousands of years??

  5. Robin Herbert

    Hi Coel,

    Just now you were confidently telling me that you rejected a distinction between moral and aesthetic decisions. When I ask you to repeat this assertion for specific cases you appear to be backtracking.

  6. Coel

    Hi Robin,

    When I ask you to repeat this assertion for specific cases you appear to be backtracking.

    No, I’m simply not understanding the question you’re asking.

    Just now you were confidently telling me that you rejected a distinction between moral and aesthetic decisions.

    The fundamentals of the two are the same, as being human value judgements about something. But we do tend to use the two terms for different subject matters.

    We tend to use the word “aesthetic” for value judgements about food, clothes, art, music, etc. We tend to use the word “morals” for value judgements about how humans treat other humans. Both categories are fuzzy edged, however.

  7. SocraticGadfly

    Massimo again gets to the “is ≠ ought” with this:

    Or perhaps the science crowd could just talk about cooperation strategies, their evolution and their neurological underpinning? Why on earth try to co-opt words like ethics and morality, which have had a pretty well defined meaning for literally thousands of years??

    Per the same evolutionary biologists, we know said cooperation strategies have been limited to “tribes,” “clans” or whatever of approximately 140-150 people. That’s the “is.”

    The “ought,” which is outside the purview of evolutionary biology in general, and of ev psych (and evolutionary sociology) in particular, is to extend such issues beyond 140-150 person tribes in nation-states of tens to hundreds of millions of people and a world of more than 7 billion.

  8. Robin Herbert

    Also, if someone tells me that he has done some scientific research and found that Utilitarianism is the best moral system and I say “but I want to pursue my own selfish ends at the expense of everyone else” then he will have to say “Fair enough, Utilitarianism is not the best fit for you”.

    So he will have found no more than the trivial fact that Utilitarianism is the best moral system for those people whose moral preferences are best served by Utilitarianism. I could have told you that.

  9. Robin Herbert

    So, to take this further, the supporters of the new science of morality think they can help those people for whom utilitarianism best suits their moral preferences.

    Can they equally help me with my selfish bastard moral preferences? I am not asking whether they would want to help me, I am asking if a science of morality is of the same value to someone who is a selfish bastard as it would be for someone who wants to maximise utility generally?

    So, would we have a science of morality which is of equal value to everybody, irrespective of their moral preferences?

    If not, then they are assuming moral realism. If so then in what sense is it a science of morality?

  10. brodix

    Robin,

    “Do unto others as you would have them do unto you.”

    As studies of various other primates have shown, they have a fairly well adapted sense of equitability and when one takes advantage, others will punish them. So yes, treat others like sh!t and likely they will return the favor. Reciprocity.

    Justice.

  11. marc levesque

    Brodix,

    “My position has been that we haven’t gone back and fully examined all the complex processes which gave rise to us and that rather than just move forward, we need to go back and fully examine those foundations”

    I agree.

    What I question are some ideas coming out of a lot of evolutionary psychology and neuroscience, because I often find their choice of data or their interpretation of what the data represents lacking reasonable justification, moreover and sometimes simultaneously, I disagree with what they think their results mean or with how they think that relates to various issues.

  12. brodix

    marc,

    That seems highly possible. While I haven’t followed psychology or neuroscience very closely, there does seem to be a fairly common human tendency to take ideas, information, insights, etc. out of context and run with them.

    As I keep trying to argue, we instinctively make linear assumptions, because that is foundational to the essence of being mobile organisms, but nature functions cyclically. There have been some interesting studies of how forests function and the degree of “rootedness” and interconnectivity is extensive, but linear motion isn’t such a big issue.

    Another point I’ve been making here is that our natural sense of relational reciprocity has been largely replaced by the medium of financial exchange, so there is a very strong and deep motivation for people to do whatever earns them money. If that is to puff up the results with a little additional hot air…..

  13. Robin Herbert

    Hi Coel,

    “No, I’m simply not understanding the question you’re asking.”

    Similarly I am simply not understanding the analogy you are making between food and moral choices.

    You appear to think that there is some commonality between those things. Yet when I ask, in order to clarify what this commonality is, if it is the case in specific examples you simply don’t understand the question.

    That implies that you simply don’t understand your own analogy.

  14. Robin Herbert

    Also, if we are really to put this on a scientific footing then we need to avoid loose terms like “value”. What exactly does that mean? I choose food that increases my well being, or that avoids decreasing my well being. Does “value” mean the same for moral choices? Do we make moral choices to increase our well being, or to avoid decreases in well being? Do soldiers or policemen make moral choices on this basis?

    If you just say they do things because they “value” them, then I am not really understanding what you are saying because I don’t know what you mean by the term.

  15. Robin Herbert

    Brodix,

    Remember my point about the triangle. If one person says “A square is a triangle” and the other says “A square is not a triangle”, they might both be right because they mean something different by triangle (for example, the first person might mean by a triangle a shape with at least three sides).

    Similarly someone might say “Slavery is unjust” and another say “Slavery is not unjust” and they might both be right because they mean something different by “just”. For example the second might mean “justice” to imply reciprocity. Of course a slave is never expected to reciprocate, only to do what he or she is forced to do, and thus, by a definition of “just” that includes reciprocity, slavery is not unjust.

    That is what I meant, a moral statement can be true or false even without moral realism, if the truth of the statement depends on the meaning the user has for the word.

  16. brodix

    Robin,

    I don’t know that I’m contradicting your point so much as trying to answer your question as to what morality is.

    Remember my argument isn’t that we should seek out a moral ideal, but that the roots of what we call morality are as deeply imbedded in our nature, as that of amorality. Cooperation, competition and all the various combinations in between, as well as extremes of either.

  17. brodix

    I think it safe to say that slavery is an extreme of economic competition. Consequently it has proven to be an unstable economic model. The Romans had one of the more stable and long lasting slave systems, but then slaves were allowed to buy their freedom and become citizens; thus there was movement between the levels, and so the pressure for revolution was limited.

    Cycles.

  18. Robin Herbert

    Brodix,

    I don’t recall asking what morality is. I did say that before developing a system of morality we should ask what the purpose of the system is, what we intend it to do.

    I don’t see how you have answered this. As I said earlier the fact that some behaviour has resulted from evolution is not a good reason for me to persist in it.

  19. Robin Herbert

    To me it makes no sense to say that the point of a system of morality is to help me better fulfil the role for which evolution programmed me.

    But that seems to be what a lot of people are suggesting.

    Maybe we don’t need a system of morality at all, maybe we should just wing all our decisions instead of trying to define concepts like Utilitarianism or Consequentialism. Probably we end up winging it either way.

  20. brodix

    Robin,

    The function of conscious thought would seem to be to define and decide as to one’s course of action. Now what you decide is certainly up to you. The issue of morality would seem to be as to what degree of positive benefit it is to your context, however you choose to delineate it. You might want to stay at home and tend your garden. You might want to go out and save the world. You might want to just accumulate as large a bank account as possible. Likely your course of action will have some foundation in your feelings and beliefs, otherwise your actions would be divorced from any meaning to you.

    Whether it constitutes moral or amoral behavior will likely be a decision of those who choose to judge your actions. Now you might not care what other people think, but whether you stay at home and tend the garden, or go out into the world and extract as much money as possible from others because you are not concerned with others’ opinions, or even with the currencies they use to mediate exchange, would all be factors in those assessments.

    As it is, you don’t seem to give much clue as to your preferred definition of morality.

    As to mine, to state it again, for the umpteenth time, I see morality as an aspect of co-existence. If you want a system, wouldn’t that fall in the category of ethics?

    As such, I am not, again, positing any ideal system, just that a workable system is required to undergird a functioning society and given the reality of positive and negative as unavoidable binary, there are a variety of potential models and none will be ideal. Consequently some models might work better in some situations, than others, but that is another conversation.

  21. Robin Herbert

    I don’t even know if there is such a thing as morality or not. I can only talk about it in family resemblance terms, analyse the way I use the words, how others use the words, what we seem to mean by them and what we don’t.
