The problem with cognitive and moral psychology


Willard Quine

Willard Quine, one of the most influential philosophers of the 20th century, famously thought that there was no discontinuity between philosophy and the natural sciences. Indeed, he went so far as to suggest that epistemology, for instance, will eventually become a branch of psychology.

I think he was seriously mistaken. While it is certainly true that there are continuities and reciprocal positive interactions between the sciences and philosophy, the two are fairly clearly distinct enterprises. Proof can easily be found in Quine’s own papers: I am a scientist as well as a philosopher, and every single one of his papers that I have come across looks nothing at all like a science paper, but instead is written in an unmistakably philosophical style.

At any rate, I was reminded of this while reading a fascinating, in-depth article by Tamsin Shaw in the New York Review of Books, which covers the claims of a number of recent books by psychologists that have made much noise to the effect that human beings are not rational but rationalizing, and that one of the things we rationalize most about is ethics.

The authors covered by Shaw’s essay include Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene and a number of others. All of them should be pleased by Quine’s remarks on epistemology, and all of them make the same kind of fundamental mistake, regardless of the quality of their empirical research. It is that fundamental mistake I wish to explore here.

Let me begin by summarizing the most pertinent points made by Shaw, an Associate Professor of European and Mediterranean Studies and of Philosophy at NYU. (I will skip the parts of the article that deal with the recent empirical and moral failures of the psychological profession itself. While fascinating, they are tangential to my aims here.)

The first author to fall under scrutiny in Shaw’s sharp analysis is Joshua Greene, who has made a career (see, for instance, his Moral Tribes: Emotion, Reason, and the Gap Between Us and Them) of claiming that his neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism. (This, in my mind, is a colossal example of what in philosophy is known as a category mistake, after the felicitous phrase introduced by Gilbert Ryle.)

Here is Shaw: “Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.”

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they were tackling mathematical problems. I am positive that we would conclude the following (indeed, for all I know, someone may have done this already):

  1. There are certain areas of the brain, and not others, that light up when a person is engaged with a mathematical problem.
  2. There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.
  3. There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that accounts for the ability of some people to understand advanced math, or to prove Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Next: Jonathan Haidt, who “denies that reason ordinarily plays any part in motivating moral judgments, seeing it rather as a post-hoc means of justifying the intuitions we form quickly and unreflectively.” In his The Righteous Mind: Why Good People Are Divided by Politics and Religion, he famously empirically identified six areas of moral concern for the populations he surveyed: care vs. harm, fairness vs. cheating, loyalty vs. betrayal, authority vs. subversion, sanctity vs. degradation, and liberty vs. oppression.

Interestingly, he also found that self-professed “liberals” (as opposed to “conservatives”) in 21st century America consider only a subset of these dimensions to be morally salient, and tend to be focused especially on the care vs. harm dimension. Moreover, he writes that “across many scales, surveys, and political controversies, liberals turn out to be more disturbed by signs of violence and suffering, compared to conservatives and especially to libertarians.”

Now, so far so (almost) good. These are empirical findings, and although a recent survey of psychological research (mentioned in Shaw’s article) has found that only about 40% of the results of empirical psychological studies can be confirmed upon replication, let us take them at face value.

The first, obvious objection is that Haidt is taking “liberal” and “conservative” to be fairly stable categories, at least implicitly. But of course these terms have a particular meaning only in the United States, and only for the last few decades. What it meant to be liberal, conservative, or something else in the United States in the past was different, and it certainly is different in other places on the planet (and a fortiori in other centuries, both in the US and elsewhere).

Setting that aside, however, it should be obvious that there is more than one possible normative interpretation of Haidt’s findings, besides the one he himself suggests (and why would a psychologist get in the business of prescribing moral stances anyway?). Haidt claims that liberals should try to appreciate the “richer” moral landscape contemplated by conservatives. Yes, or one could invite conservatives and especially libertarians to be a bit less callous about the harm done to other people, or even to consider that perhaps they may be mistaken when they think that respect for authority, or “sanctity,” are valid moral categories at all (again, think Ryle).

As Shaw puts it: “In offering this moral counsel [Haidt] presupposes that the norm of cooperation should take precedence over the values that divide us.” Right. But on what basis does he make such a presupposition? An empirical one? That would be begging the question. A philosophical one? But then he would be outside of his proper area of expertise.

Third up: Steven Pinker. In his The Better Angels of Our Nature: Why Violence Has Declined he again straddles the empirical/descriptive vs normative/prescriptive divide.

(Please note that I (and likely Shaw, I don’t know, I haven’t asked her) am not advocating a sharp boundary between is and ought. I am simply saying — with David Hume — that one can’t gingerly jump from one domain to the other without argument or justification.)

Pinker argues that moral progress is the result of rational deliberation. No objections here. But, as Shaw points out, he holds to an explicitly narrow conception of rationality: “by ‘rationality’ he means specifically ‘the interchangeability of perspectives and the opportunity the world provides for positive-sum games,’ rather than any higher-order philosophical theory. He allows that empathy has played a part in promoting altruism, that ‘humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering.’ But nevertheless our ‘ultimate goal should be policies and norms that become second nature and render empathy unnecessary.’”

Pinker, in effect, just like Greene and Haidt, doesn’t seem to be aware (or perhaps doesn’t care) that he is building into his allegedly objective empirical assessment plenty of unexamined philosophical assumptions. As Daniel Dennett famously put it, there is no such thing as philosophy-free science, there is only science whose philosophical baggage goes unexamined.

Which brings me to the fourth psychologist under examination: Paul Bloom. He too is no friend of empathy, but he also begins with a reasonable, empirically substantiated perspective, and then jumps out of the philosophical window, so to speak.

Bloom, in his Just Babies: The Origins of Good and Evil, argues that human beings start out with an innate, rudimentary sense of morality, based on empathy and an unarticulated concept of fairness directed mostly at their caretakers. But what it means to become an adult, Bloom says, is to gradually expand our sense of morality to include strangers, and we do this chiefly by reason, because if anything our instincts make us wary of people we don’t know, especially if they look different from us.

I’m on board thus far, and so is Shaw (and this isn’t really a new idea either, the Stoics, for instance, proposed it two millennia ago). But then she correctly remarks: “Bloom’s view of reasoning, like Haidt’s and Pinker’s, seems oddly restrictive: he equates it with impartiality in the sense of the development of ‘systems of reward and punishment that apply impartially within the community.’ The norm of cooperation is again presupposed as the fundamental means for deciding which of our moral intuitions we should heed. When discussing the more stringent moral principles that Peter Singer, for instance, takes to be rationally required of us concerning our duties to distant strangers, Bloom dismisses them as unrealistic in the sense that no plausible evolutionary theory could yield such requirements for human beings.” But of course evolution is what provided us with the very limited moral instinct that Bloom himself concedes needs to be expanded through the use of reason! He seems to want to have it both ways: we ought to build on what nature gave us, so long as what we come up with is compatible with nature’s narrow demands. But why?

Let me quote once more from Shaw, who I think puts her finger precisely where the problem lies: “it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong. The underlying prescriptive moral standards are always presupposed antecedently to any psychological research … No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new ‘technologies of behavior’ [to use B.F. Skinner’s phrase] but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.”

Please note that Shaw isn’t saying that moral philosophers are the high priests to be called on, though I’m sure she would agree that those are the people who have thought longer and harder about the issues in question, and so should certainly get a place at the discussion table. She is saying that good reasoning in general, and good moral reasoning in particular, are something we all need to engage in, for the sake of our own lives and of society at large.

And this is, I think, where the problem lies with both cognitive and moral psychology. It is an unqualified positive contribution of these fields of science to make us appreciate just how much unrecognized bias goes into human thinking. We do, indeed, rationalize more often than we think rationally. But that is no reason to abandon philosophy, critical thinking, or trust in reason. On the contrary: it gives us added, urgent motivation to cultivate them.

Here is another analogy I like to bring up in this context: psychological research has shown that people are really bad at reasoning about probabilities, a simple fact on which the entire gambling and lottery industries are built and thrive. But it would be odd, and indeed positively dangerous, for psychologists to tell us to stop teaching probability theory. On the contrary, it is precisely because human beings are so easily fooled in this department that we need to teach statistics and probability.
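The lottery point can be made concrete with a back-of-the-envelope expected-value calculation. The figures below (ticket price, jackpot, odds) are invented for illustration and do not describe any actual lottery:

```python
# Expected value of a hypothetical lottery ticket.
# All figures are made up for illustration only.
ticket_price = 2.00          # cost of one ticket, in dollars
jackpot = 100_000_000        # prize if you win
p_win = 1 / 300_000_000      # chance of winning

# On average, each ticket returns the jackpot weighted by its
# probability, minus what you paid for it.
expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # prints $-1.67
```

A loss of about $1.67 per $2 ticket, on average. That people nonetheless buy tickets by the millions is exactly the psychologists’ point about probabilistic misjudgment; the essay’s point is that this is a reason to teach the calculation, not to stop teaching it.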

When people like Haidt and other psychologists claim that the speech of liberals and conservatives, or of advocates of science and their opponents, follows the same patterns and is affected by the same cognitive biases, they are absolutely right. But they are missing — willfully or not — the crucial element that, at least some of the time, separates those positions: the epistemic one.

For instance, if you take a look at some of my debates with creationists (here is one example) and you examine them from a purely psychological perspective, you will not see much difference between the creationist and myself. We both make use of humor, attempt to diminish the stature of our opponent, possibly engage in a couple of informal fallacies (which, sometimes, are actually not fallacious), and so forth.

But the fact remains that I’m closer to the truth than any creationist is. Why? Because it is a fact, as much as science can ascertain facts, that the earth is billions, not thousands, of years old. Period, end of discussion. But if you look at the debate from a purely descriptive, epistemically neutral perspective, you won’t be able to see that. You may sound more ecumenical and fair-minded to outsiders (“see? Prof. Pigliucci is engaging in the same kind of rhetoric as his creationist opponent!”), but you will do so at the cost of a great casualty: truth.

157 thoughts on “The problem with cognitive and moral psychology”

  1. Coel

    Hi Massimo,

    … sciences and philosophy … are fairly clearly distinct enterprises. Proof can easily be found in Quine’s own papers: I am a scientist as well as a philosopher, and every single one of his papers that I have come across looks nothing at all like a science paper, but instead is written in an unmistakably philosophical style.

    I’ve never found this argument convincing. First, if you take papers in theoretical cosmology and primate ethology, they look nothing like each other and are hugely different in style. That doesn’t prevent them both being “science”. Lots of different areas of science have very different styles. Paleontology vs particle physics is another example.

    Second, on a previous occasion that you made this argument I pointed you at a paper on the interpretation of quantum mechanics and asked you whether you would classify it as “philosophy” or “science”. If I recall you replied by denying that you had claimed that all papers must be clearly one or the other. Yet, if your argument above holds, surely it requires that every and all such papers must be clearly and distinctively one or the other, but not both?


  2. Massimo Post author

    Robin,

    Willard, Willard. A combination of the silly autocorrect and not having had enough coffee this morning… Thanks for catching it!

    Coel,

    funny (and highly predictable) that you would harp on that particular point. I’ve explained my position in the past, but here we go again:

    “if you take papers in theoretical cosmology and primate ethology, they look nothing like each other and are hugely different in style”

    I don’t doubt it, though that seems close to apples and oranges. But the fact is that neither looks even remotely like a paper in philosophy, any branch of philosophy. Just curious: have you ever actually read a technical paper in philosophy?

    “If I recall you replied by denying that you had claimed that all papers must be clearly one or the other. Yet, if your argument above holds, surely it requires that every and all such papers must be clearly and distinctively one or the other, but not both?”

    That is a strawman below your dignity, man. Since I keep repeating, very clearly, that there are areas of overlap and reciprocal concern, why on earth would I be committed to the position you impute to me?


  3. Coel

    Hi Massimo,

    At this point I really am baffled.

    But the fact is that neither [science paper] looks even remotely like a paper in philosophy, any branch of philosophy.

    That seems to be arguing for a very clear distinction between scientific papers and philosophical ones.

    Since I keep repeating, very clearly, that there are areas of overlap and reciprocal concern, why on earth would I be committed to the position you impute me?

    And that seems to be arguing that there are many papers on areas of overlap that cannot be classified as clearly and distinctively one and not the other.

    Just curious: have you ever actually read a technical paper in philosophy?

    Yes. But now on to the substance of the piece …


  4. Coel

    Hi Massimo,

    Joshua Greene … [claims] that his neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism. (This, in my mind, is … a category mistake, …)

    If Greene’s claim is intended purely descriptively, as saying that there is empirical evidence that people think in ways that could be called utilitarian, then there is no category error.

    If, though, Greene were claiming empirical evidence for a normative and realist moral scheme, then he would indeed be making a category error. That’s because moral realism itself is a category error.

    Shaw … puts her finger precisely where the problem lies: “it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong”.

    Agreed entirely. But, again, the even more basic fallacy is to suggest that there is a “morally right and wrong” in any sense other than that people have feelings on the matter — likes and dislikes. (Where the existence and nature of those feelings is indeed entirely open to empirical enquiry.)

    (NB: I can’t fully tell from Shaw’s piece whether Greene, Haidt and Pinker are actually committing those fallacies, or whether Shaw is merely attributing such to them out of a presumption of moral realism.)

    (PS Sorry, this comment has been just as predictable as my first one 🙂 )


  5. Robin Herbert

    Hi Massimo,

    Bloom dismisses them as unrealistic in the sense that no plausible evolutionary theory could yield such requirements for human beings.” But of course evolution is what provided us with the very limited moral instinct that Bloom himself concedes needs to be expanded through the use of reason! He seems to want to have it both ways: we ought to build on what nature gave us, so long as what we come up with is compatible with nature’s narrow demands. But why?

    Yes, exactly. There is a certain kind of person who seems to think that the evolutionary basis of morality puts some sort of obligation on us for the future.
    Hi Coel,

    If Greene’s claim is intended purely descriptively, as saying that there is empirical evidence that people think in ways that could be called utilitarian then there is no category error.

    But is he really saying something so trivial? It would be like saying there is empirical evidence for Theism, then saying – “Look, William Lane Craig and Pope Francis I are Theists, therefore Theism exists.”

    I read some speeches from some of this group in a conference about a science of morality and I still have no idea what they mean. They seem to commit to very little.


  6. Robin Herbert

    Rereading my comment above, that juxtaposition made it sound like I was accusing Coel of being the kind of person who thinks that the evolutionary basis of morality puts an obligation on us for the future. I did not intend that at all, I know Coel does not think this.


  7. SocraticGadfly

    ALL of them seem to commit some type of is/ought error. And, while not saying all Ev Psych is wrong, it sneaks its head in more than once. Of course, it has its own flaws on human psychological development, related to and overlapping with this.


  8. SocraticGadfly

    Sounds like you “passed” on a couple of books. That book by Matthews needs its own review, perhaps. That said, at the end of the review of it, we have this mot juste about Haidt:

    His priorities appear to align closely with those of the Department of Defense. And they are supported by his view of moral psychology. But we should be wary of accepting his prescriptions as those of an independent moral expert, qualified to dispense sound ethical guidance. The discipline of psychology cannot equip its practitioners to do that.


  9. SocraticGadfly

    Related is this paragraph at the end that wraps the whole schmeer up:

    No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new “technologies of behavior” but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.


  10. Robin Herbert

    If all they are saying is that science can inform our moral reasoning and help us to achieve our moral aims, then they are hardly saying anything significant or new. And even the religious right agree with that.


  11. Paul Braterman

    For me, the message of The Better Angels of Our Nature was that, despite even such horrors as the two world wars and assorted purges and famines, the twentieth century continued the trend of overall decreasing violence, and that this could be attributed to the emergence of strong governance (a point also made in Jared Diamond’s The World Until Yesterday), and enlightenment values. So we have a rather uncontentious moral claim (fewer people being massacred is a good thing), coupled to more contentious factual claims, but I didn’t see any serious is/ought confusion.

    Perhaps I need to reread it.


  12. Coel

    Hi Robin,

    If all they are saying is that science can inform our moral reasoning and help us to achieve our moral aims, then they are hardly saying anything significant or new.

    A “science of morality” is (being a science) about describing and understanding human morality. It is indeed not “new” to suggest this (e.g. Darwin started evo psych). And, yes, understanding this better can help achieve our aims, in the same way that understanding the human immune system better can help us stay healthy.

    The problem comes when one looks at this with a moral realist presumption and supposes that the above program must be about arriving at moral-realist prescriptions, or then gets puzzled about how one gets from the above to a moral-realist prescriptive account. (Answer: you don’t, because moral realism is a category error.)


  13. brodix

    Rushing through, lots to think about. One stream of consciousness:

    ” No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks.”

    This is because morality is not top-down. Good and bad are the biological binary code, and just as a computer program is built on masses of binary distinctions, any moral code, model, or frame is based on multitudes of such organic distinctions. Thus the more specific and focused any such frame is, the more unique it will be; it is only as these moral models are generalized that they become broadly applicable.

    As for rationality versus rationalization: we neurologically need narrative structure in our thought process, because that is how our memory functions; without it, we would quickly forget anything not connected to that stream, just as we only remember the dreams we wake up during.

    Consequently it is more important we have that narrative sequence, than that it fit some larger, objective reality. Rationalization, over rationality.

    Now we could break it down. We equate sequence with causality, such as a batter hitting a ball causes it to fly away, but it is that transfer of energy that is causal, not the sequence. Yesterday didn’t cause today, simply because they are in sequence. The earth spinning relative to the sun creates this effect of days, as that is the energy transfer. Time is an effect of present activity, like temperature, which is also fundamental to our biological reality.

    As for whether math goes back to the Pleistocene, it is far more fundamental than that. Certainly bats developed sonar prior to WW2. As primates we evolved to swing around in trees, so the binocular vision functions like a bat’s sonar, in calculating distances. It becomes a short conceptual step from swinging in branches, to throwing sticks, thus further math. The flaw is considering these are the basis of reality, rather than distillations of its more basic features. No action, no connection. No math in the void.

    Got to run….


  14. Dee Dumars (@deedumars)

    I think Shaw is arguing against significant misreadings. I have in the past been critical of Haidt and Greene for interpreting their work in psychology as confirmation for their eliminativist metaethics but I believe it’s a mistake to suggest that they think scientific facts should determine our reasons for one or other philosophy. A more charitable interpretation is that our reasons for supporting one or other position should be scientifically informed.

    She begins her criticism of Greene by pointing out that his dual process theory depends on assumptions from evolutionary psychology, as if his argument stands or falls on that alone, failing to engage the supporting evidence that motivates his position from a wealth of discussion in social psychology, anthropology, sociology, and yes, philosophy. If his popular book wasn’t sufficiently illustrative, a quick perusal of the papers on his web site supports this (despite my frequent disagreement with his interpretations).

    I have suggested in the past that Greene’s advocacy for “utilitarianism” hasn’t sufficiently considered Amartya Sen’s (Nobel prize winning) work on social choice theory, which is quite critical of utilitarianism (the more philosophical accounts of that work are in Inequality Reexamined and Equality of What?). His response is simply that what he really means by utilitarianism is consequentialism, which we could presumably reduce to trying to choose whatever results in the best outcome, while being impartial when thinking about common welfare. Thus it could be argued that Sen’s consequentialism bolsters his position and further saves him from the incompatibility of utilitarianism as a kind of decision procedure operating on units of utility with normal practical reasoning. This idea of moral peaks similar to hill climbing in genetic algorithms (in Greene and less careful expositors like Sam Harris) was anticipated by Sen in his work on social choice, which concluded that despite conflicting weightings of preferences and plural values there can be a partial ordering of sets (Hilary Putnam employs this in his books The Collapse of the Fact/Value Dichotomy, Ethics Without Ontology, and elsewhere in support of a cognitivist metaethical position that perhaps isn’t too far from the eliminativist position when his deflationary epistemology is taken into account).

    There is tension in offering a moral framework for both public policy and individual deliberation, taking seriously the various skeptical arguments and evidence illustrating why we don’t do this well. I think Greene could make a sustained defense by pointing out what he thinks morality is which I think many moral psychologists would agree with that it’s social and collective, concerned with the regulation of behavior. Even when we react to moral emotions and evaluate what is ethical by ourselves alone it is penetrated by intersubjective beliefs, desires, goals, evidence, and reasons. Thus the principle that to value something doesn’t make it valuable in light of the opinions of others or for that matter further consideration.

    As I stated before, both Haidt and Greene are eliminativists, which makes their normative positions constructivist in nature. So on that account they sidestep Hume’s is-ought dichotomy, since they merely assume that humans are concerned with collective human welfare and further assume that the most persuasive justifications for policies will tend to be more impartial. They aren’t in any way putting forward propositions with truth values that don’t depend on our evaluations. We could in parallel tell a story about tribal allegiances and interests determining what we want, with the ethical positions a form of confabulation, but I don’t think there are many people who will suggest that we can’t transcend some of these self-serving tendencies, either through what Adam Smith calls sympathy or through the transactional nature of social and political decision making.

    (may continue later with Haidt, Pinker, and Bloom)


  15. Daniel Kaufman

    Really excellent essay, Massimo, though I think that their error is even worse than you do. I fear, however, that what is going to ensue is a very predictable conversation, with the usual suspects saying the same things they’ve been saying, now, for several years.

    So, I am going to stay out of it. Still, spot-on piece. And very important, given the rush to embrace these sorts of paper thin “theories.”


  16. Paul Crider

    I really share similar concerns, but I’m not convinced these writers make claims quite so strong as is suggested here. I’ve read three of the four works discussed, and I don’t get the sense from either Pinker or Bloom that they are trying to render moral philosophy obsolete. Bloom (whose book I read most recently), I remember, explicitly reserves a role for reason in morality. Haidt might be guilty though. He does really like his reason-as-rationalization pony, and he rides it a little further than it can really sustain.


  17. Massimo Post author

    “That seems to be arguing for a very clear distinction between scientific papers and philosophical ones. … And that seems to be arguing that there are many papers on areas of overlap that cannot be classified as clearly and distinctively one and not the other”

    That’s because you keep reading what I write in the most uncharitable way possible. So let me try once more:

    Philosophy papers almost always look very different from science papers. Even philosophy of science vs theoretical science papers. If you really have read some of the former, you ought to know better than to make the argument you are making.

    But, there are some areas of philosophy of science that do concern scientific theorizing, and in those areas either philosophers or scientists — and sometimes both — can cross the divide. How they write, how they argue, though, is still very different. And it should be obvious to anyone who has actually sampled the literature that that is indeed the case. Could you give me a good example of a philosophy paper that looks for all effective purposes like a science paper?

    “If Greene’s claim is intended purely descriptively, as saying that there is empirical evidence that people think in ways that could be called utilitarian then there is no category error”

    He very clearly doesn’t.

    “That’s because moral realism itself is a category error.”

    That makes no sense, or misuses the term “category error.” Moral realism may be untenable, but it isn’t a category error. Which category, exactly, would the error refer to?

    “the even more basic fallacy is to suggest that there is a “morally right and wrong” in any sense other than that people have feelings on the matter”

    This is not a discussion about realism vs antirealism in ethics. All my points (and Shaw’s) stand even if one is an antirealist. As you know, I’m a naturalist in ethics, which counts as an intermediate position, as I see it.

    Robin,

    “If all they are saying is that science can inform our moral reasoning and help us to achieve our moral aims, then they are hardly saying anything significant or new. And even the religious right agree with that”

    Correct. It isn’t *all* they are saying. But they do equivocate a lot about what, precisely, they are saying, hence the confusion. My guess is that the equivocation arises from knowledge on their part that if all they were saying was what you suggest (which would be within their domain of competence), then it would be rather trivial, and it wouldn’t sell books or get grants.

    Paul B., Dee, Paul C.

    I think Shaw is correct in her interpretation of what this group is saying. Look up her direct quotes in the NYRB article. But of course there are differences of emphasis and, to some extent, of specific positions, within the group.

    Paul C.,

    “Haidt might be guilty though. He does really like his reason-as-rationalization pony, and he rides it a little further than it can really sustain.”

    Indeed. One would think that riding it that way would actually undermine his own work…

    Dee,

    “A more charitable interpretation is that our reasons for supporting one or other position should be scientifically informed.”

    But as Robin noted, this is entirely uncontroversial, and has been since the Stoics, at the least.

    “her criticism of Greene by pointing out that his dual process theory depends on assumptions from evolutionary psychology as if his argument stands or falls on it alone, failing to engage the supporting evidence that motivates his position from a wealth of discussion in social psychology, anthropology, sociology, and yes, philosophy”

    I fail to see what all these other fields have to do with the matter under discussion, except for the last one, and for the same exact reason that evopsych has nothing to do with it. All those fields are descriptive, not normative. Incidentally, as I commented on this blog recently, Greene and co. get dramatically wrong even what a utilitarian actually is or thinks: https://platofootnote.wordpress.com/2015/12/28/relax-your-utilitarian-friends-are-not-psychopaths/

    “There is tension in offering a moral framework for both public policy and individual deliberation”

    Agreed.

    “on that account they sidestep Hume’s is-ought dichotomy since they merely assume that humans are concerned with collective human welfare”

    I’m not sure that utilitarianism is quite that compatible with eliminativism, but Shaw remains correct that they build a lot of philosophy into their writings without discussing it or acknowledging it.

  18. ejwinner

    The problem with any metaethic is that it is of little value – and in most situations, no value – in providing us guidance in making ethical choices. One needs some basic normative principles, however derived and however flexible, in order to compare what we are immediately presented with against possible choices consistent with those principles.

    Psychology can, at best, get us a metaethic; assuming it can do otherwise apparently left the psychologists Shaw discusses utterly blindsided as to their own assumed normative principles. Arguing that ‘liberals’ (as Haidt defines them) ought to come around to more conservative moral beliefs is not only a normative claim, it is a moral injunction; and it really doesn’t have much to do with the research involved, as you and Shaw point out.

    But Shaw’s article underscores a much greater danger in assuming that metaethics satisfies all our ethical needs – namely, it seems to guarantee that we can find a position above normative ethics and rewrite our norms as we please (given the proper incentive). So any behavior can be justified – torture, murder, government-sponsored misinformation – presumably even death camps. Psychology, as behavioral technology, can get us there, free of anxiety or later guilt.

    Hallelujah! The promised land! A world wherein only the elite, free of any morality of their own, decide what should be ethical for the rest of us – and then hire psychologists to condition us to accept it.

    When Skinner wrote Walden Two, he was actually assuming that a social consensus could be built around agreement to bring about the kind of utopia he envisioned, based on the assumption that we all share a common will to happiness.

    But in fact this two-fold assumption was unwarranted on both counts. It is not clear that we all share a common will to happiness, or hold it as a priority in making ethical choices; and it is certainly not the case that only a consensus can produce the social change he envisioned – a top-down imposition by an oligarchy (buttressed with misinformation, misleading rhetoric, and outright propaganda) can achieve it much more easily. But only to the benefit of a few.

  19. Robin Herbert

    Hi Coel,

    A “science of morality” is (being a science) about describing and understanding human morality.

    I wasn’t asking how you understand the term; I was talking about how the people I linked to understand it. They clearly have something else in mind.

  20. Coel

    Hi Massimo,

    Philosophy papers almost always look very different from science papers.

    And papers in theoretical cosmology almost always look very different from papers in primate ethology. Which is why I don’t see how that statement demonstrates anything.

    Could you give me a good example of a philosophy paper that looks for all effective purposes like a science paper?

    Let’s take the paper by Charles Sebens and Sean Carroll on quantum mechanics and justifying the Born rule. The lead author is a philosopher, the second author a physicist.

    Moral realism may be untenable, but it isn’t a category error. Which category, exactly, would the error refer to?

    The categories are factual statements, to which one can attach truth values, and reports of preferences, to which one cannot attach truth values (no-one would attach a truth value to the claim “chocolate is delicious”, though they would to the statement “John finds chocolate delicious”). Moral realism mistakenly places moral claims in the first category.

  21. Dee Dumars (@deedumars)

    Massimo says:

    “I fail to see what all these other fields have to do with the matter under discussion, except for the last one, and for the same exact reason that evopsych has nothing to do with it. ”

    One example that comes to mind is the work of Gilbert Harman, Greene’s thesis advisor. Back in the late ’90s he argued that evidence from social psychology (heuristics and biases, dispositional vs. situational effects, and various other priming results) undermines virtue ethics, which is heavily dispositional, and in turn suggested alternatives that reflect on the situations we put ourselves in (he may have pushed the pendulum too far in the other direction). Science is provisional, with degrees of belief, so we are often reevaluating the evidence along with new developments in philosophy.

  22. michaelfugate

    The categories are factual statements, to which one can attach truth values, and reports of preferences, to which one cannot attach truth values (no-one would attach a truth value to the claim “chocolate is delicious”, though they would to the statement “John finds chocolate delicious”). Moral realism mistakenly places moral claims in the first category.

    What if 51% of the population finds the first statement true? 90%?
    What if John likes 51% of all chocolate? 90%?

    Thinking about morality is hard work.

  23. Coel

    Hi michael,

    What if 51% of the population finds the first statement true?

    Then 51% of the population finds chocolate delicious. Statements like that have truth values. But someone in the minority who doesn’t like chocolate is not making an error.

Comments are closed.